DB2 Universal Database for z/OS
Version 8
Administration Guide
SC18-7413-03
Note
Before using this information and the product it supports, be sure to read the general information under “Notices” on page
1237.
Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Altering user-defined functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
| Moving from index-controlled to table-controlled partitioning . . . . . . . . . . . . . . . . . . 97
Changing the high-level qualifier for DB2 data sets . . . . . . . . . . . . . . . . . . . . . . 98
Defining a new integrated catalog alias . . . . . . . . . . . . . . . . . . . . . . . . . 99
Changing the qualifier for system data sets . . . . . . . . . . . . . . . . . . . . . . . . 99
Changing qualifiers for other databases and user data sets . . . . . . . . . . . . . . . . . . 102
Moving DB2 data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Tools for moving DB2 data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Moving a DB2 data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Copying a relational database . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Copying an entire DB2 subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . 109
# Example of roles and authorizations for a routine . . . . . . . . . . . . . . . . . . . . . 157
Which IDs can exercise which privileges . . . . . . . . . . . . . . . . . . . . . . . . . 161
Authorization for dynamic SQL statements . . . . . . . . . . . . . . . . . . . . . . . 164
Composite privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Multiple actions in one statement. . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Matching job titles with privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Examples of granting and revoking privileges . . . . . . . . . . . . . . . . . . . . . . . 173
# Examples using the GRANT statement . . . . . . . . . . . . . . . . . . . . . . . . . 174
Examples with secondary IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
# Examples using the REVOKE statement . . . . . . . . . . . . . . . . . . . . . . . . 180
Finding catalog information about privileges . . . . . . . . . . . . . . . . . . . . . . . . 187
Retrieving information in the catalog . . . . . . . . . . . . . . . . . . . . . . . . . 187
# Creating views of the DB2 catalog tables . . . . . . . . . . . . . . . . . . . . . . . . 190
| Multilevel security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
| Introduction to multilevel security . . . . . . . . . . . . . . . . . . . . . . . . . . 191
| Implementing multilevel security with DB2 . . . . . . . . . . . . . . . . . . . . . . . 195
| Working with data in a multilevel-secure environment . . . . . . . . . . . . . . . . . . . 198
| Implementing multilevel security in a distributed environment . . . . . . . . . . . . . . . . . 206
| Data encryption through built-in functions . . . . . . . . . . . . . . . . . . . . . . . . 206
| Defining columns for encrypted data . . . . . . . . . . . . . . . . . . . . . . . . . 207
| Defining encryption at the column level . . . . . . . . . . . . . . . . . . . . . . . . 208
| Defining encryption at the value level . . . . . . . . . . . . . . . . . . . . . . . . . 209
| Ensuring accurate predicate evaluation for encrypted data . . . . . . . . . . . . . . . . . . 210
| Encrypting non-character values . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
# Performance recommendations for data encryption . . . . . . . . . . . . . . . . . . . . . 211
Translating outbound IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
Sending passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
Establishing RACF protection for DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Defining DB2 resources to RACF . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Permitting RACF access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
| Issuing DB2 commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Establishing RACF protection for stored procedures . . . . . . . . . . . . . . . . . . . . 275
Establishing RACF protection for TCP/IP . . . . . . . . . . . . . . . . . . . . . . . . 279
Establishing Kerberos authentication through RACF . . . . . . . . . . . . . . . . . . . . . 279
Other methods of controlling access . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Auditing payroll operations and payroll management . . . . . . . . . . . . . . . . . . . . 307
Securing administrator, owner, and other access . . . . . . . . . . . . . . . . . . . . . . . 308
Securing access by IDs with database administrator authority . . . . . . . . . . . . . . . . . 308
Securing access by IDs with system administrator authority . . . . . . . . . . . . . . . . . . 308
Securing access by owners with implicit privileges on objects . . . . . . . . . . . . . . . . . 309
Securing access by other users . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Chapter 16. Monitoring and controlling DB2 and its connections . . . . . . . . . . 333
Controlling DB2 databases and buffer pools . . . . . . . . . . . . . . . . . . . . . . . . 333
Starting databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Monitoring databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Stopping databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Altering buffer pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Monitoring buffer pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Controlling user-defined functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Starting user-defined functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Monitoring user-defined functions . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Stopping user-defined functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Controlling DB2 utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Starting online utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Monitoring online utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
Stand-alone utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Controlling the IRLM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
Starting the IRLM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Modifying the IRLM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Monitoring the IRLM connection . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Stopping the IRLM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Monitoring threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Controlling TSO connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Connecting to DB2 from TSO . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Monitoring TSO and CAF connections . . . . . . . . . . . . . . . . . . . . . . . . . 351
Disconnecting from DB2 while under TSO. . . . . . . . . . . . . . . . . . . . . . . . 353
Controlling CICS connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Connecting from CICS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Controlling CICS connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Disconnecting from CICS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Controlling IMS connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
Connecting to the IMS control region . . . . . . . . . . . . . . . . . . . . . . . . . 358
Controlling IMS dependent region connections . . . . . . . . . . . . . . . . . . . . . . 363
Chapter 17. Managing the log and the bootstrap data set . . . . . . . . . . . . . 393
How database changes are made . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Units of recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
Rolling back work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
Establishing the logging environment . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Creation of log records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Retrieval of log records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Writing the active log. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Writing the archive log (offloading) . . . . . . . . . . . . . . . . . . . . . . . . . . 396
Controlling the log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Archiving the log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Dynamically changing the checkpoint frequency. . . . . . . . . . . . . . . . . . . . . . 402
| Monitoring the system checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Setting limits for archive log tape units . . . . . . . . . . . . . . . . . . . . . . . . . 403
Displaying log information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
Managing the bootstrap data set (BSDS) . . . . . . . . . . . . . . . . . . . . . . . . . 403
BSDS copies with archive log data sets . . . . . . . . . . . . . . . . . . . . . . . . . 404
Changing the BSDS log inventory . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Discarding archive log records. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
Deleting archive logs automatically . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Locating archive log data sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Maintaining consistency after termination or failure . . . . . . . . . . . . . . . . . . . . 425
Termination for multiple systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
Normal restart and recovery for multiple systems . . . . . . . . . . . . . . . . . . . . . 426
Restarting multiple systems with conditions . . . . . . . . . . . . . . . . . . . . . . . 427
Resolving indoubt units of recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
Resolution of IMS indoubt units of recovery . . . . . . . . . . . . . . . . . . . . . . . 428
Resolution of CICS indoubt units of recovery . . . . . . . . . . . . . . . . . . . . . . . 429
| Resolution of WebSphere Application Server indoubt units of recovery . . . . . . . . . . . . . . 429
Resolution of remote DBMS indoubt units of recovery . . . . . . . . . . . . . . . . . . . . 431
Resolution of RRS indoubt units of recovery . . . . . . . . . . . . . . . . . . . . . . . 434
Consistency across more than two systems . . . . . . . . . . . . . . . . . . . . . . . . 435
Commit coordinator and multiple participants . . . . . . . . . . . . . . . . . . . . . . 435
Illustration of multi-site update . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
CICS attachment facility failure recovery . . . . . . . . . . . . . . . . . . . . . . . . 498
Subsystem termination recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
Resource failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Active log failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Archive log failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
Temporary resource failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . 505
BSDS failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
Recovering the BSDS from a backup copy . . . . . . . . . . . . . . . . . . . . . . . . 507
DB2 database failures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510
Recovery from down-level page sets . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Procedure for recovering invalid LOBs . . . . . . . . . . . . . . . . . . . . . . . . . . 512
Table space input/output error recovery . . . . . . . . . . . . . . . . . . . . . . . . . 513
DB2 catalog or directory input/output errors . . . . . . . . . . . . . . . . . . . . . . . . 514
Integrated catalog facility failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . 515
Recovery when the VSAM volume data set (VVDS) is destroyed . . . . . . . . . . . . . . . . 515
Out of disk space or extent limit recovery . . . . . . . . . . . . . . . . . . . . . . . . 516
Violation of referential constraint recovery . . . . . . . . . . . . . . . . . . . . . . . . . 520
Distributed data facility failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . 520
Conversation failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
Communications database failure recovery . . . . . . . . . . . . . . . . . . . . . . . 521
Database access thread failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . 522
VTAM failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
TCP/IP failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
Remote logical unit failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . 523
Indefinite wait condition recovery . . . . . . . . . . . . . . . . . . . . . . . . . . 524
Security failure recovery for database access threads . . . . . . . . . . . . . . . . . . . . 524
Remote site recovery from a disaster at the local site . . . . . . . . . . . . . . . . . . . . . 525
Restoring data from image copies and archive logs . . . . . . . . . . . . . . . . . . . . . 525
Using a tracker site for disaster recovery . . . . . . . . . . . . . . . . . . . . . . . . 535
Using data mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 544
Resolving indoubt threads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
Description of the recovery environment . . . . . . . . . . . . . . . . . . . . . . . . 550
Communication failure recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
Making a heuristic decision . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
Recovery from an IMS outage that results in an IMS cold start . . . . . . . . . . . . . . . . . 553
Recovery from a DB2 outage at a requester that results in a DB2 cold start . . . . . . . . . . . . . 554
Recovery from a DB2 outage at a server that results in a DB2 cold start . . . . . . . . . . . . . . 557
Correcting a heuristic decision . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Chapter 22. Recovery from BSDS or log failure during restart . . . . . . . . . . . 559
Log initialization or current status rebuild failure recovery . . . . . . . . . . . . . . . . . . . 561
Description of failure during log initialization . . . . . . . . . . . . . . . . . . . . . . 562
Description of failure during current status rebuild . . . . . . . . . . . . . . . . . . . . . 563
Restart by truncating the log . . . . . . . . . . . . . . . . . . . . . . . . . . . . 563
Failure during forward log recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . 570
Understanding forward log recovery failure . . . . . . . . . . . . . . . . . . . . . . . 571
Starting DB2 by limiting restart processing . . . . . . . . . . . . . . . . . . . . . . . 571
Failure during backward log recovery . . . . . . . . . . . . . . . . . . . . . . . . . . 575
Understanding backward log recovery failure . . . . . . . . . . . . . . . . . . . . . . 576
Bypassing backout before restarting . . . . . . . . . . . . . . . . . . . . . . . . . . 576
Failure during a log RBA read request . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Unresolvable BSDS or log data set problem during restart . . . . . . . . . . . . . . . . . . . 578
Preparing for recovery or restart . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
Performing fall back to a prior shutdown point . . . . . . . . . . . . . . . . . . . . . . 579
Failure resulting from total or excessive loss of log data . . . . . . . . . . . . . . . . . . . . 581
Total loss of the log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
Excessive loss of data in the active log . . . . . . . . . . . . . . . . . . . . . . . . . 582
Resolving inconsistencies resulting from a conditional restart . . . . . . . . . . . . . . . . . . 584
Inconsistencies in a distributed environment . . . . . . . . . . . . . . . . . . . . . . . 584
Procedures for resolving inconsistencies . . . . . . . . . . . . . . . . . . . . . . . . 584
Method 1. Recover to a prior point of consistency . . . . . . . . . . . . . . . . . . . . . 585
Method 2. Re-create the table space . . . . . . . . . . . . . . . . . . . . . . . . . . 586
Method 3. Use the REPAIR utility on the data . . . . . . . . . . . . . . . . . . . . . . 586
Chapter 26. Tuning DB2 buffer, EDM, RID, and sort pools . . . . . . . . . . . . . 633
Tuning database buffer pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
Terminology: Types of buffer pool pages . . . . . . . . . . . . . . . . . . . . . . . . 634
Read operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
Write operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 634
Assigning a table space or index to a buffer pool . . . . . . . . . . . . . . . . . . . . . 635
Buffer pool thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
Determining size and number of buffer pools . . . . . . . . . . . . . . . . . . . . . . 639
Choosing a page-stealing algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 642
| Long-term page fix option for buffer pools . . . . . . . . . . . . . . . . . . . . . . . 643
Monitoring and tuning buffer pools using online commands . . . . . . . . . . . . . . . . . 643
Using OMEGAMON to monitor buffer pool statistics . . . . . . . . . . . . . . . . . . . . 645
| Tuning EDM storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 647
| EDM storage space handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . 648
Using z/OS workload management to set performance objectives . . . . . . . . . . . . . . . . 705
CICS design options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
IMS design options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
TSO design options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
DB2 QMF design options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
| Materialized query table examples shipped with DB2 . . . . . . . . . . . . . . . . . . . . . 862
Chapter 36. Monitoring and tuning stored procedures and user-defined functions . . . 983
Controlling address space storage . . . . . . . . . . . . . . . . . . . . . . . . . . . 983
Assigning procedures and functions to WLM application environments . . . . . . . . . . . . . . . 984
Providing DB2 cost information for accessing user-defined table functions . . . . . . . . . . . . . . 985
Monitoring stored procedures with the accounting trace . . . . . . . . . . . . . . . . . . . . 986
Accounting for nested activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . 988
Comparing the types of stored procedure address spaces . . . . . . . . . . . . . . . . . . . . 989
Appendix A. DB2 sample tables . . . . . . . . . . . . . . . . . . . . . . . . 995
Activity table (DSN8810.ACT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
Department table (DSN8810.DEPT) . . . . . . . . . . . . . . . . . . . . . . . . . . . 996
Employee table (DSN8810.EMP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 998
Employee photo and resume table (DSN8810.EMP_PHOTO_RESUME) . . . . . . . . . . . . . . . 1001
Project table (DSN8810.PROJ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1002
Project activity table (DSN8810.PROJACT) . . . . . . . . . . . . . . . . . . . . . . . . 1003
Employee to project activity table (DSN8810.EMPPROJACT) . . . . . . . . . . . . . . . . . . 1004
| Unicode sample table (DSN8810.DEMO_UNICODE) . . . . . . . . . . . . . . . . . . . . . 1005
Relationships among the sample tables . . . . . . . . . . . . . . . . . . . . . . . . . 1006
Views on the sample tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1006
Storage of sample application tables . . . . . . . . . . . . . . . . . . . . . . . . . . 1011
Storage group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
Databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
Table spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1012
Registers and return codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
Stand-alone log OPEN request . . . . . . . . . . . . . . . . . . . . . . . . . . . 1095
Stand-alone log GET request . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1097
Stand-alone log CLOSE request . . . . . . . . . . . . . . . . . . . . . . . . . . . 1098
Sample application that uses stand-alone log services . . . . . . . . . . . . . . . . . . . 1099
Reading log records with the log capture exit routine. . . . . . . . . . . . . . . . . . . . . 1100
Using z/OS, CICS, and IMS tools . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153
Monitoring system resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1154
Monitoring transaction manager throughput. . . . . . . . . . . . . . . . . . . . . . . 1155
DB2 trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155
Types of traces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1156
Effect on DB2 performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1159
Recording SMF trace data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1159
Activating SMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
Allocating additional SMF buffers . . . . . . . . . . . . . . . . . . . . . . . . . . 1160
Reporting data in SMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1161
Recording GTF trace data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1161
| OMEGAMON . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1162
| Tivoli Decision Support for OS/390 . . . . . . . . . . . . . . . . . . . . . . . . . . 1163
Monitoring application plans and packages . . . . . . . . . . . . . . . . . . . . . . . . 1163
DSNACICS restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1223
DSNACICS debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1223
# The SYSIBM.USERNAMES encryption stored procedure . . . . . . . . . . . . . . . . . . . . 1223
# Environment for DSNLEUSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1224
# Authorization required for DSNLEUSR . . . . . . . . . . . . . . . . . . . . . . . . 1224
# DSNLEUSR syntax diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1224
# DSNLEUSR option descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . 1224
# Example of DSNLEUSR invocation . . . . . . . . . . . . . . . . . . . . . . . . . . 1225
# DSNLEUSR output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1226
# IMS transactions stored procedure (DSNAIMS) . . . . . . . . . . . . . . . . . . . . . . . 1227
# Environment for DSNAIMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
# Authorization required for DSNAIMS . . . . . . . . . . . . . . . . . . . . . . . . . 1227
# DSNAIMS syntax diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
# DSNAIMS option descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227
# Examples of DSNAIMS invocation . . . . . . . . . . . . . . . . . . . . . . . . . . 1230
# Connecting to multiple IMS subsystems with DSNAIMS . . . . . . . . . . . . . . . . . . 1230
# The DB2 EXPLAIN stored procedure . . . . . . . . . . . . . . . . . . . . . . . . . . 1231
# Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231
# Authorization required . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231
# DSN8EXP syntax diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1231
# DSN8EXP option descriptions . . . . . . . . . . . . . . . . . . . . . . . . . . . 1232
# Example of DSN8EXP invocation . . . . . . . . . . . . . . . . . . . . . . . . . . 1232
# DSN8EXP output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1233
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1237
Programming Interface Information . . . . . . . . . . . . . . . . . . . . . . . . . . 1238
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1240
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1241
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1275
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . X-1
Important
In this version of DB2 UDB for z/OS, the DB2 Utilities Suite is available as an
optional product. You must separately order and purchase a license to such
utilities, and discussion of those utility functions in this publication is not
intended to otherwise imply that you have a license to them. See Part 1 of
DB2 Utility Guide and Reference for packaging details.
When referring to a DB2 product other than DB2 UDB for z/OS, this information
uses the product’s full name to avoid ambiguity.
Accessibility
Accessibility features help a user who has a physical disability, such as restricted
mobility or limited vision, to use software products. The major accessibility
features in z/OS products, including DB2 UDB for z/OS, enable users to:
• Use assistive technologies such as screen reader and screen magnifier software
• Operate specific or equivalent features by using only a keyboard
• Customize display attributes such as color, contrast, and font size
Assistive technology products, such as screen readers, function with the DB2 UDB
for z/OS user interfaces. Consult the documentation for the assistive technology
products for specific information when you use assistive technology to access these
interfaces.
Online documentation for Version 8 of DB2 UDB for z/OS is available in the
Information management software for z/OS solutions information center, which is
an accessible format when used with assistive technologies such as screen reader
or screen magnifier software. The Information management software for z/OS
solutions information center is available at the following Web site:
http://publib.boulder.ibm.com/infocenter/dzichelp
The DB2 UDB for z/OS library is available at the following Web site:
www.ibm.com/software/db2zos/library.html
This Web site has a feedback page that you can use to send comments. You can also print and fill out the reader comment form located at the back of this book. You can give the completed form to your local IBM branch office or IBM representative, or you can send it to the address printed on the reader comment form.
If you are new to DB2 UDB for z/OS, begin with The Official Introduction to DB2 UDB for z/OS for extensive conceptual information.
General information about DB2 UDB for z/OS is available from the DB2 UDB for
z/OS World Wide Web page:
http://www.software.ibm.com/data/db2/os390/
Data structures
DB2 data structures described in this section include:
“Databases” on page 5
“Storage groups” on page 5
“Table spaces” on page 5
“Tables” on page 6
“Indexes” on page 6
“Views” on page 7
The brief descriptions here show how the structures fit into an overall view of
DB2.
Figure 1 on page 4 shows how some DB2 structures contain others. To some extent,
the notion of “containment” provides a hierarchy of structures. This section
introduces those structures from the most to the least inclusive.
Databases
A single database can contain all the data associated with one application or with a
group of related applications. Collecting that data into one database allows you to
start or stop access to all the data in one operation and grant authorization for
access to all the data as a single unit. Assuming that you are authorized to do so,
you can access data stored in different databases.
If you create a table space or a table and do not specify a database, the table or
table space is created in the default database, DSNDB04. DSNDB04 is defined for
you at installation time. All users have the authority to create table spaces or tables
in database DSNDB04. The system administrator can revoke those privileges and
grant them only to particular users as necessary.
When you migrate to Version 8, DB2 adopts the default database and default
storage group you used in Version 7. You have the same authority for Version 8 as
you did in Version 7.
Storage groups
The description of a storage group names the group and identifies its volumes and
the VSAM (virtual storage access method) catalog that records the data sets. The
default storage group, SYSDEFLT, is created when you install DB2.
All volumes of a given storage group must have the same device type. But, as
Figure 1 on page 4 suggests, parts of a single database can be stored in different
storage groups.
Table spaces
A table space can consist of a number of VSAM data sets. Data sets are VSAM
linear data sets (LDSs). Table spaces are divided into equal-sized units, called pages,
which are written to or read from disk in one operation. You can specify page sizes
for the data; the default page size is 4 KB.
When you create a table space, you can specify the database to which the table
space belongs and the storage group it uses. If you do not specify the database and
storage group, DB2 assigns the table space to the default database and the default
storage group.
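Example: The following statement is a minimal sketch of creating a table space that names its database and storage group explicitly. The names MYTS and MYDB are illustrative; SYSDEFLT is the default storage group, and BUFFERPOOL BP0 implies the default 4-KB page size.
   CREATE TABLESPACE MYTS
     IN MYDB
     USING STOGROUP SYSDEFLT
     BUFFERPOOL BP0;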
Sample tables: The examples in this book are based on the set of tables described
in Appendix A, “DB2 sample tables,” on page 995. The sample tables are part of
the DB2 licensed program and represent data related to the activities of an
imaginary computer services company, the Spiffy Computer Services Company.
Table 1 shows an example of a DB2 sample table.
Table 1. Example of a DB2 sample table (Department table)
DEPTNO DEPTNAME MGRNO ADMRDEPT
A00 SPIFFY COMPUTER SERVICE DIV. 000010 A00
B01 PLANNING 000020 A00
C01 INFORMATION CENTER 000030 A00
D01 DEVELOPMENT CENTER A00
E01 SUPPORT SERVICES 000050 A00
D11 MANUFACTURING SYSTEMS 000060 D01
D21 ADMINISTRATION SYSTEMS 000070 D01
E11 OPERATIONS 000090 E01
E21 SOFTWARE SUPPORT 000100 E01
Indexes
Each index is based on the values of data in one or more columns of a table. After
you create an index, DB2 maintains the index, but you can perform necessary
maintenance such as reorganizing it or recovering the index.
Indexes take up physical storage in index spaces. Each index occupies its own index
space.
You can use an index for the following purposes:
• To improve performance. Access to data is often faster with an index than without.
• To ensure that a row is unique. For example, a unique index on the employee table ensures that no two employees have the same employee number.
Except for changes in performance, users of the table are unaware that an index is
in use. DB2 decides whether to use the index to access the table. You can influence how indexes affect performance when you calculate the storage size of an index and determine what type of index to use. An index can be partitioning, nonpartitioning, or clustering. For example, you can apportion data by last names, perhaps using one partition for each letter of the alphabet. Your choice of a partitioning scheme is based on how an application accesses data, how much data you have, and how large you expect the total amount of data to grow.
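Example: The following statement is a hedged sketch of the unique index described above, using the sample employee table from Appendix A; the index name is patterned on the sample definitions:
   CREATE UNIQUE INDEX DSN8810.XEMP1
     ON DSN8810.EMP (EMPNO);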
Views
Views allow you to shield some table data from end users. A view can be based on
other views or on a combination of views and tables.
When you define a view, DB2 stores the definition of the view in the DB2 catalog.
However, DB2 does not store any data for the view itself, because the data already
exists in the base table or tables.
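Example: The following statement is a minimal sketch of a view over the sample department table that exposes only some of its columns to end users; the view name is illustrative:
   CREATE VIEW VDEPT_PUBLIC AS
     SELECT DEPTNO, DEPTNAME, ADMRDEPT
     FROM DSN8810.DEPT;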
System structures
DB2 system structures described in this section include:
“DB2 catalog”
“DB2 directory” on page 8
“Active and archive logs” on page 8
“Bootstrap data set (BSDS)” on page 9
“Buffer pools” on page 9
“Data definition control support database” on page 9
“Resource limit facility database” on page 10
“Work file database” on page 10
“TEMP database” on page 10
DB2 catalog
The DB2 catalog consists of tables of data about everything defined to the DB2
system, including table spaces, indexes, tables, copies of table spaces and indexes,
storage groups, and so forth. The system database DSNDB06 contains the DB2
catalog.
When you create, alter, or drop any structure, DB2 inserts, updates, or deletes rows
of the catalog that describe the structure and tell how the structure relates to other
structures. For example, SYSIBM.SYSTABLES is one catalog table that records
information when a table is created. DB2 inserts a row into SYSIBM.SYSTABLES
that includes the table name, its owner, its creator, and the name of its table space
and its database.
Because the catalog consists of DB2 tables in a DB2 database, authorized users can
use SQL statements to retrieve information from it.
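Example: The following query is a hedged sketch of retrieving the catalog row that SYSIBM.SYSTABLES records for the sample employee table:
   SELECT NAME, CREATOR, DBNAME, TSNAME
   FROM SYSIBM.SYSTABLES
   WHERE NAME = 'EMP'
     AND CREATOR = 'DSN8810';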
DB2 directory
The DB2 directory contains information that DB2 uses during normal operation.
You cannot access the directory using SQL, although much of the same information
is contained in the DB2 catalog, for which you can submit queries. The structures
in the directory are not described in the DB2 catalog.
The directory consists of a set of DB2 tables stored in five table spaces in system
database DSNDB01. Each of the table spaces listed in Table 2 is contained in a
VSAM linear data set.
Table 2. Directory table spaces
SCT02 (skeleton cursor table, SKCT)
   Contains the internal form of SQL statements contained in an application. When you bind a plan, DB2 creates a skeleton cursor table in SCT02.
SPT01 (skeleton package table)
   Similar to SCT02 except that the skeleton package is created when you bind a package.
SYSLGRNX (log range)
   Tracks the opening and closing of table spaces, indexes, or partitions. By tracking this information and associating it with relative byte addresses (RBAs) as contained in the DB2 log, DB2 can reduce recovery time by reducing the amount of log that must be scanned for a particular table space, index, or partition.
SYSUTILX (system utilities)
   Contains a row for every utility job that is running. The row stays until the utility is finished. If the utility terminates without completing, DB2 uses the information in the row when you restart the utility.
DBD01 (database descriptor, DBD)
   Contains internal information, called database descriptors (DBDs), about the databases that exist within DB2.
Active and archive logs
DB2 writes each log record to a disk data set called the active log. When the active log is full, DB2 copies the contents of the active log to a disk or magnetic tape data set called the archive log.
• A single active log contains up to 93 active log data sets.
• With dual logging, the active log has twice the capacity for active log data sets, because two identical copies of the log records are kept.
Each active log data set is a single-volume, single-extent VSAM LDS.
Bootstrap data set (BSDS)
Because the BSDS is essential to recovery in the event of subsystem failure, during installation DB2 automatically creates two copies of the BSDS and, if space permits, places them on separate volumes.
Buffer pools
Buffer pools are areas of virtual storage in which DB2 temporarily stores pages of
table spaces or indexes. When an application program accesses a row of a table,
DB2 retrieves the page containing that row and places the page in a buffer. If the
needed data is already in a buffer, the application program does not have to wait
for it to be retrieved from disk, significantly reducing the cost of retrieving the
page.
Buffer pools require monitoring and tuning. The size of buffer pools is critical to
the performance characteristics of an application or group of applications that
access data in those buffer pools.
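Example: The following command is a hedged sketch of displaying the status of buffer pool BP0; the -DSN1 prefix is the default subsystem command prefix and varies by installation:
   -DSN1 DISPLAY BUFFERPOOL(BP0) DETAIL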
When you use Parallel Sysplex data sharing, buffer pools map to structures called
group buffer pools. These structures reside in a special PR/SM™ LPAR logical
partition called a coupling facility, which enables several DB2s to share information
and control the coherency of data.
Buffer pools reside in DB2’s DBM1 primary address space, which offers the best performance. In Version 8, the maximum size of a buffer pool increases to 1 TB.
Resource limit facility database
You can establish a single limit for all users, different limits for individual users, or both. You can choose to have these limits applied before the statement is executed (this is called predictive governing) or while a statement is running (sometimes called reactive governing). You can even use both modes of governing. You define these limits in one or more resource limit specification tables (RLSTs).
TEMP database
The TEMP database is for declared temporary tables only. DB2 stores all declared
temporary tables in this database. You can create one TEMP database for each DB2
subsystem or data sharing member.
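Example: The following statement is a hedged sketch of creating a TEMP database; the AS TEMP clause designates the database for declared temporary tables, and the database name is illustrative:
   CREATE DATABASE MYTEMPDB AS TEMP;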
Table 3. More information about DB2 structures (continued)
For more information about...  See...
• CDB: DB2 Installation Guide
• Directory, data set naming conventions: DB2 Installation Guide
• Logs: Chapter 17, “Managing the log and the bootstrap data set,” on page 393
• BSDS usage, functions: “Managing the bootstrap data set (BSDS)” on page 403
• Buffer pools, tuning: Chapter 26, “Tuning DB2 buffer, EDM, RID, and sort pools,” on page 633; DB2 Command Reference
• Group buffer pools: DB2 Data Sharing: Planning and Administration
• Data definition control support database: Chapter 10, “Controlling access through a closed application,” on page 215
• RLST: “Resource limit facility (governor)” on page 683
• Work file and TEMP database, defining: Volume 2 of DB2 SQL Reference
Commands
The commands are divided into the following categories:
• DSN command and subcommands
• DB2 commands
• IMS commands
• CICS attachment facility commands
• IRLM commands
• TSO CLIST commands
To enter a DB2 command from an authorized z/OS console, you use a subsystem command prefix (composed of 1 to 8 characters) at the beginning of the command. The default subsystem command prefix is -DSN1, which you can change when you install or migrate DB2.
Example: The following command starts the DB2 subsystem that is associated with
the command prefix -DSN1:
-DSN1 START DB2
Utilities
You use utilities to perform many of the tasks required to maintain DB2 data.
Those tasks include loading a table, copying a table space, or recovering a database
to a previous point in time.
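Example: The following utility control statement is a minimal sketch of copying one of the sample table spaces; the complete utility job, including JCL, is described in DB2 Utility Guide and Reference:
   COPY TABLESPACE DSN8D81A.DSN8S81E COPYDDN(SYSCOPY)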
High availability
It is not necessary to start or stop DB2 often. DB2 continually adds function to
improve availability, especially in the following areas:
• “Daily operations and tuning”
• “Backup and recovery”
• “Restart” on page 13
Many factors affect the availability of the databases. Here are some key points to be aware of:
• You should limit your use of, and understand the options of, utilities such as COPY and REORG.
  – You can recover online such structures as table spaces, partitions, data sets, a range of pages, a single page, and indexes.
  – You can recover table spaces and indexes at the same time to reduce recovery time.
  – With some options on the COPY utility, you can read and update a table space while copying it.
• I/O errors have the following effects:
  – I/O errors on a range of data do not affect availability to the rest of the data.
  – If an I/O error occurs when DB2 is writing to the log, DB2 continues to operate.
  – If an I/O error is on the active log, DB2 moves to the next data set. If the error is on the archive log, DB2 dynamically allocates another data set.
• Documented disaster recovery methods are crucial in the case of disasters that might cause a complete shutdown of your local DB2 system.
• If DB2 is forced to a single mode of operations for the bootstrap data set or logs, you can usually restore dual operation while DB2 continues to run.
Restart
A key to the perception of high availability is getting the DB2 subsystem back up
and running quickly after an unplanned outage.
• Some restart processing can occur concurrently with new work. Also, you can choose to postpone some processing.
• During a restart, DB2 applies data changes from the log. This technique ensures that data changes are not lost, even if some data was not written at the time of the failure. Some of the process of applying log changes can run in parallel.
• You can register DB2 with the Automatic Restart Manager of z/OS. This facility automatically restarts DB2 should it go down as a result of a failure.
Address spaces
DB2 uses several different address spaces for the following purposes:
Database services
ssnmDBM1 manipulates most of the structures in user-created databases. In Version 8, storage areas such as buffer pools reside above the 2-GB bar in the ssnmDBM1 address space. With 64-bit virtual addressing to access these storage areas, buffer pools can scale to extremely large sizes.
System services
ssnmMSTR performs a variety of system-related functions.
Distributed data facility
ssnmDIST provides support for remote requests.
IRLM (Internal resource lock manager)
IRLMPROC controls DB2 locking.
DB2-established
ssnmSPAS, for stored procedures, provides an isolated execution
environment for user-written SQL programs at a DB2 server.
You cannot share IRLM between DB2s or between DB2 and IMS. (IRLM is also
shipped with IMS.) If you are running a DB2 data sharing group, there is a
corresponding IRLM group.
Administering IRLM
IRLM requires some control and monitoring. The external interfaces to the IRLM include:
• Installation: Install IRLM when you install DB2. Consider that locks take up storage, and adequate storage for IRLM is crucial to the performance of your system. Another important performance item is to make the priority of the IRLM address space higher than that of all the DB2 address spaces.
• Commands: Some z/OS commands specifically for IRLM let you modify parameters, display information about the status of the IRLM and its storage use, and start and stop IRLM.
• Tracing: DB2’s trace facility gives you the ability to trace lock interactions. You can use z/OS trace commands or IRLMPROC options to control diagnostic traces for IRLM. You normally use these traces under the direction of IBM Service.
An attachment facility provides the interface between DB2 and another environment.
Figure 2 shows the z/OS attachment facilities with interfaces to DB2.
[Figure 2. Attachment facilities with interfaces to DB2: TSO (through DSN or CAF), CICS, IMS, RRS, and WebSphere]
| WebSphere
| WebSphere products that are integrated with DB2 include WebSphere Application
| Server, WebSphere Studio, and Transaction Servers & Tools. In the WebSphere
| environment, you can use the RRS attachment facility.
CICS
The Customer Information Control System (CICS) attachment facility provided with
the CICS transaction server lets you access DB2 from CICS. After you start DB2,
you can operate DB2 from a CICS terminal. You can start and stop CICS and DB2
independently, and you can establish or terminate the connection between them at
any time. You also have the option of allowing CICS to connect to DB2
automatically.
The CICS attachment facility also provides CICS applications with access to DB2
data while operating in the CICS environment. CICS applications, therefore, can
access both DB2 data and CICS data. In case of system failure, CICS coordinates
recovery of both DB2 and CICS data.
Examples: EXEC CICS WAIT, EXEC CICS ABEND
With proper planning, you can include DB2 in a CICS XRF recovery scenario.
To a CICS terminal user, application programs that access both CICS and DB2 data
appear identical to application programs that access only CICS data.
Even though you perform DB2 functions through CICS, you need to have the TSO
attachment facility and ISPF to take advantage of the online functions supplied
with DB2 to install and customize your system. You also need the TSO attachment
to bind application plans and packages.
IMS
The Information Management System (IMS) attachment facility allows you to access
DB2 from IMS. The IMS attachment facility receives and interprets requests for
access to DB2 databases using exits provided by IMS subsystems. Usually, IMS
connects to DB2 automatically with no operator intervention.
In addition to Data Language I (DL/I) and Fast Path calls, IMS applications can
make calls to DB2 using embedded SQL statements. In case of system failure, IMS
coordinates recovery of both DB2 and IMS data.
With proper planning, you can include DB2 in an IMS XRF recovery scenario.
Application programming with IMS: With the IMS attachment facility, DB2
provides database services for IMS dependent regions. DL/I batch support allows
users to access both IMS data (DL/I) and DB2 data in the IMS batch environment,
which includes:
• Access to DB2 and DL/I data from application programs.
• Coordinated recovery through a two-phase commit process.
• Use of the IMS extended restart (XRST) and symbolic checkpoint (CHKP) calls by application programs to coordinate recovery with IMS, DB2, and generalized sequential access method (GSAM) files.
IMS programmers writing the data communication portion of application programs
do not need to alter their coding technique to write the data communication
portion when accessing DB2; only the database portions of the application
programs change. For the database portions, programmers code SQL statements to
retrieve or modify data in DB2 tables.
To an IMS terminal user, IMS application programs that access DB2 appear identical to IMS application programs that access only IMS data.
Even though you perform DB2 functions through IMS, you need the TSO
attachment facility and ISPF to take advantage of the online functions supplied
with DB2 to install and customize your system. You also need the TSO attachment
facility to bind application plans and packages.
TSO
The Time Sharing Option (TSO) attachment facility is required for binding
application plans and packages and for executing several online functions that are
provided with DB2.
Using the TSO attachment facility, you can access DB2 by running in either
foreground or batch. You gain foreground access through a TSO terminal; you gain
batch access by invoking the TSO terminal monitor program (TMP) from a batch
job.
Whether you access DB2 in foreground or batch, attaching through the TSO
attachment facility and the DSN command processor makes access easier. DB2
subcommands that execute under DSN are subject to the command size limitations
as defined by TSO. TSO allows authorized DB2 users or jobs to create, modify, and
maintain databases and application programs. You invoke the DSN processor from
the foreground by issuing a command at a TSO terminal. From batch, first invoke
TMP from within a batch job, and then pass commands to TMP in the SYSTSIN
data set.
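Example: The following JCL fragment is a hedged sketch of invoking the TMP (IKJEFT01) from a batch job and passing DSN subcommands in the SYSTSIN data set; the program and plan names are illustrative:
   //RUNPGM   EXEC PGM=IKJEFT01
   //SYSTSPRT DD SYSOUT=*
   //SYSTSIN  DD *
    DSN SYSTEM(DSN1)
    RUN PROGRAM(MYPROG) PLAN(MYPLAN)
    END
   /*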
CAF
Most TSO applications must use the TSO attachment facility, which invokes the
DSN command processor. Together, DSN and TSO provide services such as
automatic connection to DB2, attention key support, and translation of return codes
into error messages. However, when using DSN services, your application must
run under the control of DSN.
The call attachment facility (CAF) provides an alternative connection for TSO and
batch applications needing tight control over the session environment. Applications
using CAF can explicitly control the state of their connections to DB2 by using
connection functions that CAF supplies.
RRS
The Resource Recovery Services attachment facility (RRSAF) is a newer implementation of CAF with additional capabilities. z/OS Resource Recovery Services (RRS) is a feature of z/OS that coordinates commit processing of recoverable resources in a z/OS system. DB2 supports use of these
services for DB2 applications that use the RRS attachment facility provided with
DB2. Use the RRS attachment to access resources such as SQL tables, DL/I
databases, MQSeries® messages, and recoverable VSAM files within a single
transaction scope.
The distributed data facility (DDF) enables applications that run in a remote environment that supports
DRDA. These applications can use DDF to access data in DB2 servers. Examples of
application requesters include IBM DB2 Connect and other DRDA-compliant client
products.
With DDF, you can have up to 150 000 distributed threads connect to a DB2 server
at the same time. A thread is a DB2 structure that describes an application's
connection and traces its progress.
Use stored procedures to reduce processor and elapsed time costs of distributed access. A stored procedure is a user-written SQL program that a requester can invoke at the server. By encapsulating the SQL, many fewer messages flow across the wire.
Local DB2 applications can use stored procedures as well to take advantage of the
ability to encapsulate SQL that is shared among different applications.
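Example: The following embedded SQL statement is a hedged sketch of invoking a stored procedure from an application; the procedure name and host variables are illustrative:
   EXEC SQL CALL SYSPROC.MYPROC (:PARM1, :PARM2);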
The decision to access distributed data has implications for many DB2 activities:
application programming, data recovery, authorization, and so on.
Sharing DB2s must belong to a DB2 data sharing group. A data sharing group is a
collection of one or more DB2 subsystems accessing shared DB2 data. Each DB2
subsystem belonging to a particular data sharing group is a member of that group.
All members of a group use the same shared DB2 catalog and directory.
With data sharing, you can grow your system incrementally by adding additional
central processor complexes and DB2s to the data sharing group. You don’t have to
move part of the workload onto another system, alleviating the need to manage
copies of the data or to use distributed processing to access the data.
You can configure your environment flexibly. For example, you can tailor each
| z/OS image to meet the requirements for the user set on that image. For
processing that occurs during peak workload periods, you can bring up a dormant
DB2 to help process the work.
Recommendation: Use the Security Server to check the security of DB2 users and to
protect DB2 resources. The Security Server provides effective protection for DB2
data by permitting only DB2-mediated access to DB2 data sets.
Consult with your site’s storage administrator about using SMS for DB2 private
data, image copies, and archive logs. For data that is especially
performance-sensitive, there might need to be more manual control over data set
placement.
Table spaces or indexes with data sets larger than 4 gigabytes require
SMS-managed data sets.
Partitioned data set extended (PDSE), a feature of DFSMSdfp, is useful for managing stored procedures that run in a stored procedures address space. PDSE
enables extent information for the load libraries to be dynamically updated,
reducing the need to start and stop the stored procedures address space.
Table 5. More information about the z/OS environment (continued)
For more information about... See...
CICS connections Chapter 16, “Monitoring and controlling DB2
and its connections,” on page 333
CICS administration DB2 Installation Guide
IMS XRF v “Extended recovery facility (XRF)
toleration” on page 440
v IMS Administration Guide: System
DL/I batch Volume 2 of DB2 Application Programming and
SQL Guide
DataPropagator NonRelational IMS DataPropagator: An Introduction
ISPF Volume 2 of DB2 Application Programming and
SQL Guide
Distributed data Volume 1 of DB2 Application Programming and
SQL Guide
Parallel Sysplex data sharing DB2 Data Sharing: Planning and Administration
Chapter 2. Introduction to designing a database: advanced
topics
Part 2, “Designing a database: advanced topics,” on page 23 presents information
about the current release of DB2 UDB for z/OS and selected advanced topics. The
Official Introduction to DB2 UDB for z/OS covers basic information about designing
and implementing a database.
Table 6 shows where you can find more information about topics related to
designing a database.
Table 6. More information about designing a database
For more information about...  See...
• Basic database design concepts for DB2 Universal Database for z/OS (designing tables and views, columns, indexes, and table spaces): The Official Introduction to DB2 UDB for z/OS
• Maintaining data integrity (maintaining referential constraints, defining table check constraints, planning to use triggers): Part 2 of DB2 Application Programming and SQL Guide
• Maintaining data integrity, including implications for the INSERT, UPDATE, DELETE, and DROP SQL statements: Chapter 5 of DB2 SQL Reference
• Maintaining data integrity, including implications for the COPY, QUIESCE, RECOVER, and REPORT utilities: Part 2 of DB2 Utility Guide and Reference
• Compressing data in a table space or a partition: Part 5 (Volume 2) of DB2 Administration Guide
• Designing and using materialized query tables: Part 5 (Volume 2) of DB2 Administration Guide
| DB2 can manage the auxiliary storage requirements of a database by using DB2
storage groups. Data sets in these DB2 storage groups are called DB2-managed data
sets.
Note: These DB2 storage groups are not the same as storage groups that are defined by the DFSMS storage management subsystem (SMS).
Recommendation: Use DB2 storage groups whenever you can, either specifically
or by default.
Here are some of the things that DB2 does for you in managing your auxiliary
storage requirements:
• When a table space is created, DB2 defines the necessary VSAM data sets using VSAM Access Method Services. After the data sets are created, you can process them with Access Method Services commands.
| DB2 page sets are defined as VSAM linear data sets. Prior to Version 8, DB2
| defined all data sets with VSAM control intervals that were 4 KB in size. Beginning
| in Version 8, DB2 can define data sets with variable VSAM control intervals. One
| of the biggest benefits of this change is an improvement in query processing
| performance.
• A value of NO (for the DSVCI subsystem parameter) indicates that a DB2-managed data set is created with a fixed VSAM control interval of 4 KB, regardless of the size of the buffer pool that is used for the table space.
| Table 7 shows the default and compatible control interval sizes for each table space
| page size. For example, a table space with pages 16 KB in size can have a VSAM
| control interval of 4 KB or 16 KB. Control interval sizing has no impact on indexes;
| index pages are always 4 KB in size.
Table 7. Default and compatible control interval sizes
Table space page size   Default control interval size   Compatible control interval sizes
4 KB                    4 KB                            4 KB
8 KB                    8 KB                            4 KB, 8 KB
16 KB                   16 KB                           4 KB, 16 KB
32 KB                   32 KB                           4 KB, 32 KB
| DB2 storage group names are unqualified identifiers of up to 128 characters. A DB2
| storage group name cannot be the same as any other storage group name in the
| DB2 catalog.
After you define a storage group, DB2 stores information about it in the DB2
catalog. (This catalog is not the same as the integrated catalog facility catalog that
describes DB2 VSAM data sets). The catalog table SYSIBM.SYSSTOGROUP has a
row for each storage group, and SYSIBM.SYSVOLUMES has a row for each
volume. With the proper authorization, you can retrieve the catalog information
about DB2 storage groups by using SQL statements. See Appendix F of DB2 SQL
Reference for more information about using SQL statements to retrieve catalog
information about DB2 storage groups.
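Example: The following query is a hedged sketch of listing the volumes of the default storage group by joining the two catalog tables just mentioned:
   SELECT SG.NAME, SG.VCATNAME, VOL.VOLID
   FROM SYSIBM.SYSSTOGROUP SG, SYSIBM.SYSVOLUMES VOL
   WHERE VOL.SGNAME = SG.NAME
     AND SG.NAME = 'SYSDEFLT';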
When you create table spaces and indexes, you name the storage group from
which space is to be allocated. You can also assign an entire database to a storage
group. Try to assign frequently accessed objects (indexes, for example) to fast
devices, and assign seldom-used tables to slower devices. This approach to
choosing storage groups improves performance.
A default storage group, SYSDEFLT, is defined when DB2 is installed. If you are
authorized and do not take specific steps to manage your own storage, you can
still define tables, indexes, table spaces, and databases; DB2 uses SYSDEFLT to
allocate the necessary auxiliary storage. Information about SYSDEFLT, as with any
other storage group, is kept in the catalog tables SYSIBM.SYSSTOGROUP and
SYSIBM.SYSVOLUMES.
For both user-managed and DB2-managed data sets, you need at least one
integrated catalog facility (ICF) catalog—either user or master—that is created with
the ICF. You must identify the catalog of the ICF when you create a storage group
or when you create a table space that does not use storage groups.
If you use DB2 to allocate data to specific volumes, you must assign an SMS
storage class with guaranteed space, and you must manage free space for each
volume to prevent failures during the initial allocation and extension. Using
guaranteed space reduces the benefits of SMS allocation, requires more time for
space management, and can result in more space shortages. You should only use
guaranteed space when space needs are relatively small and do not change.
Recommendation: Let SMS manage your DB2 storage groups; to do this, use
asterisks (nonspecific volume IDs) in the VOLUMES clause.
Deferring allocation of DB2-managed data sets
For example, you might be installing a software program that requires that many
table spaces be created, but your company might not need to use some of those
table spaces; you might prefer not to allocate data sets for the table spaces you will
not be using.
To defer the physical allocation of DB2-managed data sets, use the DEFINE NO
clause of the CREATE TABLESPACE statement. When you specify the DEFINE NO
clause, the table space is created, but DB2 does not allocate (that is, define) the
associated data sets until a row is inserted or loaded into a table in that table
space. The DB2 catalog table SYSIBM.SYSTABLESPART contains a record of the
created table space and an indication that the data sets are not yet allocated.
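For example, the following statement (a sketch; the database, table space, and
storage group names are hypothetical) creates a table space whose data sets are
not allocated until data is inserted or loaded:
CREATE TABLESPACE MYTS IN MYDB
USING STOGROUP SYSDEFLT
DEFINE NO;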
The DEFINE NO clause is not allowed for LOB table spaces, for table spaces in a
work file database or a TEMP database, or for user-defined data sets. (In the case
of user-defined data sets, the table space is created with the USING VCAT clause
of the CREATE TABLESPACE statement).
Do not use the DEFINE NO clause on a table space if you plan to use a tool
outside of DB2 to propagate data into a data set in the table space. When you use
DEFINE NO, the DB2 catalog indicates that the data sets have not yet been
allocated for that table space. Then, if data is propagated from a tool outside of
DB2 into a data set in the table space, the DB2 catalog information does not reflect
the fact that the data set has been allocated. The resulting inconsistency causes DB2
to deny application programs access to the data until the inconsistency is resolved.
Extending DB2-managed data sets
When a data set is created, DB2 allocates a primary allocation space on a volume
that has space available and that is specified in the DB2 storage group. Any
extension to a data set always gets a secondary allocation space.
If new extensions reach the end of the volume, DB2 accesses all candidate volumes
from the DB2 storage group and issues the Access Method Services command
ALTER ADDVOLUMES to add these volumes to the integrated catalog facility
(ICF) catalog as candidate volumes for the data set. DB2 then makes a request to
allocate a secondary extent on any one of the candidate volumes that has space
available. After the allocation is successful, DB2 issues the command ALTER
REMOVEVOLUMES to remove all candidate volumes from the ICF catalog for the
data set.
| DB2 extends data sets when either of the following conditions occurs:
| v The requested space exceeds the remaining space in the data set.
| v 10% of the secondary allocation space (but not over 10 allocation units, based on
| either tracks or cylinders) exceeds the remaining space.
| If DB2 fails to extend a data set with a secondary allocation space because of
| insufficient available space on any single candidate volume of a DB2 storage
| group, DB2 tries again to extend with the requested space if the requested space is
| smaller than the secondary allocation space. Typically, DB2 requests only one
| additional page. In this case, DB2 allocates a small amount of space: two
| allocation units (tracks or cylinders, based on the unit that SECQTY specifies). To
| monitor data set
| extension activity, use IFCID 258 in statistics class 3.
When data extension fails: If a data set uses all possible extents, DB2 cannot
extend that data set. For a partitioned page set, the extension fails only for the
particular partition that DB2 is trying to extend. For nonpartitioned page sets, DB2
cannot extend to a new data set piece, which means that the extension for the
entire page set fails.
| To avoid extension failures, allow DB2 to use the default value for primary space
| allocation and a sliding scale algorithm for secondary extent allocations. To do
this:
v On CREATE TABLESPACE and CREATE INDEX statements, do not specify a
value for the PRIQTY option.
v On ALTER TABLESPACE and ALTER INDEX statements, specify a value of -1
for the PRIQTY option.
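For example, the following statement (the object names are hypothetical) causes an
existing table space to use the default primary space allocation:
ALTER TABLESPACE MYDB.MYTS
PRIQTY -1;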
For those situations in which the default primary quantity value is not large
enough, you can specify a larger value for the PRIQTY option when creating or
altering table spaces and indexes. DB2 always uses a PRIQTY value if one is
explicitly specified.
If you want to prevent DB2 from using the default value for primary space
allocation of table spaces and indexes, specify a non-zero value for the TABLE
SPACE ALLOCATION and INDEX SPACE ALLOCATION parameters on
installation panel DSNTIP7.
Maximum allocation is shown in Table 9 on page 33. This table assumes that the
initial extent that is allocated is one cylinder in size.
Table 9. Maximum allocation of secondary extents

Maximum data set size,   Maximum allocation,   Extents required to
in GB                    in cylinders          reach full size
1                        127                   54
2                        127                   75
4                        127                   107
8                        127                   154
16                       127                   246
32                       559                   172
64                       559                   255
DB2 uses a sliding scale for secondary extent allocations of table spaces and
indexes when:
v You do not specify a value for the SECQTY option of a CREATE TABLESPACE
or CREATE INDEX statement
v You specify a value of -1 for the SECQTY option of an ALTER TABLESPACE or
ALTER INDEX statement.
Otherwise, DB2 always uses a SECQTY value for secondary extent allocations, if
one is explicitly specified.
Exception: For those situations in which the calculated secondary quantity value is
not large enough, you can specify a larger value for the SECQTY option when
creating or altering table spaces and indexes. However, in the case where the
OPTIMIZE EXTENT SIZING parameter is set to YES and you specify a value for
the SECQTY option, DB2 uses the value of the SECQTY option to allocate a
secondary extent only if the value of the option is larger than the value that is
derived from the sliding scale algorithm. The calculation that DB2 uses to make
this determination is:
Actual secondary extent size = max ( min ( ss_extent, MaxAlloc ), SECQTY )
In this calculation, ss_extent represents the value that is derived from the sliding
scale algorithm, and MaxAlloc is either 127 or 559 cylinders, depending on the
maximum potential data set size. This approach allows you to reach the maximum
page set size faster. Otherwise, DB2 uses the value that is derived from the sliding
scale algorithm.
If you do not provide a value for the secondary space allocation quantity, DB2
calculates a secondary space allocation value equal to 10% of the primary space
allocation value and subject to the following conditions:
v The value cannot be less than 127 cylinders for data sets that range in initial size
from less than 1 GB to 16 GB, and cannot be less than 559 cylinders for 32 GB
and 64 GB data sets.
v The value cannot be more than the value that is derived from the sliding scale
algorithm.
The calculation that DB2 uses for the secondary space allocation value is:
Actual secondary extent size = max ( 0.1 × PRIQTY, min ( ss_extent, MaxAlloc ) )
In this calculation, ss_extent represents the value that is derived from the sliding
scale algorithm, and MaxAlloc is either 127 or 559 cylinders, depending on the
maximum potential data set size.
If you do not want DB2 to extend a data set, you can specify a value of 0 for the
SECQTY option. Specifying 0 is a useful way to prevent DSNDB07 work files from
growing out of proportion.
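For example, the following statement (the work file table space name is
hypothetical) prevents further extension of a DSNDB07 table space:
ALTER TABLESPACE DSNDB07.DSN4K01
SECQTY 0;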
If you want to prevent DB2 from using the sliding scale for secondary extent
allocations of table spaces and indexes, specify a value of NO for the OPTIMIZE
EXTENT SIZING parameter on installation panel DSNTIP7.
Managing DB2 data sets with DFSMShsm
You can use DFSMShsm to move data sets that have not been recently used to
slower, less expensive storage devices. Moving the data sets helps to ensure that
disk space is managed efficiently. For more information about using DFSMShsm to
manage DB2 data sets, see MVS Storage Management Library: Storage Management
Subsystem Migration Planning Guide and z/OS DFSMShsm Managing Your Own Data.
Migrating to DFSMShsm
If you decide to use DFSMShsm for your DB2 data sets, you should develop a
migration plan with your system administrator. With user-managed data sets, you
can specify DFSMShsm classes on the Access Method Services DEFINE command.
With DB2 storage groups, you need to develop automatic class selection routines.
General-use Programming Interface
To allow DFSMShsm to manage your DB2 storage groups, you can use one or
more asterisks as volume IDs in your CREATE STOGROUP or ALTER STOGROUP
statement, as shown here:
CREATE STOGROUP G202
VOLUMES ('*')
VCAT DB2SMST;
This example causes all database data set allocations and definitions to use
nonspecific selection through DFSMShsm filtering services.
When you use DFSMShsm and DB2 storage groups, you can use the system
parameters SMSDCFL and SMSDCIX to assign table spaces and indexes to
different DFSMShsm data classes.
v SMSDCFL specifies a DFSMShsm data class for table spaces. If you assign a
value to SMSDCFL, DB2 specifies that value when it uses Access Method
Services to define a data set for a table space.
v SMSDCIX specifies a DFSMShsm data class for indexes. If you assign a value to
SMSDCIX, DB2 specifies that value when it uses Access Method Services to
define a data set for an index.
Before you set the data class system parameters, you need to do two things:
v Define the data classes for your table space data sets and index data sets.
v Code the SMS automatic class selection (ACS) routines to assign indexes to one
SMS storage class and to assign table spaces to a different SMS storage class.
For more information about creating data classes, see z/OS DFSMS: Implementing
System-Managed Storage.
For processes that read more than one archive log data set, such as the RECOVER
utility, DB2 anticipates a DFSMShsm recall of migrated archive log data sets. When
a DB2 process finishes reading one data set, it can continue with the next data set
without delay, because the data set might already have been recalled by
DFSMShsm.
If you accept the default value YES for the RECALL DATABASE parameter on the
Operator Functions panel (DSNTIPO), DB2 also recalls migrated table spaces and
index spaces. At data set open time, DB2 waits for DFSMShsm to perform the
recall. You can specify the amount of time DB2 waits while the recall is being
performed with the RECALL DELAY parameter, which is also on panel DSNTIPO.
If RECALL DELAY is set to zero, DB2 does not wait, and the recall is performed
asynchronously.
You can use System Managed Storage (SMS) to archive DB2 subsystem data sets,
including the DB2 catalog, DB2 directory, active logs, and work file databases.
If a volume has a STOGROUP specified, you must recall that volume only to
volumes of the same device type as others in the STOGROUP.
In addition, you must coordinate the DFSMShsm automatic purge period, the DB2
log retention period, and MODIFY utility usage. Otherwise, the image copies or
logs that you might need during a recovery could already have been deleted.
The DFSMSdss RESTORE command extends a data set differently than DB2, so
you must alter the page set to contain extents defined by DB2. To do this, use
ALTER TABLESPACE to enlarge the primary and secondary space allocation
values for DB2–managed data sets. After you use ALTER TABLESPACE, the new
values take effect only when you use REORG or LOAD REPLACE. Using
RECOVER again does not resolve the extent definition.
For user-defined data sets, define the data sets with larger primary and secondary
space allocation values (see “Managing your own data sets” on page 37).
| The BACKUP SYSTEM utility uses copy pools, which are new constructs in z/OS
| DFSMShsm Version 1 Release 5. A copy pool is a named set of storage groups that
| can be backed up and restored as a unit; DFSMShsm processes the storage groups
| collectively for fast replication. Each DB2 subsystem has up to two copy pools, one
| for databases and one for logs.
| Copy pools are also referred to as source storage groups. Each source storage
| group contains the name of an associated copy-pool backup storage group, which
| contains eligible volumes for the backups. The storage administrator must define
| both the source and target storage groups, and use the following DB2 naming
| convention:
| DSN$locn-name$cp-type
| The variables that are used in this naming convention are described in Table 11 on
| page 37:
| Table 11. Naming convention variables
|
| Variable     Meaning
| DSN          The unique DB2 product identifier
| $            A delimiter. You must use the dollar sign ($) character.
| locn-name    The DB2 location name
| cp-type      The copy pool type. Use DB for database and LG for log.
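| For example, if the DB2 location name is DSNDB0G (a hypothetical name), the
| copy pool for the databases must be named DSN$DSNDB0G$DB, and the copy
| pool for the logs must be named DSN$DSNDB0G$LG.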
| For detailed instructions on how to create storage groups, see the z/OS DFSMSdss
| Storage Administration Reference.
| The DB2 BACKUP SYSTEM and RESTORE SYSTEM utilities invoke DFSMShsm to
| back up and restore the copy pools. DFSMShsm interacts with DFSMSsms to
| determine the volumes that belong to a given copy pool so that the volume-level
| backup and restore functions can be invoked.
| For information about the BACKUP SYSTEM and RESTORE SYSTEM utilities, see
| the DB2 Utility Guide and Reference. For information about recovery procedures that
| use these utilities, see “System-level point in time recovery” on page 479.
Managing your own data sets
You might choose to manage your own VSAM data sets for reasons such as these:
v You have a large linear table space on several data sets. If you manage your own
data sets, you can better control the placement of individual data sets on the
volumes (although you can keep a similar type of control by using
single-volume DB2 storage groups).
v You want to prevent deleting a data set within a specified time period, by using
the TO and FOR options of the Access Method Services DEFINE and ALTER
commands. You can create and manage the data set yourself, or you can create
the data set with DB2 and use the ALTER command of Access Method Services
to change the TO and FOR options.
v You are concerned about recovering dropped table spaces. Your own data set is
not automatically deleted when a table space is dropped, making it easier to
reclaim the data.
To define the required data sets, use DEFINE CLUSTER; to add secondary volumes
to expanding data sets, use ALTER ADDVOLUMES; and to delete data sets, use
DELETE CLUSTER.
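For example, commands such as the following (a sketch; the data set names are
hypothetical and follow the naming convention that is described in this section)
add a candidate volume and delete a cluster:
ALTER DSNCAT.DSNDBD.MYDB.MYTS.I0001.A001 -
ADDVOLUMES(VOL2)

DELETE DSNCAT.DSNDBC.MYDB.MYTS.I0001.A001 CLUSTER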
Define two data sets if you plan to run REORG, using the
FASTSWITCH YES option, with SHRLEVEL CHANGE or SHRLEVEL
REFERENCE. Define one data set with a value of I for y, and one with
a value of J for y.
For more information about defining data sets for REORG, see Part 2 of
DB2 Utility Guide and Reference.
| znnn Data set number. The first digit z of the data set number is represented
by the letter A, B, C, D, or E, which corresponds to the value 0, 1, 2, 3,
or 4 as the first digit of the partition number.
| For partitioned table spaces, if the partition number is less than 1000,
| the data set number is Annn in the data set name (for example, A999
| represents partition 999). For partitions 1000 to 1999, the data set
| number is Bnnn (for example, B000 represents partition 1000). For
| partitions 2000 to 2999, the data set number is Cnnn. For partitions 3000
| to 3999, the data set number is Dnnn. For partitions 4000 up to a
| maximum of 4096, the data set number is Ennn.
| The naming convention for data sets that you define for a partitioned
| index is the same as the naming convention for other partitioned
| objects.
| For simple or segmented table spaces, the number is 001 (preceded by
| A) for the first data set. When little space is available, DB2 issues a
| warning message. If the size of the data set for a simple or a segmented
| table space approaches the maximum limit, define another data set
| with the same name as the first data set and the number 002. The next
| data set will be 003, and so on.
| You can reach the VSAM extent limit for a data set before you reach
| the size limit for a partitioned or a nonpartitioned table space. If this
| happens, DB2 does not extend the data set.
3. Use the DEFINE CLUSTER command to define the size of the primary and
secondary extents of the VSAM cluster. If you specify zero for the secondary
extent size, data set extension does not occur.
4. Define the data sets as LINEAR. Do not use RECORDSIZE or
CONTROLINTERVALSIZE; these attributes are invalid.
5. Use the REUSE option. You must define the data set as REUSE before running
the DSN1COPY utility.
6. Use SHAREOPTIONS(3,3).
The DEFINE CLUSTER command has many optional parameters that do not apply
when DB2 uses the data set. If you use the parameters SPANNED,
EXCEPTIONEXIT, SPEED, BUFFERSPACE, or WRITECHECK, VSAM applies them
to your data set, but DB2 ignores them when it accesses the data set.
The value of the OWNER parameter for clusters that are defined for storage
groups is the first SYSADM authorization ID specified at installation.
When you drop indexes or table spaces for which you defined the data sets, you
must delete the data sets unless you want to reuse them. To reuse a data set, first
commit, and then create a new table space or index with the same name. When
DB2 uses the new object, it overwrites the old information with new information,
which destroys the old data.
DEFINE CLUSTER -
(NAME(DSNCAT.DSNDBC.DSNDB06.SYSUSER.I0001.A001) -
LINEAR -
REUSE -
VOLUMES(DSNV01) -
RECORDS(100 100) -
SHAREOPTIONS(3 3) ) -
DATA -
(NAME(DSNCAT.DSNDBD.DSNDB06.SYSUSER.I0001.A001)) -
CATALOG(DSNCAT)
Figure 3. Defining a VSAM data set for the SYSUSER table space
For user-managed data sets, you must pre-allocate shadow data sets prior to
running REORG with SHRLEVEL CHANGE, REORG with SHRLEVEL
| REFERENCE, or CHECK INDEX with SHRLEVEL CHANGE against the table
space. You can specify the MODEL option for the DEFINE CLUSTER command so
that the shadow is created like the original data set, as shown in Figure 4.
In Figure 4, the instance qualifiers x and y are distinct and are equal to either I or
J. You must determine the correct instance qualifier to use for a shadow data set
by querying the DB2 catalog for the database and table space.

DEFINE CLUSTER -
(NAME('DSNCAT.DSNDBC.DSNDB06.SYSUSER.x0001.A001') -
MODEL('DSNCAT.DSNDBC.DSNDB06.SYSUSER.y0001.A001')) -
DATA -
(NAME('DSNCAT.DSNDBD.DSNDB06.SYSUSER.x0001.A001') -
MODEL('DSNCAT.DSNDBD.DSNDB06.SYSUSER.y0001.A001'))

Figure 4. Defining a shadow data set
For more information about defining data sets for REORG, see Chapter 2 of DB2
Utility Guide and Reference. For more information about defining and managing
VSAM data sets, see DFSMS/MVS: Access Method Services for the Integrated Catalog.
Defining index space storage
Generally, the CREATE INDEX statement creates an index space in the same DB2
database that contains the table on which the index is defined. This is true even if
you defer building the index.
Exceptions:
v If you specify the USING VCAT clause, you create and manage the data sets
yourself.
v If you specify the DEFINE NO clause on a CREATE INDEX statement that uses
the USING STOGROUP clause, DB2 defers the allocation of the data sets for the
index space.
When you use CREATE INDEX, always specify a USING clause. When you specify
USING, you declare whether you want DB2-managed or user-managed data sets.
For DB2-managed data sets, you specify the primary and secondary space
allocation parameters on the CREATE INDEX statement. If you do not specify
USING, DB2 assigns the index data sets to the default storage groups using default
space attributes. For information about how space allocation can affect the
performance of mass inserts, see “Formatting early and speed up formatting” on
page 626.
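For example, the following statement (a sketch; the index, table, and column
names are hypothetical) creates an index with DB2-managed data sets and explicit
space allocations:
CREATE INDEX MYIX
ON MYTAB (ACCTID)
USING STOGROUP SYSDEFLT
PRIQTY 512
SECQTY 64;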
You can specify the USING clause to allocate space for the entire index, or if the
| index is a partitioned index, you can allocate space for each partition. Information
about space allocation for the index is kept in the SYSIBM.SYSINDEXPART table of
the DB2 catalog. Other information about the index is in SYSIBM.SYSINDEXES.
For more information about determining the space required for an index, see
“Calculating the space required for an index” on page 117. For more information
about CREATE INDEX clauses, see Chapter 5 of DB2 SQL Reference.
Chapter 4. Implementing your database design
The information in this chapter is General-use Programming Interface and
Associated Guidance Information, as defined in “Notices” on page 1237.
Table 12 shows where you can find more information about topics that are related
to implementing a database design.
Table 12. More information about implementing a database design

For more information about...                         See...
Basic concepts in implementing a database design      The Official Introduction to DB2
for DB2 Universal Database for z/OS, including:       UDB for z/OS
  v Choosing names for DB2 objects
  v Implementing databases
  v Implementing table spaces, including
    reorganizing data
  v Implementing tables
  v Implementing indexes
  v Implementing referential constraints
  v Implementing views
Details about SQL statements used to implement        DB2 SQL Reference
a database design (CREATE and DECLARE, for
example)
Loading tables with referential constraints           DB2 Utility Guide and Reference
Using the catalog in database design                  Appendix G of DB2 SQL Reference
Implementing databases
In DB2 UDB for z/OS, a database is a logical collection of table spaces and index
spaces. Consider the following factors when deciding whether to define a new
database for a new set of objects:
v You can start and stop an entire database as a unit; you can display the statuses
of all its objects by using a single command that names only the database.
Therefore, place a set of tables that are used together into the same database.
(The same database holds all indexes on those tables.)
v Some operations lock an entire database. For example, some phases of the
LOAD utility prevent some SQL statements (CREATE, ALTER, and DROP) from
using the same database concurrently. Therefore, placing many unrelated tables
in a single database is often inconvenient.
If you use declared temporary tables, you must define a database that is defined
AS TEMP (the TEMP database). DB2 stores all declared temporary tables in the
TEMP database. The majority of the factors described in this section do not apply
to the TEMP database. For details about declared temporary tables, see
“Distinctions between DB2 base tables and temporary tables” on page 48.
# Table spaces are divided into units called pages that are 4 KB, 8 KB, 16 KB, or 32
# KB in size. As a general rule, you should have no more than 50 to 100 table spaces
# in one DB2 database. Following this guideline helps to minimize maintenance,
# increase concurrency, and decrease log volume. If you are using declared
# temporary tables, you must create at least one table space in your TEMP database
# with pages that are 8 KB in size.
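For example, statements such as the following (a sketch; the database and table
space names are hypothetical) create a TEMP database and an 8-KB table space in
it:
CREATE DATABASE TEMPDB AS TEMP;

CREATE TABLESPACE TEMP8K IN TEMPDB
BUFFERPOOL BP8K0
SEGSIZE 16;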
You need to create additional table spaces if your database contains LOB data. For
more information about creating table spaces for LOB data, see Chapter 5 of DB2
SQL Reference.
Data in most table spaces can be compressed, which can allow you to store more
data on each data page. For more information, see “Compressing your data” on
page 670.
Generally, when you use the CREATE TABLESPACE statement with the USING
STOGROUP clause, DB2 allocates data sets for the table space. However, if you
also specify the DEFINE NO clause, you can defer the allocation of data sets until
data is inserted or loaded into a table in the table space. For more information
about deferring data set allocation, see “Deferring allocation of DB2-managed data
sets” on page 30.
You can create simple, segmented, partitioned, and LOB table spaces. For detailed
information about CREATE TABLESPACE, see Chapter 5 of DB2 SQL Reference.
| If you create a table space implicitly, DB2 uses defaults for the space allocation
| attributes. The default values of PRIQTY and SECQTY specify the space allocation
| for the table space. If the value of the TSQTY subsystem parameter is nonzero, it
| determines the default values for PRIQTY and SECQTY. If the value of TSQTY is
| zero, the default values for PRIQTY and SECQTY are determined as described in
| the CREATE TABLESPACE statement in Chapter 5 of DB2 SQL Reference.
When you do not specify a table space name in a CREATE TABLE statement (and
the table space is created implicitly), DB2 derives the table space name from the
name of your table according to these rules:
v The table space name is the same as the table name if these conditions apply:
– No other table space or index space in the database already has that name.
– The table name has no more than eight characters.
– The characters are all alphanumeric, and the first character is not a digit.
v If some other table space in the database already has the same name as the table,
DB2 assigns a name of the form xxxxnyyy, where xxxx is the first four characters
of the table name, and nyyy is a single digit and three letters that guarantees
uniqueness.
DB2 stores this name in the DB2 catalog in the SYSIBM.SYSTABLESPACE table
along with all your other table space names. The rules for LOB table spaces are
explained in Chapter 5 of DB2 SQL Reference.
Data in table spaces is stored and allocated in 4-KB record segments. Thus, an
8-KB page size means two 4-KB records, and a 32-KB page size means eight 4-KB
records. A good starting point is to use the default of 4-KB page sizes when access
to the data is random and only a few rows per page are needed. If row sizes are
very small, using the 4-KB page size is recommended.
However, there are situations in which larger page sizes are needed or
recommended:
v When the size of individual rows is greater than 4 KB. In this case, you must
use a larger page size. When considering the size of work file table spaces,
remember that some SQL operations, such as joins, can create result rows that
do not fit in a 4-KB page; such operations can require a work file table space
with a larger page size.
| The maximum number of partitions for a table space depends on the page size and
| on the DSSIZE. The size of the table space depends on how many partitions are in
| the table space and on the DSSIZE. For specific information about the maximum
| number of partitions and the total size of the table space, given the page size and
| the DSSIZE, see the CREATE TABLESPACE statement in Chapter 5 of DB2 SQL
| Reference.
Choosing a page size for LOBs
For example, if you have a 17-KB LOB, the 4-KB page size is the most efficient for
storage. A 17-KB LOB requires five 4-KB pages for a total of 20 KB of storage
space. Pages that are 8 KB, 16 KB, and 32 KB in size waste more space, because
they require 24 KB, 32 KB, and 32 KB, respectively, for the LOB.
Table 13 on page 47 shows that the number of data pages is lower for larger page
sizes, but larger page sizes might have more unused space.
Table 13. Relationship between LOB size and data pages based on page size

LOB size        Page size   LOB data pages   % Non-LOB data or unused space
262 144 bytes   4 KB        64               1.6
                8 KB        32               3.0
                16 KB       16               5.6
                32 KB       8                11.1
4 MB            4 KB        1029             0.78
                8 KB        513              0.39
                16 KB       256              0.39
                32 KB       128              0.78
33 MB           4 KB        8234             0.76
                8 KB        4106             0.39
                16 KB       2050             0.19
                32 KB       1024             0.10
Choosing a page size based on average LOB size: If you know that all of your
LOBs are not the same size, you can still make an estimate of what page size to
choose. To estimate the average size of a LOB, you need to add a percentage to
account for unused space and control information. To estimate the average size of
a LOB value, use the following formula:
LOB size = (average LOB length) × 1.05
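For example, if the average LOB length is 17 KB (17 408 bytes), the estimate is
17 408 × 1.05 ≈ 18 278 bytes, or about 17.9 KB. Because this value is greater than
16 KB, Table 14 suggests a 32-KB page size.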
Table 14 contains some suggested page sizes for LOBs with the intent to reduce the
amount of I/O (getpages).
Table 14. Suggested page sizes based on average LOB length
Average LOB size (n) Suggested page size
n ≤ 4 KB 4 KB
4 KB < n ≤ 8 KB 8 KB
8 KB < n ≤ 16 KB 16 KB
16 KB < n 32 KB
General guidelines for LOBs of same size: If your LOBs are all the same size, you
can fairly easily choose a page size that uses space efficiently without sacrificing
performance. For LOBs that are all the same size, consider the alternative in
Table 15 to maximize your space savings.
Table 15. Suggested page sizes when LOBs are the same size
LOB size (y) Suggested page size
y ≤ 4 KB 4 KB
4 KB < y ≤ 8 KB 8 KB
8 KB < y ≤ 12 KB 4 KB
12 KB < y ≤ 16 KB 16 KB
Implementing tables
This section discusses the following topics:
v “Distinctions between DB2 base tables and temporary tables”
v “Implementing table-controlled partitioning” on page 50
Table 16. Important distinctions between DB2 base tables and DB2 temporary tables (continued)

Area of distinction: Table instantiation and ability to share data
Base tables: CREATE TABLE statement creates one empty instance of the table, and
all application processes use that one instance of the table. The table and data are
persistent.
Created temporary tables: CREATE GLOBAL TEMPORARY TABLE statement does
not create an instance of the table. The first implicit or explicit reference to the
table in an OPEN, SELECT, INSERT, or DELETE operation executed by any
program in the application process creates an empty instance of the given table.
Each application process has its own unique instance of the table, and the instance
is not persistent beyond the life of the application process.
Declared temporary tables: DECLARE GLOBAL TEMPORARY TABLE statement
creates an empty instance of the table for the application process. Each application
process has its own unique instance of the table, and the instance is not persistent
beyond the life of the application process.

Area of distinction: References to the table in application processes
Base tables: References to the table name in multiple application processes refer to
the same single persistent table description and same instance at the current
server. If the table name being referenced is not qualified, DB2 implicitly qualifies
the name using the standard DB2 qualification rules applied to the SQL
statements. The name can be a two-part or three-part name.
Created temporary tables: References to the table name in multiple application
processes refer to the same single persistent table description but to a distinct
instance of the table for each application process at the current server. If the table
name being referenced is not qualified, DB2 implicitly qualifies the name using the
standard DB2 qualification rules applied to the SQL statements. The name can be
a two-part or three-part name.
Declared temporary tables: References to that table name in multiple application
processes refer to a distinct description and instance of the table for each
application process at the current server. References to the table name in an SQL
statement (other than the DECLARE GLOBAL TEMPORARY TABLE statement)
must include SESSION as the qualifier (the first part in a two-part table name or
the second part in a three-part name). If the table name is not qualified with
SESSION, DB2 assumes the reference is to a base table.

Area of distinction: Table privileges and authorization
Base tables: The owner implicitly has all table privileges on the table and the
authority to drop the table. The owner’s table privileges can be granted and
revoked, either individually or with the ALL clause. Another authorization ID can
access the table only if it has been granted appropriate privileges for the table.
Created temporary tables: The owner implicitly has all table privileges on the table
and the authority to drop the table. The owner’s table privileges can be granted
and revoked, but only with the ALL clause; individual table privileges cannot be
granted or revoked. Another authorization ID can access the table only if it has
been granted ALL privileges for the table.
Declared temporary tables: PUBLIC implicitly has all table privileges on the table
without GRANT authority and has the authority to drop the table. These table
privileges cannot be granted or revoked. Any authorization ID can access the table
without a grant of any privileges for the table.

Area of distinction: Indexes and other SQL statement support
Base tables: Indexes and SQL statements that modify data (INSERT, UPDATE,
DELETE, and so on) are supported.
Created temporary tables: Indexes, UPDATE (searched or positioned), and
DELETE (positioned only) are not supported.
Declared temporary tables: Indexes and SQL statements that modify data
(INSERT, UPDATE, DELETE, and so on) are supported.
| In DB2 Version 8, the partitioning index is no longer required. You can now specify
| the partitioning key and the limit key values for a table in a partitioned table space
| by using the PARTITION BY clause and the PARTITION ENDING AT clause of the
| CREATE TABLE statement. This type of partitioning is called table-controlled
| partitioning.
| Example: Assume that you need to create a large transaction table that includes the
| date of the transaction in a column named POSTED. You want the transactions for
| each month in a separate partition. To create the table, issue the following
| statement:
| CREATE TABLE TRANS
| (ACCTID ...,
| STATE ...,
| POSTED ...,
| ... , ...)
| PARTITION BY (POSTED)
| (PARTITION 1 ENDING AT ('01/31/2003'),
| PARTITION 2 ENDING AT ('02/28/2003'),
| ...
| PARTITION 13 ENDING AT ('01/31/2004'));
| Automatic conversion
| DB2 automatically converts an index-controlled partitioned table space to a
| table-controlled partitioned table space if you perform any of the following
| operations:
| v Use CREATE INDEX with the PARTITIONED clause to create a partitioned
| index on an index-controlled partitioned table space.
| v Use CREATE INDEX with a PART VALUES clause and without a CLUSTER
| clause to create a partitioning index.
| DB2 stores the specified high limit key value instead of the default high limit
| key value.
| v Use ALTER INDEX with the NOT CLUSTER clause on a partitioning index that
| is on an index-controlled partitioned table space.
| v Use DROP INDEX to drop a partitioning index on an index-controlled
| partitioned table space.
| After the conversion to table-controlled partitioning, the SQL statements that you
| used to create the tables and indexes are no longer valid. For example, after
| dropping a partitioning index on an index-controlled partitioned table space, an
| attempt to recreate the index by issuing the same CREATE INDEX statement that
| you originally used would fail because the boundary partitions are now under the
| control of the table.
| With table-controlled partitioning, DB2 can restrict the insertion of null values into
| a table with nullable partitioning columns, depending on the order of the
| partitioning key:
# v If the partitioning key is ascending and the highest value of the key column is
# not MAXVALUE, DB2 prevents the INSERT of a row with a null value for the
# key column. If the highest value of the key column is MAXVALUE, the row is
# inserted into the last partition.
| v If the partitioning key is descending, DB2 allows the INSERT of a row with a
| null value for the key column. The row is inserted into the first partition.
| Example: Assume that a partitioned table space is created with the following SQL
| statements:
| CREATE TABLESPACE TS IN DB
| USING STOGROUP SG
| NUMPARTS 4 BUFFERPOOL BP0;
|
| CREATE TABLE TB (C01 CHAR(5),
| C02 CHAR(5) NOT NULL,
| C03 CHAR(5) NOT NULL)
| IN DB.TS
| PARTITION BY (C01)
| (PARTITION 1 ENDING AT ('10000'),
| PARTITION 2 ENDING AT ('20000'),
| PARTITION 3 ENDING AT ('30000'),
| PARTITION 4 ENDING AT ('40000'));
| Because the CREATE TABLE statement does not specify the order in which to put
| entries, DB2 puts them in ascending order by default. DB2 subsequently prevents
| any INSERT into the TB table of a row with a null value for partitioning column
| C01. If the CREATE TABLE statement had specified the key as descending, DB2
| would subsequently have allowed an INSERT into the TB table of a row with a
| null value for partitioning column C01. DB2 would have inserted the row into
| partition 1.
| With index-controlled partitioning, DB2 does not restrict the insertion of null
| values into a table with nullable partitioning columns.
| Example: Assume that a partitioned table space is created with the following SQL
| statements:
| CREATE TABLESPACE TS IN DB
| USING STOGROUP SG
| NUMPARTS 4 BUFFERPOOL BP0;
|
| CREATE TABLE TB (C01 CHAR(5),
| C02 CHAR(5) NOT NULL,
| C03 CHAR(5) NOT NULL)
| IN DB.TS;
|
| CREATE INDEX PI ON TB(C01) CLUSTER
| (PARTITION 1 ENDING AT ('10000'),
| PARTITION 2 ENDING AT ('20000'),
| PARTITION 3 ENDING AT ('30000'),
| PARTITION 4 ENDING AT ('40000'));
| Regardless of the entry order, DB2 allows an INSERT into the TB table of a row
| with a null value for partitioning column C01. If the entry order is ascending, DB2
| inserts the row into partition 4; if the entry order is descending, DB2 inserts the
| row into partition 1. Only if the table space is created with the LARGE keyword
# does DB2 prevent the insertion of a null value into the C01 column.
| Implementing indexes
| DB2 uses indexes not only to enforce uniqueness on column values, as for parent
| keys, but also to cluster data, to partition tables, to provide access paths to data for
| queries, and to order retrieved data without a sort.
| Types of indexes
| This section describes the various types of indexes and summarizes what you
| should consider when creating a particular type:
| v “Unique indexes” on page 54
| v “Clustering indexes” on page 54
| v “Partitioning indexes” on page 54
| v “Secondary indexes” on page 55
| This section uses a transaction table named TRANS to illustrate the various types
| of indexes. Assume that the table has many columns, but you are interested in
| only the following columns:
| v ACCTID, the customer account ID
| v STATE, the state in which the transaction occurred
| v POSTED, the date of the transaction
| Unique indexes
| When you define a unique index on a DB2 table, you ensure that no duplicate
| values of the index key exist in the table. For example, creating a unique index on
| the ACCTID column ensures that no duplicate values of the customer account ID
| are in the TRANS table:
| CREATE UNIQUE INDEX IX1
| ON TRANS (ACCTID);
| If an index key allows nulls for some of its column values, you can use the
| WHERE NOT NULL clause to ensure that the nonnull values of the index key are
| unique.
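| For example, the following statement (a sketch; assume that REFNUM is a
| nullable column of TRANS, and the index name is illustrative) enforces
| uniqueness only among the nonnull values:
| CREATE UNIQUE WHERE NOT NULL INDEX IX7
| ON TRANS (REFNUM);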
| Clustering indexes
| When you define a clustering index on a DB2 table, you direct DB2 to insert rows
| into the table in the order of the clustering key values. The first index that you
| define on the table serves implicitly as the clustering index unless you explicitly
| specify CLUSTER when you create or alter another index. For example, if you first
| define a unique index on the ACCTID column of the TRANS table, DB2 inserts
| rows into the TRANS table in the order of the customer account number unless
| you explicitly define another index to be the clustering index.
| You can specify CLUSTER for any index, whether or not it is a partitioning index.
| For example, suppose that you want the rows of the TRANS table to be ordered by
| the POSTED column. Issue the statement:
| CREATE INDEX IX2
| ON TRANS (POSTED)
| CLUSTER;
| For more information, see “Altering the clustering index” on page 93 and The
| Official Introduction to DB2 UDB for z/OS.
| Partitioning indexes
| Before DB2 Version 8, when you defined a partitioning index on a table in a
| partitioned table space, you specified the partitioning key and the limit key values
| in the PART VALUES clause of the CREATE INDEX statement. This type of
| partitioning is referred to as index-controlled partitioning.
| Beginning with DB2 Version 8, you can define table-controlled partitioning with
| the CREATE TABLE statement (which is described in “Implementing
| table-controlled partitioning” on page 50). A partitioning index is then defined to be
| an index where the leftmost columns are the partitioning columns of the table; the
| index can, but need not, be partitioned. If it is partitioned, a partitioning index is
| partitioned in the same way as the underlying data in the table.
| For example, assume that the partitioning scheme for the TRANS table is defined
| by the CREATE TABLE statement in “Implementing table-controlled partitioning”
| on page 50
| on page 50. The rows in the table are partitioned by the transaction date in the
| POSTED column. The following statement defines an index where the column of
| the index is the same as the partitioning column of the table:
| CREATE INDEX IX3
| ON TRANS (POSTED);
| When you create a partitioning index, DB2 puts the last partition of the table space
| into a REORG-pending (REORP) state.
| Secondary indexes
| A secondary index is any index that is not a partitioning index. You can create an
| index on a table to enforce a uniqueness constraint, to cluster data, or most
| typically to provide access paths to data for queries.
| The usefulness of an index depends on the columns in its key and on the
| cardinality of the key. Columns that you use frequently in performing selection,
| join, grouping, and ordering operations are good candidates for keys. In addition,
| the number of distinct values in an index key for a large table must be sufficient
| for DB2 to use the index for data retrieval; otherwise, DB2 could choose to perform
| a table space scan.
| A secondary index can be partitioned or not. This section discusses the two types
| of secondary indexes:
| v “Nonpartitioned secondary index (NPSI)”
| v “Data-partitioned secondary index (DPSI)”
| Nonpartitioned secondary index (NPSI)
| Example: Assume that you want an index on the STATE column of the TRANS
| table. The following statement creates a nonpartitioned secondary index (the
| index name IX4 is illustrative):
| CREATE INDEX IX4
| ON TRANS (STATE);
| DB2 can use this index to access data with a particular value for STATE. However,
| if the query includes a predicate that references only a single partition of the table,
| the keys for that partition are scattered throughout the index. A better solution to
| accessing single partitions is data-partitioned secondary indexes (DPSIs).
| Data-partitioned secondary index (DPSI)
| However, the use of data-partitioned secondary indexes does not always improve
| the performance of queries. For example, for queries with predicates that reference
| only the columns in the key of the DPSI, DB2 must probe each partition of the
| index for values that satisfy the predicate.
| Example: Assume that the transaction date in the POSTED column is the
| partitioning key of the table and that the rows are ordered by the transaction date.
| You want an index on the STATE column that is partitioned the same as the data
| in the table. Issue the following statement:
| CREATE INDEX IX5 ON TRANS(STATE)
| PARTITIONED;
| DB2 can use this index to access data with a particular value for STATE within
| partitions that are specified in a predicate of a query.
| Example: Assume that the transaction date in the POSTED column is the
| partitioning key of the table. You want a clustering index on the ACCTID column
| that is partitioned the same as the data in the table. Issue the following statement:
| CREATE INDEX IX6 ON TRANS(ACCTID)
| PARTITIONED CLUSTER;
| DB2 orders the rows of the table by the values of the columns in the clustering key
| and partitions the rows by the values of the limit key that is defined for the
| underlying table. The data rows are clustered within each partition by the key of
| the clustering index instead of by the partitioning key.
| However, using the NOT PADDED clause might also have several disadvantages:
| v Index key comparisons are slower because DB2 must compare each pair of
| corresponding varying-length columns individually instead of comparing the
| entire key when the columns are padded to their maximum length.
| v DB2 stores an additional 2-byte length field for each varying-length column.
| Therefore, if the length of the padding (to the maximum length) is less than or
| equal to 2 bytes, the storage requirements could actually be greater for
| varying-length columns that are not padded.
| Tip: Use the PAD INDEXES BY DEFAULT option on installation panel DSNTIPE to
| control whether varying length columns are padded by default.
| Example: If you define an index by specifying DATE DESC, TIME
| ASC as the column names and order, DB2 can use this same index for both of the
| following ORDER BY clauses:
| v Forward scan for ORDER BY DATE DESC, TIME ASC
| v Backward scan for ORDER BY DATE ASC, TIME DESC
| You do not need to create two indexes for the two ORDER BY clauses. DB2 can
| use the same index for both forward index scan and backward index scan.
| Optimization: Suppose that the query includes a WHERE clause with a predicate
| of the form COL=constant. For example:
| ...
| WHERE CODE = ’A’
| ORDER BY CODE, DATE DESC, TIME ASC
| DB2 can use any of the following index keys to satisfy the ordering:
| v CODE, DATE DESC, TIME ASC
| v CODE, DATE ASC, TIME DESC
| v DATE DESC, TIME ASC
| v DATE ASC, TIME DESC
| DB2 can ignore the CODE column in the ORDER BY clause and the index because
| the value of the CODE column in the result table of the query has no effect on the
| order of the data. If the CODE column is included, it can be in any position in the
| ORDER BY clause and in the index.
Using schemas
A schema is a collection of named objects. The objects that a schema can contain
| include tables, indexes, table spaces, distinct types, functions, stored procedures,
and triggers. An object is assigned to a schema when it is created.
| When a table, index, table space, distinct type, function, stored procedure, or
trigger is created, it is given a qualified two-part name. The first part is the schema
name (or the qualifier), which is either implicitly or explicitly specified. The default
schema is the authorization ID of the owner of the plan or package. The second
part is the name of the object.
Creating a schema
You can create a schema with the schema processor by using the CREATE
SCHEMA statement. CREATE SCHEMA cannot be embedded in a host program or
executed interactively. To process the CREATE SCHEMA statement, you must use
the schema processor, as described in “Processing schema definitions” on page 59.
The ability to process schema definitions is provided for conformance to ISO/ANSI
standards. The result of processing a schema definition is identical to the result of
executing the SQL statements without a schema definition.
Outside of the schema processor, the order of statements is important. They must
be arranged so that all referenced objects have been previously created. This
restriction is relaxed when the statements are processed by the schema processor if
the object table is created within the same CREATE SCHEMA. The requirement
that all referenced objects have been previously created is not checked until all of
the statements have been processed. For example, within the context of the schema
processor, you can define a constraint that references a table that does not exist yet
or GRANT an authorization on a table that does not exist yet.
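For example, a schema definition might look like the following sketch, which is
modeled on the kind of input that the schema processor accepts (the authorization
ID and object names are hypothetical):
CREATE SCHEMA AUTHORIZATION SMITH

CREATE TABLE TESTSTUFF
(TESTNO CHAR(4) NOT NULL)

CREATE INDEX TESTX ON TESTSTUFF (TESTNO)

GRANT SELECT ON TESTSTUFF TO PUBLIC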
Processing schema definitions
Run the schema processor (DSNHSP) as a batch job; use the sample JCL provided
in member DSNTEJ1S of the SDSNSAMP library. The schema processor accepts
only one schema definition in a single job. No statements that are outside the
schema definition are accepted. Only SQL comments can precede the CREATE
SCHEMA statement; the end of input ends the schema definition. SQL comments
can also be used within and between SQL statements.
The processor takes the SQL from CREATE SCHEMA (the SYSIN data set),
dynamically executes it, and prints the results in the SYSPRINT data set.
Loading data into DB2 tables
You can use several methods to fill DB2 tables, but you will probably load most of
your tables by using the LOAD utility. This utility loads data into DB2 persistent
tables from sequential data sets by using BSAM. You can also use a cursor that is
declared with an EXEC SQL utility control statement to load data from another
SQL table with the DB2 UDB family cross-loader function. The LOAD utility
| cannot be used to load data into DB2 temporary tables or system-maintained
| materialized query tables. For information about the DB2 UDB family cross-loader
| function, see Chapter 15, Load of the DB2 Utility Guide and Reference.
You can also use an SQL INSERT statement to copy all or selected rows of another
table, either by using the INSERT statement in an application program or
interactively through SPUFI.
To reformat data from IMS DL/I databases and VSAM and SAM data sets for the
LOAD utility, use DB2 DataPropagator. See “Loading data from DL/I” on page 65.
For general guidance about running DB2 utility jobs, see DB2 Utility Guide and
Reference. For information about DB2 DataPropagator, see DB2 Universal Database
Replication Guide and Reference.
The LOAD utility loads records into the tables and builds or extends any indexes
defined on them. If the table space already contains data, you can choose whether
you want to add the new data to the existing data or replace the existing data.
Using the INCURSOR option: The INCURSOR option of the LOAD utility
specifies a cursor for the input data set. Use the EXEC SQL utility control
statement to declare the cursor before running the LOAD utility. You define the
cursor so that it selects data from another DB2 table. The column names in the
SELECT statement must be identical to the column names of the table that is being
loaded. The INCURSOR option uses the DB2 UDB family cross-loader function.
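For example, control statements such as the following (a sketch that assumes the
Version 8 sample department table DSN8810.DEPT and the SMITH.TEMPDEPT
table that is shown later in this chapter) declare a cursor and load from it:
EXEC SQL
DECLARE C1 CURSOR FOR
SELECT DEPTNO, DEPTNAME, MGRNO, ADMRDEPT
FROM DSN8810.DEPT
ENDEXEC

LOAD DATA
INCURSOR(C1)
REPLACE
INTO TABLE SMITH.TEMPDEPT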
Using the CCSID option: You can load input data into ASCII, EBCDIC, or Unicode
tables. The ASCII, EBCDIC, and UNICODE options on the LOAD utility statement
let you specify whether the format of the data in the input file is ASCII, EBCDIC,
or Unicode. The CCSID option of the LOAD utility statement lets you specify the
CCSIDs of the data in the input file. If the CCSID of the input data does not match
the CCSID of the table space, the input fields are converted to the CCSID of the
table space before they are loaded.
Availability during load: For nonpartitioned table spaces, data in the table space
that is being loaded is unavailable to other application programs during the load
operation with the exception of LOAD SHRLEVEL CHANGE. In addition, some
SQL statements, such as CREATE, DROP, and ALTER, might experience contention
when they run against another table space in the same DB2 database while the
table is being loaded.
Default values for columns: When you load a table and do not supply a value for
one or more of the columns, the action DB2 takes depends on the circumstances.
v If the column is not a ROWID or identity column, DB2 loads the default value
of the column, which is specified by the DEFAULT clause of the CREATE or
ALTER TABLE statement.
v If the column is a ROWID column that uses the GENERATED BY DEFAULT
option, DB2 generates a unique value.
v If the column is an identity column that uses the GENERATED BY DEFAULT
option, DB2 provides a specified value.
For ROWID or identity columns that use the GENERATED ALWAYS option, you
cannot supply a value because this option means that DB2 always provides a
value.
LOB columns: The LOAD utility treats LOB columns as varying-length data. The
length value for a LOB column must be 4 bytes. The LOAD utility can be used to
load LOB data if the length of the row, including the length of the LOB data, does
not exceed 32 KB. The auxiliary tables are loaded when the base table is loaded.
You cannot specify the name of the auxiliary table to load.
Replacing or adding data: You can use LOAD REPLACE to replace data in a
single-table table space or in a multiple-table table space. You can replace all the
data in a table space (using the REPLACE option), or you can load new records
into a table space without destroying the rows that are already there (using the
RESUME option).
Making corrections after LOAD
LOAD can place a table space or index space into one of several kinds of restricted
status. Your use of a table space in restricted status is severely limited. In general,
you cannot access its data through SQL; you can only drop the table space or one
of its tables, or perform some operation that resets the status.
COPY-pending status: LOAD places a table space in the COPY-pending state if you
load with LOG NO, which you might do to save space in the log. Immediately
after that operation, DB2 cannot recover the table space. However, you can recover
the table space by loading it again. Prepare for recovery, and remove the
restriction, by making a full image copy using SHRLEVEL REFERENCE. (If you
end the COPY job before it is finished, the table space is still in COPY-pending
status.)
When you use REORG or LOAD REPLACE with the COPYDDN keyword, a full
image copy data set (SHRLEVEL REF) is created during the execution of the
REORG or LOAD utility. This full image copy is known as an inline copy. The table
space is not left in COPY-pending state regardless of which LOG option is
specified for the utility.
The inline copy is valid only if you replace the entire table space or partition. If
you request an inline copy by specifying COPYDDN in a LOAD utility statement,
an error message is issued, and the LOAD terminates if you specify LOAD
RESUME YES or LOAD RESUME NO without REPLACE.
REBUILD-pending status: LOAD places all the index spaces for a table space in
the REBUILD-pending status if you end the job (using -TERM UTILITY) before it
completes the INDEXVAL phase. It places the table space itself in
RECOVER-pending status if you end the job before it completes the RELOAD
phase.
Another way to load data into tables is with the SQL INSERT statement. You can
issue the statement interactively or embed it in an application program.
Example: Suppose that you create a test table, TEMPDEPT, with the same
characteristics as the department table:
CREATE TABLE SMITH.TEMPDEPT
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) NOT NULL,
ADMRDEPT CHAR(3) NOT NULL)
IN DSN8D81A.DSN8S81D;
To load TEMPDEPT with rows from the department table, you can issue an
INSERT statement such as the following (which assumes the Version 8 sample
table DSN8810.DEPT):
INSERT INTO SMITH.TEMPDEPT
SELECT DEPTNO, DEPTNAME, MGRNO, ADMRDEPT
FROM DSN8810.DEPT
WHERE ADMRDEPT = 'D01';
The statement loads TEMPDEPT with data from the department table about all
departments that report to department D01.
If you write an application program to load data into tables, you use that form of
INSERT, probably with host variables instead of the actual values shown in this
example.
| If you embed the INSERT statement in an application program, you can use a form
| of the statement that inserts multiple rows into a table from the values that are
| provided in host variable arrays. In this form, you specify the table name, the
| columns into which the data is to be inserted, and the arrays that contain the data.
| Each array corresponds to a column. For example, you can load TEMPDEPT with
| the number of rows in the host variable num-rows by using the following
| embedded INSERT statement:
| EXEC SQL
| INSERT INTO SMITH.TEMPDEPT
| FOR :num-rows ROWS
| VALUES (:hva1, :hva2, :hva3, :hva4);
| Assume that the host variable arrays hva1, hva2, hva3, and hva4 are populated with
| the values that are to be inserted. The number of rows to insert must be less than
| or equal to the dimension of each host variable array.
v If you are inserting a large number of rows, you can use the LOAD utility.
Alternatively, use multiple INSERT statements with predicates that isolate the
data that is to be loaded, and then commit after each insert operation.
v When a table whose indexes are already defined is populated by using the
INSERT statement, both the FREEPAGE and the PCTFREE parameters are
ignored. FREEPAGE and PCTFREE are in effect only during a LOAD or REORG
operation.
v You can load a value for a ROWID column with an INSERT and fullselect only
if the ROWID column is defined as GENERATED BY DEFAULT. If you have a
table with a column that is defined as ROWID GENERATED ALWAYS, you can
propagate non-ROWID columns from a table with the same definition.
| v You cannot use an INSERT statement on system-maintained materialized query
| tables. For information about materialized query tables, see “Registering or
| changing materialized query tables” on page 84.
| v When you insert a row into a table that resides in a partitioned table space, DB2
| puts the last partition of the table space into a REORG-pending (REORP) state.
| v When you insert a row into a table that resides in a partitioned table space and
| the value of the first column of the limit key is null, the result of the INSERT
| depends on whether DB2 enforces the limit key of the last partition:
# – When DB2 enforces the limit key of the last partition and the first column is
# ascending, the INSERT fails unless MAXVALUE was specified as the highest
# value of the key in the last partition. If MAXVALUE was specified, DB2
# places the null values into the last partition.
| – When DB2 enforces the limit key of the last partition and the first column is
| descending, the rows are inserted into the first partition.
| – When DB2 does not enforce the limit key of the last partition, the rows are
| inserted into the last partition (if the first column is ascending) or the first
| partition (if the first column is descending).
| DB2 enforces the limit key of the last partition for the following table spaces:
| – Table spaces that use table-controlled or index-controlled partitioning and
| that are large (DSSIZE greater than or equal to 4 GB)
| – Table spaces that use table-controlled partitioning, whether large or non-large
| (any DSSIZE)
| For the complete syntax of the INSERT statement, see Chapter 5 of DB2 SQL
| Reference.
In addition, this chapter includes techniques for making the following changes:
| v “Moving from index-controlled to table-controlled partitioning” on page 97
v “Changing the high-level qualifier for DB2 data sets” on page 98
v “Moving DB2 data” on page 105
If you want to migrate to another device type or change the catalog name of the
integrated catalog facility, you need to move the data (see “Moving DB2 data” on
page 105).
SMS manages every new data set that is created after the ALTER STOGROUP
statement is executed; SMS does not manage data sets that are created before the
execution of the statement. See “Migrating to DFSMShsm” on page 34 for more
considerations for using SMS to manage data sets.
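For example, to have SMS manage the new data sets for an existing storage group,
you can replace the volume list with an asterisk (a sketch, assuming storage group
DSN8G810 and an existing volume VOL1):
ALTER STOGROUP DSN8G810
REMOVE VOLUMES (VOL1)
ADD VOLUMES (’*’);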
The changes you make to the volume list by ALTER STOGROUP have no effect on
existing storage. Changes take effect when new objects are defined or when the
REORG, RECOVER, or LOAD REPLACE utilities are used on those objects. For
example, if you use ALTER STOGROUP to remove volume 22222 from storage
group DSN8G810, the DB2 data on that volume remains intact. However, when a
new table space is defined using DSN8G810, volume 22222 is not available for
space allocation.
To force a volume off and add a new volume, follow these steps:
1. Use the SYSIBM.SYSTABLEPART catalog table to determine which table spaces
are associated with the storage group. For example, the following query
indicates which table spaces use storage group DSN8G810:
SELECT TSNAME, DBNAME
FROM SYSIBM.SYSTABLEPART
WHERE STORNAME = ’DSN8G810’ AND STORTYPE = ’I’;
2. Make an image copy of each table space; for example, COPY TABLESPACE
dbname.tsname DEVT SYSDA.
| 3. Ensure that the table space is not being updated in such a way that the data set
| might need to be extended. For example, you can stop the table space with the
| DB2 command STOP DATABASE (dbname) SPACENAM (tsname).
4. Use the ALTER STOGROUP statement to remove the volume that is associated
with the old storage group and to add the new volume:
ALTER STOGROUP DSN8G810
REMOVE VOLUMES (VOL1)
ADD VOLUMES (VOL2);
INDEXBP
Lets you change the name of the default buffer pool for the indexes within
the database. The new default buffer pool is used only for new indexes;
existing definitions do not change.
You cannot use ALTER TABLESPACE to change some attributes of your table
space. For example, you must use other methods to change SEGSIZE or the
number of partitions or to convert it to a large table space. The following topics
describe these methods:
v “Changing the space allocation for user-managed data sets”
v “Dropping, re-creating, or converting a table space”
Use of the REORG utility to extend data sets causes the newly acquired free space
to be distributed throughout the table space rather than to be clustered at the end.
The compression dictionary for the table space is dropped, if one exists. All
tables in TS1 are dropped automatically.
6. Commit the DROP statement.
7. Create the new table space, TS1, and grant the appropriate user privileges.
You can also create a partitioned table space. You could use the following
statements:
CREATE TABLESPACE TS1
IN DSN8D81A
USING STOGROUP DSN8G810
PRIQTY 4000
SECQTY 130
ERASE NO
NUMPARTS 95
| (PARTITION 45 USING STOGROUP DSN8G810
PRIQTY 4000
SECQTY 130
COMPRESS YES,
| PARTITION 62 USING STOGROUP DSN8G810
PRIQTY 4000
SECQTY 130
COMPRESS NO)
LOCKSIZE PAGE
BUFFERPOOL BP1
CLOSE NO;
8. Create new tables TA1, TA2, TA3, ....
9. Re-create indexes on the tables, and re-grant user privileges on those tables.
See “Implications of dropping a table” on page 89 for more information.
10. Execute an INSERT statement for each table. For example:
INSERT INTO TA1
SELECT * FROM TB1;
If a table in the table space has been created with RESTRICT ON DROP, you
must alter that table to remove the restriction before you can drop the table
space.
12. Notify users to re-create any synonyms they had on TA1, TA2, TA3, ....
13. REBIND plans and packages that were invalidated as a result of dropping the
table space.
Altering tables
When you alter a table, you do not change the data in the table; you merely
change the specifications that you used in creating the table.
In addition, this section includes techniques for making the following changes:
v “Changing an edit procedure or a field procedure” on page 87
v “Altering the subtype of a string column” on page 87
v “Altering the attributes of an identity column” on page 88
v “Changing data types by dropping and re-creating the table” on page 88
v “Moving a table to a table space of a different page size” on page 91
Access time to the table is not affected immediately, unless the record was
previously fixed length. If the record was fixed length, the addition of a new
column causes DB2 to treat the record as variable length and then access time is
affected immediately. To change the records to fixed length, follow these steps:
1. Run REORG with COPY on the table space, using the inline copy.
2. Run the MODIFY utility with the DELETE option to delete records of all image
copies that were made before the REORG you ran in step 1.
3. Create a unique index if you add a column that specifies PRIMARY KEY.
Inserting values in the new column might also degrade performance by forcing
rows onto another physical page. You can avoid this situation by creating the table
space with enough free space to accommodate normal expansion. If you already
have this problem, run REORG on the table space to fix it.
You can define the new column as NOT NULL by using the DEFAULT clause
unless the column has a ROWID data type or is an identity column. If the column
has a ROWID data type or is an identity column, you must specify NOT NULL
without the DEFAULT clause. You can let DB2 choose the default value, or you can
specify a constant or the value of the CURRENT SQLID or USER special register as
the value to be used as the default. When you retrieve an existing row from the
table, a default value is provided for the new column. Except in the following
cases, the value for retrieval is the same as the value for insert:
v For columns of data type DATE, TIME, and TIMESTAMP, the retrieval defaults
are:
Data type Default for retrieval
DATE 0001-01-01
TIME 00.00.00
TIMESTAMP 0001-01-01-00.00.00.000000
v For DEFAULT USER and DEFAULT CURRENT SQLID, the retrieved value for
rows that existed before the column was added is the value of the special
register when the column was added.
If the new column is a ROWID column, DB2 returns the same, unique row ID
value for a row each time you access that row. Reorganizing a table space does not
affect the values in a ROWID column. You cannot use the DEFAULT clause for
ROWID columns.
If the new column is an identity column (a column that is defined with the AS
IDENTITY clause), DB2 places the table space in REORG-pending (REORP) status,
and access to the table space is restricted until the table space is reorganized. When
the REORG utility is run, DB2:
v Generates a unique value for the identity column of each existing row
v Physically stores these values in the database
v Removes the REORP status
You cannot use the DEFAULT clause for identity columns. For more information
about identity columns, see DB2 SQL Reference.
If the new column is a short string column, you can specify a field procedure for
it; see “Field procedures” on page 1052. If you do specify a field procedure, you
cannot also specify NOT NULL.
The following example adds a new column to the table DSN8810.DEPT, which
contains a location code for the department. The column name is
LOCATION_CODE, and its data type is CHAR (4).
ALTER TABLE DSN8810.DEPT
ADD LOCATION_CODE CHAR (4);
| In general, DB2 can alter a data type if the data can be converted from the old type
| to the new type without truncation and without loss of arithmetic precision.
| Restrictions: The column that you alter cannot be a part of a referential constraint,
| have a field procedure, be defined as an identity column, or be defined as a
| column of a materialized query table.
| When you alter the data type of a column in a table, DB2 creates a new version for
| the table space that contains the data rows of the table, as described in “Table
| space versions” on page 75.
| For specific information about valid conversions of data types, see the ALTER
| TABLE statement in Chapter 5 of DB2 SQL Reference. For information about other
| changes to data types, see “Changing data types by dropping and re-creating the
| table” on page 88.
| Example: Assume that a table contains basic account information for a small bank.
| The initial account table was created many years ago in the following manner:
| CREATE TABLE ACCOUNTS (
| ACCTID DECIMAL(4,0) NOT NULL,
| NAME CHAR(20) NOT NULL,
| ADDRESS CHAR(30) NOT NULL,
| BALANCE DECIMAL(10,2) NOT NULL)
| IN dbname.tsname;
| By altering the column data types in the following ways, you can make the
| columns more appropriate for the data that they contain. The INSERT statement
| that follows shows the kinds of values that you can now store in the ACCOUNTS
| table.
| ALTER TABLE ACCOUNTS ALTER COLUMN NAME SET DATA TYPE VARCHAR(40);
| ALTER TABLE ACCOUNTS ALTER COLUMN ADDRESS SET DATA TYPE VARCHAR(60);
| ALTER TABLE ACCOUNTS ALTER COLUMN BALANCE SET DATA TYPE DECIMAL(15,2);
| ALTER TABLE ACCOUNTS ALTER COLUMN ACCTID SET DATA TYPE INTEGER;
| COMMIT;
|
| INSERT INTO ACCOUNTS (ACCTID, NAME, ADDRESS, BALANCE)
| VALUES (123456, ’LAGOMARSINO, MAGDALENA’,
| ’1275 WINTERGREEN ST, SAN FRANCISCO, CA, 95060’, 0);
| COMMIT;
| The NAME and ADDRESS columns can now handle longer values without
| truncation, and the shorter values are no longer padded. The BALANCE column is
| extended to allow for larger dollar amounts. DB2 saves these new formats in the
| catalog and stores the inserted row in the new formats.
| Recommendation: If you change both the length and the type of a column from
| fixed-length to varying-length by using one or more ALTER statements, issue the
| ALTER statements within the same unit of work. Reorganize immediately so that
| the format is consistent for all of the data rows in the table.
| Example: Assume that the following indexes are defined on the ACCOUNTS table.
| CREATE INDEX IX1 ON ACCOUNTS(ACCTID);
| CREATE INDEX IX2 ON ACCOUNTS(NAME);
| When the data type of the ACCTID column is altered from DECIMAL(4,0) to
| INTEGER, the IX1 index is placed in a REBUILD-pending (RBDP) state. Similarly,
| when the data type of the NAME column is altered from CHAR(20) to
| VARCHAR(40), the IX2 index is placed in an RBDP state. These indexes cannot be
| accessed until they are rebuilt from the data.
| Index inaccessibility and data availability. Whenever possible, DB2 tries to avoid using
| inaccessible indexes in an effort to increase data availability. Beginning in Version
| 8, DB2 allows you to insert into, and delete from, tables that have non-unique
| indexes that are currently in an RBDP state. DB2 also allows you to delete from
| tables that have unique indexes that are currently in an RBDP state.
| In certain situations, when an index is inaccessible, DB2 can bypass the index to
| allow applications access to the underlying data. In these situations, DB2 offers
| accessibility at the expense of possibly-degraded performance. In making its
| determination of the best access path, DB2 can bypass an index under the
| following circumstances:
| v Dynamic PREPAREs
| DB2 avoids choosing an index that is in an RBDP state. Bypassing the index
| typically degrades performance, but provides availability that would not be
| possible otherwise.
| v Cached PREPAREs
| DB2 avoids choosing an index that is both in an RBDP state and within a cached
| PREPARE statement because the dynamic cache for the table is invalidated
| whenever an index is put into, or taken out of, an RBDP state. Avoiding indexes
| that are within cached PREPARE statements ensures that after an index is
| rebuilt, DB2 uses the index for all future queries.
| v PADDED and NOT PADDED indexes
| DB2 avoids choosing a PADDED index or a NOT PADDED index that is
| currently in an RBDP state for any static or dynamic SQL statements.
| In the case of static BINDs, DB2 might choose an index that is currently in an
| RBDP state as the best access path. It does so by making the optimistic assumption
| that the index will be available by the time it is actually used. (If the index is not
| available at that time, an application can receive a resource unavailable message.)
| Padding. The IX2 index retains its current padding attribute, the value of which is
| stored in the PADDED column of the SYSINDEXES table. The padding attribute of
| an index is altered only if it is inconsistent with the current state of the index. This
| can occur, for example, if you change the value of the PADDED column after
| creating a table.
| Whether the IX2 index is padded, in this example, depends on when the table was
| created.
| v If the table was migrated from a pre-Version 8 release, the index is padded by
| default. In this case, the value of the PADDED column of the SYSINDEXES table
| is Y.
| v If the table was created in either compatibility mode or enabling-new-function
| mode of Version 8, the index is padded by default. In this case, the value of the
| PADDED column of the SYSINDEXES table is Y.
| v If the table was created in new-function mode of Version 8 and the default value
| (NO) of the PAD INDEX BY DEFAULT parameter on the DSNTIPE installation
| panel was not changed, the index is not padded by default. In this case, the
| value of the PADDED column of the SYSINDEXES table is N.
| v If the table was created in Version 8 after the value of the PAD INDEX BY
| DEFAULT parameter on the DSNTIPE installation panel was changed to YES,
| the index is padded by default. In this case, the value of the PADDED column of
| the SYSINDEXES table is Y.
| DB2 creates only one table space version if, in the same unit of work, you make
| multiple schema changes. If you make these same schema changes in separate
| units of work, each change results in a new table space version. For example, the
| first three ALTER TABLE statements in the following example are all associated
| with the same table space version. The scope of the first COMMIT statement
| encompasses all three schema changes. The last ALTER TABLE statement is
| associated with the next table space version. The scope of the second COMMIT
| statement encompasses a single schema change.
| ALTER TABLE ACCOUNTS ALTER COLUMN NAME SET DATA TYPE VARCHAR(40);
| ALTER TABLE ACCOUNTS ALTER COLUMN ADDRESS SET DATA TYPE VARCHAR(60);
| ALTER TABLE ACCOUNTS ALTER COLUMN BALANCE SET DATA TYPE DECIMAL(15,2);
| COMMIT;
|
| ALTER TABLE ACCOUNTS ALTER COLUMN ACCTID SET DATA TYPE INTEGER;
| COMMIT;
| Reorganizing table spaces: After you commit a schema change, DB2 puts the
| affected table space into an advisory REORG-pending (AREO*) state. The table
| space stays in this state until you run the REORG TABLESPACE utility, which
| reorganizes the table space and applies the schema changes.
| DB2 uses table space versions to maximize data availability. Table space versions
| enable DB2 to keep track of schema changes and, at the same time, provide users
| with access to data in altered table spaces. When users retrieve rows from an
| altered table, the data is displayed in the format that is described by the most
| recent schema definition, even though the data is not currently stored in this
| format. The most recent schema definition is associated with the current table space
| version.
| Recycling table space version numbers: DB2 can store up to 256 table space
| versions, numbered sequentially from 0 to 255. (The next consecutive version
| number after 255 is 1. Version number 0 is never reused; it is reserved for the
| original version of the table space.) The versions that are associated with schema
| changes that have not been applied yet are considered to be “in use,” and the
| range of used versions is stored in the catalog. In-use versions can be recovered
| from image copies of the table space, if necessary. To determine the range of
| version numbers currently in use for a table space, query the OLDEST_VERSION
| and CURRENT_VERSION columns of the SYSIBM.SYSTABLESPACE catalog table.
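| For example, a query of the following form, with dbname and tsname as
| placeholders, returns the in-use range for one table space:
| SELECT OLDEST_VERSION, CURRENT_VERSION
| FROM SYSIBM.SYSTABLESPACE
| WHERE DBNAME = ’dbname’ AND NAME = ’tsname’;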
| To prevent DB2 from running out of table space version numbers (and to prevent
| subsequent ALTER statements from failing), you must recycle unused table space
| version numbers regularly by running the MODIFY RECOVERY utility. Version
| numbers are considered to be “unused” if the schema changes that are associated
| with them have been applied and there are no image copies that contain data at
| those versions. If all reusable version numbers (1 to 255) are currently in use, you
| must reorganize the table space by running REORG TABLESPACE before you can
| recycle the version numbers. For more information about managing table space
| version numbers, see DB2 Utility Guide and Reference.
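| For example, a MODIFY RECOVERY job step of the following form (a sketch,
| assuming a table space dbname.tsname and a 90-day retention period for image
| copies) deletes outdated recovery records and thereby frees version numbers that
| are no longer in use:
| MODIFY RECOVERY TABLESPACE dbname.tsname
| DELETE AGE(90)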
Suppose that you want to define relationships among the sample tables by adding
primary and foreign keys with the ALTER TABLE statement. The rules for these
relationships are as follows:
v An existing table must have a unique index on its primary key columns before
you can add the primary key. The index becomes the primary index.
v The parent key of the parent table must be added before the corresponding
foreign key of the dependent table.
You can build the same referential structure in several different ways. The
following example sequence does not have the fewest number of possible
operations, but it is perhaps the simplest to understand.
1. Create a unique index on the primary key columns for any table that does not
already have one.
2. For each table, issue an ALTER TABLE statement to add its primary key.
In the next steps, you issue an ALTER TABLE statement to add foreign keys for
each table except the activity table. This leaves the table space in
CHECK-pending status, which you reset by running the CHECK DATA utility
with the DELETE(YES) option.
CHECK DATA deletes are not bound by delete rules; they cascade to all
descendents of a deleted row, which can be disastrous. For example, if you
delete the row for department A00 from the department table, the delete might
propagate through most of the referential structure. The remaining steps
prevent deletion from more than one table at a time.
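The CHECK DATA step might take the following form (a sketch with placeholder
names; when you specify DELETE YES, you also name an exception table in the
FOR EXCEPTION clause to receive copies of the deleted rows):
CHECK DATA TABLESPACE dbname.tsname
FOR EXCEPTION IN creator.table-name USE creator.exception-table
DELETE YES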
Adding a primary key: To add a primary key to an existing table, use the
PRIMARY KEY clause in an ALTER TABLE statement. For example, if the
department table and its index XDEPT1 already exist, create its primary key by
issuing the following statement:
ALTER TABLE DSN8810.DEPT
ADD PRIMARY KEY (DEPTNO);
Adding a unique key: To add a unique key to an existing table, use the UNIQUE
clause of the ALTER TABLE statement. For example, if the department table has a
unique index defined on column DEPTNAME, you can add a unique key
constraint, KEY_DEPTNAME, consisting of column DEPTNAME by issuing the
following statement:
ALTER TABLE DSN8810.DEPT
ADD CONSTRAINT KEY_DEPTNAME UNIQUE (DEPTNAME);
Adding a foreign key: To add a foreign key to an existing table, use the FOREIGN
KEY clause of the ALTER TABLE statement. The parent key must exist in the
parent table before you add the foreign key. For example, if the department table
has a primary key defined on the DEPTNO column, you can add a referential
constraint, REFKEY_DEPTNO, on the DEPTNO column of the project table by
issuing the following statement:
ALTER TABLE DSN8810.PROJ
ADD CONSTRAINT REFKEY_DEPTNO FOREIGN KEY (DEPTNO) REFERENCES DSN8810.DEPT
ON DELETE RESTRICT;
Considerations: Adding a parent key or a foreign key to an existing table has the
following restrictions and implications:
v If you add a primary key, the table must already have a unique index on the key
columns. If multiple unique indexes include the primary key columns, the index
that was most recently created on the key columns becomes the primary index.
Because of the unique index, no duplicate values of the key exist in the table;
therefore you do not need to check the validity of the data.
v If you add a unique key, the table must already have a unique index with a key
that is identical to the unique key. If multiple unique indexes include the
unique key columns, DB2 arbitrarily chooses a unique index on the key
columns to enforce the unique key. Because of the unique index, no duplicate
values of the key exist in the table; therefore you do not need to check the
validity of the data.
v You can use only one FOREIGN KEY clause in each ALTER TABLE statement; if
you want to add two foreign keys to a table, you must execute two ALTER
TABLE statements.
v If you add a foreign key, the parent key and unique index of the parent table
must already exist. Adding the foreign key requires the ALTER privilege on the
dependent table and either the ALTER or REFERENCES privilege on the parent
table.
v Adding a foreign key establishes a referential constraint relationship, with the
many implications that are described in Part 2 of DB2 Application Programming
and SQL Guide. DB2 does not validate the data when you add the foreign key.
Instead, if the table is populated (or, in the case of a nonsegmented table space,
if the table space has ever been populated), the table space that contains the
table is placed in CHECK-pending status, just as if it had been loaded with
ENFORCE NO. In this case, you need to execute the CHECK DATA utility to
clear the CHECK-pending status.
| v You can add a foreign key with the NOT ENFORCED option to create an
| informational referential constraint. This action does not leave the table space in
| CHECK-pending status, and you do not need to execute CHECK DATA.
Dropping a foreign key: When you drop a foreign key using the DROP FOREIGN
KEY clause of the ALTER TABLE statement, DB2 drops the corresponding
referential relationships. (You must have the ALTER privilege on the dependent
table and either the ALTER or REFERENCES privilege on the parent table.) If the
referential constraint references a unique key that was created implicitly, and no
other relationships are dependent on that unique key, the implicit unique key is
also dropped.
Dropping a unique key: When you drop a unique key using the DROP UNIQUE
clause of the ALTER TABLE statement, DB2 drops all the referential relationships
in which the unique key is a parent key. The dependent tables no longer have
foreign keys. (You must have the ALTER privilege on any dependent tables.) The
table’s unique index that enforced the unique key no longer indicates that it
enforces a unique key, although it is still a unique index.
Dropping a primary key: When you drop a primary key using the DROP
PRIMARY KEY clause of the ALTER TABLE statement, DB2 drops all the
referential relationships in which the primary key is a parent key. The dependent
tables no longer have foreign keys.
You can define a check constraint on an existing table by using the ADD CHECK
clause of the ALTER TABLE statement. If the table is not empty, what happens
when you define the check constraint
depends on the value of the CURRENT RULES special register, which can be either
STD or DB2.
v If the value is STD, the check constraint is enforced immediately when it is
defined. If a row does not conform, the table check constraint is not added to
the table and an error occurs.
v If the value is DB2, the check constraint is added to the table description but its
enforcement is deferred. Because some rows in the table might violate the check
constraint, the table is placed in check-pending status.
The ALTER TABLE statement that is used to define a check constraint always fails
if the table space or partition that contains the table is in a CHECK-pending status,
the CURRENT RULES special register value is STD, and the table is not empty.
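For example, a check constraint of the following form (a sketch, using the sample
employee table) is subject to these rules when you add it to a populated table:
ALTER TABLE DSN8810.EMP
ADD CONSTRAINT CHKSALARY CHECK (SALARY >= 0);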
To remove a check constraint from a table, use the DROP CONSTRAINT or DROP
CHECK clauses of the ALTER TABLE statement. You must not use DROP
CONSTRAINT on the same ALTER TABLE statement as DROP FOREIGN KEY,
DROP CHECK, DROP PRIMARY KEY, or DROP UNIQUE.
| Adding a partition
| You can use the ALTER TABLE statement to add a partition to an existing
| partitioned table space and to each partitioned index in the table space.
| Restriction: You cannot add a new partition to an existing partitioned table space
| if the table has LOB columns. Additionally, you cannot add or alter a partition for
| a materialized query table.
| When you add a partition, DB2 uses the next physical partition that is not already
| in use until you reach the maximum number of partitions for the table space.
| When DB2 manages your data sets, the next available data set is allocated for the
| table space and for each partitioned index. When you manage your own data sets,
| you must first define the data sets for the table space and the partitioned indexes
| before issuing the ALTER TABLE ADD PARTITION statement.
| Example: Assume that a table space that contains a transaction table named
| TRANS is divided into 10 partitions, and each partition contains one year of data.
| Partitioning is defined on the transaction date, and the limit key value is the end
| of the year. Table 18 shows a representation of the table space.
| Table 18. Initial table space with 10 partitions
| Partition Limit value Data set name that backs the partition
| P001 12/31/1994 catname.DSNDBx.dbname.psname.I0001.A001
| P002 12/31/1995 catname.DSNDBx.dbname.psname.I0001.A002
| P003 12/31/1996 catname.DSNDBx.dbname.psname.I0001.A003
| P004 12/31/1997 catname.DSNDBx.dbname.psname.I0001.A004
| P005 12/31/1998 catname.DSNDBx.dbname.psname.I0001.A005
| P006 12/31/1999 catname.DSNDBx.dbname.psname.I0001.A006
| P007 12/31/2000 catname.DSNDBx.dbname.psname.I0001.A007
| P008 12/31/2001 catname.DSNDBx.dbname.psname.I0001.A008
| P009 12/31/2002 catname.DSNDBx.dbname.psname.I0001.A009
| P010 12/31/2003 catname.DSNDBx.dbname.psname.I0001.A010
|
| Assume that you want to add a new partition to handle the transactions for the
| next year. To add a partition, issue the following statement:
| ALTER TABLE TRANS ADD PARTITION ENDING AT (’12/31/2004’);
| What happens:
| v DB2 adds a new partition to the table space and to each partitioned index on the
| TRANS table. For the table space, DB2 uses the existing table space PRIQTY and
| SECQTY attributes of the previous partition for the space attributes of the new
| partition. For each partitioned index, DB2 uses the existing PRIQTY and
| SECQTY attributes of the previous index partition.
| v When the ALTER completes, you can use the new partition immediately if the
| table space is a large table space. In this case, the partition is not placed in a
| REORG-pending (REORP) state because it extends the high-range values that
| were not previously used. For non-large table spaces, the partition is placed in a
| REORG-pending (REORP) state because the last partition boundary was not
| previously enforced.
| Table 19 shows a representative excerpt of the table space after the partition for the
| year 2004 was added.
| Table 19. An excerpt of the table space after adding a new partition (P011)
| Partition Limit value Data set name that backs the partition
| P008 12/31/2001 catname.DSNDBx.dbname.psname.I0001.A008
| P009 12/31/2002 catname.DSNDBx.dbname.psname.I0001.A009
| P010 12/31/2003 catname.DSNDBx.dbname.psname.I0001.A010
| P011 12/31/2004 catname.DSNDBx.dbname.psname.I0001.A011
|
| Specifying space attributes: If you want to specify the space attributes for a new
| partition, use the ALTER TABLESPACE and ALTER INDEX statements. For
| example, suppose the new partition is PARTITION 11 for the table space and the
| index. Issue the following statements to specify quantities for the PRIQTY,
| SECQTY, FREEPAGE, and PCTFREE attributes:
| ALTER TABLESPACE tsname ALTER PARTITION 11
| USING STOGROUP stogroup-name
| PRIQTY 200 SECQTY 200
| FREEPAGE 20 PCTFREE 10;
|
| ALTER INDEX index-name ALTER PARTITION 11
| USING STOGROUP stogroup-name
| PRIQTY 100 SECQTY 100
| FREEPAGE 25 PCTFREE 5;
| Altering partitions
| You can use the ALTER TABLE statement to change the boundary between
| partitions, to rotate the first partition to be the last partition, or to extend the
| boundary of the last partition.
| Example: Assume that a table space that contains a transaction table named
| TRANS is divided into 10 partitions, and each partition contains one year of data.
| Partitioning is defined on the transaction date, and the limit key value is the end
| of the year. Table 18 on page 80 shows a representation of the table space.
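| To make the first quarter of 2003 part of partition 9, you can change the limit key
| of that partition; a plausible statement, reconstructed from this description, is:
| ALTER TABLE TRANS ALTER PARTITION 9 ENDING AT (’03/31/2003’);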
| Now the data in the first quarter of the year 2003 is part of partition 9. The
| partitions on either side of the new boundary (partitions 9 and 10) are placed in
| REORG-pending (REORP) status and are not available until the partitions are
| reorganized.
| Alternatively, you can rebalance the data in partitions 9 and 10 by using the
| REBALANCE option of the REORG utility:
| REORG TABLESPACE dbname.tsname PART(9:10) REBALANCE
| This method avoids leaving the partitions in a REORP state. When you use the
| REBALANCE option on partitions, DB2 automatically changes the limit key values.
| Rotating partitions
| Assume that the partition structure of the table space is sufficient through the year
| 2004. Table 19 on page 81 shows a representation of the table space through the
| year 2004. When another partition is needed for the year 2005, you determine that
| the data for 1994 is no longer needed. You want to recycle the partition for the
| year 1994 to hold the transactions for the year 2005.
| To rotate the first partition to be the last partition, issue the following statement:
| ALTER TABLE TRANS ROTATE PARTITION FIRST TO LAST
| ENDING AT (’12/31/2005’) RESET;
| For a table with limit values in ascending order, the value in the ENDING AT
| clause must be higher than the limit value for previous partitions. DB2 chooses
| the first partition to be the partition with the lowest limit value.
| For a table with limit values in descending order, the value in the ENDING AT
| clause must be lower than the limit value for previous partitions. DB2 chooses the
| first partition to be the partition with the highest limit value.
| The RESET keyword specifies that the existing data in the oldest partition is
| deleted, and no delete triggers are activated.
| What happens:
| v Because the oldest (or first) partition is P001, DB2 assigns the new limit value to
| P001. This partition holds all rows in the range between the new limit value of
| 12/31/2005 and the previous limit value of 12/31/2004.
| v The RESET operation deletes all existing data. You can use the partition
| immediately after the ALTER completes. The partition is not placed in
| REORG-pending (REORP) status because it extends the high-range values that
| were not previously used.
| Table 20 shows a representation of the table space after the first partition is rotated
| to become the last partition.
| Table 20. Rotating the low partition to the end
| Partition Limit value Data set name that backs the partition
| P002 12/31/1995 catname.DSNDBx.dbname.psname.I0001.A002
| P003 12/31/1996 catname.DSNDBx.dbname.psname.I0001.A003
| P004 12/31/1997 catname.DSNDBx.dbname.psname.I0001.A004
| P005 12/31/1998 catname.DSNDBx.dbname.psname.I0001.A005
| P006 12/31/1999 catname.DSNDBx.dbname.psname.I0001.A006
| P007 12/31/2000 catname.DSNDBx.dbname.psname.I0001.A007
| P008 12/31/2001 catname.DSNDBx.dbname.psname.I0001.A008
| P009 12/31/2002 catname.DSNDBx.dbname.psname.I0001.A009
| P010 12/31/2003 catname.DSNDBx.dbname.psname.I0001.A010
| P011 12/31/2004 catname.DSNDBx.dbname.psname.I0001.A011
| P001 12/31/2005 catname.DSNDBx.dbname.psname.I0001.A001
|
| Recommendation: When you create a partitioned table space, you do not need to
| allocate extra partitions for expected growth. Instead, use either ALTER TABLE
| ADD PARTITION to add partitions as needed or, if rotating partitions is
| appropriate for your application, use ALTER TABLE ROTATE PARTITION to avoid
| adding another partition.
| Nullable partitioning columns: DB2 lets you use nullable columns as partitioning
| columns. But with table-controlled partitioning, DB2 can restrict the insertion of
| null values into a table with nullable partitioning columns, depending on the order
| of the partitioning key. After a rotate operation:
| v If the partitioning key is ascending, DB2 prevents an INSERT of a row with a
| null value for the key column.
| v If the partitioning key is descending, DB2 allows an INSERT of a row with a
| null value for the key column. The row is inserted into the first partition.
| Extending the boundary of the last partition: To extend the boundary of the last
| partition (P001 in Table 20 on page 83), issue the following statement:
| ALTER TABLE TRANS ALTER PARTITION 1 ENDING AT (’12/31/2006’);
| You can use the partition immediately after the ALTER completes. The partition is
| not placed in REORG-pending (REORP) status because it extends the high-range
| values that were not previously used.
| To change the limit key back to 12/31/2005, issue the following statement:
| ALTER TABLE TRANS ALTER PARTITION 1 ENDING AT (’12/31/2005’);
| Adding a partition when the last partition is in REORP: Assume that you
| extended the boundary of the last partition and then changed back to the previous
| boundary for that partition. Table 20 on page 83 shows a representation of the table
| space through the year 2005. The last partition is in REORP.
| You want to add a new partition with a limit key value of 12/31/2006. You can
| use ALTER TABLE ADD PARTITION because this limit key value is higher than
| the previous limit key value of 12/31/2005.
| Materialized query tables enable DB2 to use automatic query rewrite to optimize
| queries. Automatic query rewrite is a process that DB2 uses to examine a query
| and, if appropriate, to rewrite the query so that it executes against a materialized
| query table that has been derived from the base tables in the submitted query. For
| additional information about the use of automatic query rewrite in query
| optimization, see Part 5 (Volume 2) of DB2 Administration Guide.
| Registering an existing table as a materialized query table
| Assume that you have a very large transaction table named TRANS that contains
| one row for each transaction. The table has many columns, but you are interested
| in only the following columns:
| v ACCTID, which is the customer account ID
| v LOCID, which is the customer location ID
| v YEAR, which holds the year of the transaction
| You created another base table named TRANSCOUNT that consists of these
| columns and a count of the number of rows in TRANS that are grouped by the
| account, location, and year of the transaction. Suppose that you repopulate
| TRANSCOUNT periodically by deleting the rows and then by using the following
| INSERT statement:
| INSERT INTO TRANSCOUNT (ACCTID, LOCID, YEAR, CNT)
| SELECT ACCTID, LOCID, YEAR, COUNT(*)
| FROM TRANS
| GROUP BY ACCTID, LOCID, YEAR;
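| To register TRANSCOUNT as a materialized query table, you can alter it to add a
| materialized query definition; a sketch, assuming that the registration uses the
| same fullselect that populates the table, follows:
| ALTER TABLE TRANSCOUNT ADD MATERIALIZED QUERY AS (
| SELECT ACCTID, LOCID, YEAR, COUNT(*) AS CNT
| FROM TRANS
| GROUP BY ACCTID, LOCID, YEAR )
| DATA INITIALLY DEFERRED
| REFRESH DEFERRED
| MAINTAINED BY USER;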
| You can still maintain the data, as specified by the MAINTAINED BY USER
| option, which means that you can continue to load, insert, update, or delete data.
| You can also use the REFRESH TABLE statement to populate the table. REFRESH
| DEFERRED indicates that the data in the table is the result of your most recent
| update or, if more recent, the result of a REFRESH TABLE statement.
| The REFRESH TABLE statement deletes all the rows in a materialized query table,
| executes the fullselect in its definition, inserts the result into the table, and updates
| the catalog with the refresh timestamp and cardinality of the table. For more
| information about the REFRESH TABLE statement, see Chapter 5 of DB2 SQL
| Reference.
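| For example, the following statement repopulates TRANSCOUNT from the
| fullselect in its definition:
| REFRESH TABLE TRANSCOUNT;
| To change the materialized query table back to a base table, you can drop its
| materialized query definition; a plausible statement, reconstructed from the
| surrounding description, is:
| ALTER TABLE TRANSCOUNT DROP MATERIALIZED QUERY;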
| After you issue this statement, DB2 can no longer use the table for query
| optimization, and you cannot populate the table by using the REFRESH TABLE
| statement.
To ensure that the rows of a table conform to a new validation routine, you must
run the validation routine against the old rows. One way to accomplish this is to
use the REORG and LOAD utilities as shown in the following steps:
1. Use REORG to reorganize the table space that contains the table with the new
validation routine. Specify UNLOAD ONLY, as in this example:
REORG TABLESPACE DSN8D81A.DSN8S81E
UNLOAD ONLY
This step creates a data set that is used as input to the LOAD utility.
2. Run LOAD with the REPLACE option, and specify a discard data set to hold
any invalid records. For example:
LOAD INTO TABLE DSN8810.EMP
REPLACE
FORMAT UNLOAD
DISCARDDN SYSDISC
The EMPLNEWE validation routine validates all rows after the LOAD step has
completed. DB2 copies any invalid rows into the SYSDISC data set.
LOB values are not available for DATA CAPTURE CHANGES. To return the table
to normal logging, use DATA CAPTURE NONE.
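For example, a sketch using the sample employee table:
ALTER TABLE DSN8810.EMP
DATA CAPTURE NONE;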
To change an edit procedure or a field procedure for a table space in which the
maximum record length is less than 32 KB, use the following procedure:
1. Run the UNLOAD utility or run the REORG TABLESPACE utility with the
UNLOAD EXTERNAL option to unload the data and decode it using the
existing edit procedure or field procedure. (A sketch of this step appears
after this procedure.)
These utilities generate a LOAD statement in the data set (specified by the
PUNCHDDN option of the REORG TABLESPACE utility) that you can use to
reload the data into the original table space.
If you are using the same edit procedure or field procedure for many tables,
unload the data from all the table spaces that have tables that use the
procedure.
2. Modify the code of the edit procedure or the field procedure.
3. After the unload operation is completed, stop DB2.
4. Link-edit the modified procedure, using its original name.
5. Start DB2.
6. Use the LOAD utility to reload the data. LOAD then uses the modified
procedure or field procedure to encode the data.
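A sketch of the unload step (step 1), assuming the sample table space
DSN8D81A.DSN8S81E and the conventional SYSREC and SYSPUNCH DD names,
might look like this:
REORG TABLESPACE DSN8D81A.DSN8S81E
UNLOAD EXTERNAL
UNLDDN SYSREC
PUNCHDDN SYSPUNCH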
To change an edit procedure or a field procedure for a table space in which the
maximum record length is greater than 32 KB, use the DSNTIAUL sample program
to unload the data.
If the MIXED DATA install option on installation panel DSNTIPF is YES, use one
of the following values in the column:
B for bit data
S for SBCS data
Any other value for MIXED data
If the MIXED DATA install option is NO, use one of the following values in the
column:
B for bit data
Any other value for SBCS data
Entering an M in the column when the MIXED DATA install option is NO specifies
SBCS data, not MIXED data.
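For example, assuming that the column in question is the FOREIGNKEY column
of the SYSIBM.SYSCOLUMNS catalog table (the catalog column that records the
subtype), an update of the following form changes a column of the sample
employee table to bit data; column-name is a placeholder:
UPDATE SYSIBM.SYSCOLUMNS
SET FOREIGNKEY = ’B’
WHERE TBCREATOR = ’DSN8810’
AND TBNAME = ’EMP’
AND NAME = ’column-name’;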
Changing the data type of an identity column, like changing some other data
types, requires that you drop and then re-create the table (see “Changing data
types by dropping and re-creating the table”).
See Chapter 5 of DB2 SQL Reference for more information about identity column
attributes.
4. Re-create the table.
If the table has an identity column:
v Choose carefully the new value for the START WITH attribute of the identity
column in the CREATE TABLE statement if you want the first generated
value for the identity column of the new table to resume the sequence after
the last generated value for the table that was saved by the unload in step 1.
v Define the identity column as GENERATED BY DEFAULT so that the
previously generated identity values can be reloaded into the new table.
5. Reload the table.
The DROP TABLE statement for the project table (DROP TABLE DSN8810.PROJ)
deletes the row in the SYSIBM.SYSTABLES catalog table that contains information
about DSN8810.PROJ. It also drops any other objects that depend on the project
table. As a result:
v The column names of the table are dropped from SYSIBM.SYSCOLUMNS.
v If the dropped table has an identity column, the sequence attributes of the
identity column are removed from SYSIBM.SYSSEQUENCES.
v If triggers are defined on the table, they are dropped, and the corresponding
rows are removed from SYSIBM.SYSTRIGGERS and SYSIBM.SYSPACKAGES.
v Any views based on the table are dropped.
v Application plans or packages that involve the use of the table are invalidated.
v Cached dynamic statements that involve the use of the table are removed from
the cache.
v Synonyms for the table are dropped from SYSIBM.SYSSYNONYMS.
v Indexes created on any columns of the table are dropped.
v Referential constraints that involve the table are dropped. In this case, the project
table is no longer a dependent of the department and employee tables, nor is it a
parent of the project activity table.
v Authorization information that is kept in the DB2 catalog authorization tables is
updated to reflect the dropping of the table. Users who were previously
authorized to use the table, or views on it, no longer have those privileges,
because catalog rows are deleted.
v Access path statistics and space statistics for the table are deleted from the
catalog.
v The storage space of the dropped table might be reclaimed.
If the table space containing the table is:
– Implicitly created (using CREATE TABLE without the TABLESPACE clause),
the table space is also dropped. If the data sets are in a storage group,
dropping the table space reclaims the space. For user-managed data sets, you
must reclaim the space yourself.
– Partitioned, or contains only the one table, you can drop the table space.
– Segmented, DB2 reclaims the space.
– Simple, and contains other tables, you must run the REORG utility to reclaim
the space.
Finding dependent views: The following example query lists the views, with their
creators, that are affected if you drop the project table:
SELECT DNAME, DCREATOR
FROM SYSIBM.SYSVIEWDEP
WHERE BNAME = ’PROJ’
AND BCREATOR = ’DSN8810’
AND BTYPE = ’T’;
Finding dependent packages: The next example lists the packages, identified by the
package name, collection ID, and consistency token (in hexadecimal
representation), that are affected if you drop the project table:
SELECT DNAME, DCOLLID, HEX(DCONTOKEN)
FROM SYSIBM.SYSPACKDEP
WHERE BNAME = ’PROJ’
AND BQUALIFIER = ’DSN8810’
AND BTYPE = ’T’;
Finding dependent plans: The next example lists the plans, identified by plan name,
that are affected if you drop the project table:
SELECT DNAME
FROM SYSIBM.SYSPLANDEP
WHERE BNAME = ’PROJ’
AND BCREATOR = ’DSN8810’
AND BTYPE = ’T’;
Re-creating a table
| To re-create a DB2 table to decrease the length attribute of a string column or the
precision of a numeric column, follow these steps:
1. If you do not have the original CREATE TABLE statement and all authorization
statements for the table (call it T1), query the catalog to determine its
description, the description of all indexes and views on it, and all users with
privileges on it.
2. Create a new table (call it T2) with the desired attributes.
3. Copy the data from the old table T1 into the new table T2 by using one of the
following methods:
v Execute the following INSERT statement:
INSERT INTO T2
SELECT * FROM T1;
v Load data from your old table into the new table by using the INCURSOR
option of the LOAD utility. This option uses the DB2 UDB family
cross-loader function.
4. Execute the statement DROP TABLE T1. If T1 is the only table in an explicitly
created table space, and you do not mind losing the compression dictionary, if
one exists, drop the table space instead, so that the space is reclaimed.
5. Commit the DROP statement.
6. Use the statement RENAME TABLE to rename table T2 to T1.
7. Run the REORG utility on the table space that contains table T1.
8. Notify users to re-create any synonyms, indexes, views, and authorizations they
had on T1.
If you want to change a data type from string to numeric or from numeric to
string (for example, INTEGER to CHAR or CHAR to INTEGER), use the CHAR
and DECIMAL scalar functions in the SELECT statement to do the conversion.
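For example, assuming hypothetical columns ACCTNO (INTEGER) and AMT
(CHAR) in T1 that become a CHAR column and a DECIMAL column in T2, the
copy step might look like this:
INSERT INTO T2
SELECT CHAR(ACCTNO), DECIMAL(AMT, 9, 2)
FROM T1;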
Another alternative is to use the following method:
1. Use UNLOAD or REORG UNLOAD EXTERNAL (if the data to unload is less
than 32 KB) to save the data in a sequential file, and then
2. Use the LOAD utility to repopulate the table after re-creating it. When you
reload the table, make sure you edit the LOAD statement to match the new
column definition.
This method is particularly appealing when you are trying to re-create a large
table.
For other changes, you must drop and re-create the index as described in
“Dropping and redefining an index” on page 94.
| When you add a new column to an index, change how varying-length columns are
| stored in the index, or change the data type of a column in the index, DB2 creates
| a new version of the index, as described in “Index versions” on page 94.
| For example, assume that you have a table that was created with columns that
| include ACCTID, STATE, and POSTED:
| CREATE TABLE TRANS
| (ACCTID ...,
| STATE ...,
| POSTED ...,
| ... , ...)
| ...;
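| Assume also that an index named STATE_IX exists on the STATE column; its
| definition, which is implied by the ALTER INDEX statement that follows, would
| take this form:
| CREATE INDEX STATE_IX ON TRANS(STATE);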
| To add a ZIPCODE column to the table and the index, issue the following
| statements:
| ALTER TABLE TRANS ADD COLUMN ZIPCODE CHAR(5);
| ALTER INDEX STATE_IX ADD COLUMN (ZIPCODE);
| COMMIT;
| Because the ALTER TABLE and ALTER INDEX statements are executed within the
| same unit of work, DB2 can use the new index with the key STATE, ZIPCODE
| immediately for data access.
| Restriction: You cannot add a column to an index that enforces a primary key,
| unique key, or referential constraint.
| The ALTER INDEX statement is successful only if the index has at least one
| varying-length column.
| When you alter the padding attribute of an index, the index is placed into a
| restricted REBUILD-pending (RBDP) state. When you alter the padding attribute of
| a nonpartitioned secondary index (NPSI), the index is placed into a page set
| REBUILD-pending (PSRBD) state. In both cases, the indexes cannot be accessed
| until they are rebuilt from the data.
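| For example, the following statement (a sketch, using the IX2 index from the
| earlier example) alters the padding attribute:
| ALTER INDEX IX2 NOT PADDED;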
| If an index established the partitions for the table space, you can use the ALTER
INDEX statement for that index and a REORG job to shift data among the affected
partitions. The result is that data is balanced according to your specifications. You
can rebalance data by changing the limit key values of all or most of the partitions.
If you want to drop a unique index, you must take additional steps before running
a DROP INDEX statement. Any primary key, unique key, or referential constraints
associated with the unique index must be dropped before you drop the unique
index. However, you can drop a unique index for a unique key without dropping
the unique constraint if the unique key was created before Version 8.
You must commit the DROP INDEX statement before you create any new table
spaces or indexes by the same name. If an index is dropped and then an
application program using that index is run (and thereby automatically rebound),
that application program does not use the old index. If, at a later time, the index is
re-created, and the application program is not rebound, the application program
cannot take advantage of the new index.
| Index versions
| DB2 creates at least one index version each time you commit one of the following
| schema changes:
| v Change the data type of a non-numeric column that is contained in one or more
| indexes by using the ALTER TABLE statement
| DB2 creates a new index version for each index that is affected by this operation.
# v Change the length of a VARCHAR column that is contained in one or more
# PADDED indexes by using the ALTER TABLE statement
# DB2 creates a new index version for each index that is affected by this operation.
| v Add a column to an index by using the ALTER INDEX statement
| DB2 creates one new index version, because only one index is affected by this
| operation.
# Exceptions: DB2 does not create an index version under the following
# circumstances:
| v When the index was created with DEFINE NO
# v When you extend the length of a varying character (varchar data type) or
# varying graphic (vargraphic data type) column that is contained in one or more
# NOT PADDED indexes
| v When you specify the same data type and length that a column—which is
| contained in one or more indexes—currently has, such that its definition does
| not actually change
| DB2 creates only one index version if, in the same unit of work, you make
| multiple schema changes to columns contained in the same index. If you make
| these same schema changes in separate units of work, each change results in a new
| index version.
| Reorganizing indexes: DB2 uses index versions to maximize data availability. Index
| versions enable DB2 to keep track of schema changes and, at the same time, provide
| indexes. When users retrieve rows from a table with an altered column, the data is
| displayed in the format that is described by the most recent schema definition,
| even though the data is not currently stored in this format. The most recent
| schema definition is associated with the current index version.
| Recycling index version numbers: DB2 can store up to 16 index versions, numbered
| sequentially from 0 to 15. (The next consecutive version number after 15 is 1.
| Version number 0 is never reused; it is reserved for the original version of the
| index.) The versions that are associated with schema changes that have not been
| applied yet are considered to be “in use,” and the range of used versions is stored
| in the catalog. In-use versions can be recovered from image copies of the table
| space, if necessary. To determine the range of version numbers currently in use for
| an index, query the OLDEST_VERSION and CURRENT_VERSION columns of the
| SYSIBM.SYSINDEXES catalog table.
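| For example, with placeholder values:
| SELECT OLDEST_VERSION, CURRENT_VERSION
| FROM SYSIBM.SYSINDEXES
| WHERE CREATOR = ’creator’ AND NAME = ’index-name’;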
| To prevent DB2 from running out of index version numbers (and to prevent
| subsequent ALTER statements from failing), you must recycle unused index
| version numbers regularly:
| v For indexes defined as COPY YES, run the MODIFY RECOVERY utility.
| If all reusable version numbers (1 to 15) are currently in use, you must
| reorganize the index by running REORG INDEX before you can recycle the
| version numbers.
| v For indexes defined as COPY NO, run the REORG TABLESPACE, REORG
| INDEX, LOAD REPLACE, or REBUILD INDEX utility.
| These utilities recycle the version numbers in the process of performing their
| primary functions.
| Version numbers are considered to be unused if the schema changes that are
| associated with them have been applied and there are no image copies that contain
| data at those versions. For more information about managing index version
| numbers, see DB2 Utility Guide and Reference.
When you drop a view, DB2 invalidates application plans and packages that are
dependent on the view and revokes the privileges of users who are authorized to
use it. DB2 attempts to rebind the package or plan the next time it is executed, and
you receive an error if you do not re-create the view.
To tell how much rebinding and reauthorizing is needed if you drop a view, check
the catalog tables in Table 21.
Table 21. Catalog tables to check after dropping a view
Catalog table What to check
SYSIBM.SYSPLANDEP Application plans dependent on the view
SYSIBM.SYSPACKDEP Packages dependent on the view
SYSIBM.SYSVIEWDEP Views dependent on the view
SYSIBM.SYSTABAUTH Users authorized to use the view
For more information about defining and dropping views, see Chapter 5 of DB2
SQL Reference.
In this example, two functions named CENTER exist in the SMITH schema. The
first function has two input parameters with INTEGER and FLOAT data types,
respectively. The specific name for the first function is FOCUS1. The second
function has three parameters with CHAR(25), DEC(5,2), and INTEGER data types.
Using the specific name to identify the function, change the WLM environment in
which the first function runs from WLMENVNAME1 to WLMENVNAME2:
ALTER SPECIFIC FUNCTION SMITH.FOCUS1
WLM ENVIRONMENT WLMENVNAME2;
This example changes the second function, which is identified by its signature, so
that it returns null when any of its arguments are null:
ALTER FUNCTION SMITH.CENTER (CHAR(25), DEC(5,2), INTEGER)
RETURNS NULL ON NULL CALL;
| Example: Assume that you have a very large transaction table named TRANS that
| contains one row for each transaction. The table includes the following columns:
| v ACCTID, which is the customer account ID
| v POSTED, which holds the date of the transaction
| The table space that contains TRANS is divided into 13 partitions, each of which
| contains one month of data. Two existing indexes are defined as follows:
| v A partitioning index is defined on the transaction date by the following CREATE
| INDEX statement with a PARTITION ENDING AT clause:
| CREATE INDEX IX1 ON TRANS(POSTED)
| CLUSTER
| (PARTITION 1 ENDING AT (’01/31/2002’),
| PARTITION 2 ENDING AT (’02/28/2002’),
| ...
| PARTITION 13 ENDING AT (’01/31/2003’));
| The partitioning index is the clustering index, and the data rows in the table are
| in order by the transaction date. The partitioning index controls the partitioning
| of the data in the table space.
| v A nonpartitioning index is defined on the customer account ID:
| CREATE INDEX IX2 ON TRANS(ACCTID);
| DB2 usually accesses the transaction table through the customer account ID by
| using the nonpartitioning index IX2. The partitioning index IX1 is not used for
| data access and is wasting space. In addition, you have a critical requirement for
| availability on the table, and you want to be able to run an online REORG job at
| the partition level with minimal disruption to data availability.
| To save space and to facilitate reorganization of the table space, you can drop the
| partitioning index IX1, and you can replace the access index IX2 with a partitioned
| clustering index that matches the 13 data partitions in the table.
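| A plausible statement sequence, reconstructed from the description that follows
| (IX3 is the name of the new partitioned clustering index), is:
| DROP INDEX IX1;
| COMMIT;
| CREATE INDEX IX3 ON TRANS(ACCTID)
| PARTITIONED CLUSTER;
| COMMIT;
| DROP INDEX IX2;
| COMMIT;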
| What happens:
| v When you drop the partitioning index IX1, DB2 converts the table space from
| index-controlled partitioning to table-controlled partitioning. DB2 changes the
| high limit key value that was originally specified to the highest value for the key
| column.
| v When you create the index IX3, DB2 creates a partitioned index with 13
| partitions that match the 13 data partitions in the table. Each index partition
| contains the account numbers for the transactions during that month, and those
| account numbers are ordered within each partition. For example, partition 11 of
| the index matches the table partition that contains the transactions for
| November, 2002, and it contains the ordered account numbers of those
| transactions.
| v You drop the nonpartitioning index IX2 because it has been replaced by IX3.
| You can now run an online REORG at the partition level with minimal impact on
| availability. For example:
| REORG TABLESPACE dbname.tsname PART 11
| SHRLEVEL CHANGE
| Running this utility reorganizes the data for partition 11 of dbname.tsname. The data
| rows are ordered within each partition to match the ordering of the clustering
| index.
| Recommendation:
| v Drop a partitioning index if it is used only to define partitions. When you drop
| a partitioning index, DB2 automatically converts the associated index-controlled
| partitioned table space to a table-controlled partitioned table space.
| v You can create a data-partitioned secondary index (DPSI) as the clustering index
| so that the data rows are ordered within each partition of the table space to
| match the ordering of the keys of the DPSI.
| v Create any new tables in a partitioned table space by using the PARTITION BY
| clause and the PARTITION ENDING AT clause in the CREATE TABLE
| statement to specify the partitioning key and the limit key values.
These procedures do not actually move or copy data. For information about
moving data, see “Moving DB2 data” on page 105.
Changing the high-level qualifier for DB2 data sets is a complex task, so you
should have experience both with DB2 and with managing user catalogs. The
following tasks are described:
v “Defining a new integrated catalog alias” on page 99
v “Changing the qualifier for system data sets,” which includes the DB2 catalog,
directory, active and archive logs, and the BSDS
v “Changing qualifiers for other databases and user data sets” on page 102, which
includes the work file database (DSNDB07), the default database (DSNDB04),
and other DB2 databases and user databases
To concentrate on DB2-related issues, this procedure assumes that the catalog alias
resides in the same user catalog as the one that is currently used. If the new catalog
alias resides in a different user catalog, see DFSMS/MVS: Access Method Services for
the Integrated Catalog for information about planning such a move.
If the data sets are managed by the Storage Management Subsystem (SMS), make
sure that automatic class selection routines are in place for the new data set name.
Set up the new high-level qualifier as an alias to a current integrated catalog, using
the following access method services command:
DEFINE ALIAS (NAME (newcat) RELATE (usercat) CATALOG (master-cat))
See DFSMS/MVS: Access Method Services for the Integrated Catalog for more
information.
The output from the CLIST is a new set of tailored JCL with new cataloged
procedures and a DSNTIJUZ job, which produces a new member.
3. Run DSNTIJUZ.
Unless you have specified a new name for the load module, make sure the
output load module does not go to the SDSNEXIT or SDSNLOAD library used
by the active DB2 subsystem.
DSNTIJUZ also contains steps that place archive log data sets into the BSDS and
that create a new DSNHDECP member. You do not need to run those steps,
because they are unnecessary for changing the high-level qualifier.
DB2 table spaces are defined as linear data sets with DSNDBC as the second node
of the name for the cluster and DSNDBD for the data component (as described in
“Requirements for your own data sets” on page 38). The examples shown here
assume the normal defaults for DB2 and VSAM data set names. Use access method
services statements with a generic name (*) to simplify the process. Access method
services allows only one generic name per data set name string.
1. Using IDCAMS, change the names of the catalog and directory table spaces.
Also, be sure to specify the instance qualifier of your data set, y, which can be
either I or J:
ALTER oldcat.DSNDBC.DSNDB01.*.y0001.A001 -
NEWNAME (newcat.DSNDBC.DSNDB01.*.y0001.A001)
ALTER oldcat.DSNDBD.DSNDB01.*.y0001.A001 -
NEWNAME (newcat.DSNDBD.DSNDB01.*.y0001.A001)
ALTER oldcat.DSNDBC.DSNDB06.*.y0001.A001 -
NEWNAME (newcat.DSNDBC.DSNDB06.*.y0001.A001)
ALTER oldcat.DSNDBD.DSNDB06.*.y0001.A001 -
NEWNAME (newcat.DSNDBD.DSNDB06.*.y0001.A001)
2. Using IDCAMS, change the active log names. Active log data sets are named
oldcat.LOGCOPY1.COPY01 for the cluster component and
oldcat.LOGCOPY1.COPY01.DATA for the data component.
ALTER oldcat.LOGCOPY1.* -
NEWNAME (newcat.LOGCOPY1.*)
ALTER oldcat.LOGCOPY1.*.DATA -
NEWNAME (newcat.LOGCOPY1.*.DATA)
ALTER oldcat.LOGCOPY2.* -
NEWNAME (newcat.LOGCOPY2.*)
ALTER oldcat.LOGCOPY2.*.DATA -
NEWNAME (newcat.LOGCOPY2.*.DATA)
3. Using IDCAMS, change the BSDS names.
ALTER oldcat.BSDS01 -
NEWNAME (newcat.BSDS01)
ALTER oldcat.BSDS01.* -
NEWNAME (newcat.BSDS01.*)
ALTER oldcat.BSDS02 -
NEWNAME (newcat.BSDS02)
ALTER oldcat.BSDS02.* -
NEWNAME (newcat.BSDS02.*)
Step 6: Start DB2 with the new xxxxMSTR and load module
Use the START DB2 command with the new load module name as shown here:
-START DB2 PARM(new_name)
Change the databases in the following list that apply to your environment:
v DSNDB07 (work file database)
v DSNDB04 (default database)
v DSNDDF (communications database)
v DSNRLST (resource limit facility database)
v DSNRGFDB (the database for data definition control)
v Any other application databases that use the old high-level qualifier
Table spaces and indexes that span more than one data set require special
procedures. Partitioned table spaces can have different partitions allocated to
different DB2 storage groups. Nonpartitioned table spaces or indexes have only
the additional data sets to rename (those with the lowest-level name of A002,
A003, and so on).
New installation:
1. Reallocate the database by using the installation job DSNTIJTM from
prefix.SDSNSAMP.
2. Modify your existing job: remove the BIND step for DSNTIAD, and change the
data set names in the DSNTTMP step to your new names, making sure that
you include your current allocations.
You can rename the data sets while DB2 is down. These steps are included here
because the names must be generated for each database, table space, and index
space that is to change.
Although DB2 data sets are created using VSAM access method services, they are
specially formatted for DB2 and cannot be processed by services that use VSAM
record processing. They can be processed by VSAM utilities that use
control-interval (CI) processing and, if they are linear data sets (LDSs), also by
utilities that recognize the LDS type.
Furthermore, copying the data might not be enough. Some operations require
copying DB2 object definitions. And when copying from one subsystem to another,
you must consider internal values that appear in the DB2 catalog and the log, for
example, the DB2 object identifiers (OBIDs) and log relative byte addresses (RBAs).
You might also want to use the following tools to move DB2 data:
v The DB2 DataPropagator is a licensed program that can extract data from DB2
tables, DL/I databases, VSAM files, and sequential files. For instructions, see
“Loading data from DL/I” on page 65.
v DFSMS, which contains the following functional components:
– Data Set Services (DFSMSdss)
Use DFSMSdss to copy data between disk devices. For instructions, see Data
Facility Data Set Services: User's Guide and Reference. You can use online panels
to control this, through the Interactive Storage Management Facility (ISMF)
that is available with DFSMS; for instructions, refer to z/OS DFSMSdfp Storage
Administration Reference.
– Data Facility Product (DFSMSdfp)
This is a prerequisite for DB2. You can use the access method services EXPORT
and IMPORT commands with DB2 data sets when control interval processing is
used.
Some of the listed tools rebuild the table space and index space data sets, and
therefore they generally take longer to run than the tools that merely copy them.
The tools that rebuild are REORG and LOAD, RECOVER and REBUILD,
DSNTIAUL, and DataRefresher. The tools that merely copy data sets are
DSN1COPY, DFSMSdss, DFSMSdfp EXPORT and IMPORT, and DFSMShsm.
DSN1COPY is fairly efficient in use, but somewhat complex to set up. It requires a
separate job step to allocate the target data sets, one job step for each data set to
copy the data, and a step to delete or rename the source data sets. DFSMSdss,
DFSMSdfp, and DFSMShsm all simplify the job setup significantly.
The following procedures differ mainly in that the first one assumes you do not
want to reorganize or recover the data. Generally, this means that the first
procedure is faster. In all cases, make sure that there is enough space on the target
volume to accommodate the data set.
If you use storage groups, then you can change the storage group definition to
include the new volumes, as described in “Altering DB2 storage groups” on page
67.
As with the other operations, DSN1COPY is likely to execute faster than the other
applicable tools. It copies directly from one data set to another, while the other
tools extract input for LOAD, which then loads table spaces and builds indexes.
Only two of the tools listed are applicable: DFSMSdss DUMP and RESTORE, and
DFSMSdfp EXPORT and IMPORT. Refer to the documentation on those programs
for the most recent information about their use.
Record overhead: Allows for eight bytes of record header and control data, plus
space wasted for records that do not fit exactly into a DB2 page. For the second
consideration, see “Choosing a page size” on page 45. The factor can range from
about 1.01 (for a careful space-saving design) to as great as 4.0. A typical value is
about 1.10.
Free space: Allows for space that is intentionally left empty for inserts and
updates. You can specify this factor on the CREATE TABLESPACE statement; see
“Specifying free space on pages” on page 620 for more information. The factor can
range from 1.0 (for no free space) to 200 (99% of each page left free, plus a free
page following each used page). With default values, the factor is about 1.05.
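As a rough cross-check of these numbers (an approximation, not a formula from
this guide): leaving p percent of each page free multiplies space requirements by
100 / (100 - p), and leaving one free page for every n used pages multiplies them
by (n + 1) / n. With the default PCTFREE value of 5 and no free pages, the factor
is 100 / 95 ≅ 1.05; with PCTFREE 99 and FREEPAGE 1, it is (100 / 1) × 2 = 200,
the maximum mentioned above.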
Unusable space: Track lengths in excess of the nearest multiple of page lengths.
Table 24 shows the track size, number of pages per track, and the value of the
unusable-space factor for several different device types.
Table 24. Unusable space factor by device type
Device type Track size Pages per track Factor value
3380 47476 10 1.16
3390 56664 12 1.15
Indexes: Allows for storage for indexes to data. For data with no indexes, the factor
is 1.0. For a single index on a short column, the factor is 1.01. If every column is
indexed, the factor can be greater than 2.0. A typical value is 1.20. For further
discussion of the factor, see “Calculating the space required for an index” on page
117.
Table 25 shows calculations of the multiplier M for three different database designs:
v The tight design is carefully chosen to save space and allows only one index on
a single, short field.
v The loose design allows a large value for every factor, but still well short of the
maximum. Free space adds 30% to the estimate, and indexes add 40%.
v The medium design has values between the other two. You might want to use
these values in an early stage of database design.
In each design, the device type is assumed to be a 3390. Therefore, the
unusable-space factor is 1.15. M is always the product of the five factors.
Table 25. Calculations for different database designs
Factor                  Tight design   Medium design   Loose design
Record overhead   ×         1.02           1.10            1.30
Free space        ×         1.00           1.05            1.30
Unusable space    ×         1.15           1.15            1.15
Data set excess   ×         1.02           1.10            1.30
Indexes           =         1.02           1.20            1.40
Multiplier M                1.22           1.75            3.54
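To apply the multiplier, multiply the amount of actual data by M (a sketch; the
full estimation procedure appears earlier in this section). For example, with the
medium design, a table of 100000 rows that average 80 bytes of data requires
roughly 100000 × 80 × 1.75 ≅ 14 million bytes of disk space.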
In addition to the space for your data, external storage devices are required for:
v Image copies of data sets, which can be on tape
v System libraries, system databases, and the system log
v Temporary work files for utility and sort jobs
A rough estimate of the additional external storage needed is three times the
amount calculated for disk storage.
Also consider:
v Normalizing your entities
v Using larger page sizes
v Using LOB data types if a single column in a table is greater than 32 K
In addition to the bytes of actual data in the row (not including LOB data, which is
not stored in the base row or included in the total length of the row), each record
has:
v A six-byte prefix
v One additional byte for each column that can contain null values
v Two additional bytes for each varying-length column or ROWID column
v Six bytes of descriptive information in the base table for each LOB column
The sum of each column’s length is the record length, which is the length of data
that is physically stored in the table. You can retrieve the value of the
AVGROWLEN column in the SYSIBM.SYSTABLES catalog table to determine the
average length of rows within a table. The logical record length can be longer, for
example, if the table contains LOBs.
To simplify the calculation of record and page length, consider the directory entry
as part of the record. Then, every record has a fixed overhead of 8 bytes, and the
space available to store records in a 4 KB page is 4074 bytes. Achieving that
maximum in practice is not always simple. For example, if you are using the
default values, the LOAD utility leaves approximately 5 percent of a page as free
space when loading more than one record per page. Therefore, if two records are
to fit in a page, each record cannot be longer than 1934 bytes (approximately 0.95 ×
4074 × 0.5).
Furthermore, the page size of the table space in which the table is defined limits
the record length. If the table space is 4 KB, the record length of each record cannot
be greater than 4056 bytes. Because of the 8-byte overhead for each record, the sum
of column lengths cannot be greater than 4048 bytes (4056 minus the 8-byte
overhead for a record).
DB2 provides three larger page sizes to allow for longer records. You can improve
performance by using pages for record lengths that best suit your needs. For
details on selecting an appropriate page size, see “Choosing a page size” on page
45.
As shown in Table 26 on page 114, the maximum record size for each page size
depends on the size of the table space and on whether you specified the
EDITPROC clause.
Creating a table using CREATE TABLE LIKE in a table space of a larger page size
changes the specification of LONG VARCHAR to VARCHAR and LONG
VARGRAPHIC to VARGRAPHIC. You can also use CREATE TABLE LIKE to create
a table with a smaller page size in a table space if the maximum record size is
within the allowable record size of the new table space.
The disk saved by data compression is countered by the disk required for a
dictionary. Every compressed table space or partition requires a dictionary. See
“Calculating the space required for a dictionary” on page 116 to figure the disk
requirements and the virtual storage requirements for a dictionary.
When estimating the storage required for LOB table spaces, begin with your
estimates from other table spaces, round up to the next page size, and then
multiply by 1.1. One page never contains more than one LOB. When a LOB value
is deleted, the space occupied by that value remains allocated as long as any
application might access that value.
An auxiliary table resides in a LOB table space. There can be only one auxiliary
table in a LOB table space. An auxiliary table can store only one LOB column of a
base table and there must be one and only one index on this column.
See DB2 Installation Guide for information about storage options for LOB values.
For example, consider a table space containing a single table with the following
characteristics:
v Number of records = 100000
v Average record size = 80 bytes
v Page size = 4 KB
v PCTFREE = 5 (5% of space is left free on each page)
v FREEPAGE = 20 (one page is left free for each 20 pages used)
v MAXROWS = 255
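The detailed formulas for this example are not reproduced here. The following
sketch shows one way to carry the numbers through, assuming the conventions
that are described later in this section: an 8-byte record overhead (88 bytes per
stored record) and 4074 usable bytes per 4-KB page:
Usable space per page ≅ FLOOR((100 - 5) × 4074 / 100) = 3870
Records per page ≅ MIN(255, FLOOR(3870 / 88)) = 43
Pages for records ≅ CEILING(100000 / 43) = 2326
Total pages, with one free page per 20 ≅ FLOOR(2326 × 21 / 20) = 2442
Estimated size ≅ 2442 × 4 KB, or about 9.5 MB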
Disk requirements
This section helps you calculate the disk requirements for a dictionary associated
with a compressed nonsegmented table space and for a dictionary associated with
a compressed segmented table space.
Nonsegmented table space: The dictionary contains 4096 entries in most cases.
This means you need to allocate an additional sixteen 4-KB pages, eight 8-KB
pages, four 16-KB pages, or two 32-KB pages. Although it is possible that your
dictionary can contain fewer entries, allocate enough space to accommodate a
dictionary with 4096 entries. For 32-KB pages, one segment (minimum of four
pages) is sufficient to contain the dictionary. Use Table 27 to determine how many
4-KB pages, 8-KB pages, 16-KB pages, or 32-KB pages to allocate for the dictionary
of a compressed nonsegmented table space.
Table 27. Pages required for the dictionary of a compressed nonsegmented table space
Table space       Dictionary size   Dictionary size   Dictionary size   Dictionary size   Dictionary size
page size (KB)    (512 entries)     (1024 entries)    (2048 entries)    (4096 entries)    (8192 entries)
4                 2                 4                 8                 16                32
8                 1                 2                 4                 8                 16
16                1                 1                 2                 4                 8
32                1                 1                 1                 2                 4
Segmented table space: The size of the dictionary depends on the size of your
segments. Assume a dictionary with 4096 entries. Use Table 28 to determine how
many 4-KB pages to allocate for the dictionary of a compressed segmented table
space.
Table 28. Pages required for the dictionary of a compressed segmented table space
Segment size      Dictionary size   Dictionary size   Dictionary size   Dictionary size   Dictionary size
(4-KB pages)      (512 entries)     (1024 entries)    (2048 entries)    (4096 entries)    (8192 entries)
4                 4                 4                 8                 16                32
8                 8                 8                 8                 16                32
When a dictionary is read into storage from a buffer pool, the whole dictionary is
read, and it remains there as long as the compressed table space is being accessed.
If an index has more than one leaf page, it must have at least one nonleaf page
that contains entries that point to the leaf pages. If the index has more than one
nonleaf page, then the nonleaf pages whose entries point to leaf pages are said to
be on level 1. If an index has a second level of nonleaf pages whose entries point to
nonleaf pages on level 1, then those nonleaf pages are said to be on level 2, and so
on. The highest level of an index contains a single page, which DB2 creates when it
first builds the index. This page is called the root page. The root page is a 4-KB
index page. Figure 6 on page 118 shows, in schematic form, a typical index.
(Figure 6 shows leaf pages, on level 0, that contain key and record-ID pairs, which
point to individual rows in the table.)
If you insert data with a constantly increasing key, DB2 adds the new highest key
to the top of a new page. Be aware, however, that DB2 treats nulls as the highest
value. When the existing high key contains a null value in the first column that
differentiates it from the new key that is inserted, the inserted nonnull index
entries cannot take advantage of the highest-value split.
Estimating the space requirements for DB2 objects is easier if you collect and
maintain a statistical history of those objects. The accuracy of your estimates
depends on the currentness of the statistical data. To ensure that the statistics
history is current, use the MODIFY STATISTICS utility to delete outdated statistical
data from the catalog history tables.
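For example, a MODIFY STATISTICS job like the following sketch deletes history
rows that are older than 90 days (dbname.tsname is a placeholder; see DB2 Utility
Guide and Reference for the complete syntax):
MODIFY STATISTICS TABLESPACE dbname.tsname
  AGE 90
  DELETE ALL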
The storage required for an index, newly built by the LOAD utility, depends on the
number of index pages at all levels. That, in turn, depends on whether the index is
unique or not. The numbers of leaf pages (index pages that point directly to the
data in your tables) and of nonleaf pages (index pages that contain the page
number and the highest key of each page in the next-level index) are calculated
separately.
An index key on an auxiliary table used for LOBs is 19 bytes and uses the same
formula as other indexes. The RID value stored within the index is 5 bytes, the
same as for large table spaces (defined with DSSIZE greater than or equal to 4 GB).
In general, the length of the index key is the sum of the lengths of all the columns
of the key, plus the number of columns that allow nulls. The length of a
The following index calculations are intended only to help you estimate the storage
required for an index. Because there is no way to predict the exact number of
duplicate keys that can occur in an index, the results of these calculations are not
absolute. It is possible, for example, that for a nonunique index, more index entries
than the calculations indicate might fit on an index page. The calculations are
divided into cases using a unique index and using a nonunique index.
Calculate pages for a unique index: Use the following calculations to estimate the
number of leaf and nonleaf pages in a unique index.
Calculate the total leaf pages:
1. Space per key ≅ k + r + 3
2. Usable space per page ≅ FLOOR((100 - f) × 4038 / 100)
3. Entries per page ≅ FLOOR(usable space per page / space per key)
4. Total leaf pages ≅ CEILING(number of table rows / entries per page)
Calculate the total nonleaf pages:
1. Space per key ≅ k + 7
2. Usable space per page ≅ FLOOR(MAX(90, (100 - f)) × 4046 / 100)
3. Entries per page ≅ FLOOR(usable space per page / space per key)
4. Minimum child pages ≅ MAX(2, (entries per page + 1))
5. Level 2 pages ≅ CEILING(total leaf pages / minimum child pages)
6. Level 3 pages ≅ CEILING(level 2 pages / minimum child pages)
7. Level x pages ≅ CEILING(previous level pages / minimum child pages)
8. Total nonleaf pages ≅ (level 2 pages + level 3 pages + ... + level x pages, until
the number of level x pages = 1)
Calculate pages for a nonunique index: Use the following calculations to estimate
the number of leaf and nonleaf pages for a nonunique index.
Calculate the total leaf pages:
1. Space per key ≅ 4 + k + (n × (r + 1))
Calculate the total space requirement: Finally, calculate the number of kilobytes
required for an index built by LOAD.
1. Free pages ≅ FLOOR(total leaf pages / p), or 0 if p = 0
2. Tree pages ≅ MAX(2, (total leaf pages + total nonleaf pages))
3. Space map pages ≅ CEILING((tree pages + free pages) / 8131)
4. Total index pages ≅ MAX(4, (1 + tree pages + free pages + space map pages))
5. Total space requirement ≅ 4 × (total index pages + 2)
In the following example of the entire calculation, assume that an index is defined
with these characteristics:
v It is unique.
v The table it indexes has 100000 rows.
v The key is a single column defined as CHAR(10) NOT NULL.
v The value of PCTFREE is 5.
v The value of FREEPAGE is 4.
Calculate the leaf and nonleaf pages:
Total leaf pages = CEILING(number of table rows / entries per page) = 445
Total nonleaf pages = (level 2 pages + level 3 pages + ... + level x pages, until
the number of level x pages = 1) = 3
Calculate the total space required:
Free pages = FLOOR(445 / 4) = 111
Tree pages = MAX(2, (445 + 3)) = 448
Space map pages = CEILING((448 + 111) / 8131) = 1
Total index pages = MAX(4, (1 + 448 + 111 + 1)) = 561
Total space requirement = 4 × (561 + 2) = 2252 KB
Security covers control of access, whether to the DB2 subsystem, its data, or its
resources. A security plan sets objectives for a security system, determining who
has access to what, and under which circumstances. The security plan also
describes how to meet the objectives by using functions of DB2, functions of other
programs, and administrative procedures.
Auditing is how you determine whether the security plan is working and who has
accessed data. Auditing includes questions, such as:
v Have attempts been made to gain unauthorized access?
v Is the data in the subsystem accurate and consistent?
v Are system resources used efficiently?
Because the two topics are not the same, this chapter suggests different ways to
approach the information about security and auditing.
| Multilevel security: You can use multilevel security for object-level access control,
| and you can use multilevel security with row-level granularity. For information
| about multilevel security, see “Multilevel security” on page 191.
| Encryption: You can use new built-in functions for data encryption and decryption
| that let you protect sensitive data as it is stored in or retrieved from a DB2
| subsystem. For more information about encryption, see “Data encryption through
| built-in functions” on page 206.
| Session variables and special registers: You can use new special registers and
| session variables to facilitate information sharing between applications. You can
| also use these new values to help enforce a security policy. For more information
| about new session variables and special registers, see Chapter 9, “Controlling
| access to DB2 objects,” on page 133.
If you are also interested in controlling data access, first read “Controlling data
access.” Then read Chapter 9, “Controlling access to DB2 objects,” on page 133.
Finally, read Chapter 13, “Auditing,” on page 285.
Several routes exist from a process to DB2 data. DB2 controls each route except the
data set protection route, as shown in Figure 7 on page 129.
One of the ways that DB2 controls access to data is through the use of identifiers
(IDs). Four types of IDs exist:
Primary authorization ID
Generally, the primary authorization ID identifies a process. For example,
statistics and performance trace records use a primary authorization ID to
identify a process.
Secondary authorization ID
A secondary authorization ID, which is optional, can hold additional
privileges that are available to the process. For example, a secondary
authorization ID can be a Resource Access Control Facility (RACF) group
ID.
SQL ID
An SQL ID holds the privileges that are exercised when certain dynamic
SQL statements are issued. The SQL ID can be set equal to the primary ID
or any of the secondary IDs. If an authorization ID of a process has
SYSADM authority, the process can set its SQL ID to any authorization ID.
| RACF ID
| The RACF ID is generally the source of the primary and secondary
| authorization IDs (RACF groups). When you use the RACF Access Control
| Module or multilevel security, the RACF ID is used directly.
DB2 relies only on IDs to determine whether to allow or prohibit certain processes.
An ID can hold privileges that allow the ID to take certain actions and that
prohibit the ID from taking other actions. DB2 does not determine access control
based on the process or the person accessing data. Therefore, if the same set of IDs
is associated with two different accesses to DB2, DB2 cannot determine whether
the IDs involve the same process.
Therefore, this book uses phrases like “an ID owns an object” instead of “a person
owns an object” to discuss access control within DB2.
DB2 also defines sets of related privileges, called administrative authorities. When
you grant one of the administrative authorities to a person’s ID, that person has all
of the privileges that are associated with that administrative authority. You can
efficiently grant many privileges by granting one administrative authority.
You can also efficiently grant multiple privileges by granting the privilege to
execute an application plan or a package. When an ID executes a plan or package,
the ID implicitly uses all of the privileges that the owner needed when binding the
plan or package. Therefore, granting to an ID the privilege to execute a plan or
package can provide a finely detailed set of privileges and can eliminate the need
to grant other privileges separately.
For more information about granting privileges to execute a plan or package, see
“Privileges exercised through a plan or a package” on page 148.
DB2 provides separate controls for creation and ownership of objects. When you
create an object, your ID can own the object, or another ID can own the object.
| For more information about multilevel security, see “Multilevel security” on page
| 191
The RACF system provides several advantages of its own. For example, RACF can:
v Identify and verify the identifier that is associated with a process
v Connect those identifiers to RACF group names
v Log and report unauthorized attempts to access protected resources
When controlling access from remote locations, RACF can do the following:
v Verify an ID that is associated with a remote attachment request and check the
ID with a password.
v Generate PassTickets on the sending side. PassTickets can be used instead of
passwords. A PassTicket lets a user gain access to a host system without sending
the RACF password across the network. For more information about RACF
PassTickets, see “Sending RACF PassTickets” on page 263.
The communications database: You can also control access authentication by using
the DB2 communications database (CDB). The CDB is a set of tables in the DB2
catalog that are used to establish conversations with remote database management
systems. The CDB can translate IDs before it sends the IDs to the remote system.
For more information about CDB requests, see “The communications database for
the requester” on page 253. For more information about CDB controls on the
server, see “The communications database for the server” on page 240.
If you use RACF or a similar security system to control access to DB2, the simplest
way to control data set access outside of DB2 is to use RACF. If you want to
use RACF for data set protection outside of DB2, define RACF profiles for data
sets, and permit access to the data sets for certain DB2 IDs.
If your data is very sensitive, consider encrypting the data. Encryption protects
against unauthorized access to data sets and to backup copies outside of DB2. You
have the following encryption options for protecting sensitive data:
v Built-in data encryption functions, which are described in “Data encryption
through built-in functions” on page 206.
v DB2 edit procedures or field procedures, which can use the Integrated
Cryptographic Service Facility (ICSF). For information about the ICSF, see z/OS
ICSF Overview.
v The IBM Data Encryption for IMS and DB2 Databases tool, which is described in
IBM Data Encryption for IMS and DB2 Databases User's Guide.
Data compression is not a substitute for encryption. In some cases, the compression
method does not actually shorten the data. In those cases, the data is left
uncompressed and readable. If you encrypt and compress your data, compress it
first. After you obtain the maximum compression, encrypt the result. When you
retrieve your data, first decrypt the data. After the data is decrypted, decompress
the result.
| DB2 controls access to its objects by a set of privileges. Each privilege allows a
| specific action to be taken on some object. Figure 8 shows the four primary ways
| within DB2 to give an ID access to data.1
|
| Figure 8. Access to data within DB2
|
As a security planner, you must be aware of every way to allow access to data.
Before you write a security plan, see the following sections:
v “Explicit privileges and authorities”
v “Implicit privileges of ownership” on page 145
v “Privileges exercised through a plan or a package” on page 148
v “Access control for user-defined functions and stored procedures” on page 155
| v “Multilevel security” on page 191
v “Data encryption through built-in functions” on page 206
DB2 has primary authorization IDs, secondary authorization IDs, and SQL IDs.
Some privileges can be exercised only by one type of ID; other privileges can be
exercised by more than one. To decide which IDs should hold specific privileges,
see “Which IDs can exercise which privileges” on page 161.
After you decide which IDs should hold specific privileges, you can implement a
security plan. Before you begin your plan, you can see what others have done in
“Matching job titles with privileges” on page 171 and “Examples of granting and
revoking privileges” on page 173.
The DB2 catalog records the privileges that IDs are granted and the objects that
IDs own. To check the implementation of your security plan, see “Finding catalog
information about privileges” on page 187.
1. Certain authorities are assigned when DB2 is installed, and can be reassigned by changing the subsystem parameter
(DSNZPARM). You can consider changing the DSNZPARM value to be a fifth way of granting data access in DB2.
Authorization IDs
Every process that connects to or signs on to DB2 is represented by a set of one or
more DB2 short identifiers that are called authorization IDs. Authorization IDs can
be assigned to a process by default procedures or by user-written exit routines.
Methods of assigning those IDs are described in detail in Chapter 11, “Controlling
access to a DB2 subsystem,” on page 231; see especially Table 70 on page 233 and
Table 71 on page 234.
When authorization IDs are assigned, every process receives exactly one ID that is
called the primary authorization ID. All other IDs are secondary authorization IDs.
Example of changing the SQL ID: Suppose that ALPHA is your primary
authorization ID or one of your secondary authorization IDs. You can make it your
current SQL ID by issuing the following SQL statement:
SET CURRENT SQLID = ’ALPHA’;
If you issue the statement through the distributed data facility (DDF), ALPHA
must be one of the IDs that are associated with your process at the location where
the statement runs. Your primary ID can be translated before it is sent to a remote
location. Secondary IDs are associated with your process at the remote location.
The current SQL ID, however, is not translated. For more information about
authorization IDs and remote locations, see “Controlling requests from remote
applications” on page 238.
An ID with SYSADM authority can set the current SQL ID to any string whose
length is less than or equal to 8 bytes.
| Authorization IDs that are sent to a connecting server must conform to the security
| management product guidelines of the server. See the documentation of the
| security product for the connecting server.
Table 37 shows the table and view privileges that DB2 allows.
Table 37. Explicit table and view privileges
Table or view privilege SQL statements allowed for a named table or view
ALTER ALTER TABLE, to change the table definition
DELETE DELETE, to delete rows
INDEX CREATE INDEX, to create an index on the table
INSERT INSERT, to insert rows
REFERENCES ALTER or CREATE TABLE, to add or remove a referential
constraint referring to the named table or to a list of columns in
the table
SELECT SELECT, to retrieve data from the table
TRIGGER CREATE TRIGGER, to define a trigger on a table
Privileges needed for statements, commands, and utility jobs: For lists of all
privileges and authorities that let you perform the following actions, consult the
appropriate resource:
v To execute a particular SQL statement, see the description of the statement in
Chapter 5 of DB2 SQL Reference.
v To issue a particular DB2 command, see the description of the command in
Chapter 2 of DB2 Command Reference.
v To run a particular type of utility job, see the description of the utility in DB2
Utility Guide and Reference.
Administrative authorities
Figure 9 on page 139 shows how privileges are grouped into authorities and how
the authorities form a branched hierarchy. Table 41 on page 140 supplements
Figure 9 on page 139 and includes the capabilities of each authority.
Table 41 on page 140 shows DB2 authorities and the actions that they are allowed
to perform.
If held with the GRANT option, SYSOPR can grant these privileges to
others.
Installation SYSOPR: One or two IDs are assigned this authority when DB2 is
installed. They have all the privileges of SYSOPR, plus:
v The authority is not recorded in the DB2 catalog. Therefore, the catalog does not
need to be available to check the Installation SYSOPR authority.
v No ID can revoke the authority; it can be removed only by changing the module
that contains the subsystem initialization parameters (typically DSNZPARM).
If held with the GRANT option, DBCTRL can grant those privileges to
others.
DBADM: Database administrator authority includes the DBCTRL privileges over a
specific database. Additionally, DBADM has privileges to access any tables in a
specific database by using SQL statements.
If held with the GRANT option, DBADM can grant these privileges to
others.
IDs with installation SYSADM authority can also perform the following
actions:
v Run the CATMAINT utility
v Access DB2 when the subsystem is started with ACCESS(MAINT)
v Start databases DSNDB01 and DSNDB06 when they are stopped or in
restricted status
v Run the DIAGNOSE utility with the WAIT statement
v Start and stop the database that contains the ART and the ORT
Example: Suppose that you want the ID MATH110 to be able to extract the
following column data from the sample employee table for statistical investigation:
HIREDATE, JOB, EDLEVEL, SEX, SALARY, BONUS, and COMM for
DSN8810.EMP. However, you want to impose the following restrictions:
v No access to employee names or identification numbers
v No access to data for employees hired before 1996
v No access to data for employees with an education level less than 13
v No access to data for employees whose job is MANAGER or PRES
To do that, create and name a view that shows exactly that combination of data.
You can create the view with the following CREATE statement:
CREATE VIEW SALARIES AS
SELECT HIREDATE, JOB, EDLEVEL, SEX, SALARY, BONUS, COMM
FROM DSN8810.EMP
WHERE HIREDATE > ’1995-12-31’ AND
EDLEVEL >= 13 AND
JOB <> ’MANAGER’ AND
JOB <> ’PRES’;
Then grant the SELECT privilege on the view SALARIES to MATH110 with the
following statement:
GRANT SELECT ON SALARIES TO MATH110;
After you grant the privilege, MATH110 can execute SELECT statements on the
restricted set of data only.
Authorities that are granted on DSNDB06 also cover database DSNDB01, which
contains the DB2 directory. An ID with SYSADM authority can control access to
the directory by granting privileges to run utilities (that are listed in Table 42) on
DSNDB06, but cannot grant privileges on DSNDB01 directly.
Table 42 shows which utilities IDs with different authorities can run on the
DSNDB01 and DSNDB06 databases.
Table 42. Utility privileges on the DB2 catalog and directory
                          Installation SYSOPR,
                          SYSCTRL, SYSADM,       DBCTRL, DBADM    DBMAINT
Utilities                 Installation SYSADM    on DSNDB06       on DSNDB06
LOAD (see note)           No                     No               No
REPAIR DBD                No                     No               No
CHECK DATA                Yes                    No               No
CHECK LOB                 Yes                    No               No
REORG TABLESPACE          Yes                    No               No
STOSPACE                  Yes                    No               No
REBUILD INDEX             Yes                    Yes              No
RECOVER                   Yes                    Yes              No
REORG INDEX               Yes                    Yes              No
REPAIR                    Yes                    Yes              No
REPORT                    Yes                    Yes              No
CHECK INDEX               Yes                    Yes              Yes
COPY                      Yes                    Yes              Yes
MERGECOPY                 Yes                    Yes              Yes
MODIFY                    Yes                    Yes              Yes
QUIESCE                   Yes                    Yes              Yes
RUNSTATS                  Yes                    Yes              Yes
Note: LOAD can be used to add lines to SYSIBM.SYSSTRINGS. LOAD cannot be run on
other DSNDB01 or DSNDB06 tables.
Exception: Plans and packages are not created with SQL CREATE statements, and
they have unique features of their own. For information about these features, see
“Privileges exercised through a plan or a package” on page 148 and “Access
control for user-defined functions and stored procedures” on page 155.
If the name of a table, view, index, alias, or synonym is unqualified, you establish
the object's ownership in the following ways:
v If you issue the CREATE statement dynamically, perhaps using SPUFI, QMF, or
some similar program, the owner of the created object is your current SQL ID.
That ID must have the privileges that are needed to create the object.
v If you issue the CREATE statement statically, by running a plan or package that
contains it, the ownership of the created object depends on the option used for
the bind operation. You can bind the plan or package with either the
QUALIFIER option, the OWNER option, or both.
– If the plan or package is bound with the QUALIFIER option only, the
QUALIFIER is the owner of the object. The QUALIFIER option allows the
binder to name a qualifier to use for all unqualified names of tables, views,
indexes, aliases, or synonyms that appear in the plan or package.
– If the plan or package is bound with the OWNER option only, the OWNER is
the owner of the object.
– If the plan or package is bound with both the QUALIFIER option and the
OWNER option, the QUALIFIER is the owner of the object.
– If neither option is specified, the binder of the plan or package is implicitly
the object owner.
In addition, the plan or package owner must have all required privileges on the
objects designated by the qualified names.
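For example, in the following BIND subcommand (a sketch; PLANX, PROGA,
OWNERX, and QUALX are hypothetical names), OWNERX owns the plan, and
QUALX, as the qualifier, owns any objects that are created by unqualified static
CREATE statements in the plan:
BIND PLAN(PLANX) MEMBER(PROGA) OWNER(OWNERX) QUALIFIER(QUALX)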
If you create a table, view, index, or alias with a qualified name, the qualifier
becomes the owner of the object, subject to these restrictions for specifying the
qualifier:
v If you issue the CREATE statement dynamically, and have no administrative
authority, the qualifier must be your primary ID or one of your secondary IDs.
However, if your current SQL ID has at least DBCTRL authority, you can use
any qualifier for a table or an index. If your current SQL ID has at least DBADM
authority and the value of field DBADM CREATE AUTH on installation panel
DSNTIPP was set to YES during DB2 installation, you can also use any qualifier
for a view.
v If you issue the CREATE statement statically, and the owner of the plan or
package that contains the statement has no administrative authority, the qualifier
can only be the owner. However, if the owner has at least DBCTRL authority, the
plan or package can use any qualifier for a table or for an index. If the owner of
the plan or package has at least DBADM authority and the value of field
DBADM CREATE AUTH on installation panel DSNTIPP was set to YES during
DB2 installation, the owner can also use any qualifier for a view.
The owner of a JAR (Java class for a routine) that is used by a stored procedure or
a user-defined function is the current SQL ID of the process that performs the
INSTALL_JAR function. For information about installing a JAR, see DB2 Application
Programming Guide and Reference for Java.
Example: The owner of a table can grant the SELECT privilege on the table to any
other user. To grant the SELECT privilege on TABLE3 to USER4, the owner of the
table can issue the following statement:
GRANT SELECT ON TABLE3 TO USER4
Exception: You can change package or plan ownership while a package or plan
exists. For more information about changing package or plan ownership, see
“Establishing or changing ownership of a plan or a package.”
The owner of the plan or package must hold privileges for every action an
application plan or package performs. However, the owner of a plan or package
can grant the privilege to execute a plan or package to any ID. When the
EXECUTE privilege on a plan or package is granted to an ID, that ID can execute a
plan or package without holding the privileges for every action that the plan or
package performs. However, the ID is restricted by the SQL statements in the
original program.
The statement puts the data for employee number 000010 into the host structure
EMPREC. The data comes from table DSN8810.EMP, but the ID does not have
unlimited access to DSN8810.EMP. Instead, the ID that has EXECUTE privilege for
this plan can access rows in the DSN8810.EMP table only when EMPNO = '000010'.
If any of the privileges that are required by the plan or package are revoked from
the owner, the plan or the package is invalidated. The plan or package must be
rebound, and the new owner must have the required privileges.
2. Dropping a package does not delete all privileges on it if another version of the package still remains in the catalog.
Some systems that can bind a package at a DB2 system do not support the
OWNER option. When the OWNER option is not supported, the primary
authorization ID is always the owner of the package because a secondary ID
cannot be named as the owner.
When you perform bind operations on plans or packages that contain static SQL,
the BINDAGENT privilege and the OWNER and QUALIFIER options give you
considerable flexibility.
Example: Suppose that ALPHA has the BINDAGENT privilege from BETA, and
BETA has privileges on tables that are owned by GAMMA. ALPHA can bind a
plan using OWNER (BETA) and QUALIFIER (GAMMA). ALPHA does not need to
have privileges on the tables to bind the plan. However, ALPHA does not have the
privilege to execute the plan.
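The bind subcommand for this example might look like the following sketch
(PLAN1 and PROG1 are hypothetical names):
BIND PLAN(PLAN1) MEMBER(PROG1) OWNER(BETA) QUALIFIER(GAMMA)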
# Authorization to execute
The plan or package owner must have authorization to execute all static SQL
statements that are embedded in the plan or package. However, the owner does
not need to have that authorization when the plan or package is bound. The
objects to which the plan or package refers do not even need to exist at bind time.
A bind operation always checks whether a local object exists and whether the
owner has the required privileges on it. Any failure results in a message. However,
the corresponding existence and authorization checks for remote objects are
always made at run time.
Applications that use the Resource Recovery Services attachment facility (RRSAF)
to connect to DB2 do not require a plan. If the requesting application is an RRSAF
application, DB2 follows the rules described in “Checking authorization to execute
an RRSAF application without a plan” on page 151 to check authorizations.
(The figure for this discussion shows a requester that runs a package at a DB2
server, the process runner, which uses DB2 private protocol to execute an SQL
statement remotely at a second DB2 server.)
In the figure, a remote requester, either a DB2 UDB for z/OS or some other
requesting system, runs a package at the DB2 server. A statement in the package
uses an alias or a three-part name to request services from a second DB2 UDB for
z/OS server. The ID that is checked for the privileges that are needed to run at the
second server can be:
v The owner of the plan that is running at the requester (if the requester is DB2
UDB for z/OS or OS/390)
v The owner of the package that is running at the DB2 server
v The authorization ID of the process that runs the package at the first DB2 server
(the “process runner”)
In addition, if a remote alias is used in the SQL, the alias must be defined at the
requester site. The ID that is used depends on these four factors:
v Whether the requester is DB2 UDB for z/OS or OS/390, or a different system.
v The value of the bind option DYNAMICRULES. See “Authorization for dynamic
SQL statements” on page 164 for detailed information about the
DYNAMICRULES options.
v Whether the parameter HOPAUTH at the DB2 server site was set to BOTH or
RUNNER when the installation job DSNTIJUZ was run. The default value is
BOTH.
v Whether the statement that is executed at the second server is static or dynamic
SQL.
Hop situation with non-DB2 UDB for z/OS or OS/390 server: Using
DBPROTOCOL(DRDA), a three-part name statement can hop to a server other
than DB2 UDB for z/OS or OS/390. In this hop situation, only package
authorization information is passed to the second server.
A hop is not allowed on a connection that matches the LUWID of another existing
DRDA thread. For example, in a hop situation from site A to site B to site C to site
A, a hop is not allowed to site A again.
Table 44 shows how these factors determine the ID that must hold the required
privileges when bind option DBPROTOCOL (PRIVATE) is in effect.
Table 44. The authorization ID that must hold required privileges for the double-hop situation
Requester             DYNAMICRULES                 HOPAUTH         Statement   Authorization ID
DB2 UDB for z/OS      Run behavior (default) (1)   n/a             Static      Plan owner
                                                                   Dynamic     Process runner
                      Bind behavior (1)            n/a             Either      Plan owner
Different system or   Run behavior (default) (1)   YES (default)   Static      Package owner
RRSAF application                                                  Dynamic     Process runner
without a plan                                     NO              Either      Process runner
                      Bind behavior (1)            n/a             Either      Package owner
Note:
(1) If DYNAMICRULES define behavior is in effect, DB2 converts to DYNAMICRULES bind behavior. If
DYNAMICRULES invoke behavior is in effect, DB2 converts to DYNAMICRULES run behavior.
Caching IDs for plans: Authorization checking is fastest when the EXECUTE
privilege is granted to PUBLIC and, after that, when the plan is reused by an ID
that already appears in the cache.
| You can set the size of the plan authorization cache by using the BIND PLAN
| subcommand. For suggestions on setting this cache size, see Part 5 of DB2
| Application Programming and SQL Guide. The default cache size is specified by an
| installation option, with an initial default setting of 3072 bytes.
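| For example, the CACHESIZE option on the BIND PLAN or REBIND PLAN
| subcommand sets the cache size for an individual plan (a sketch; PLANX is a
| hypothetical plan name):
| REBIND PLAN(PLANX) CACHESIZE(3072)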
You can set the size of the package authorization cache using the PACKAGE
AUTH CACHE field on installation panel DSNTIPP. The default value, 100 KB, is
enough storage to support about 690 collection-id.package-id entries or collection-id.*
entries.
You can cache more package authorization information by using any of the
following strategies:
v Granting package execute authority to collection.*, as shown in the sketch after
this list
v Granting package execute authority to PUBLIC for some packages or collections
v Increasing the size of the cache
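For example, the first two strategies might use GRANT statements like the
following sketch (COLLA, PACKB, and OPERB are hypothetical names):
GRANT EXECUTE ON PACKAGE COLLA.* TO OPERB;
GRANT EXECUTE ON PACKAGE COLLA.PACKB TO PUBLIC;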
The QTPACAUT field in the package accounting trace indicates how often DB2
succeeds at reading package authorization information from the cache.
Caching IDs for routines: The routine authorization cache stores authorization
IDs with the EXECUTE privilege on a specific routine. A routine is identified as
schema.routine-name.type, where the routine name is one of the following names:
v The specific function name for user-defined functions
v The procedure name for stored procedures
v ’*’ for all routines in the schema
You can set the size of the routine authorization cache by using the ROUTINE
AUTH CACHE field on installation panel DSNTIPP. The initial default setting of
100 KB is enough storage to support about 690 schema.routine.type or schema.*.type
entries.
You can cache more authorization information about routines by using the
following strategies:
v Granting EXECUTE on schema.*
| Caching and multilevel security: Caching is used with multilevel security with
| row-level granularity to improve performance. DB2 caches all security labels that
| are checked (successfully and unsuccessfully) during processing. At commit or
| rollback, the security labels are removed from the cache. If a security policy that
| employs multilevel security with row-level granularity requires an immediate
| change and long-running applications have not committed or rolled back, you
| might need to cancel the application. For more information about multilevel
| security with row-level granularity, see “Working with data in a multilevel-secure
| environment” on page 198.
For those reasons, if the program accesses any sensitive data, the EXECUTE
privileges on the plan and on packages are also sensitive. They should be granted
only to a carefully planned list of IDs.
The ENABLE and DISABLE options: You can limit the use of plans and packages
by using the ENABLE and DISABLE options on the BIND and REBIND
subcommands.
Example: The ENABLE IMS option allows the plan or package to run from any
IMS connection. Unless other systems are also named, ENABLE IMS does not
allow the plan or package to run from any other type of connection.
You can exercise even finer control with the ENABLE and DISABLE options. You
can enable or disable particular IMS connection names, CICS application IDs,
requesting locations, and so forth. For details, see the syntax of the BIND and
REBIND subcommands in DB2 Command Reference.
Exceptions:
v For a BIND COPY operation, the owner must have the COPY privilege at the
local DB2 site or subsystem, where the package that is being copied resides.
v If the creator of the package is not the owner, the creator must have SYSCTRL
authority or higher, or must have been granted the BINDAGENT privilege by
the owner. That authority or privilege is granted at the local DB2.
Binding a plan with a package list (BIND PLAN PKLIST) is done at the local DB2,
and bind privileges must be held there. Authorization to execute a package at a
remote location is checked at execution time, as follows:
v For DB2 private protocol, the owner of the plan at the requesting DB2 must
have EXECUTE privilege for the package at the DB2 server.
v For DRDA, if the server is a DB2 UDB for z/OS subsystem, the authorization ID
of the process (primary ID or any secondary ID) must have EXECUTE privilege
for the package at the DB2 server.
v If the server is not DB2 UDB for z/OS, the primary authorization ID must have
whatever privileges are needed. Check that product's documentation.
The routine implementer typically codes the routine in a program and precompiles
the program. If the program contains SQL statements, the implementer binds the
DBRM. In general, the authorization ID that binds the DBRM into a package is the
package owner. The implementer is the routine package owner. As package owner,
the implementer implicitly has EXECUTE authority on the package and has the
authority to grant EXECUTE privileges to other users to execute the code within
the package.
The implementer grants EXECUTE authority on the routine package to the definer.
EXECUTE authority is necessary only if the package contains SQL. For
user-defined functions, the definer requires EXECUTE authority on the package.
For stored procedures, the EXECUTE privilege on the package is checked for the
definer and other IDs. For information about these additional IDs, see the CALL
statement in DB2 SQL Reference.
The definer is the routine owner. The definer issues a CREATE FUNCTION
statement to define a user-defined function or a CREATE PROCEDURE statement
to define a stored procedure. The definer of a routine is determined as follows:
v If the SQL statement is embedded in an application program, the definer is the
authorization ID of the owner of the plan or package.
v If the SQL statement is dynamically prepared, the definer is the SQL
authorization ID that is contained in the CURRENT SQLID special register.
The definer grants EXECUTE authority on the routine to the invoker, that is, any
user ID that needs to invoke the routine.
The invoker invokes the routine from an SQL statement in the invoking plan or
package. The invoker for a routine is determined as follows:
v For a static statement, the invoker is the authorization ID of the plan or package
owner.
See DB2 SQL Reference for more information about the CREATE FUNCTION and
CREATE PROCEDURE statements.
The CALL statement invokes a stored procedure. The privileges that are required
to execute a stored procedure that is invoked by the CALL statement are described
in Chapter 5 of DB2 SQL Reference.
Chapter 5 of DB2 SQL Reference also describes additional privileges that are
required on each package that the stored procedure uses during its execution. The
database server determines the privileges that are required and the authorization
ID that must have the privileges.
Example: Suppose that Alan is a programmer and that his ID is A1. Alan is
working on a set of stored procedures for project B1. You want to create views that
limit Alan’s access to specific rows in SYSROUTINES_SRC and
SYSROUTINES_OPTS. You can require that Alan use schema names that begin
with the characters A1B1. Then you can create views that limit Alan’s access to
rows where the SCHEMA value begins with A1B1. The following CREATE
statement creates a view on SYSROUTINES_SRC:
CREATE VIEW A1.B1GRSRC AS
SELECT SCHEMA, ROUTINENAME, VERSION,
SEQNO, IBMREQD, CREATESTMT
FROM SYSIBM.SYSROUTINES_SRC
WHERE SCHEMA LIKE ’A1B1%’
WITH CHECK OPTION;
After a set of generated routines goes into production, you can decide to regain
control over the routine definitions in SYSROUTINES_SRC and
SYSROUTINES_OPTS by revoking the INSERT, DELETE, and UPDATE privileges
on the appropriate views. You can allow programmers to keep the SELECT
privilege on their views, so that they can use the old rows for reference when they
define new generated routines.
  sqlstate = 0;
  memset( message,0,70 );
  /*******************************************************************
   * Copy the employee's serial into a host variable                 *
   *******************************************************************/
  strcpy( hvEMPNO,employeeSerial );
  /*******************************************************************
   * Get the employee's work department and current salary           *
   *******************************************************************/
  EXEC SQL SELECT WORKDEPT, SALARY
    INTO :hvWORKDEPT, :hvSALARY
    FROM EMP
    WHERE EMPNO = :hvEMPNO;
  /*******************************************************************
   * See if the employee is a manager                                *
   *******************************************************************/
  EXEC SQL SELECT DEPTNO
    INTO :hvWORKDEPT
    FROM DEPT
    WHERE MGRNO = :hvEMPNO;
  /*******************************************************************
   * If the employee is a manager, do not apply the raise            *
   *******************************************************************/
  if( SQLCODE == 0 )
  {
    newSalary = hvSALARY;
  }
  return;
} /* end C_SALARY */
Figure 11. Example of a user-defined function (Part 2 of 2)
The implementer requires the UPDATE privilege on table EMP. Users with the
EXECUTE privilege on function C_SALARY do not need the UPDATE privilege
on the table.
2. Because this program contains SQL, the implementer performs the following
steps:
a. Precompile the program that implements the user-defined function.
b. Link-edit the user-defined function with DSNRLI (RRS attachment facility),
and name the program’s load module C_SALARY.
c. Bind the DBRM into package MYCOLLID.C_SALARY.
After performing these steps, the implementer is the function package owner.
3. The implementer then grants EXECUTE privilege on the user-defined function
package to the definer.
GRANT EXECUTE ON PACKAGE MYCOLLID.C_SALARY
TO definer
As package owner, the implementer can grant execute privileges to other users,
which allows those users to execute code within the package. For example:
GRANT EXECUTE ON PACKAGE MYCOLLID.C_SALARY
TO other_user
After executing the CREATE FUNCTION statement, the definer owns the
user-defined function. The definer can execute the user-defined function
package because the user-defined function package owner, in this case the
implementer, granted to the definer the EXECUTE privilege on the package
that contains the user-defined function.
2. The definer then grants the EXECUTE privilege on SALARY_CHANGE to all
function invokers.
GRANT EXECUTE ON FUNCTION SALARY_CHANGE
TO invoker1, invoker2, invoker3, invoker4
For an example of determining authorization IDs for dynamic SQL, see “Example
of determining authorization IDs for dynamic SQL statements in routines” on page
168.
Table 46 and Table 47 on page 162 summarize, for different actions, which IDs can
provide the necessary privileges. For more specific details about any statement or
command, see DB2 SQL Reference or DB2 Command Reference.
Table 46. Required privileges for basic operations on dynamic SQL statements

Operation: GRANT
ID: Current SQL ID
Required privileges: Any of the following privileges:
v The applicable privilege with the grant option
v An authority that includes the privilege, with the grant option (not needed for
SYSADM or SYSCTRL)
v Ownership that implicitly includes the privilege

Table 47. Required privileges for basic operations on plans and packages

Operation: Execute a plan
ID: Primary ID or any secondary ID
Required privileges: Any of the following privileges:
v Ownership of the plan
v EXECUTE privilege for the plan
v SYSADM authority

Operation: Bind embedded SQL statements, for any bind operation
ID: Plan or package owner
Required privileges: Any of the following privileges:
v Applicable privileges required by the statements
v Authorities that include the privileges
v Ownership that implicitly includes the privileges
Object names include the value of QUALIFIER, where it applies.
Notes:
1. A user-defined function, stored procedure, or trigger package does not need to be
included in a package list.
2. A trigger package cannot be deleted by FREE PACKAGE or DROP PACKAGE. The
DROP TRIGGER statement must be used to delete the trigger package.
The behaviors are summarized in “Common attribute values for bind, define, and
invoke behavior” on page 166.
Run behavior
When the plan or package is bound with DYNAMICRULES(RUN), which is the
default, DB2 processes dynamic SQL statements using the standard attribute
values for dynamic SQL statements. These attributes are collectively called run
behavior and consist of the following attributes:
v DB2 uses the authorization ID of the application process and the current SQL ID
to:
– Check for authorization of dynamic SQL statements
– Serve as the implicit qualifier of table, view, index, and alias names
v Dynamic SQL statements use the values of application programming options
that were specified during installation. The installation option USE FOR
DYNAMICRULES has no effect.
v GRANT, REVOKE, CREATE, ALTER, DROP, and RENAME statements can be
executed dynamically.
Bind behavior
When the plan or package is bound with DYNAMICRULES(BIND), DB2 processes
dynamic SQL statements using bind behavior. Bind behavior consists of the
following attributes:
v DB2 uses the authorization ID of the plan or package for authorization checking
of dynamic SQL statements.
v Unqualified table, view, index, and alias names in dynamic SQL statements are
implicitly qualified with the value of the bind option QUALIFIER; if you do not
specify QUALIFIER, DB2 uses the authorization ID of the plan or package
owner as the implicit qualifier.
v Bind behavior consists of the attribute values that are described in “Common
attribute values for bind, define, and invoke behavior” on page 166.
The values of the authorization ID and the qualifier for unqualified objects are the
same as those that are used for embedded or static SQL statements.
Define behavior
When the package is run as or under a stored procedure or user-defined function
package, and the package was bound with DYNAMICRULES(DEFINEBIND) or
DYNAMICRULES(DEFINERUN), DB2 processes dynamic SQL statements using
define behavior. Define behavior consists of the following attribute values:
v DB2 uses the authorization ID of the user-defined function or the stored
procedure owner for authorization checking of dynamic SQL statements in the
application package.
v The default qualifier for unqualified objects is the user-defined function or the
stored procedure owner.
v Define behavior consists of the attribute values that are described in “Common
attribute values for bind, define, and invoke behavior” on page 166.
Invoke behavior
When the package is run as or under a stored procedure or user-defined function
package, and the package was bound with DYNAMICRULES(INVOKEBIND) or
DYNAMICRULES(INVOKERUN), DB2 processes dynamic SQL statements using
invoke behavior. Invoke behavior consists of the following attribute values:
v DB2 uses the authorization ID of the user-defined function or the stored
procedure invoker for authorization checking of dynamic SQL statements in the
application package.
If the invoker is the primary authorization ID of the process or the current SQL
ID, the following rules apply:
– The ID of the invoker is checked for the required authorization.
– Secondary authorization IDs are also checked if they are needed for the
required authorization.
v The default qualifier for unqualified objects is the user-defined function or the
stored procedure invoker.
v Invoke behavior consists of the attribute values that are described in “Common
attribute values for bind, define, and invoke behavior.”
When the package is run as a stand-alone program, DB2 processes dynamic SQL
statements using bind behavior or run behavior, depending on the
DYNAMICRULES specified value.
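For example (a sketch; the collection, member, and qualifier names are
hypothetical), the following command binds a package whose dynamic SQL
statements use bind behavior, with TESTQ as the implicit qualifier for unqualified
objects:
BIND PACKAGE(COLL1) MEMBER(PGM1) QUALIFIER(TESTQ) DYNAMICRULES(BIND)
For packages that are bound with DYNAMICRULES(DEFINEBIND) or
DYNAMICRULES(INVOKEBIND), stand-alone execution falls back to bind
behavior; for packages that are bound with DYNAMICRULES(DEFINERUN) or
DYNAMICRULES(INVOKERUN), it falls back to run behavior.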
Table 49 shows the dynamic SQL attribute values for each type of dynamic SQL
behavior.
Table 49. Definitions of dynamic SQL statement behaviors

Authorization ID:
v Bind behavior: Plan or package owner
v Run behavior: Current SQLID (see note 2)
v Define behavior: User-defined function or stored procedure owner
v Invoke behavior: Authorization ID of invoker (see note 1)

Default qualifier for unqualified objects:
v Bind behavior: Bind OWNER or QUALIFIER value
v Run behavior: Current SQLID
v Define behavior: User-defined function or stored procedure owner
v Invoke behavior: Authorization ID of invoker

CURRENT SQLID (see note 2):
v Bind behavior: Not applicable
v Run behavior: Applies
v Define behavior: Not applicable
v Invoke behavior: Not applicable

Source for application programming options:
v Bind behavior: Determined by DSNHDECP parameter DYNRULS (see note 3)
v Run behavior: Install panel DSNTIPF
v Define behavior: Determined by DSNHDECP parameter DYNRULS (see note 3)
v Invoke behavior: Determined by DSNHDECP parameter DYNRULS (see note 3)

Can execute GRANT, REVOKE, CREATE, ALTER, DROP, RENAME?:
v Bind behavior: No
v Run behavior: Yes
v Define behavior: No
v Invoke behavior: No
Notes:
1. If the invoker is the primary authorization ID of the process or the current SQL ID, the following rules apply:
v The ID of the invoker is checked for the required authorization.
v Secondary authorization IDs are also checked if they are needed for the required authorization.
2. DB2 uses the current SQL ID as the authorization ID for dynamic SQL statements only for plans and packages
that have DYNAMICRULES run behavior. For the other dynamic SQL behaviors, DB2 uses the authorization ID
that is associated with each dynamic SQL behavior, as shown in this table.
The initial current SQL ID is independent of the dynamic SQL behavior. For stand-alone programs, the current
SQL ID is initialized to the primary authorization ID. See DB2 Application Programming and SQL Guide for
information about initialization of current SQL ID for user-defined functions and stored procedures.
You can execute the SET CURRENT SQLID statement to change the current SQL ID for packages with any
dynamic SQL behavior, but DB2 uses the current SQL ID only for plans and packages with run behavior.
3. The value of DSNHDECP parameter DYNRULS, which you specify in field USE FOR DYNAMICRULES in
installation panel DSNTIPF, determines whether DB2 uses the precompiler options or the application
programming defaults for dynamic SQL statements. See Part 5 of DB2 Application Programming and SQL Guide for
more information.
Figure 12. Authorization for dynamic SQL statements in programs and routines
(diagram: program D, plan DP, package AP; subroutine B, package BP, package
owner IDB, DYNAMICRULES(...))
Stored procedure A was defined by IDASP and is therefore owned by IDASP. The
stored procedure package AP was bound by IDA and is therefore owned by IDA.
Package BP was bound by IDB and is therefore owned by IDB. The authorization
ID under which EXEC SQL CALL A runs is IDD, the owner of plan DP.
Figure 13. Authorization for dynamic SQL statements in programs and nested routines
(diagram: plan DP; program C, package CP, package owner IDC, authorization ID
IDC, issuing EXEC SQL SELECT B(...); stored procedure A, definer and owner
IDASP, package AP, package owner IDA, authorization ID IDA,
DYNAMICRULES(...), issuing EXEC SQL SELECT B(...); user-defined function B,
package BP, package owner IDB, DYNAMICRULES(...))
Composite privileges
An SQL statement can name more than one object. For example, a SELECT
operation can join two or more tables, or an INSERT can use a subquery. Those
operations require privileges on all of the tables that are involved in the statement.
However, you might be able to issue such a statement dynamically even though
one of your IDs alone does not have all the required privileges.
Example: Suppose that a user with an ID of FREDDY has the BIND privilege on
plan P1 and that a user with an ID of REUBEN has the BIND privilege on plan P2.
Assume that someone with FREDDY and REUBEN as secondary authorization IDs
issues the following command:
REBIND PLAN(P1,P2)
P1 and P2 are successfully rebound, even though neither the FREDDY ID nor the
REUBEN ID has the BIND privilege for both plans.
You can grant and revoke privileges to and from a single ID, or you can name
several IDs on one GRANT or REVOKE statement. Additionally, you can grant
privileges to the ID PUBLIC. When you grant privileges to PUBLIC, the privileges
become available to all IDs at the local DB2, including the owner IDs of packages
that are bound from a remote location.
When you grant any privilege to PUBLIC, DB2 catalog tables record the grantee of
the privilege as PUBLIC. Implicit table privileges are also granted to PUBLIC for
declared temporary tables. Because PUBLIC is a special identifier that is used by
DB2 internally, you should not use PUBLIC as a primary ID or secondary ID.
When a privilege is revoked from PUBLIC, authorization IDs to which the
privilege was specifically granted retain the privilege.
Example: Suppose that Juan has the ID USER1 and that Meg has the ID USER2.
Juan creates a table TAB1 and grants ALL PRIVILEGES on it to PUBLIC. Juan does
not explicitly grant any privileges on the table to Meg’s ID, USER2. Using the
PUBLIC privileges, Meg creates a view on TAB1. Because the ID USER2 requires
the SELECT privilege on TAB1 to create the view, the view is dropped if PUBLIC
loses the privilege.
Example: Suppose that Kim has the ID USER3. Kim binds a plan and names it
PLAN1. PLAN1 contains a program that refers to TAB1 from the previous
example. PLAN1 is not valid unless all of the proper privileges are held on objects
to which the plan refers, including TAB1. Although Kim does not have any
privileges on TAB1, Kim can bind the plan by using the PUBLIC privileges on
TAB1. However, if PUBLIC loses its privilege, the plan is invalidated.
DB2 ignores duplicate grants and keeps only one record of a grant in the catalog.
Example: Suppose that Susan grants the SELECT privilege on the EMP table to
Ray. Then suppose that Susan grants the same privilege to Ray again, without
revoking the first grant. When Susan issues the second grant, DB2 ignores it and
maintains the record of the first grant in the catalog.
Granting privileges to remote users: A query that arrives at your local DB2 through
the distributed data facility (DDF) is accompanied by an authorization ID. That ID
can go through connection or sign-on processing when it arrives, it can be
translated to another value, and it can be associated with secondary authorization
IDs. (For the details of all these processes, see “Controlling requests from remote
applications” on page 238.)
As the end result of these processes, the remote query is associated with a set of
IDs that is known to your local DB2 subsystem. You assign privileges to these IDs
in the same way that you assign privileges to IDs that are associated with local
queries.
You can grant a table privilege to any ID anywhere that uses DB2 private protocol
access to your data by issuing the following statement:
GRANT privilege TO PUBLIC AT ALL LOCATIONS;
You can grant SELECT, INSERT, UPDATE, and DELETE table privileges.
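For example, the following statement makes the UPDATE privilege on the sample
employee table available to remote IDs that use private protocol access:
GRANT UPDATE ON DSN8810.EMP TO PUBLIC AT ALL LOCATIONS;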
Some differences exist in the privileges for a query that uses system-directed
access:
v Although the query can use privileges granted TO PUBLIC AT ALL
LOCATIONS, it cannot use privileges granted TO PUBLIC.
v The query can exercise only the SELECT, INSERT, UPDATE, and DELETE
privileges at the remote location.
These restrictions do not apply to queries that are run by a package that is bound
at your local DB2 subsystem. Those queries can use any privilege that is granted to
their associated IDs or any privilege that is granted to PUBLIC.
Suppose that the Spiffy Computer Company wants to create a database to hold
information that is usually posted on hallway bulletin boards. For example, the
database might hold notices of upcoming holidays and bowling scores. Because the
president of the Spiffy Computer Company is an excellent bowler, she wants
everyone in the company to have access to her scores.
To create and maintain the tables and programs that are needed for this
application, Spiffy Computer Company develops the security plan shown in
Figure 14 on page 175.
Figure 14. Security plan for the Spiffy Computer Company. Lines connect the grantor of a
privilege or authority to the grantee. (The diagram includes application
programmers with the IDs PGMR01, PGMR02, and PGMR03.)
The system administrator uses the ADMIN authorization ID, which has SYSADM
authority, to create a storage group (SG1) and to issue the following statements:
1. GRANT PACKADM ON COLLECTION BOWLS TO PKA01 WITH GRANT OPTION;
This statement grants to PKA01 the CREATE IN privilege on the collection
BOWLS and BIND, EXECUTE, and COPY privileges on all packages in the
collection. Because ADMIN used the WITH GRANT OPTION clause, PKA01
can grant those privileges to others.
2. GRANT CREATEDBA TO DBA01;
This statement grants to DBA01 the privilege to create a database and to have
DBADM authority over that database.
3. GRANT USE OF STOGROUP SG1 TO DBA01 WITH GRANT OPTION;
This statement allows DBA01 to use storage group SG1 and to grant that
privilege to others.
4. GRANT USE OF BUFFERPOOL BP0, BP1 TO DBA01 WITH GRANT OPTION;
This statement allows DBA01 to use buffer pools BP0 and BP1 and to grant that
privilege to others.
The database administrator, DBA01, using the CREATEDBA privilege, creates the
database DB1. When DBA01 creates DB1, DBA01 automatically has DBADM
authority over the database.
The database administrator at Spiffy Computer Company wants help with running
the COPY and RECOVER utilities. Therefore DBA01 grants DBCTRL authority over
database DB1 to DBUTIL1 and DBUTIL2.
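For example, DBA01 can issue a statement along the following lines:
GRANT DBCTRL ON DATABASE DB1 TO DBUTIL1, DBUTIL2;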
Spiffy can grant DB2 privileges to primary IDs indirectly, by granting privileges to
secondary IDs that are associated with the primary IDs. This approach associates
privileges with a functional ID rather than an individual ID. Functional IDs, also
called group IDs, are granted privileges based on the function that certain job roles
serve in the system. Multiple primary IDs can be associated with a functional ID
and receive the privileges that are granted to that functional ID. In contrast,
individual IDs are connected to specific people. Their privileges need to be
updated as people join the company, leave the company, or serve different roles
within the company. Functional IDs have the following advantages:
v Functional IDs reduce system maintenance because they are more permanent
than individual IDs. Individual IDs require frequent updates, but each functional
ID can remain in place until Spiffy redesigns its procedures.
Example: Suppose that Joe retires from the Spiffy Computer Company. Joe is
replaced by Mary. If Joe’s privileges are associated with functional ID DEPT4,
those privileges are maintained in the system even after Joe’s individual ID is
removed from the system. When Mary enters the system, she will have all of
Joe’s privileges after her ID is associated with the functional ID DEPT4.
v Functional IDs reduce the number of grants that are needed because functional
IDs often represent groups of individuals.
v Functional IDs reduce the need to revoke privileges and re-create objects when
they change ownership.
Example: Suppose that Bob changes jobs within the Spiffy Computer Company.
Bob's individual ID has privileges on many objects in the system and owns three
databases. If those privileges and objects are associated with a functional ID
instead, they do not need to be revoked or re-created when Bob changes jobs.
The database administrator, DBA01, owns database DB1 and has the privileges to
use storage group SG1 and buffer pool BP0. The database administrator holds both
of these privileges with the GRANT option. The database administrator issues the
following statements:
1. GRANT CREATETAB, CREATETS ON DATABASE DB1 TO DEVGROUP;
2. GRANT USE OF STOGROUP SG1 TO DEVGROUP;
3. GRANT USE OF BUFFERPOOL BP0 TO DEVGROUP;
Because the system and database administrators at Spiffy still need to control the
use of those resources, the preceding statements are issued without the GRANT
option.
Three programmers in the Software Support department write and test a new
program, PROGRAM1. Their IDs are PGMR01, PGMR02, and PGMR03. Each
programmer needs to create test tables, use the SG1 storage group, and use one of
the buffer pools. All of those resources are controlled by DEVGROUP, which is a
RACF group ID.
The system administrator, ADMIN, grants the BINDADD privilege to the group:
GRANT BINDADD TO DEVGROUP;
With that privilege, any member of the RACF group DEVGROUP can bind plans
and packages that are to be owned by DEVGROUP. Any member of the group can
rebind a plan or package that is owned by DEVGROUP.
The Software Support department proceeds to create and test the program.
Spiffy gives the job of binding for production to a production binder with the ID
BINDER. BINDER needs privileges to bind a plan or package that DEVGROUP
owns, to bind a plan or package with OWNER (PRODCTN), and to add a package
to the collection BOWLS.
Any member of the group DEVGROUP can grant the BINDAGENT privilege for
DEVGROUP by using the following statements:
SET CURRENT SQLID=’DEVGROUP’;
GRANT BINDAGENT TO BINDER;
Any member of PRODCTN can grant the BINDAGENT privilege for PRODCTN
by using the following statements:
SET CURRENT SQLID=’PRODCTN’;
GRANT BINDAGENT TO BINDER;
With the plan in place, the database administrator at Spiffy wants to make the
PROGRAM1 plan available to all employees by issuing the following statement:
GRANT EXECUTE ON PLAN PROGRAM1 TO PUBLIC;
More than one ID has the authority or privileges that are necessary to issue this
statement. For example, ADMIN has SYSADM authority and can grant the
EXECUTE privilege. Also, PGMR01 can set CURRENT SQLID to PRODCTN,
which owns PROGRAM1, and issue the statement. When EXECUTE is granted to
PUBLIC, other IDs do not need any explicit authority on T1.
To enable access from all Spiffy locations, administrators perform the following
steps:
1. Add a CONNECT statement to the program, naming the location at which
table PRODCTN.T1 resides. (In this case, the table and the package reside at
only the central location.)
2. Issue the following statement:
GRANT CREATE IN COLLECTION BOWLS TO DEVGROUP;
Any system that is connected to the original DB2 location can then run
PROGRAM1 and execute the package by using DRDA access.
Restriction: If the remote system is another DB2, a plan must be bound there that
includes the package in its package list.
An ID with SYSADM or SYSCTRL authority can revoke a privilege that has been
granted by another ID with the following statement:
REVOKE authorization-specification FROM auth-id BY auth-id
The BY clause specifies the authorization ID that originally granted the privilege. If
two or more grantors grant the same privilege to an ID, executing a single
REVOKE statement does not remove the privilege. To remove it, each grant of the
privilege must be revoked.
The WITH GRANT OPTION clause of the GRANT statement allows an ID to pass
the granted privilege to others. If the privilege is removed from the ID, its deletion
can cascade to others, with side effects that are not immediately evident. For
example, when a privilege is removed from authorization ID X, it is also removed
from any ID to which X granted it, unless that ID also has the privilege from some
other source.
Example: Suppose that DBA01 grants DBCTRL authority with the GRANT option
on database DB1 to DBUTIL1. Then DBUTIL1 grants the CREATETAB privilege on
DB1 to PGMR01. If DBA01 revokes DBCTRL from DBUTIL1, PGMR01 loses the
CREATETAB privilege. If PGMR01 also granted the CREATETAB privilege to
OPER1 and OPER2, they also lose the privilege.
Example: Suppose that PGMR01 from the preceding example created table T1
while holding the CREATETAB privilege. If PGMR01 loses the CREATETAB
privilege, table T1 is not dropped, and the privileges that PGMR01 has as owner of
the table are not deleted. Furthermore, the privileges that PGMR01 grants on T1
are not deleted. For example, PGMR01 can grant SELECT on T1 to OPER1 as long
as PGMR01 owns the table. Even when the privilege to create the table is
revoked, the table remains, the privilege remains, and OPER1 can still access T1.
(Figure: DBUTIL1 grants the CREATETAB privilege to PGMR01 at Time 1;
DBUTIL2 grants the same privilege to PGMR01 at Time 2; PGMR01 grants the
privilege to OPER1 at Time 3.)
Note: DB2 does not cascade a revoke of SYSADM authority from the installation
SYSADM authorization IDs.
After Time 3, DBUTIL1's authority is revoked, along with all of the privileges and
authorities that DBUTIL1 granted. However, PGMR01 also has the CREATETAB
privilege from DBUTIL2, so PGMR01 does not lose the privilege. The following
criteria determine whether OPER1 loses the CREATETAB privilege when
DBUTIL1’s authority is revoked:
v If Time 3 comes after Time 2, OPER1 does not lose the privilege. The recorded
dates and times show that, at Time 3, PGMR01 could have granted the privilege
entirely on the basis of the privilege that was granted by DBUTIL2. That
privilege was not revoked.
v If Time 3 precedes Time 2, OPER1 does lose the privilege. The recorded dates
and times show that, at Time 3, PGMR01 could have granted the privilege only
on the basis of the privilege that was granted by DBUTIL1. That privilege was
revoked, so the privileges that are dependent on it are also revoked.
However, you might want to revoke only privileges that are granted by a certain
ID. To revoke privileges that are granted by DBUTIL1 and to leave intact the same
privileges if they were granted by any other ID, use the following statement:
REVOKE CREATETAB, CREATETS ON DATABASE DB1 FROM PGMR01 BY DBUTIL1;
For distinct types, the following objects that are owned by the revokee can have
dependencies:
v A table that has a column that is defined as a distinct type
v A user-defined function that has a parameter that is defined as a distinct type
v A stored procedure that has a parameter that is defined as a distinct type
| v A sequence that is defined as a distinct type
For user-defined functions, the following objects that are owned by the revokee can
have dependencies:
v Another user-defined function that is sourced on the user-defined function
v A view that uses the user-defined function
v A table that uses the user-defined function in a check constraint or user-defined
default clause
v A trigger package that uses the user-defined function
For JARs (Java classes for a routine), the following objects that are owned by the
revokee can have dependencies:
v A Java user-defined function that uses a JAR
v A Java stored procedure that uses a JAR
For stored procedures, a trigger package that refers to the stored procedure in a
CALL statement can have dependencies.
| For sequences, the following objects that are owned by the revokee can have
| dependencies:
| v Triggers that contain NEXT VALUE or PREVIOUS VALUE expressions that
| specify a sequence
| v Inline SQL routines that contain NEXT VALUE or PREVIOUS VALUE
| expressions that specify a sequence
One way to ensure that the REVOKE statement succeeds is to drop the object that
has a dependency on the privilege. To determine which objects are dependent on
which privileges before attempting the revoke, use the following SELECT
statements.
List the routines that are owned by the revokee USRT002 and that use a JAR
named USRT001.SPJAR:
SELECT * FROM SYSIBM.SYSROUTINES WHERE
OWNER = 'USRT002' AND
JARSCHEMA = 'USRT001' AND
JAR_ID = 'SPJAR';
List the trigger packages that are owned by the revokee USRT002 and that refer to
the stored procedure USRT001.WLMLOCN2 in a CALL statement:
SELECT * FROM SYSIBM.SYSPACKDEP WHERE
DOWNER = 'USRT002' AND
BQUALIFIER = 'USRT001' AND
BNAME = 'WLMLOCN2' AND
BTYPE = 'O';
| For a sequence:
| v List the trigger packages that are owned by the revokee USRT002 and that use
| a sequence named USRT001.SEQ1:
| SELECT * FROM SYSIBM.SYSPACKDEP WHERE
| BNAME = 'SEQ1' AND
| BQUALIFIER = 'USRT001' AND
| BTYPE = 'Q' AND
| DOWNER = 'USRT002' AND
| DTYPE = 'T';
| v List the inline SQL routines that are owned by the revokee USRT002 and that
| use a sequence named USRT001.SEQ1:
| SELECT * FROM SYSIBM.SYSSEQUENCESDEP WHERE
| DCREATOR = 'USRT002' AND
| DTYPE = 'F' AND
| BNAME = 'SEQ1' AND
| BSCHEMA = 'USRT001';
Views and privileges: If a table privilege is revoked from the owner of a view on
the table, the corresponding privilege on the view is revoked. The privilege on the
view is revoked not only from the owner of the view, but also from all other IDs to
which the owner granted the privilege.
If the SELECT privilege on the base table is revoked from the owner of the view,
the view is dropped. However, if another grantor granted the SELECT privilege to
the view owner before the view was created, the view is not dropped.
Example: Suppose that OPER2 has the SELECT and INSERT privileges on table T1
and creates a view of the table. If the INSERT privilege on T1 is revoked from
OPER2, all insert privileges on the view are revoked. If the SELECT privilege on
T1 is revoked from OPER2, and if OPER2 did not have the SELECT privilege from
another grantor before the view was created, the view is dropped.
| Materialized query tables and privileges: If the SELECT privilege on a source table
| is revoked from the owner of a materialized query table, the corresponding
| privilege on the materialized query table is revoked. The SELECT privilege on the
| materialized query table is revoked not only from the owner of the materialized
| query table, but also from all other IDs to which the owner granted the SELECT
| privilege.
| If the SELECT privilege on the source table is revoked from the owner of a
| materialized query table, the materialized query table is dropped. However, if
| another grantor granted the SELECT privilege to the materialized query table
| owner before the materialized query table was created, the materialized query
| table is not dropped.
Example: Suppose that OPER7 has the SELECT privilege on table T1 and creates a
materialized query table T2 by selecting from T1. If the SELECT privilege on T1 is
revoked from OPER7, and if OPER7 did not have the SELECT privilege from
another grantor before T2 was created, T2 is dropped.
Example: Suppose that IDADM, with SYSADM authority, creates a view on TABLX
with OPER as the owner of the view. OPER now has the SELECT privilege on the
view, but not necessarily any privileges on the base table. If SYSADM is revoked
from IDADM, the SELECT privilege on TABLX is gone and the view is dropped.
If one ID creates a view for another ID, the catalog table SYSIBM.SYSTABAUTH
needs either one or two rows to record the associated privileges. The number of
rows that DB2 uses to record the privilege is determined by the following criteria:
v If IDADM creates a view for OPER when OPER has enough privileges to create
the view by itself, only one row is inserted in SYSTABAUTH. The row shows
only that OPER granted the required privileges.
v If IDADM creates a view for OPER when OPER does not have enough
privileges to create the view by itself, two rows are inserted in SYSTABAUTH.
One row shows IDADM as GRANTOR and OPER as GRANTEE of the SELECT
privilege. The other row shows any other privileges that OPER might have on
the view because of privileges that are held on the base table.
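You can inspect those rows by querying SYSTABAUTH directly. A sketch,
assuming a hypothetical view OPER.VIEWX:
SELECT GRANTOR, GRANTEE, SELECTAUTH
FROM SYSIBM.SYSTABAUTH
WHERE TCREATOR = 'OPER' AND
TTNAME = 'VIEWX';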
To change the ID that holds installation SYSADM authority, perform the following
steps:
1. Select a new ID to which you will grant installation SYSADM authority.
2. Grant SYSADM authority to the ID that you selected.
3. Revoke SYSADM authority from the ID that currently holds installation
SYSADM authority.
4. Update the SYSTEM ADMIN 1 or SYSTEM ADMIN 2 field on installation panel
DSNTIPP with the new ID that you want to grant installation SYSADM
authority.
To delete extraneous IDs with SYSADM authority, perform the following steps:
1. Write down the ID that currently holds installation SYSADM authority.
2. Change the authority of the ID that you want to delete from SYSADM to
installation SYSADM. You can change the authority by updating the SYSTEM
ADMIN 1 or SYSTEM ADMIN 2 field on installation panel DSNTIPP. Replace
the ID that you wrote down in step 1 with the ID that you want to delete.
3. Revoke SYSADM authority from the ID that you want to delete.
For descriptions of the columns of each table, see Appendix F of DB2 SQL
Reference.
Periodically, you should compare the list of IDs that is retrieved by these catalog
queries with the following lists:
v Lists of users from subsystems that connect to DB2 (such as IMS, CICS, and
TSO)
v Lists of RACF groups
v Lists of users from other DBMSs that access your DB2
If DB2 lists IDs that do not exist elsewhere, you should revoke their privileges.
Example: Suppose that Judy, Kate, and Patti all grant the SELECT privilege on
TABLE1 to Chris. If you care that Chris’s ID has the privilege but not who granted
the privilege, you might consider two of the SELECT grants to be redundant and
unnecessary performance liabilities.
However, you might want to maintain information about authorities that are
granted from several different IDs, especially when privileges are revoked.
Example: Suppose that Judy revokes the SELECT privilege that she granted to
Chris. If Chris had the SELECT privilege from only Judy, Chris would lose the
SELECT privilege. However, Chris retains the SELECT privilege because Kate
and Patti also granted the SELECT privilege to Chris. In this case, the similar
grants prove not to be redundant.
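For example, an ID with the necessary authority can remove only Judy's grant by
using the BY clause that is described earlier (a sketch, assuming that Chris's
authorization ID is CHRIS):
REVOKE SELECT ON TABLE1 FROM CHRIS BY JUDY;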
You can query the catalog to find duplicate grants on objects. If multiple grants
clutter your catalog, consider eliminating unnecessary grants. You can use the
following SQL statement to retrieve duplicate grants on plans:
SELECT GRANTEE, NAME, COUNT(*)
FROM SYSIBM.SYSPLANAUTH
GROUP BY GRANTEE, NAME
HAVING COUNT(*) > 2
ORDER BY 3 DESC;
This statement orders the duplicate grants by frequency, so that you can easily
identify the most duplicated grants. Similar statements for other catalog tables can
retrieve information about multiple grants on other types of objects.
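For example, a similar query (a sketch; SYSPACKAUTH records package
privileges) retrieves duplicate grants on packages:
SELECT GRANTEE, COLLID, NAME, COUNT(*)
FROM SYSIBM.SYSPACKAUTH
GROUP BY GRANTEE, COLLID, NAME
HAVING COUNT(*) > 2
ORDER BY 4 DESC;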
To retrieve all IDs that can change the sample employee table (IDs with
administrative authorities and IDs to which authority is explicitly granted), issue
the following statement:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH
WHERE TTNAME = ’EMP’ AND
TCREATOR = ’DSN8810’ AND
GRANTEETYPE = ’ ’ AND
(ALTERAUTH <> ’ ’ OR
DELETEAUTH <> ’ ’ OR
INSERTAUTH <> ’ ’ OR
UPDATEAUTH <> ’ ’)
UNION
SELECT GRANTEE FROM SYSIBM.SYSUSERAUTH
WHERE SYSADMAUTH <> ’ ’
UNION
SELECT GRANTEE FROM SYSIBM.SYSDBAUTH
WHERE DBADMAUTH <> ’ ’ AND NAME = 'DSN8D81A';
To retrieve the specific columns of DSN8810.EMP on which update privileges
have been granted, issue the following statement:
SELECT DISTINCT COLNAME, GRANTEE, GRANTEETYPE FROM SYSIBM.SYSCOLAUTH
WHERE CREATOR=’DSN8810’ AND TNAME=’EMP’
ORDER BY COLNAME;
The character in the GRANTEETYPE column shows whether the privileges have
been granted to an authorization ID (blank) or are used by an application plan or
package (P).
To retrieve the IDs that have been granted the privilege of updating one or more
columns of DSN8810.EMP, issue the following statement:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH
WHERE TTNAME = ’EMP’ AND
TCREATOR=’DSN8810’ AND
GRANTEETYPE=’ ’ AND
UPDATEAUTH <> ’ ’;
The query returns only the IDs to which update privileges have been specifically
granted. It does not return IDs that have the privilege because of SYSADM or
DBADM authority. You could include them by forming a union with additional
queries, as shown in the following example:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH
WHERE TTNAME = ’EMP’ AND
TCREATOR = ’DSN8810’ AND
GRANTEETYPE = ’ ’ AND
UPDATEAUTH <> ’ ’
UNION
SELECT GRANTEE FROM SYSIBM.SYSUSERAUTH
WHERE SYSADMAUTH <> ’ ’
UNION
SELECT GRANTEE FROM SYSIBM.SYSDBAUTH
WHERE DBADMAUTH <> ’ ’ AND NAME = 'DSN8D81A';
You can write a similar statement to retrieve the IDs that are authorized to access a
user-defined function. To retrieve the IDs that are authorized to access user-defined
function UDFA in schema SCHEMA1, issue the following statement:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSROUTINEAUTH
WHERE SPECIFICNAME=’UDFA’ AND
SCHEMA=’SCHEMA1’ AND
GRANTEETYPE=’ ’ AND
ROUTINETYPE =’F’;
To retrieve the tables, views, and aliases that PGMR001 owns, issue the following
statement:
SELECT NAME FROM SYSIBM.SYSTABLES
WHERE CREATOR = ’PGMR001’;
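A plan or package that refers to a table appears in SYSIBM.SYSTABAUTH with
GRANTEETYPE 'P'. For example, the following query (a sketch that uses the
sample employee table) retrieves the plans and packages that refer directly to
DSN8810.EMP:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH
WHERE GRANTEETYPE = 'P' AND
TCREATOR = 'DSN8810' AND
TTNAME = 'EMP';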
The preceding query does not distinguish between plans and packages. To identify
a package, use the COLLID column of table SYSTABAUTH, which names the
collection in which a package resides and is blank for a plan.
A plan or package can refer to the table indirectly, through a view. To find all
views that refer to the table, perform the following steps:
1. Issue the following query:
SELECT DISTINCT DNAME FROM SYSIBM.SYSVIEWDEP
WHERE BTYPE = ’T’ AND
BCREATOR = ’DSN8810’ AND
BNAME = ’EMP’;
2. Write down the names of the views that satisfy the query. These values are
instances of DNAME_list.
3. Find all plans and packages that refer to those views by issuing a series of SQL
statements. For each instance of DNAME_list, issue the following statement:
SELECT DISTINCT GRANTEE FROM SYSIBM.SYSTABAUTH
WHERE GRANTEETYPE = ’P’ AND
TCREATOR = ’DSN8810’ AND
TTNAME = DNAME_list;
For example, the following view includes the owner and the name of every table
on which a user's primary authorization ID has the SELECT privilege:
CREATE VIEW MYSELECTS AS
SELECT TCREATOR, TTNAME FROM SYSIBM.SYSTABAUTH
WHERE SELECTAUTH <> ’ ’ AND
GRANTEETYPE = ’ ’ AND
GRANTEE IN (USER, ’PUBLIC’, ’PUBLIC*’, CURRENT SQLID);
The keyword USER in that statement is equal to the value of the primary
authorization ID. To include tables that can be read by a secondary ID, set the
current SQLID to that secondary ID before querying the view.
To make the view available to every ID, issue the following GRANT statement:
GRANT SELECT ON MYSELECTS TO PUBLIC;
Similar views can show other privileges. This view shows privileges over columns:
CREATE VIEW MYCOLS (OWNER, TNAME, CNAME, REMARKS, LABEL)
AS SELECT DISTINCT TBCREATOR, TBNAME, NAME, REMARKS, LABEL
FROM SYSIBM.SYSCOLUMNS, SYSIBM.SYSTABAUTH
WHERE TCREATOR = TBCREATOR AND
TTNAME = TBNAME AND
GRANTEETYPE = ’ ’ AND
GRANTEE IN (USER,’PUBLIC’,CURRENT SQLID,’PUBLIC*’);
| Multilevel security
| Important
| The following information about multilevel security is specific to DB2. It does
| not describe all aspects of multilevel security, and it assumes that you have
| general knowledge of multilevel security. Before implementing multilevel
| security on your DB2 subsystem, read z/OS Planning for Multilevel Security
| and the Common Criteria.
| Security labels
| Multilevel security restricts access to an object or a row based on the security label
| of the object or row and the security label of the user.
| For local connections, the security label of the user is the security label that the
| user signed on with. It is the security label that is associated with the DB2 primary
| authorization ID and that is accessed from the RACF ACEE control block.
| For TCP/IP connections, the security label of the user can be defined by the
| security zone. IP addresses are grouped into security zones on the DB2 server.
| Users that come in on an IP address have the security label that is associated with
| the security zone that the IP address is grouped under.
| For SNA connections, the default security label for the user is used instead of the
| security label that the user signed on with.
| Security levels: Along with security categories, hierarchical security levels are used
| as a basis for mandatory access checking decisions. When you define the security
| level of an object, you define the degree of sensitivity of that object. Security levels
| ensure that an object of a certain security level is protected from access by a user
| of a lower security level.
| For information about defining security levels, see z/OS Security Server RACF
| Security Administrator's Guide.
| Security categories: Security categories are the non-hierarchical basis for mandatory
| access checking decisions. When making security decisions, mandatory access
| control checks whether one set of security categories includes the security
| categories that are defined in a second set of security categories.
| Exception: IDs with Install SYSADM authority bypass mandatory access checking
| at the DB2 object level because actions by Install SYSADM do not invoke the
| external access control exit routine (DSNX@XAC). However, multilevel security
| with row-level granularity is enforced for IDs with Install SYSADM authority.
| Discretionary access checking: Once the user passes the mandatory access check, a
| discretionary check follows. The discretionary access check restricts access to
| objects based on the identity of a user and the groups to which the user belongs.
| The discretionary access check ensures that the user is identified as having a “need
| to know” for the requested resource. The check is discretionary because a user
| with a certain access permission is capable of passing that permission to any other
| user.
# Comparisons between user security labels and object security labels can result in
# four types of relationships:
# Dominant
# One security label dominates another security label when both of the
# following conditions are true:
# v The security level that defines the first security label is greater than or
# equal to the security level that defines the second security label.
# v The set of security categories that defines the first security label includes
# the set of security categories that defines the other security label.
# Reading data requires that the user security label dominates the data
# security label.
# Reverse dominant
# One security label reverse dominates another security label when both of
# the following conditions are true:
# v The security level that defines the first security label is less than or equal
# to the security level that defines the second security label.
# v The set of security categories that defines the first security label is a
# subset of the security categories that defines the other security label.
| Example: Suppose that the security level "secret" for the security label HIGH is
| greater than the security level "sensitive" for the security label MEDIUM. Also,
| suppose that the security label HIGH includes the security categories Project_A,
| Project_B, and Project_C, and that the security label MEDIUM includes the security
| categories Project_A and Project_B. The security label HIGH dominates the security
| label MEDIUM because both conditions for dominance are true.
| Example: Suppose that the security label HIGH includes the security categories
| Project_A, Project_B, and Project_C, and that the security label MEDIUM includes
| the security categories Project_A and Project_Z. In this case, the security label
| HIGH does not dominate the security label MEDIUM because the set of security
| categories that define the security label HIGH does not contain the security
| category Project_Z. The security labels are disjoint.
| Write-down control
| Mandatory access checking prevents users from declassifying information by not
| allowing a user to write to an object unless the security label of the user and the
| security label of the object are equivalent. You can override this security feature,
| known as write-down control, for specific users by granting write-down privilege
| to those users.
| Example: Suppose that user1 has a security label of HIGH and that row_x has a
| security label of MEDIUM. Because the security label of the user and the security
| label of the row are not equivalent, user1 cannot write to row_x. Therefore,
| write-down control prevents user1 from declassifying the information that is in
| row_x.
| Example: Suppose that user2 has a security label of MEDIUM and that row_x has
| a security label of MEDIUM. Because the security label of the user and the security
| label of the row are equivalent, user2 can read from and write to row_x. However,
| user2 cannot change the security label for row_x unless user2 has write-down
| privilege. Therefore, write-down control prevents user2 from declassifying the
| information that is in row_x.
| Example: To grant write down privilege to users, perform the following steps:
| 1. Define a profile. The following RACF command defines the
| IRR.WRITEDOWN.BYUSER profile:
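| RDEFINE FACILITY IRR.WRITEDOWN.BYUSER UACC(NONE)
| SETROPTS CLASSACT(FACILITY) RACLIST(FACILITY)
| (This command sequence is a sketch; it assumes that the profile is defined in the
| RACF FACILITY class. See z/OS Planning for Multilevel Security for the
| authoritative steps.)
| 2. Permit users to access the profile. READ access allows a user to activate the
| write-down privilege; the user ID USRT051 is a hypothetical example:
| PERMIT IRR.WRITEDOWN.BYUSER CLASS(FACILITY) ID(USRT051) ACCESS(READ)
| SETROPTS RACLIST(FACILITY) REFRESH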
| You can implement multilevel security with row-level granularity with or without
| implementing multilevel security on the object level. If you do not implement
| multilevel security on the object level, you must perform step 1 and step 4 in
| “Implementing multilevel security at the object level” on page 195. If you do not
| use the access control authorization exit and RACF access control, you can use DB2
| native authorization control.
| The write-down privilege for multilevel security with row-level granularity has the
| following properties:
| v A user with the write-down privilege can update the security label of a row to
| any valid value. The user can make this update independent of the user’s
| dominance relationship with the row.
| v DB2 requires that a user have the write-down privilege to perform certain
| utilities.
| v If write-down control is not enabled, all users with valid security labels are
| equivalent to users with the write-down privilege.
| Requirement: You must have z/OS Version 1 Release 5 or later to use DB2
| authorization with multilevel security with row-level granularity.
| Defining multilevel security on tables: You can use multilevel security with
| row-level checking to control table access by creating or altering a table to have a
| column with the AS SECURITY LABEL attribute. Tables with multilevel security in
| effect can be dropped by using the DROP TABLE statement. Users must have a
| valid security label to execute CREATE TABLE, ALTER TABLE, and DROP TABLE
| statements on tables with multilevel security enabled. For information about
| defining security labels and enabling the security label class, see z/OS Planning for
| Multilevel Security.
| Indexing the security label column: The performance of tables that you create and
| alter can suffer if the security label is not included in indexes. The security label
| column is used whenever a table with multilevel security enabled is accessed.
| Therefore, the security label column should be included in indexes on the table. If
| you do not index the security label column, you cannot maintain index-only access.
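| For example (a sketch; the index name is hypothetical), the following statement
| includes the security label column of the table TABLEMLS1, which is created in
| the next example, in an index:
| CREATE INDEX XTABLEMLS1 ON TABLEMLS1 (EMPNO, SECURITY);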
| CREATE TABLE: When a user with a valid security label creates a table, the user
| can implement row-level security by including a security label column. The
| security label column can have any name, but it must be defined as CHAR(8) and
| NOT NULL WITH DEFAULT. It also must be defined with the AS SECURITY
| LABEL clause.
| Example: To create a table that is named TABLEMLS1 and that has row-level
| security enabled, issue the following statement:
| CREATE TABLE TABLEMLS1
| (EMPNO CHAR(6) NOT NULL,
| EMPNAME VARCHAR(20) NOT NULL,
| DEPTNO VARCHAR(5),
| SECURITY CHAR(8) NOT NULL WITH DEFAULT AS SECURITY LABEL,
| PRIMARY KEY (EMPNO) )
| IN DSN8D71A.DSN8S71D;
| After the user specifies the AS SECURITY LABEL clause on a column, users can
| indicate the security label for each row by entering values in that column. When a
| user creates a table and includes a security label column, SYSIBM.SYSTABLES
| indicates that the table has row-level security enabled. Once a user creates a table
| with a security label column, row-level security is in effect for all access to the
| rows of that table.
| ALTER TABLE: A user with a valid security label can implement row-level
| security on an existing table by adding a security label column to the existing
| table. The security label column can have any name, but it must be defined as
| CHAR(8) and NOT NULL WITH DEFAULT. It also must be defined with the AS
| SECURITY LABEL clause.
| Example: Suppose that the table EMP does not have row-level security enabled. To
| alter EMP so that it has row-level security enabled, issue the following statement:
| ALTER TABLE EMP
| ADD SECURITY CHAR(8) NOT NULL WITH DEFAULT AS SECURITY LABEL;
| Important: Plans, packages, and dynamic statements are invalidated when a table
| is altered to add a security label column.
| DROP TABLE: When a user with a valid security label drops a table that has
| row-level security in effect, the system generates an audit record. Row-level
| security does not affect the success of a DROP statement; the user’s privilege on
| the table determines whether the statement succeeds.
| For more information about these SQL statements and multilevel security, see the
| DB2 SQL Reference.
| In addition to SQL statements and utilities, this section discusses the following
| topics:
| v “Using views to restrict access” on page 204
| v “Global temporary tables with multilevel security” on page 204
| v “Materialized query tables with multilevel security” on page 205
| v “Constraints and multilevel security” on page 205
| v “Field procedures, edit procedures, validation procedures, and multilevel
| security” on page 205
| v “Triggers and multilevel security” on page 205
| For information about defining security labels and enabling the security label class,
| see z/OS Planning for Multilevel Security.
| Example: Suppose that Alan has a security label of HIGH, Beth has a security label
| of MEDIUM, and Carlos has a security label of LOW. Suppose that DSN8710.EMP
| contains the data that is shown in Table 52 and that the SECURITY column has
| been declared with the AS SECURITY LABEL clause.
| Table 52. Sample data from DSN8710.EMP
| EMPNO LASTNAME WORKDEPT SECURITY
| 000010 HAAS A00 LOW
| 000190 BROWN D11 HIGH
| 000200 JONES D11 MEDIUM
| 000210 LUTZ D11 LOW
| 000330 LEE E21 MEDIUM
|
| Now, suppose that Alan, Beth, and Carlos each submit the following SELECT
| statement:
| SELECT LASTNAME
| FROM EMP
| ORDER BY LASTNAME;
| Because Alan has the security label HIGH, he receives the following result:
| BROWN
| HAAS
| JONES
| LEE
| LUTZ
| Because Beth has the security label MEDIUM, she receives the following result:
| HAAS
| JONES
| LEE
| LUTZ
| Beth does not see BROWN in her result set because the row with that information
| has a security label of HIGH.
| Because Carlos has the security label LOW, he receives the following result:
| HAAS
| LUTZ
| Carlos does not see BROWN, JONES, or LEE in his result set because the rows
| with that information have security labels that dominate Carlos’s security label.
| Although Beth and Carlos do not receive the full result set for the query, DB2 does
| not return an error code to Beth or Carlos.
| Example: Suppose that Alan has a security label of HIGH, that Beth has a security
| label of MEDIUM and write-down privilege defined in RACF, and that Carlos has
| a security label of LOW. Write-down control is enabled.
| Now, suppose that Alan, Beth, and Carlos each submit the following INSERT
| statement:
| INSERT INTO DSN8710.EMP(EMPNO, LASTNAME, WORKDEPT, SECURITY)
| VALUES(’099990’, ’SMITH’, ’C01’, ’MEDIUM’);
| Because Alan does not have write-down privilege, Alan cannot choose the security
| label of the row that he inserts. Therefore DB2 ignores the security label of
| MEDIUM that is specified in the statement. The security label of the row becomes
| HIGH because Alan’s security label is HIGH.
| Because Beth has write-down privilege on the table, she can specify the security
| label of the new row. In this case, the security label of the new row is MEDIUM. If
| Beth submits a similar INSERT statement that specifies a value of LOW for the
| security column, the security label for the row becomes LOW.
| Because Carlos does not have write-down privilege, Carlos cannot choose the
| security label of the row that he inserts. Therefore DB2 ignores the security label of
| MEDIUM that is specified in the statement. The security label of the row becomes
| LOW because Carlos’s security label is LOW.
| Considerations for INSERT from a fullselect: For statements that insert the result
| of a fullselect, DB2 does not return an error code if the fullselect contains a table
| with a security label column.
| If the user has write-down privilege or write-down control is not in effect, the
| security label of the user might not dominate the security label of the inserted
| row. For statements that insert rows and then select the inserted rows, the INSERT
| statement succeeds in this case. However, the inserted row is not returned.
| Considerations for INSERT with subselect: If you insert data into a table that does
| not have a security label column, but a subselect in the INSERT statement does
| include a table with a security label column, row-level checking is performed for
| the subselect. However, the inserted rows will not be stored with a security label
| column.
| Example: Suppose that Alan has a security label of HIGH and write-down
| privilege defined in RACF, that Beth has a security label of MEDIUM and
| write-down privilege defined in RACF, and that Carlos has a security label of
| LOW. Write-down control is enabled.
| Suppose that DSN8710.EMP contains the data that is shown in Table 53 and that
| the SECURITY column has been declared with the AS SECURITY LABEL clause.
| Table 53. Sample data from DSN8710.EMP
| EMPNO LASTNAME WORKDEPT SECURITY
| 000190 BROWN D11 HIGH
| 000200 JONES D11 MEDIUM
| 000210 LUTZ D11 LOW
|
| Now, suppose that Alan, Beth, and Carlos each submit the following UPDATE
| statement:
| UPDATE DSN8710.EMP
| SET WORKDEPT='X55', SECURITY='MEDIUM'
| WHERE WORKDEPT='D11';
# Because Alan has a security label that is equivalent to the security label of the row
# with HIGH security, the update on that row succeeds. Because Alan has a security
# label that dominates the rows with security labels of MEDIUM and LOW, his
# write-down privilege determines whether these rows are updated. Alan has the
# write-down privilege that is required to set the security label to any value, so the
# update succeeds for these rows and the security label for all of the rows becomes
# MEDIUM. The results of Alan’s update are shown in Table 54 on page 202.
# Because the row with the security label of HIGH dominates Beth’s security label,
| the update fails for that row, which causes the entire update to fail.
# Because the rows with the security labels of MEDIUM and HIGH dominate
# Carlos’s security label, the update fails for those rows, which causes the entire
# update to fail.
| Recommendation: To avoid failed updates, qualify the rows that you want to
| update with the following predicate, for the security label column SECLABEL:
# WHERE SECLABEL=GETVARIABLE(’SYSIBM.SECLABEL’);
| Using this predicate avoids failed updates because it ensures that the user’s
| security label is equivalent to the security label of the rows that DB2 attempts to
| update.
| Example: Suppose that Alan has a security label of HIGH, that Beth has a security
| label of MEDIUM and write-down privilege defined in RACF, and that Carlos has
| a security label of LOW. Write-down control is enabled.
| Suppose that DSN8710.EMP contains the data that is shown in Table 55 and that
| the SECURITY column has been declared with the AS SECURITY LABEL clause.
| Table 55. Sample data from DSN8710.EMP
| EMPNO LASTNAME WORKDEPT SECURITY
| 000190 BROWN D11 HIGH
| 000200 JONES D11 MEDIUM
| 000210 LUTZ D11 LOW
|
| Now, suppose that Alan, Beth, and Carlos each submit the following DELETE
| statement:
| DELETE FROM DSN8710.EMP
| WHERE WORKDEPT='D11';
| Because Alan has a security label that dominates the rows with security labels of
| MEDIUM and LOW, his write-down privilege determines whether these rows are
| deleted. Alan does not have write-down privilege, so the delete fails for these
| rows. Because Alan has a security label that is equivalent to the security label of
| the row with HIGH security, the delete on that row succeeds. The results of Alan’s
| delete are shown in Table 56.
| Table 56. Sample data from DSN8710.EMP after Alan’s delete
| EMPNO LASTNAME WORKDEPT SECURITY
| 000200 JONES D11 MEDIUM
| 000210 LUTZ D11 LOW
|
| Because Beth has a security label that dominates the row with a security label of
| LOW, her write-down privilege determines whether this row is deleted. Beth has
| write-down privilege, so the delete succeeds for this row. Because Beth has a
| security label that is equivalent to the security label of the row with MEDIUM
| security, the delete succeeds for that row. Because the row with the security label
| of HIGH dominates Beth’s security label, the delete fails for that row. The results
| of Beth’s delete are shown in Table 57.
| Table 57. Sample data from DSN8710.EMP after Beth’s delete
| EMPNO LASTNAME WORKDEPT SECURITY
| 000190 BROWN D11 HIGH
|
| Because Carlos’s security label is LOW, the delete fails for the rows with security
| labels of MEDIUM and HIGH. Because Carlos has a security label that is
| equivalent to the security label of the row with LOW security, the delete on that
| row succeeds. The results of Carlos’s delete are shown in Table 58.
| Table 58. Sample data from DSN8710.EMP after Carlos’s delete
| EMPNO LASTNAME WORKDEPT SECURITY
| 000190 BROWN D11 HIGH
| 000200 JONES D11 MEDIUM
|
| Important: Do not omit the WHERE clause from DELETE statements. If you omit
| the WHERE clause, DB2 performs row-level checking for every row that has a
| security label, which might have a negative impact on performance.
| For more information about these SQL statements and multilevel security, see the
| DB2 SQL Reference.
| When you run LOAD RESUME, you must have the write-down privilege to
| specify values for the security label column. If you run a LOAD RESUME job and
| do not have the write-down privilege, DB2 assigns your security label as the value
| for each row in the security label column.
| For more information about utilities and multilevel security, see DB2 Utility Guide
| and Reference.
| Example: Suppose that the ORDER table has the following columns: ORDERNO,
| PRODNO, CUSTNO, SECURITY. Suppose that SECURITY is the security label
| column, and that you do not want users to see the SECURITY column. Use the
| following statement to create a view that hides the security label column from
| users:
| CREATE VIEW V1 AS
| SELECT ORDERNO, PRODNO, CUSTNO FROM ORDER;
| Alternatively, you can create views that give each user access only to the rows
| that match that user's security label. To do that, retrieve the value of the
| SYSIBM.SECLABEL session variable, and create a view that includes only the rows
| that match the session variable value.
| Example: To allow access only to the rows that match the user’s security label, use
| the following CREATE statement:
| CREATE VIEW V2 AS SELECT * FROM ORDER
| WHERE SECURITY=GETVARIABLE('SYSIBM.SECLABEL');
| When a BEFORE trigger is activated, the value of the NEW transition variable that
| corresponds to the security label column is set to the security label of the user if
| either of the following criteria are met:
| v Write-down control is enabled and the user does not have write-down privilege
| v The value of the security label column is not specified for the row
| All users on a TCP/IP connection have the security label that is associated with
| the IP address that is defined on the server. If a user requires a different security
| label, the user must enter through an IP address that has that security label
| associated with it. If you require multiple IP addresses on a remote z/OS server, a
| workstation, or a gateway, you can configure multiple virtual IP addresses. This
| strategy can increase the number of security labels that are available on a client.
| Remote users that access DB2 by using a TCP/IP network connection use the
| security label that is associated with the RACF SERVAUTH class profile when the
| remote user is authenticated. Security labels are assigned to the database access
| thread when the DB2 server authenticates the remote server by using the
| RACROUTE REQUEST = VERIFY service.
| DB2 provides built-in data encryption functions that you can use to encrypt
| sensitive data, such as credit card numbers and medical record numbers. When
| you use data encryption, DB2 requires the correct password to retrieve the data in
| a decrypted format. If an incorrect password is provided, DB2 does not decrypt the
| data.
| You can encrypt data by using two different methods. These methods are described
| in the following sections:
| v “Defining encryption at the column level” on page 208
| v “Defining encryption at the value level” on page 209
| Important: Built-in encryption functions work for data that is stored within a DB2
| subsystem and is retrieved from within that same DB2 subsystem. The encryption
| functions do not work for data that is passed into and out of a DB2 subsystem.
| This task is handled by DRDA data encryption, and it is separate from built-in
| data encryption functions.
| Example: Suppose that you have non-encrypted data in a column that is defined as
| VARCHAR(6). Use the following calculation to determine the column definition for
| storing the data in encrypted format:
| Maximum length of non-encrypted data 6 bytes
| Number of bytes to the next multiple of 8 2 bytes
| 24 bytes for encryption key 24 bytes
| --------
| Encrypted data column length 32 bytes
| Therefore, define the column for encrypted data as VARCHAR(32) FOR BIT DATA.
| If you use a password hint, DB2 requires an additional 32 bytes to store the hint.
| Example: Suppose that you have non-encrypted data in a column that is defined as
| VARCHAR(10). Use the following calculation to determine the column definition
| for storing the data in encrypted format with a password hint:
| Maximum length of non-encrypted data 10 bytes
| Number of bytes to the next multiple of 8 6 bytes
| 24 bytes for encryption key 24 bytes
| 32 bytes for password hint 32 bytes
| --------
| Encrypted data column length 72 bytes
| Therefore, define the column for encrypted data as VARCHAR(72) FOR BIT DATA.
| When encrypted data is selected, DB2 must hold the same password that was held
| at the time of encryption to decrypt the data. To ensure that DB2 holds the correct
| password, issue a SET ENCRYPTION PASSWORD statement with the correct
| password immediately before selecting encrypted data.
| Example: Suppose that you need to create an employee table EMP that contains
| employee ID numbers in encrypted format. Suppose also that you want to set the
| password for all rows in an encrypted column to the host variable hv_pass. Finally,
| suppose that you want to select employee ID numbers in decrypted format.
| Perform the following steps:
| 1. Create the EMP table with the EMPNO column. The EMPNO column must be
| defined with the VARCHAR data type, must be defined FOR BIT DATA, and
| must be long enough to hold the encrypted data. The following statement
| creates the EMP table:
| CREATE TABLE EMP (EMPNO VARCHAR(32) FOR BIT DATA);
| 2. Set the encryption password. The following statement sets the encryption
| password to the host variable :hv_pass:
| SET ENCRYPTION PASSWORD = :hv_pass;
| 3. Use the ENCRYPT keyword to insert encrypted data into the EMP table by
| issuing the following statements:
| INSERT INTO EMP (EMPNO) VALUES(ENCRYPT('47138'));
| INSERT INTO EMP (EMPNO) VALUES(ENCRYPT('99514'));
| INSERT INTO EMP (EMPNO) VALUES(ENCRYPT('67391'));
| 4. Select the employee ID numbers in decrypted format by issuing the following
| statement:
| SELECT DECRYPT_CHAR(EMPNO) FROM EMP;
| If you provide the correct password, DB2 returns the employee ID numbers in
| decrypted format.
| Example: Suppose that you want to create a view that contains decrypted
| employee ID numbers from the EMP table.
| 1. Create a view on the EMP table by using the following statement:
| CREATE VIEW CLR_EMP (EMPNO) AS SELECT DECRYPT_CHAR(EMPNO) FROM EMP;
| 2. Set the encryption password, so that the fullselect in the view definition can
| retrieve decrypted data. Use the following statement:
| SET ENCRYPTION PASSWORD = :hv_pass;
| 3. Select the desired data from the view by using the following statement:
| SELECT EMPNO FROM CLR_EMP;
| Example: Use the following statement to set the password hint to the host variable
| hv_hint:
| SET ENCRYPTION PASSWORD = :hv_pass WITH HINT = :hv_hint;
| Example: Suppose that the EMPNO column in the EMP table contains encrypted
| data and that you submitted a password hint when you inserted the data. Suppose
| that you cannot remember the encryption password for the data. Use the following
| statement to return the password hint:
| SELECT GETHINT (EMPNO) FROM EMP;
| Before the application displays the credit card number for a customer, the customer
| must enter the password. The application retrieves the credit card number by
| using the following statement:
| SELECT DECRYPT_CHAR(CCN, :userpswd) FROM CUSTOMER WHERE NAME = :custname;
| Recommendation: Use host variables instead of literal values for all passwords
| and password hints. If statements contain literal values for passwords and
| password hints, the literal values are visible in the DB2 catalog and in trace
| reports, which can compromise the security of the encrypted data.
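| For example, contrast the following two statements (a sketch; :userpswd is the
| host variable from the surrounding example, set by the application):
| -- Avoid: the literal password is visible in the catalog and in traces
| SET ENCRYPTION PASSWORD = 'Tahoe';
| -- Prefer: the password is supplied through a host variable
| SET ENCRYPTION PASSWORD = :userpswd;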
| Example: Suppose that you want the application from the previous example to use
| a hint to help customers remember their passwords. The application stores the hint
| in the host variable pswdhint. For this example, assume the values 'Tahoe' for
| userpswd and 'Ski Holiday' for pswdhint. The application uses the following
| statement to insert the customer information:
| INSERT INTO CUSTOMER (CCN, NAME)
| VALUES(ENCRYPT(:cardnum, :userpswd, :pswdhint), :custname);
| If the customer requests a hint about the password, the following query is used:
| SELECT GETHINT(CCN) INTO :pswdhint FROM CUSTOMER WHERE NAME = :custname;
| The value for pswdhint is set to 'Ski Holiday' and returned to the customer. The
| hint is intended to help the customer remember the password 'Tahoe'.
| Example: Suppose that the value 1234 is encrypted as H71G. Also suppose that the
| value 5678 is encrypted as BF62. If you use a <> predicate to compare these two
| values in encrypted format, you receive the same result as you would if you
| compared these two values in decrypted format:
| Decrypted: 1234 <> 5678 True
| Encrypted: H71G <> BF62 True
| Example: However, if you use a < predicate to compare these values in encrypted
| format, you receive a different result than you would if you compared these two
| values in decrypted format:
| Decrypted: 1234 < 5678 True
| Encrypted: H71G < BF62 False
| To ensure that predicates such as >, <, and LIKE return accurate results, you must
| first decrypt the data.
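| For example, a range predicate written along the following lines (a sketch that
| reuses the EMP table from the earlier examples) decrypts the column values
| before the comparison, which forces a scan but returns accurate results:
| SET ENCRYPTION PASSWORD = :hv_pass;
| SELECT DECRYPT_CHAR(EMPNO) FROM EMP
| WHERE DECRYPT_CHAR(EMPNO) < '50000';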
| Example: Suppose that you need to encrypt timestamp data and retrieve it in
| decrypted format. Perform the following steps:
| 1. Create a table to store the encrypted values and set the column-level encryption
| password by using the following statements:
| CREATE TABLE ETEMP (C1 VARCHAR(124) FOR BIT DATA);
| SET ENCRYPTION PASSWORD = :hv_pass;
| 2. Cast, encrypt, and insert the timestamp data by using the following statement:
| INSERT INTO ETEMP VALUES (ENCRYPT(CHAR(CURRENT TIMESTAMP)));
| 3. Recast, decrypt, and select the timestamp data by using the following
| statement:
| SELECT TIMESTAMP(DECRYPT_CHAR(C1)) FROM ETEMP;
| Recommendation: Encrypt only a few highly sensitive data elements, such as
| credit card numbers and medical record numbers.
| Some data values are poor candidates for encryption. For example, boolean values
| and other small value sets, such as the integers 1 through 10, are poor candidates
| for encryption. Because few values are possible, these types of data can be easy to
| guess even when they are encrypted. In most cases, encryption is not a good
| security option for this type of data.
| Data encryption and indexes: Creating indexes on encrypted data can improve
| performance in some cases. Exact matches and joins of encrypted data (if both
| tables use the same encryption key to encrypt the same data) can use the indexes
| that you create. Because encrypted data is binary data, range checking of
| encrypted data requires table space scans. Range checking requires all the row
| values for a column to be decrypted. Therefore, range checking should be avoided,
| or at least tuned appropriately.
| Example: Suppose that you must store EMPNO in encrypted form in the EMP
| table and in the EMPPROJ table. To define tables and indexes for the encrypted
| data, use the following statements:
| CREATE TABLE EMP (EMPNO VARCHAR(48) FOR BIT DATA, NAME VARCHAR(48));
| CREATE TABLE EMPPROJ(EMPNO VARCHAR(48) FOR BIT DATA, PROJECTNAME VARCHAR(48));
| CREATE INDEX IXEMPPRJ ON EMPPROJ(EMPNO);
| Example: Next, suppose that one employee can work on multiple projects, and that
| you want to insert employee and project data into the table. To set the encryption
| password and insert data into the tables, use the following statements:
| SET ENCRYPTION PASSWORD = :hv_pass;
| SELECT EMPNO INTO :hv_enc_val FROM FINAL TABLE
| (INSERT INTO EMP VALUES (ENCRYPT('A7513'),'Super Prog'));
| INSERT INTO EMPPROJ VALUES (:hv_enc_val,'UDDI Project');
| INSERT INTO EMPPROJ VALUES (:hv_enc_val,'DB2 UDB Version 10');
| SELECT EMPNO INTO :hv_enc_val FROM FINAL TABLE
| (INSERT INTO EMP VALUES (ENCRYPT('4NF18'),'Novice Prog'));
| INSERT INTO EMPPROJ VALUES (:hv_enc_val,'UDDI Project');
| Example: Next, suppose that you want to find the programmers who are working
| on the UDDI Project. Consider the following pair of SELECT statements:
| v Poor performance: The following query shows how not to write the query for
| good performance:
| SELECT A.NAME, DECRYPT_CHAR(A.EMPNO) FROM EMP A, EMPPROJ B
| WHERE DECRYPT_CHAR(A.EMPNO) = DECRYPT_CHAR(B.EMPNO) AND
| B.PROJECTNAME = 'UDDI Project';
| Although the preceding query returns the correct results, it decrypts every
| EMPNO value in the EMP table and every EMPNO value in the EMPPROJ table
| where PROJECTNAME = 'UDDI Project' to perform the join. For large tables, this
| unnecessary decryption is a significant performance problem.
| v Good performance: The following query produces the same result as the
| preceding query, but with significantly better performance. To find the
| programmers who are working on the UDDI Project, use the following statement:
| SELECT A.NAME, DECRYPT_CHAR(A.EMPNO) FROM EMP A, EMPPROJ B
| WHERE A.EMPNO = B.EMPNO AND B.PROJECTNAME = 'UDDI Project';
| Example: Next, suppose that you want to find the projects that the programmer
| with employee ID A7513 is working on. Consider the following pair of SELECT
| statements:
| v Poor performance: The following query requires DB2 to decrypt every EMPNO
| value in the EMPPROJ table to perform the join:
If you install data definition control support on the DSNTIPZ installation panel, you
can control how specific plans or package collections can use data definition
statements. Data definition control support does not replace existing authorization
checks; it imposes additional checks. Table 59 lists the specific statements that are
controlled by using data definition control support.
Table 59. Statements that are controlled by data definition control support
Object          CREATE statement     ALTER statement      DROP statement
Alias           CREATE ALIAS                              DROP ALIAS
Database        CREATE DATABASE      ALTER DATABASE       DROP DATABASE
Index           CREATE INDEX         ALTER INDEX          DROP INDEX
Storage group   CREATE STOGROUP      ALTER STOGROUP       DROP STOGROUP
Synonym         CREATE SYNONYM                            DROP SYNONYM
Table           CREATE TABLE         ALTER TABLE          DROP TABLE
Table space     CREATE TABLESPACE    ALTER TABLESPACE     DROP TABLESPACE
View            CREATE VIEW                               DROP VIEW
Note: Data definition control support also controls COMMENT statements and
LABEL statements.
The statements in Table 59 are a subset of statements that are referred to as “data
definition language.” In this chapter, data definition language refers only to this
subset of statements. For information about how to impose several degrees of
control over data definition in applications and objects, see “Controlling data
definition” on page 218.
“Columns of the ART” and “Columns of the ORT” on page 217 describe the
columns of the two registration tables. For more information about maintaining the
ART and the ORT, see “Managing the registration tables and their indexes” on
page 226.
After you specify values on the installation panel, you enter the appropriate
information in the ART and ORT to enable data definition control support.
This section explains what values to enter on the DSNTIPZ installation panel and
what data to enter in the ART and ORT. This section first explains the basic
installation options in “Installing data definition control support” on page 219.
Then, the section explains the specific options for the following four methods of
controlling data definition control support:
v “Controlling data definition by application name” on page 220 describes the
simplest of the four methods for implementing data definition control.
v “Controlling data definition by application name with exceptions” on page 221
describes how to give one or more applications almost total control over data
definition.
v “Controlling data definition by object name” on page 222 describes how to
register all of the objects in a subsystem and have several applications control
specific sets of objects.
v “Controlling data definition by object name with exceptions” on page 224
describes how to control registered and unregistered objects.
Finally, the section describes the optional task of registering sets of objects in
“Registering sets of objects” on page 225.
Recommendation: As you read through the relevant sections in this chapter, use
the template in Figure 17 on page 219 to record the values that you specify on the
DSNTIPZ installation panel.
You can accept the default names or assign names of your own. If you specify
your own table names, each name can have a maximum of 17 characters. This
chapter uses the default names.
3. If you want to use the percent character (%) or the underscore character (_) as a
regular character in the ART or ORT, enter an escape character for option 5 on
the DSNTIPZ installation panel. You can use any special character other than
underscore or percent as the escape character.
Example: To use the pound sign (#) as an escape character, fill in option 5 as
follows:
5 ART/ORT ESCAPE CHARACTER ===> #
After you specify the pound sign as an escape character, the pound sign can be
used in names in the same way that an escape character is used in an SQL
LIKE predicate.
For more information about escape characters and the percent and underscore
characters, see DB2 SQL Reference.
4. Register plans, packages, and objects in the ART and ORT, and enter values for
the three other options on the DSNTIPZ installation panel as follows:
Choose the values to enter and the plans, packages, and objects to register
based on the control method that you plan to use:
v If you want to control data definition by application name, perform the
steps in one of the following sections:
– For registered applications that have total control over all data definition
language in the DB2 subsystem, see “Controlling data definition by
application name.”
– For registered applications that have total control with some exceptions,
see “Controlling data definition by application name with exceptions” on
page 221. If you also want to control data definition by registering sets of
objects, perform the steps in “Registering sets of objects” on page 225.
v If you want to control data definition by object name, perform the steps in
one of the following sections:
– For subsystems in which all objects are registered and controlled by name,
see “Controlling data definition by object name” on page 222. If you also
want to control data definition by registering sets of objects, perform the
steps in “Registering sets of objects” on page 225.
– For subsystems in which some specific objects are registered and
controlled, and data definition language is accepted for objects that are not
registered, see “Controlling data definition by object name with
exceptions” on page 224. If you also want to control data definition by
registering sets of objects, perform the steps in “Registering sets of
objects” on page 225.
When you specify YES, only package collections or plans that are registered in
the ART are allowed to use data definition statements.
2. In the ART, register all package collections and plans that you will allow to
issue DDL statements, and enter the value Y in the DEFAULTAPPL column for
these package collections. You must supply values for the APPLIDENT,
APPLIDENTTYPE, and DEFAULTAPPL columns of the ART. You can enter
information in other columns for your own use as indicated in Table 60 on page
216.
Example: Suppose that you want all data definition language in your subsystem to
be issued only through certain applications. The applications are identified by the
following application plan names, collection-IDs, and patterns:
PLANA The name of an application plan
PACKB The collection-ID of a package
TRULY% A pattern name for any plan name beginning with TRULY
TR% A pattern name for any plan name beginning with TR
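A sketch of how the plan and package collection might be registered, assuming the
default ART name DSN_REGISTER_APPL and the ART columns described in this
chapter:
INSERT INTO DSN_REGISTER_APPL (APPLIDENT, APPLIDENTTYPE, DEFAULTAPPL)
VALUES ('PLANA', 'P', 'Y');
INSERT INTO DSN_REGISTER_APPL (APPLIDENT, APPLIDENTTYPE, DEFAULTAPPL)
VALUES ('PACKB', 'C', 'Y');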
An inactive table entry: If the row with TR% for APPLIDENT in Table 62 contains
the value Y for DEFAULTAPPL, any plan with a name beginning with TR can
execute data definition language. If DEFAULTAPPL is later changed to N to
disallow that use, the changed row does not prevent plans beginning with TR
from using data definition language; the row merely fails to allow that specific use.
In this case, the plan TRXYZ is not allowed to use data definition language.
However, the plan TRULYXYZ is allowed to use data definition language, by the
row with TRULY% specified for APPLIDENT.
When you specify NO, you allow unregistered applications to use data
definition statements on some objects.
2. On the DSNTIPZ installation panel, specify the following for option 4:
4 UNREGISTERED DDL DEFAULT ===> APPL
When you specify APPL, you restrict the use of data definition statements for
objects that are not registered in the ORT. If an object is registered in the ORT,
any applications that are not registered in the ART can use data definition
language on the object. However, if an object is not registered in the ORT, only
applications that are registered in the ART can use data definition language on
the object.
3. In the ART, register package collections and plans that you will allow to issue
data definition statements on any object. Enter the value Y in the
DEFAULTAPPL column for these package collections. Applications that are
registered in the ART retain almost total control over data definition. Objects
that are registered in the ORT are the only exceptions. See step 2 of
“Controlling data definition by application name” on page 220 for more
information about registering package collections and plans in the ART.
4. In the ORT, register all objects that are exceptions to the subsystem data
definition control that you defined in the ART. You must supply values for the
QUALIFIER, NAME, TYPE, APPLMATCHREQ, APPLIDENT, and
APPLIDENTTYPE columns of the ORT.
Example: Suppose that you want almost all of the data definition language in your
subsystem to be issued only through an application plan (PLANA) and a package
collection (PACKB).
Table 64 shows the entries that are needed to register these exceptions in the ORT.
Table 64. Table DSN_REGISTER_OBJT for subsystem control with exceptions
QUALIFIER   NAME     TYPE   APPLMATCHREQ   APPLIDENT   APPLIDENTTYPE
KIM         VIEW1    C      Y              PLANC       P
BOB         ALIAS    C      Y              PACKD       C
FENG        TABLE2   C      N
SPIFFY      MSTR_    C      Y              TRULY%      P
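For illustration, the first row of Table 64 might be inserted with a statement like
the following sketch, assuming the default ORT name:
INSERT INTO DSN_REGISTER_OBJT
(QUALIFIER, NAME, TYPE, APPLMATCHREQ, APPLIDENT, APPLIDENTTYPE)
VALUES ('KIM', 'VIEW1', 'C', 'Y', 'PLANC', 'P');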
You can register objects in the ORT individually, or you can register sets of objects.
For information about registering sets of objects, see “Registering sets of objects”
on page 225.
When you specify REJECT for option 4, you totally restrict the use of data
definition statements for objects that are not registered in the ORT. Therefore,
no application can use data definition language for any unregistered object.
3. In the ORT, register all of the objects in the subsystem, and enter Y in the
APPLMATCHREQ column. You must supply values for the QUALIFIER,
NAME, TYPE, APPLMATCHREQ, APPLIDENT, and APPLIDENTTYPE
columns of the ORT. You can enter information in other columns of the ORT for
your own use as indicated in Table 61 on page 217.
4. In the ART, register any plan or package collection that can use a set of objects
that you register in the ORT with an incomplete name. Enter the value Y in the
QUALIFIEROK column. These plans or package collections can use data
definition language on sets of objects regardless of whether a set of objects has
a value of Y in the APPLMATCHREQ column.
Example: Table 65 on page 224 shows entries in the ORT for a DB2 subsystem that
contains the following objects that are controlled by object name:
v Two storage groups (STOG1 and STOG2) and a database (DATB1) that are not
controlled by a specific application. These objects can be created, altered, or
dropped by a user with the appropriate authority by using any application, such
as SPUFI or QMF.
v Two table spaces (TBSP1 and TBSP2) that are not controlled by a specific
application. Their names are qualified by the name of the database in which
they reside (DATB1).
v Three objects (OBJ1, OBJ2, and OBJ3) whose names are qualified by the
authorization IDs of their owners. Those objects might be tables, views, indexes,
synonyms, or aliases. Data definition statements for OBJ1 and OBJ2 can be
issued only through the application plan named PLANX. Data definition
statements for OBJ3 can be issued only through the package collection named
PACKX.
v Objects that match the qualifier pattern E%D and the name OBJ4 can be created,
altered, or deleted by application plan SPUFI. For example, the objects
EDWARD.OBJ4, ED.OBJ4, and EBHARD.OBJ4 can be created, altered, or deleted
by application plan SPUFI. Entry E%D in the QUALIFIER column represents all
three objects.
v Objects with names that begin with TRULY.MY_, where the underscore character
is actually part of the name. Assuming that you specify # as the escape character,
all of the objects with this name pattern can be created, altered, or dropped only
by plans with names that begin with TRULY.
Entries in Table 65 on page 224 do not specify incomplete names. Hence, objects
that are not represented in the table cannot be created in the subsystem, except by
an ID with installation SYSADM authority.
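For illustration, the last entry in the preceding list might be registered with a
statement like the following sketch, in which # is the escape character and the
name pattern MY#_% stands for names that begin with MY_ (the row values echo
this example and assume the default ORT name):
INSERT INTO DSN_REGISTER_OBJT
(QUALIFIER, NAME, TYPE, APPLMATCHREQ, APPLIDENT, APPLIDENTTYPE)
VALUES ('TRULY', 'MY#_%', 'C', 'Y', 'TRULY%', 'P');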
You can register objects in the ORT individually, or you can register sets of objects.
For information about registering sets of objects, see “Registering sets of objects”
on page 225.
When you specify NO, you allow unregistered applications to use data
definition statements on some objects.
2. On the DSNTIPZ installation panel, fill in option 4 as follows:
4 UNREGISTERED DDL DEFAULT ===> ACCEPT
This option does not restrict the use of data definition statements for objects
that are not registered in the ORT. Therefore, any application can use data
definition language for any unregistered object.
3. Register all controlled objects in the ORT. Use a name and qualifier to identify
a single object. Use only one part of a two-part name to identify a set of objects
that share just that part of the name. For each controlled object, use
APPLMATCHREQ = Y. Enter the name of the plan or package collection that
controls the object in the APPLIDENT column.
4. For each set of controlled objects (identified by only a simple name in the
ORT), register the controlling application in the ART. You must supply values
for the APPLIDENT, APPLIDENTTYPE, and QUALIFIEROK columns of the
ART.
Example: The following two tables assume that the installation option REQUIRE
FULL NAMES is set to NO, as described in “Registering sets of objects” on page
225. Table 66 on page 225 shows entries in the ORT for the following controlled
objects:
You can register objects in the ORT individually, or you can register sets of objects.
For information about registering sets of objects, see “Registering sets of objects.”
Registering sets of objects allows you to save time and to simplify object
registration. Because complete two-part names are not required for every object
that is registered in the ORT, you can use incomplete names to register sets of
objects. To use incomplete names and register sets of objects, fill in option 3 on the
DSNTIPZ installation panel as follows:
3 REQUIRE FULL NAMES ===> NO
The default value YES requires you to use both parts of the name for each
registered object. If you specify the value NO, an incomplete name in the ORT
represents a set of objects that share one common part of a two-part name.
Example: If you specify NO for option 3, you can include entries with incomplete
names in the ORT. Table 68 shows entries in the ORT for the following objects:
v Two sets of objects, *.TABA and *.TABB, which are controlled by PLANX and
PACKY, respectively. Only PLANX can create, alter, or drop any object whose
name is *.TABA. Only PACKY can create, alter, or drop any object whose name
is *.TABB. PLANX and PACKY must also be registered in the ART with
QUALIFIEROK set to Y, as shown in Table 69. That setting allows the
applications to use sets of objects that are registered in the ORT with an
incomplete name.
v Tables, views, indexes, or aliases with names like SYSADM.*.
v Table spaces with names like DBSYSADM.*; that is, table spaces in database
DBSYSADM.
v Tables with names like USER1.* and tables with names like *.TABLEX.
Table 68. Table DSN_REGISTER_OBJT for objects with incomplete names
QUALIFIER   NAME     TYPE   APPLMATCHREQ   APPLIDENT   APPLIDENTTYPE
            TABA     C      Y              PLANX       P
            TABB     C      Y              PACKY       C
SYSADM               C      N
DBSYSADM             T      N
USER1                C      N
            TABLEX   C      N
ART entries for objects with incomplete names in the ORT: Because
APPLMATCHREQ=N for them, the objects SYSADM.*, DBSYSADM.*, USER1.*,
and *.TABLEX can be created, altered, or dropped by any package collection or
application plan. However, the collection or plan that creates, alters, or drops such
an object must be registered in the ART with QUALIFIEROK=Y to allow it to use
incomplete object names.
Table 69 shows that PLANA and PACKB are registered in the ART to use sets of
objects that are registered in the ORT with incomplete names.
Table 69. Table DSN_REGISTER_APPL for plans that use sets of objects
APPLIDENT APPLIDENTTYPE DEFAULTAPPL QUALIFIEROK
PLANA P N Y
PACKB C N Y
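For illustration, the PLANA row in Table 69 might be inserted as follows (a
sketch, assuming the default ART name):
INSERT INTO DSN_REGISTER_APPL
(APPLIDENT, APPLIDENTTYPE, DEFAULTAPPL, QUALIFIEROK)
VALUES ('PLANA', 'P', 'N', 'Y');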
Recommendation: Avoid changing ART and ORT table names. If you change either
of the table names, their owner, or their database, you must reinstall DB2 in
update mode and make the corresponding changes on the DSNTIPZ installation
panel.
Name the required index by adding the letter I to the corresponding table name.
For example, suppose that you are naming a required index for the ART named
ABC. You should name the required index ABCI.
If you want to use a table space with a different name or different attributes, you
can modify job DSNTIJSG before installing DB2. Alternatively, you can drop the
table space and re-create it, the two tables, and their indexes.
Adding columns
You can add columns to either registration table for your own use, by using the
ALTER TABLE statement. If you add columns, the additional columns must come
at the end of the table, after existing columns.
Recommendation: Use a special character, such as the plus sign (+), in your
column names to avoid possible conflict. If IBM adds columns to the ART or the
ORT in future releases, the column names will contain only letters and numbers.
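For example, a column for your own comments might be added with a statement
like the following sketch; the column name NOTES+ is hypothetical and is written
as a delimited identifier because it contains a special character:
ALTER TABLE DSN_REGISTER_OBJT
ADD "NOTES+" VARCHAR(40);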
| If you are using RACF, you can also define multilevel security for DB2 resources,
| which is described in “Multilevel security” on page 191.
Control by RACF is not strictly necessary, and some alternatives are described
under “Other methods of controlling access” on page 280. However, most of the
information in this chapter assumes that RACF, or an equivalent product, is
already in place.
Local requests only: If you do not accept requests from or send requests to remote
locations, begin reading this chapter with “Controlling local requests” on page 232.
When you reach “Controlling requests from remote applications” on page 238, you
can skip forward to “Establishing RACF protection for DB2” on page 264.
If you are sending requests to a remote DB2 subsystem, that subsystem can subject
your requests to various security checks. For suggestions on how to plan for these
checks, see “Planning to send remote requests” on page 252. If you send requests
to a remote DBMS that is not DB2 UDB for z/OS, use the documentation for that
DRDA database server.
For instructions on controlling the IDs that are associated with connection requests,
see “Processing connections.” For instructions on controlling the IDs that are
associated with sign-on requests, see “Processing sign-ons” on page 236.
IMS, CICS, RRSAF, and DDF-to-DDF connections can send a sign-on request,
typically to execute an application plan. That request must provide a primary ID,
and can also provide secondary IDs. After a plan is allocated, it need not be
deallocated until a new plan is required. A different transaction can use the same
plan by issuing a new sign-on request with a new primary ID.
Processing connections
A connection request makes a new connection to DB2; it does not reuse an
application plan that is already allocated. Therefore, an essential step in processing
the request is to check that the ID is authorized to use DB2 resources, as shown in
Figure 18.
2. RACF is called through the z/OS system authorization facility (SAF) to check
whether the ID that is associated with the address space is authorized to use
the following resources:
v The DB2 resource class (CLASS=DSNR)
v The DB2 subsystem (SUBSYS=ssnm)
v The requested connection type
For instructions on authorizing the use of these resources, see “Permitting
RACF access” on page 268.
The SAF return code (RC) from the invocation determines the next step, as
follows:
v If RC > 4, RACF determined that the RACF user ID is not valid or does not
have the necessary authorization to access the resource name. DB2 rejects the
request for a connection.
v If RC = 4, the RACF return code is checked.
– If RACF return code value is equal to 4, the resource name is not defined
to RACF and DB2 rejects the request with reason code X'00F30013'. For
instructions on defining the resource name, see “Defining DB2 resources
to RACF” on page 265.
– If RACF return code value is not equal to 4, RACF is not active. DB2
continues with the next step, but the connection request and the user are
not verified.
v If RC = 0, RACF is active and has verified the RACF user ID; DB2 continues
with the next step.
3. If RACF is active and has verified the RACF user ID, DB2 runs the connection
exit routine. To use DB2 secondary IDs, you must replace the exit routine. See
“Supplying secondary IDs for connection requests” on page 234.
If you do not want to use secondary IDs, do nothing. The IBM-supplied default
connection exit routine continues the connection processing. The process has
the following effects:
v The DB2 primary authorization ID is set based on the following rules:
– If a value for the initial primary authorization ID exists, the value
becomes the DB2 primary ID.
– If no value exists (the value is blank), the primary ID is set by default, as
shown in Table 71 on page 234.
The differences between the default connection exit routine and the sample
connection exit routine are shown in Table 72.
Table 72. Differences between default connection exit routine and sample connection exit
routine
Default connection exit routine             Sample connection exit routine
Supplied as object code.                    Supplied as source code. You can
                                            change the code.
Installed as part of the normal DB2         Must be compiled and placed in the
installation procedure.                     DB2 library.
Provides values for primary IDs and         Provides values for primary IDs,
SQL IDs, but does not provide values        secondary IDs, and SQL IDs.
for secondary IDs.
Installation job DSNTIJEX replaces the default connection exit routine with the
sample connection exit routine; for more information, see Part 2 of DB2 Installation
Guide.
If the default connection exit routine and the sample connection exit routine do not
provide the flexibility and features that your subsystem requires, you can write
your own exit routine. For instructions on writing your own exit routine, see
Appendix B, “Writing exit routines,” on page 1015.
You must change the sample sign-on exit routine (DSN3SSGN) before using it if
the following conditions are all true:
v You have the RACF list-of-groups option active.
v You have transactions whose initial primary authorization ID is not defined to
RACF.
Processing sign-ons
For requests from IMS dependent regions, CICS transaction subtasks, or RRS
connections, the initial primary ID is not obtained until just before allocating a
plan for a transaction. A new sign-on request can run the same plan without
deallocating the plan and reallocating it. Nevertheless, the new sign-on request can
change the primary ID.
Unlike connection processing, sign-on processing does not check the RACF user ID
of the address space. The steps in processing sign-ons are shown in Figure 19.
Distinguish carefully between the two routines. The default sign-on routine
provides no secondary IDs and has the effects described in step 2 of “Processing
sign-ons” on page 236. The sample sign-on routine supports DB2 secondary IDs,
and is like the sample connection routine.
If you use TCP/IP protocols, DB2 supports the following DRDA authentication
mechanisms:
v User ID only (already verified)
v User ID and password, described in “Sending passwords” on page 262
v User ID and PassTicket, described in “Sending RACF PassTickets” on page 263
| If you use TCP/IP protocols with the z/OS Integrated Cryptographic Service
| Facility, DB2 also supports the following DRDA authentication mechanisms:
| v Encrypted user ID and encrypted password
| v Encrypted user ID and encrypted security-sensitive data
| v Encrypted user ID, encrypted password, and encrypted security-sensitive data
If you use a requester other than DB2 UDB for z/OS, refer to that product's
documentation.
| DB2 UDB for z/OS as a server also supports the following authentication
| mechanisms if the z/OS Integrated Cryptographic Service Facility is installed and
| active:
| v Encrypted user ID and encrypted security-sensitive data
| v Encrypted user ID, encrypted password, and encrypted security-sensitive data
Allowing users to change expired passwords: DB2 can return to the DRDA
requester information about errors and expired passwords. To allow this, specify
YES in the EXTENDED SECURITY field of installation panel DSNTIPR.
When the DRDA requester is notified that the RACF password has expired, and
the requester has implemented function to allow passwords to be changed, the
requester can prompt the end user for the old password and a new password. The
requester sends the old and new passwords to the DB2 server. This function is
supported through DB2 Connect.
With the extended security option, DB2 passes the old and new passwords to
RACF. If the old password is correct, and the new password meets the
installation's password requirements, the end user's password is changed and the
DRDA connection request is honored.
When a user changes a password, the user ID, the old password, and the new
password are sent to DB2 by the client system. The client system can optionally
encrypt these three tokens before they are sent.
The communications database (CDB) is a set of DB2 catalog tables that let you
control aspects of how requests leave this DB2 and how requests come in. This
section concentrates on the columns of the communications database that pertain
to security on the inbound side (the server).
The SYSIBM.IPNAMES table is not described in this section, because that table is
not used to control inbound TCP/IP requests.
The field should contain only I or O. Any other character, including blank,
causes the row to be ignored.
| AUTHID VARCHAR(128)
An authorization ID that is permitted and perhaps translated. If blank, any
authorization ID is permitted with the corresponding LINKNAME; all
authorization IDs are translated in the same way. Outbound translation is
not performed on CONNECT statements that contain an authorization ID
for the value of the USER parameter.
LINKNAME CHAR(8)
Identifies the VTAM or TCP/IP network locations that are associated with
this row. A blank value in this column indicates that this name translation
rule applies to any TCP/IP or SNA partner.
Finally, DB2 itself imposes several checks before accepting an attachment request.
If using private protocols, the LOCATIONS table controls the locations that can
access DB2. To allow a remote location to access DB2, the remote location name
must be specified in the SYSIBM.LOCATIONS table. This check is only supported
for connections using private protocols.
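For example, a remote location might be recorded with a row like the following
sketch; the location name REMOTELOC and link name LUDALLAS are
illustrative:
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME)
VALUES ('REMOTELOC', 'LUDALLAS');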
Verifying a partner LU
This check is carried out by RACF and VTAM to verify the identity of an LU that
sends a request to your DB2.
Conversation-level security: This section assumes that you have defined your DB2
to VTAM with the conversation-level security set to “already verified”. (To do that,
you code SECACPT=ALREADYV on the VTAM APPL statement, as described in
Part 3 of DB2 Installation Guide.) That value provides more options than does
“conversation” (SECACPT=CONV), which is not recommended.
Steps, tools, and decisions: The steps an attachment request goes through before
acceptance allow much flexibility in choosing security checks. Scan Figure 20 on
page 245 to see what is possible.
The primary tools for controlling remote attachment requests are entries in tables
SYSIBM.LUNAMES and SYSIBM.USERNAMES in the communications database.
You need a row in SYSIBM.LUNAMES for each system that sends attachment
requests, a dummy row that allows any system to send attachment requests, or
both. You might need rows in SYSIBM.USERNAMES to permit requests from
specific IDs or specific LUNAMES, or to provide translations for permitted IDs.
When planning to control remote requests, answer the questions posed by the
following topics for each remote LU that can send a request.
v “Do you permit access?”
v “Do you manage inbound IDs through DB2 or RACF?”
v “Do you trust the partner LU?” on page 244
v “If you use passwords, are they encrypted?” on page 244
v “If you use Kerberos, are users authenticated?” on page 244
v “Do you translate inbound IDs?” on page 247
v “How do you associate inbound IDs with secondary IDs?” on page 249
If you manage incoming IDs through DB2, you can avoid calls to RACF and can
specify acceptance of many IDs by a single row in the SYSIBM.USERNAMES table.
To manage incoming IDs through RACF, leave USERNAMES blank for that LU (or
leave the O unchanged). Requests from that LU go through connection processing,
and its IDs are not subject to translation.
If an authentication token does accompany a request, DB2 calls RACF to check the
authorization ID against it. To require an authentication token from a particular
LU, put a V in the SECURITY_IN column in SYSIBM.LUNAMES; your acceptance
level for requests from that LU is now “verify”. You must also register every
acceptable incoming ID and its password with RACF.
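For example, the following sketch sets the acceptance level to “verify” for a
hypothetical LU named LUDALLAS:
UPDATE SYSIBM.LUNAMES
SET SECURITY_IN = 'V'
WHERE LUNAME = 'LUDALLAS';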
Figure 20. Steps in accepting a remote attachment request from a requester that is using SNA
Example: Suppose that the ID DBADM1 is known to the local DB2 and has
DBADM authority over certain databases there; suppose also that the same ID
exists in some remote LU. If an attachment request comes in from DBADM1, and if
nothing is done to alter the ID, the wrong user can exercise privileges of DBADM1
in the local DB2. The way to protect against that exposure is to translate the
remote ID into a different ID before the attachment request is accepted.
You must be prepared to translate the IDs of plan owners, package owners, and
the primary IDs of processes that make remote requests. For the IDs that are sent
to you by other DB2 LUs, see “What IDs you send” on page 257. (Do not plan to
translate all IDs in the connection exit routine—the routine does not receive plan
and package owner IDs.)
If you have decided to manage inbound IDs through DB2, you can translate an
inbound ID to some other value. Within DB2, you grant privileges and authorities
only to the translated value. As Figure 20 on page 245 shows, that “translation” is
not affected by anything you do in your connection or sign-on exit routine. The
output of the translation becomes the input to your sign-on exit routine.
The examples in Table 73 show the possibilities for translation and how to control
translation by using SYSIBM.USERNAMES. You can use entries to allow requests only
from particular LUs or particular IDs, or from combinations of an ID and an LU.
You can also translate any incoming ID to another value. Table 75 on page 249
shows the search order of the SYSIBM.USERNAMES table.
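As an illustration, a translation row like row 3 in the scenarios that follow, which
translates CHARLES to CHUCK for any LU, might be inserted with a sketch like
this (assuming that TYPE value I identifies inbound translation):
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID)
VALUES ('I', 'CHARLES', ' ', 'CHUCK');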
Update considerations: If you update tables in the CDB while the distributed
data facility is running, the changes might not take effect immediately. For details,
see Part 3 of DB2 Installation Guide.
v ALBERT requests from LUDALLAS: DB2 searches for an entry for
AUTHID=ALBERT and LINKNAME=LUDALLAS. DB2 finds one in row 4, so the
request is accepted. The value of NEWAUTHID in that row is blank, so ALBERT is
left unchanged.
v BETTY requests from LUDALLAS: DB2 searches for an entry for AUTHID=BETTY
and LINKNAME=LUDALLAS; none exists. DB2 then searches for AUTHID=BETTY
and LINKNAME=blank. It finds that entry in row 5, so the request is accepted. The
value of NEWAUTHID in that row is blank, so BETTY is left unchanged.
v CHARLES requests from LUDALLAS: DB2 searches for AUTHID=CHARLES and
LINKNAME=LUDALLAS; no such entry exists. DB2 then searches for
AUTHID=CHARLES and LINKNAME=blank. The search ends at row 3; the request
is accepted. The value of NEWAUTHID in that row is CHUCK, so CHARLES is
translated to CHUCK.
v ALBERT requests from LUSNFRAN: DB2 searches for AUTHID=ALBERT and
LINKNAME=LUSNFRAN; no such entry exists. DB2 then searches for
AUTHID=ALBERT and LINKNAME=blank; again no entry exists. Finally, DB2
searches for AUTHID=blank and LINKNAME=LUSNFRAN, finds that entry in
row 1, and the request is accepted. The value of NEWAUTHID in that row is blank,
so ALBERT is left unchanged.
v BETTY requests from LUSNFRAN: DB2 finds row 2, and BETTY is translated to
ELIZA.
v CHARLES requests from LUSNFRAN: DB2 finds row 3 before row 1; CHARLES is
translated to CHUCK.
v WILBUR requests from LUSNFRAN: No provision is made for WILBUR, but row 1
of the SYSIBM.USERNAMES table allows any ID to make a request from
LUSNFRAN and to pass without translation. The acceptance level for LUSNFRAN
is “already verified”, so WILBUR can pass without a password check by RACF.
After accessing DB2, WILBUR can use only the privileges that are granted to
WILBUR and to PUBLIC (for DRDA access) or to PUBLIC AT ALL LOCATIONS
(for DB2 private-protocol access).
v WILBUR requests from LUDALLAS: Because the acceptance level for LUDALLAS
is “verify” as recorded in the SYSIBM.LUNAMES table, WILBUR must be known to
the local RACF. DB2 searches in succession for one of the combinations
WILBUR/LUDALLAS, WILBUR/blank, or blank/LUDALLAS. None of those is in
the table, so the request is rejected. The absence of a row permitting WILBUR to
request from LUDALLAS imposes a “come-from” check: WILBUR can attach from
some locations (LUSNFRAN), and some IDs (ALBERT, BETTY, and CHARLES) can
attach from LUDALLAS, but WILBUR cannot attach if coming from LUDALLAS.
Do you permit access by TCP/IP? If the serving DB2 UDB for z/OS subsystem has
a DRDA port and resynchronization port specified in the BSDS, DB2 is enabled for
TCP/IP connections.
Do you manage inbound IDs through DB2 or RACF? All IDs must be passed to
RACF or Kerberos for processing. No option exists to handle incoming IDs through
DB2.
Do you trust the partner? TCP/IP does not verify partner LUs as SNA does. If
your requesters support mutual authentication, use Kerberos to handle this on the
requester side.
If you use passwords, are they encrypted? Passwords can be encrypted through:
v RACF using PassTickets, described in “Sending RACF PassTickets” on page 263.
If you use Kerberos, are users authenticated? If your distributed environment uses
Kerberos to manage users and perform user authentication, DB2 UDB for z/OS can
use Kerberos security services to authenticate remote users. See “Establishing
Kerberos authentication through RACF” on page 279.
Do you translate inbound IDs? Inbound IDs are not translated when you use
TCP/IP.
How do you associate inbound IDs with secondary IDs? To associate an inbound
ID with secondary IDs, modify the default connection exit routine (DSN3@ATH).
TCP/IP requests do not use the sign-on exit routine.
Figure 21 (flowchart not reproduced): Steps in accepting a request from a remote
requester that uses TCP/IP. Step 1: Is authentication information present? Step 2:
Does the serving subsystem accept remote requests without verification? If
TCPALVER=NO, DB2 rejects the request; if TCPALVER=YES, processing continues
with connection processing. Step 5: Run the connection exit routine (DSN3@ATH).
Step 6: Check local privilege at the server.
Details of steps: These notes explain the steps shown in Figure 21.
1. DB2 checks to see if an authentication token (RACF encrypted password, RACF
PassTicket, DRDA encrypted password, or Kerberos ticket) accompanies the
remote request.
2. If no authentication token is supplied, DB2 checks the TCPALVER subsystem
parameter to see if DB2 accepts IDs without authentication information. If
TCPALVER=NO, authentication information must accompany all requests, and
DB2 rejects the request. If TCPALVER=YES, DB2 accepts the request without
authentication.
3. The identity is a RACF ID that is authenticated by RACF if a password or
PassTicket is provided, or the identity is a Kerberos principal that is validated
by Kerberos Security Server, if a Kerberos ticket is provided. Ensure that the ID
is defined to RACF in all cases. When Kerberos tickets are used, the RACF ID
is derived from the Kerberos principal identity. To use Kerberos tickets, ensure
that you map Kerberos principal names with RACF IDs, as described in
“Establishing Kerberos authentication through RACF” on page 279.
In addition, depending on your RACF environment, the following RACF
checks may also be performed:
If you are planning to send remote requests to a DBMS that is not DB2 UDB for
z/OS, you need to satisfy the requirements of that system. You probably need
documentation for the particular type of system; some of the choices that are
described in this section might not apply.
If the request uses TCP/IP, the authentication tokens are always sent using DRDA
security commands.
The communications database (CDB) is a set of DB2 catalog tables that let you
control aspects of remote requests. This section concentrates on the columns of the
communications database that pertain to security issues related to the requesting
| system. (SYSIBM.IPLIST is not related to security and therefore not described in
| this topic.)
The field should contain only I or O. Any other character, including blank,
causes the row to be ignored.
| AUTHID VARCHAR(128)
An authorization ID that is permitted and perhaps translated. If blank, any
authorization ID is permitted with the corresponding LINKNAME, and all
authorization IDs are translated in the same way.
LINKNAME CHAR(8)
Identifies the VTAM or TCP/IP network locations that are associated with
this row. A blank value in this column indicates that this name translation
rule applies to any TCP/IP or SNA partner.
However, other IDs can accompany some requests. You need to understand what
other IDs are sent because they are subject to translation. You must include these
other IDs in table SYSIBM.USERNAMES to avoid an error when you use outbound
translation. Table 76 shows which IDs to send in the different situations.
Table 76. IDs that accompany the primary ID on a remote request
In this situation:                                 You send this ID also:
An SQL query, using DB2 private-protocol access    The plan owner
A remote BIND, COPY, or REBIND PACKAGE command     The package owner
Figure 22 (flowchart not reproduced): Steps in sending a request from DB2.
Step 2: Is outbound translation specified? Step 3: Check the SECURITY_OUT
column of SYSIBM.LUNAMES or SYSIBM.USERNAMES. Step 4: Send the request.
Details of steps in sending a request from DB2: These notes explain the steps in
Figure 22.
1. The DB2 subsystem that sends the request checks whether the primary
authorization ID has the privilege to execute the plan or package.
DB2 determines which value in the LINKNAME column of the
SYSIBM.LOCATIONS table matches either the LUNAME column in the
SYSIBM.LUNAMES table or the LINKNAME column in the SYSIBM.IPNAMES
table. This check determines whether SNA or TCP/IP protocols are used to
carry the DRDA request. (Statements that use DB2 private protocol, not DRDA,
always use SNA.)
2. When a plan is executed, the authorization ID of the plan owner is sent with
the primary authorization ID. When a package is bound, the authorization ID
of the package owner is sent with the primary authorization ID.
(Additional flowchart fragments, not reproduced, show that DB2 gets a PassTicket
from RACF when one is needed, chooses between SNA and TCP/IP protocols, and
encrypts the authentication tokens only if ICSF is enabled and the server supports
encryption, before sending the request.)
To indicate that you want to translate outbound user IDs, perform the following
steps:
Example 1: Suppose that the remote system accepts from you only the IDs
XXGALE, GROUP1, and HOMER.
1. Specify that outbound translation is in effect for the remote system LUXXX by
specifying in SYSIBM.LUNAMES the values that are shown in Table 77.
Table 77. SYSIBM.LUNAMES to specify that outbound translation is in effect for the remote
system LUXXX
LUNAME USERNAMES
LUXXX O
If your row for LUXXX already has I for the USERNAMES column (because
you translate inbound IDs that come from LUXXX), change I to B for both
inbound and outbound translation.
2. Translate the ID GALE to XXGALE on all outbound requests to LUXXX by
specifying in SYSIBM.USERNAMES the values that are shown in Table 78.
Table 78. Values in SYSIBM.USERNAMES to translate GALE to XXGALE on outbound
requests to LUXXX
TYPE AUTHID LINKNAME NEWAUTHID PASSWORD
O GALE LUXXX XXGALE GALEPASS
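A sketch of the corresponding INSERT statement:
INSERT INTO SYSIBM.USERNAMES (TYPE, AUTHID, LINKNAME, NEWAUTHID, PASSWORD)
VALUES ('O', 'GALE', 'LUXXX', 'XXGALE', 'GALEPASS');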
5. Reject any requests from BASIL to LUXXX before they are sent. To do that,
leave SYSIBM.USERNAMES empty. If no row indicates what to do with the ID
BASIL on an outbound request to LUXXX, the request is rejected.
Example 2: If you send requests to another LU, such as LUYYY, you generally need
another set of rows to indicate how your IDs are to be translated on outbound
requests to LUYYY.
You can also use a single row to specify that all IDs that accompany requests to a
single remote system must be translated. For example, if every one of your IDs is
to be translated to THEIRS on requests to LUYYY, specify in SYSIBM.USERNAMES
the values that are shown in Table 82.
Table 82. Values in SYSIBM.USERNAMES to translate every ID to THEIRS
TYPE AUTHID LINKNAME NEWAUTHID PASSWORD
O (blank) LUYYY THEIRS THEPASS
# If the ICSF is installed and properly configured, you can use the DSNLEUSR
# stored procedure to encrypt the translated outbound IDs that are specified in the
# NEWAUTHID column of SYSIBM.USERNAMES. DB2 decrypts the translated
# outbound IDs during connection processing. For more information about the
# DSNLEUSR stored procedure, see Appendix H, “DB2-supplied stored procedures,”
# on page 1191.
Sending passwords
Recommendation: For the tightest security, do not send passwords through the
network. Instead, use one of the following security mechanisms:
v RACF encrypted passwords, described in “Sending RACF encrypted passwords”
on page 263
v RACF PassTickets, described in “Sending RACF PassTickets” on page 263
v Kerberos tickets, described in “Establishing Kerberos authentication through
RACF” on page 279
v DRDA encrypted passwords or DRDA encrypted user IDs with encrypted
passwords, described in “Sending encrypted passwords from a workstation” on
page 264
If you send passwords through the network, you can put the password for an ID
in the PASSWORD column of SYSIBM.USERNAMES.
The partner DB2 must also specify password encryption in its SYSIBM.LUNAMES
table. Both partners must register each ID and its password with RACF. Then, for
every request to LUXXX, your DB2 calls RACF to supply an encrypted password
to accompany the ID. With password encryption, you do not use the PASSWORD
column of SYSIBM.USERNAMES, so the security of that table becomes less critical.
See z/OS Security Server Security Administrator’s Guide for more information about
RACF PassTickets.
Figure 24 (diagram not reproduced): RACF group relationships. DB2USER is the
group of all DB2 IDs; DSNCnn0, DSNnn0, and other aliases are DB2 groups that
serve as aliases to integrated catalog facility catalogs.
Figure 24 shows some of the relationships among the names that are shown in
Table 84.
Table 84. RACF relationships
RACF ID Use
SYS1 Major RACF group ID
DB2 DB2 group
DB2OWNER Owner of the DB2 group
5. Diffie-Hellman is one of the first standard public key algorithms. It results in the exchange of a connection key, which the client
and server use to generate a shared private key. The 56-bit Data Encryption Standard (DES) algorithm is used for encrypting and
decrypting the password with the shared private key.
To establish RACF protection for DB2, perform the steps that are described in the
following sections:
v “Defining DB2 resources to RACF” includes steps that tell RACF what to
protect.
v “Permitting RACF access” on page 268 includes steps that make the protected
resources available to processes.
Some steps are required and some steps are optional, depending on your
circumstances. All steps presume that RACF is already installed. The steps do not
need to be taken strictly in the order in which they are shown here.
For a more thorough description of RACF facilities, see z/OS Security Server Security
Administrator’s Guide.
For information about using RACF for multilevel security with row-level
granularity, see “Multilevel security” on page 191.
No one can access the DB2 subsystem until you instruct RACF to permit access.
To control access, you need to define a profile name, as a member of class DSNR,
for every combination of subsystem and environment you want to use. For
example, suppose that you want to access:
v Subsystem DSN from TSO and DDF
v Subsystem DB2P from TSO, DDF, IMS, and RRSAF
v Subsystem DB2T from TSO, DDF, CICS, and RRSAF
You can do that with a single RACF command, which also names an owner for the
resources:
RDEFINE DSNR (DSN.BATCH DSN.DIST DB2P.BATCH DB2P.DIST DB2P.MASS DB2P.RRSAF
DB2T.BATCH DB2T.DIST DB2T.SASS DB2T.RRSAF) OWNER(DB2OWNER)
Those profiles are the ones that you later permit access to, as shown under “Permit
access for users and groups” on page 272. After you define an entry for your DB2
subsystem in the RACF router table, the only users that can access the system are
those who are permitted access to a profile. If you do not want to limit access to
particular users or groups, you can give universal access to a profile with a
command like this:
RDEFINE DSNR (DSN.BATCH) OWNER(DB2OWNER) UACC(READ)
After you add an entry for a DB2 subsystem to the RACF router table, you must
remove the entry for that subsystem from the router table to deactivate RACF
checking.
If you later decide not to use RACF checking for any or all of these resources, use
the RACF RDELETE command to delete the resources you do not want checked.
Then reassemble the RACF router table without them.
Finally, perform an IPL of the z/OS system to cause it to use the new router table.
Alternatively, you can delay the IPL until you have reassembled the RACF started
procedures table in the next set of steps and, therefore, do it only once.
Tip: The macro ICHRFRTB that is used in the job sends a message to warn that the
class name DSNR does not contain a digit or national character in the first four
characters. You can ignore the message.
*
* REASSEMBLE AND LINKEDIT THE INSTALLATION-PROVIDED
* ROUTER TABLE ICHRFR01 TO INCLUDE DB2 SUBSYSTEMS IN THE
* DSNR RESOURCE CLASS.
*
* PROVIDE ONE ROUTER ENTRY FOR EACH DB2 SUBSYSTEM NAME.
* THE REQUESTOR-NAME MUST ALWAYS BE "IDENTIFY".
ICHRFRTB CLASS=DSNR,REQSTOR=IDENTIFY,SUBSYS=DSN,ACTION=RACF
ICHRFRTB CLASS=DSNR,REQSTOR=IDENTIFY,SUBSYS=DB2P,ACTION=RACF
ICHRFRTB CLASS=DSNR,REQSTOR=IDENTIFY,SUBSYS=DB2T,ACTION=RACF
Only users with the SPECIAL attribute can issue the command.
If you are using stored procedures in a WLM-established address space, you might
also need to enable RACF checking for the SERVER class. See “Step 2: Control
access to WLM (optional)” on page 276.
Each member of a connecting pair must establish a profile for the other member.
For example, if LUAAA and LUBBB are to connect and know each other by those
LUNAMES, issue RACF commands similar to these:
At LUAAA: RDEFINE APPCLU netid.LUAAA.LUBBB UACC(NONE) ...
At LUBBB: RDEFINE APPCLU netid.LUBBB.LUAAA UACC(NONE) ...
Here, netid is the network ID, given by the VTAM start option NETID.
Finally, to enable RACF checking for the new APPCLU resources, issue this RACF
command at both LUAAA and LUBBB:
SETROPTS CLASSACT(APPCLU)
If you have IMS or CICS applications issuing DB2 SQL requests, you must
associate RACF user IDs, and can associate group names, with:
v The IMS control region
v The CICS address space
v The four DB2 address spaces
If the IMS and CICS address spaces are started as batch jobs, provide their RACF
IDs and group names with the USER and GROUP parameters on the JOB
statement. If they are started as started tasks, assign the IDs and group names as
you do for the DB2 address spaces, by changing the RACF started procedures
table.
Stored procedures: Entries for stored procedures address spaces are required in
the RACF started procedures table. The associated RACF user ID and group name
do not need to match those that are used for the DB2 address spaces, but they
must be authorized to run the call attachment facility (for the DB2-established
stored procedures address space) or Resource Recovery Services attachment facility
(for WLM-established stored procedures address spaces). Note: The user IDs for
WLM-established stored procedures started tasks require an OMVS segment.
The IDs and group names associated with the address spaces are shown in
Table 85.
Table 85. DB2 address space IDs and associated RACF user IDs and group names
Address Space   RACF User ID   RACF Group Name
DSNMSTR         SYSDSP         DB2SYS
DSNDBM1         SYSDSP         DB2SYS
DSNDIST         SYSDSP         DB2SYS
DSNSPAS         SYSDSP         DB2SYS
DSNWLM          SYSDSP         DB2SYS
DB2TMSTR        SYSDSPT        DB2TEST
DB2TDBM1        SYSDSPT        DB2TEST
DB2TDIST        SYSDSPT        DB2TEST
DB2TSPAS        SYSDSPT        DB2TEST
Figure 26 on page 270 shows a sample job that reassembles and link edits the
RACF started-procedures table (ICHRIN03):
ENTCOUNT DC AL2(((ENDTABLE-BEGTABLE)/ENTLNGTH)+32768)
* NUMBER OF ENTRIES AND INDICATE RACF FORMAT
*
* PROVIDE FOUR ENTRIES FOR EACH DB2 SUBSYSTEM NAME.
*
BEGTABLE DS 0H
* ENTRIES FOR SUBSYSTEM NAME "DSN"
DC CL8'DSNMSTR' SYSTEM SERVICES PROCEDURE
DC CL8'SYSDSP' USERID
DC CL8'DB2SYS' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
ENTLNGTH EQU *-BEGTABLE CALCULATE LENGTH OF EACH ENTRY
DC CL8'DSNDBM1' DATABASE SERVICES PROCEDURE
DC CL8'SYSDSP' USERID
DC CL8'DB2SYS' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
DC CL8'DSNDIST' DDF PROCEDURE
DC CL8'SYSDSP' USERID
DC CL8'DB2SYS' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
DC CL8'DSNSPAS' STORED PROCEDURES PROCEDURE
DC CL8'SYSDSP' USERID
DC CL8'DB2SYS' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
DC CL8'DSNWLM' WLM-ESTABLISHED S.P. ADDRESS SPACE
DC CL8'SYSDSP' USERID
DC CL8'DB2SYS' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
* ENTRIES FOR SUBSYSTEM NAME "DB2T"
DC CL8'DB2TMSTR' SYSTEM SERVICES PROCEDURE
DC CL8'SYSDSPT' USERID
DC CL8'DB2TEST' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
DC CL8'DB2TDBM1' DATABASE SERVICES PROCEDURE
DC CL8'SYSDSPT' USERID
DC CL8'DB2TEST' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
DC CL8'DB2TDIST' DDF PROCEDURE
DC CL8'SYSDSPT' USERID
DC CL8'DB2TEST' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
DC CL8'DB2TSPAS' STORED PROCEDURES PROCEDURE
DC CL8'SYSDSPT' USERID
DC CL8'DB2TEST' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
Figure 26. Sample job to reassemble the RACF started-procedures table
ALTUSER DB2OWNER CLAUTH(DSNR USER)
That gives class authorization to DB2OWNER for DSNR and USER. DB2OWNER
can add users to RACF and issue the RDEFINE command to define resources in
class DSNR. DB2OWNER has control over and responsibility for the entire DB2
security plan in RACF.
The RACF group SYS1 already exists. To add group DB2 and make DB2OWNER
its owner, issue the following RACF command:
ADDGROUP DB2 SUPGROUP(SYS1) OWNER(DB2OWNER)
To connect DB2OWNER to group DB2 with the authority to create new subgroups,
add users, and manipulate profiles, issue the following RACF command:
CONNECT DB2OWNER GROUP(DB2) AUTHORITY(JOIN) UACC(NONE)
To make DB2 the default group for commands issued by DB2OWNER, issue the
following RACF command:
ALTUSER DB2OWNER DFLTGRP(DB2)
To define a user to RACF, use the RACF ADDUSER command. The command
marks the assigned password as expired; the new user can then log on as a TSO
user and change the password.
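For example, a command similar to the following defines the illustrative user ID
USER10, places it in default group DB2, and assigns a temporary password:
ADDUSER USER10 DFLTGRP(DB2) PASSWORD(TEMP1PW) OWNER(DB2OWNER)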
Defining profiles for IMS and CICS: You want the IDs for attaching systems to use
the appropriate access profile. For example, to let the IMS user ID use the access
profile for IMS on system DB2P, issue the following RACF command:
PERMIT DB2P.MASS CLASS(DSNR) ID(IMS) ACCESS(READ)
To let the CICS group ID use the access profile for CICS on system DB2T, issue the
following RACF command:
PERMIT DB2T.SASS CLASS(DSNR) ID(CICSGRP) ACCESS(READ)
Providing installation authorities to default IDs: When DB2 is installed, IDs are
named to have special authorities—one or two IDs for SYSADM and one or two
IDs for SYSOPR. Those IDs can be connected to the group DB2USER; if they are
not, you need to give them access. The next command permits the default IDs for
the SYSADM and SYSOPR authorities to use subsystem DSN through TSO:
PERMIT DSN.BATCH CLASS(DSNR) ID(SYSADM,SYSOPR) ACCESS(READ)
Using secondary IDs: You can use secondary authorization IDs to define a RACF
group. After you define the RACF group, you can assign privileges to it that are
shared by multiple primary IDs. For example, suppose that DB2OWNER wants to
create a group GROUP1 and to give the ID USER1 administrative authority over
the group. USER1 should be able to connect other existing users to the group. To
create the group, DB2OWNER issues this RACF command:
ADDGROUP GROUP1 OWNER(USER1) DATA('GROUP FOR DEPT. G1')
To let the group connect to the DSN system through TSO, DB2OWNER issues this
RACF command:
PERMIT DSN.BATCH CLASS(DSNR) ID(GROUP1) ACCESS(READ)
USER1 can now connect other existing IDs to the group GROUP1 by using the
RACF CONNECT command:
CONNECT (USER2 EPSILON1 EPSILON2) GROUP(GROUP1)
If you add or update secondary IDs for CICS transactions, you must start and stop
the CICS attachment facility to ensure that all threads sign on and get the correct
security information.
Allowing users to create data sets: Chapter 13, “Auditing,” on page 285
recommends using RACF to protect the data sets that store DB2 data. If you use
that method, when you create a new group of DB2 users, you might want to
connect it to a group that can create data sets. To allow USER1 to create and
control data sets, DB2OWNER creates a generic profile and permits complete
control to USER1 and to the four administrators. The SYSDSP parameter also gives
control to DB2. See “Creating generic profiles for data sets” on page 281.
ADDSD 'DSNC810.DSNDBC.ST*' UACC(NONE)
PERMIT 'DSNC810.DSNDBC.ST*'
ID(USER1 SYSDSP SYSAD1 SYSAD2 SYSOP1 SYSOP2) ACCESS(ALTER)
The following RACF command lets the users in the group DB2USER access DDF
on the DSN subsystem; these DDF requests can originate from any partner in the
network:
PERMIT DSN.DIST CLASS(DSNR) ID(DB2USER) ACCESS(READ)
If you want to ensure that a specific user can access only when the request
originates from a specific LU name, you can use WHEN(APPCPORT) on the
PERMIT command.
| Example: To use the RACF APPCPORT class, perform the following steps:
| 1. Activate the APPCPORT class by issuing the following RACF command:
| SETROPTS CLASSACT(APPCPORT) REFRESH
| 2. Define the general resource profile and name it TCPIP. Specify NONE for
| universal access and APPCPORT for class. Issue the following RACF command:
| RDEFINE APPCPORT (TCPIP) UACC(NONE)
| 3. Permit READ access on profile TCPIP in the APPCPORT class. To permit READ
| access to USER5, issue the following RACF command:
| PERMIT TCPIP ACCESS(READ) CLASS(APPCPORT) ID(USER5)
| 4. Permit READ access on profile DSN.DIST in the DSNR class. To permit READ
| access to USER5, issue the following RACF command:
| PERMIT DSN.DIST CLASS(DSNR) ID(USER5) ACCESS(READ) +
| WHEN(APPCPORT(TCPIP))
| 5. Refresh the APPCPORT class by issuing the following RACF command:
| SETROPTS CLASSACT(APPCPORT) REFRESH RACLIST(APPCPORT)
If the RACF APPCPORT class is active on your system, and a resource profile for
the requesting LU name already exists, you must permit READ access to the
APPCPORT resource profile for the user IDs that DB2 uses. You must permit
READ access even when you are using the DSNR resource class. Similarly, if you
are using the RACF APPL class and that class restricts access to the local DB2 LU
name or generic LU name, you must permit READ access to the APPL resource for
the user IDs that DB2 uses.
Requirement: To use the RACF SERVAUTH class and TCP/IP Network Access
Control, you must have z/OS V1.5 (or later) installed.
| Example: To use the RACF SERVAUTH class and TCP/IP Network Access Control,
| perform the following steps:
| 1. Set up and configure TCP/IP Network Access Control by using the
| NETACCESS statement that is in your TCP/IP profile.
| For example, suppose that you need to allow z/OS system access only to IP
| addresses from 9.0.0.0 to 9.255.255.255. You want to define these IP addresses as
| a security zone, and you want to name the security zone IBM. Suppose also
| that you need to deny access to all IP addresses outside of the IBM security
| zone, and that you want to define these IP addresses as a separate security
| zone. You want to name this second security zone WORLD. To establish these
| security zones, use the following NETACCESS clause:
| NETACCESS INBOUND OUTBOUND
| ; NETWORK/MASK SAF
| 9.0.0.0/8 IBM
| DEFAULT WORLD
| ENDNETACCESS
| Now, suppose that USER5 has an IP address of 9.1.2.3. TCP/IP Network Access
| Control would determine that USER5 has an IP address that belongs to the IBM
| security zone. USER5 would be granted access to the system. Alternatively,
| suppose that USER6 has an IP address of 1.1.1.1. TCP/IP Network Access
| Control would determine that USER6 has an IP address that belongs to the
| WORLD security zone. USER6 would not be granted access to the system.
| 2. Activate the SERVAUTH class by issuing the following TSO command:
| SETROPTS CLASSACT(SERVAUTH)
| 3. Activate RACLIST processing for the SERVAUTH class by issuing the following
| TSO command:
| SETROPTS RACLIST(SERVAUTH)
| 4. Define the IBM and WORLD general resource profiles in RACF to protect the
| IBM and WORLD security zones by issuing the following commands:
| RDEFINE SERVAUTH (EZB.NETACCESS.ZOSV1R5.TCPIP.IBM) UACC(NONE)
| RDEFINE SERVAUTH (EZB.NETACCESS.ZOSV1R5.TCPIP.WORLD) UACC(NONE)
| 5. Permit USER5 and SYSDSP read access to the IBM profile by using the
| following commands.
| PERMIT EZB.NETACCESS.ZOSV1R5.TCPIP.IBM ACCESS(READ) CLASS(SERVAUTH) ID(USER5)
| PERMIT EZB.NETACCESS.ZOSV1R5.TCPIP.IBM ACCESS(READ) CLASS(SERVAUTH) ID(SYSDSP)
| 6. Permit SYSDSP read access to the WORLD profile by using the following
| command:
| PERMIT EZB.NETACCESS.ZOSV1R5.TCPIP.WORLD ACCESS(READ) CLASS(SERVAUTH) ID(SYSDSP)
| 7. For these permissions to take effect, refresh the RACF database by using the
| following command:
| SETROPTS CLASSACT(SERVAUTH) REFRESH RACLIST(SERVAUTH)
| For more information about the NETACCESS statement, see z/OS V1.5
| Communications Server: IP Configuration Reference.
| When RACF is used for access control, an ID must have appropriate RACF
| authorization on DB2 commands or must be granted authorization for DB2
| commands to issue commands from a logged-on MVS console or from TSO SDSF.
| You can ensure that an ID can issue DB2 commands from logged-on MVS consoles
| or TSO SDSF by using one of the following methods:
| v Grant authorization for DB2 commands to the primary or secondary
| authorization ID.
| v Define RACF classes and permits for DB2 commands.
| v Grant SYSOPR authority to appropriate IDs.
Control access to the DB2 subsystem through RRSAF by performing the following
steps:
1. If you have not already established a profile for controlling access from the RRS
attachment facility as described in “Define the names of protected access
profiles” on page 266, define ssnm.RRSAF in the DSNR resource class with a
universal access authority of NONE, as shown in the following command:
RDEFINE DSNR (DB2P.RRSAF DB2T.RRSAF) UACC(NONE)
2. Activate the resource class; use the following command:
SETROPTS RACLIST(DSNR) REFRESH
3. Add user IDs that are associated with the stored procedures address spaces to
the RACF Started Procedures Table, as shown in this example:
...
DC CL8'DSNWLM' WLM-ESTABLISHED S.P. ADDRESS SPACE
DC CL8'SYSDSP' USERID
DC CL8'DB2SYS' GROUP NAME
DC X'00' NO PRIVILEGED ATTRIBUTE
DC XL7'00' RESERVED BYTES
...
4. Give read access to ssnm.RRSAF to the user ID that is associated with the
stored procedures address space:
PERMIT DB2P.RRSAF CLASS(DSNR) ID(SYSDSP) ACCESS(READ)
DB2 performs a resource authorization check using the DSNR RACF class as
follows:
v In a DB2 data sharing environment, DB2 uses the following RACF resource
name:
db2_groupname.WLMENV.wlm_environment
v In a non-data sharing environment, DB2 checks the following RACF resource
name:
db2_subsystem_id.WLMENV.wlm_environment
You can use the RACF RDEFINE command to create RACF profiles that prevent
users from creating stored procedures and user-defined functions in sensitive WLM
environments. For example, you can prevent all users on DB2 subsystem DB2A
(non-data sharing) from creating a stored procedure or user-defined function in the
WLM environment named PAYROLL; to do this, use the following command:
RDEFINE DSNR (DB2A.WLMENV.PAYROLL) UACC(NONE)
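You can then permit selected IDs to use the protected environment. For example,
a command similar to the following lets the illustrative ID PAYADM create
routines in the PAYROLL environment:
PERMIT DB2A.WLMENV.PAYROLL CLASS(DSNR) ID(PAYADM) ACCESS(READ)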
With WLM-established address spaces, you can specify that access to non-DB2
resources is controlled by the authorization ID of the caller rather than that of the
stored procedure address space. To do this, specify U in the
EXTERNAL_SECURITY column of table SYSIBM.SYSROUTINES for the stored
procedure.
For WLM-established stored procedures address spaces, enable the RACF check for
the caller's ID when you want to access non-DB2 resources by performing the
following steps:
| 1. Use the ALTER PROCEDURE statement with the SECURITY USER clause, as
| shown in the sketch after these steps.
2. Ensure that the ID of the stored procedure's caller has RACF authority to the
resources.
3. For the best performance, cache the RACF profiles in the virtual look-aside
facility (VLF) of z/OS. Do this by specifying the following keywords in the
COFVLFxx member of library SYS1.PARMLIB.
CLASS NAME(IRRACEE)
EMAJ(ACEE)
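A minimal sketch of step 1, assuming a hypothetical stored procedure named
PAYROLL.UPDSAL:
-- PAYROLL.UPDSAL is an illustrative procedure name
ALTER PROCEDURE PAYROLL.UPDSAL SECURITY USER;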
To give root authority to the DDF address space, you must specify a UID of 0.
The Kerberos security technology does not require passwords to flow in readable
text, so it provides security even for client/server environments. This flexibility is
possible because Kerberos uses an authentication technology that uses encrypted
tickets that contain authentication information for the end user.
| For information about Kerberos security, z/OS, and DB2 see the following books:
| v z/OS SecureWay Security Server Network Authentication Server Administration
| v z/OS SecureWay Security Server Network Authentication Server Programming
| v z/OS Security Server Security Administrator’s Guide
| v z/OS Security Server Command Language Reference
For information about Kerberos security, OS/390, and DB2, see the following
books:
v OS/390 SecureWay Security Server Network Authentication and Privacy
Administration Guide
v OS/390 SecureWay Security Server (RACF) Security Administrator’s Guide
v OS/390 Security Server (RACF) Command Language Reference
In this example, the user ID that is used for the ssnmDIST started task address
space is SYSDSP. See “Define RACF user IDs for DB2 started tasks” on page
268 for more information, including how to determine the user ID for the
ssnmDIST started task.
4. Define foreign Kerberos authentication servers to the local Kerberos
authentication server by using REALM profiles. You must supply a password
for the key to be generated. REALM profiles define the trust relationship
between the local realm and the foreign Kerberos authentication servers.
PASSWORD is a required keyword, so all REALM profiles have a KERB
segment. The command is similar to the following command:
RDEFINE REALM /.../KERB390.ENDICOTT.IBM.COM/KRBTGT/KER2000.ENDICOTT.IBM.COM +
KERB(PASSWORD(realm0pw))
| The z/OS SecureWay Kerberos Security Server rejects ticket requests from users
| with revoked or expired passwords; therefore, plan password resets that use a
| method that avoids a password change at a subsequent logon. For example, use
| the TSO logon panel, the PASSWORD command without a specified ID operand, or
| the ALTUSER command with NOEXPIRED specified.
Data sharing environment: Data sharing Sysplex environments that use Kerberos
security must have a Kerberos Security Server instance running on each system in
the Sysplex. The instances must either be in the same realm and share the same
RACF database, or have different RACF databases and be in different realms.
# Use RACF, or a similar external security system, to control access to the data sets
# just as RACF controls access to the DB2 subsystem. This section explains how to
# create RACF profiles for data sets and allow their use through DB2.
# Assume that the RACF groups DB2 and DB2USER, and the RACF user ID
# DB2OWNER, have been set up for DB2 IDs, as described under “Defining DB2
# resources to RACF” on page 265 (and shown in Figure 24 on page 264). Given that
# setting, the examples that follow show you how to:
# v Add RACF groups to control data sets that use the default DB2 qualifiers
# v Create generic profiles for different types of DB2 data sets and permit their use
# by DB2 started tasks
# v Permit use of the profiles by specific IDs
# v Allow certain IDs to create data sets.
# Although not all of those commands are absolutely necessary, the sample shows
# how you can create generic profiles for different types of data sets. Some
# parameters, such as universal access, could vary among the types. In the example,
# installation data sets (DSN810.*) are universally available for read access.
# To protect VSAM data sets, use the cluster name. You do not need to protect the
# data component names, because the cluster name is used for RACF checking.
# Access by stand-alone DB2 utilities: The following DB2 utilities access objects that
# are outside of DB2 control:
# v DSN1COPY and DSN1PRNT: table space and index space data sets
# v DSN1LOGP: active logs, archive logs, and bootstrap data sets
# v DSN1CHKR: DB2 directory and catalog table spaces
# v Change Log Inventory (DSNJU003) and Print Log Map (DSNJU004): bootstrap
# data sets
# The Change Log Inventory and Print Log Map utilities run as batch jobs that are
# protected by the USER and PASSWORD options on the JOB statement. To provide
# a value for the USER option, for example SVCAID, issue the following commands:
# v For DSN1COPY:
# PERMIT ’DSNC810.*’ ID(SVCAID) ACCESS(CONTROL)
# v For DSN1PRNT:
# PERMIT ’DSNC810.*’ ID(SVCAID) ACCESS(READ)
# v For DSN1LOGP:
# PERMIT ’DSNC810.LOGCOPY*’ ID(SVCAID) ACCESS(READ)
# PERMIT ’DSNC810.ARCHLOG*’ ID(SVCAID) ACCESS(READ)
# PERMIT ’DSNC810.BSDS*’ ID(SVCAID) ACCESS(READ)
# v For DSN1CHKR:
# PERMIT ’DSNC810.DSNDBDC.*’ ID(SVCAID) ACCESS(READ)
# v For Change Log Inventory:
# PERMIT ’DSNC810.BSDS*’ ID(SVCAID) ACCESS(CONTROL)
# v For Print Log Map:
# PERMIT ’DSNC810.BSDS*’ ID(SVCAID) ACCESS(READ)
# You can use RACF to permit programs, rather than user IDs, to access objects.
# When you use RACF in this manner, IDs that are not authorized to access the log
# data sets might be able to do so by running the DSN1LOGP utility. Similarly, you
# can permit access to database data sets through DSN1PRNT or DSN1COPY.
#
# Permitting DB2 authorization IDs to use the profiles
# Authorization IDs with installation SYSADM or installation SYSOPR authority
# need access to most DB2 data sets. (For a list of the privileges that go with those
# authorities, see “Explicit privileges and authorities” on page 133.) The following
# command adds the two default IDs that have the SYSADM and SYSOPR
# authorities if no other IDs are named when DB2 is installed:
# ADDUSER (SYSADM SYSOPR)
# The next two commands connect those IDs to the groups that control data sets,
# with the authority to create new RACF database profiles. The ID that has
# installation SYSOPR authority (SYSOPR) does not need that authority for the
# installation data sets.
# CONNECT (SYSADM SYSOPR) GROUP(DSNC810) AUTHORITY(CREATE) UACC(NONE)
# CONNECT (SYSADM) GROUP(DSN810) AUTHORITY(CREATE) UACC(NONE)
# The following set of commands gives the IDs complete control over DSNC810 data
# sets. The system administrator IDs also have complete control over the installation
# libraries. Additionally, you can give the system programmer IDs the same control.
# PERMIT 'DSNC810.LOGCOPY*' ID(SYSADM SYSOPR) ACCESS(ALTER)
# PERMIT 'DSNC810.ARCHLOG*' ID(SYSADM SYSOPR) ACCESS(ALTER)
# PERMIT 'DSNC810.BSDS*' ID(SYSADM SYSOPR) ACCESS(ALTER)
# PERMIT 'DSNC810.DSNDBC.*' ID(SYSADM SYSOPR) ACCESS(ALTER)
# PERMIT 'DSNC810.*' ID(SYSADM SYSOPR) ACCESS(ALTER)
# PERMIT 'DSN810.*' ID(SYSADM) ACCESS(ALTER)
#
# Allowing DB2 authorization IDs to create data sets
# The following command connects several IDs, which are already connected to the
# DB2USER group, to group DSNC810 with CREATE authority:
# CONNECT (USER1 USER2 USER3 USER4 USER5)
# GROUP(DSNC810) AUTHORITY(CREATE) UACC(NONE)
# Those IDs can now explicitly create data sets whose names have DSNC810 as the
# high-level qualifier. Any such data sets that are created by DB2 or by these RACF
# user IDs are protected by RACF. Other RACF user IDs are prevented by RACF
# from creating such data sets.
Who is privileged? You can find answers to the first question in the DB2 catalog,
which is the primary audit trail for the DB2 subsystem. Most of the catalog tables
describe the DB2 objects, such as tables, views, table spaces, packages, and plans.
Several other tables (every table with the character string “AUTH” in its name)
hold records of every granted privilege or authority. Every catalog record of a
grant contains the following information:
v Name of the object
v Type of privilege
v IDs that receive the privilege
v ID that grants the privilege
v Time of the grant
v Other information
You can retrieve data from catalog tables by writing SQL queries. For examples,
see “Finding catalog information about privileges” on page 187.
Who accessed data? You can find answers to the second question by using the
audit trace, another important audit trail for DB2. The audit trace can record:
v Changes in authorization IDs
v Changes to the structure of data (such as dropping a table)
v Changes to data values (such as updating or inserting records)
v Access attempts by unauthorized IDs
v Results of GRANT statements and REVOKE statements
v Mapping of Kerberos security tickets to IDs
v Other activities that are of interest to auditors
The DB2 audit trace can indicate who has accessed data. When started, the audit
trace records certain types of actions and sends the report to a named destination.
As with other DB2 traces, you can choose the following options for the audit trace:
v Categories of events to trace
v Particular authorization IDs or plan IDs to audit
v Ways to start and stop the trace
You can choose whether to audit the activity on a table by specifying an option of
the CREATE and ALTER statements.
The START TRACE command: As with other DB2 traces, you can start an audit
trace at any time with the START TRACE command. You can choose the audit
classes to trace and the destination for trace records. You can also include an
identifying comment.
Example: The following command starts an audit trace for classes 4 and 6 with
distributed activity:
-START TRACE (AUDIT) CLASS (4,6) DEST (GTF) LOCATION (*)
COMMENT ('Trace data changes; include text of dynamic DML statements.')
Example: The following command stops the trace that the example in “Starting the
audit trace” started:
-STOP TRACE (AUDIT) CLASS (4,6) DEST (GTF)
If you did not save the START command, you can determine the trace number
and stop the trace by its number. Use DISPLAY TRACE to find the number.
Example: DISPLAY TRACE (AUDIT) might return a message like the following
output:
TNO TYPE CLASS DEST QUAL
01 AUDIT 01 SMF NO
02 AUDIT 04,06 GTF YES
The message indicates that two audit traces are active. Trace 1 traces events in class
1 and sends records to the SMF data set. Trace 1 can be a trace that starts
automatically whenever DB2 starts. Trace 2 traces events in classes 4 and 6 and
sends records to GTF.
You can stop either trace by using its identifying number (TNO).
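Example: To stop trace 2 in the preceding output by its number, issue the
following command:
-STOP TRACE (AUDIT) TNO(2)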
This auditing coverage is consistent with the goal of providing a moderate volume
of audit data with a low impact on performance. However, when you choose
classes of events to audit, consider that you might ask for more data than you are
willing to process.
Requests from your location to a remote DB2 are audited only if an audit trace is
active at the remote location. The output from the trace appears only in the records
at that location.
Example: DB2 audits the department table whenever the audit trace is on if you
create the table with the following statement:
CREATE TABLE DSN8810.DEPT
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16) ,
PRIMARY KEY (DEPTNO) )
IN DSN8D81A.DSN8S81D
AUDIT CHANGES;
Because this statement includes the AUDIT CHANGES option, DB2 audits the
table for each access that inserts, updates, or deletes data (trace class 4).
The statement is effective regardless of whether the table was previously chosen
for auditing.
Example: To prevent all auditing of the table, issue the following statement:
ALTER TABLE DSN8810.DEPT
AUDIT NONE;
For the CREATE TABLE statement, the default audit option is NONE. For the
ALTER TABLE statement, no default option exists. If you do not use the AUDIT
clause in an ALTER TABLE statement, the audit option for the table is unchanged.
When CREATE TABLE statements or ALTER TABLE statements affect the audit of
a table, you can audit those statements. However, the results of those audits are in
audit class 3, not in class 4 or class 5. Use audit class 3 to determine whether
auditing was turned off for a table for an interval of time.
If an ALTER TABLE statement turns auditing on or off for a specific table, any
plans and packages that use the table are invalidated and must be rebound. If you
change the auditing status, the change does not affect plans, packages, or dynamic
SQL statements that are currently running. The change is effective only for plans,
packages, or dynamic SQL statements that begin running after the ALTER TABLE
statement has completed.
Exception: If a primary ID has been translated many times, you might not be able
to identify the primary ID that is associated with a change.
Example: Suppose that the server does not recognize the translated ID from the
requesting site. In this case, you cannot use the primary ID to gather all audit
records for a user that accesses remote data.
The AUTHCHG record shows the values of all secondary authorization IDs that
are established by an exit routine. See “Audit class descriptions” on page 287,
Audit Class 7, for a description of the AUTHCHG record.
With the audit trace, you can also determine which primary ID is responsible for
the action of a secondary ID or a current SQL ID.
Example: Suppose that the user with primary ID SMITHJ sets the current SQL ID
to TESTGRP to grant privileges over the table TESTGRP.TABLE01 to another user.
The DB2 catalog records the grantor of the privileges as TESTGRP. However, the
audit trace shows that SMITHJ issued the grant statement.
See Chapter 9, “Controlling access to DB2 objects,” on page 133 for more
information about privileges and authorities.
IFCIDs identify all DB2 trace records. For instructions on interpreting trace output
and mapping records for the IFCIDs, see Appendix D, “Interpreting DB2 trace
output,” on page 1101. Chapter 2 of DB2 Command Reference lists the IFCIDs for
each trace class and the description of the START TRACE command.
If you send trace records to SMF (the default), data might be lost in the following
circumstances:
v SMF fails while DB2 continues to run.
v An unexpected abend (such as a TSO interrupt) occurs while DB2 is transferring
records to SMF.
You can also control the type of data by assigning column data types and lengths.
Example: Alphabetic data cannot be entered into a column with one of the
numeric data types. Data that is inserted into a DATE column or a TIME column
must have an acceptable format, and so on.
For suggestions about assigning column data types and the NOT NULL attribute,
see DB2 SQL Reference.
Triggers are very powerful for defining and enforcing rules that involve different
states of DB2 data. For example, a rule can prevent a salary column from
increasing by more than ten percent. A trigger can enforce this rule and provide
the value of the salary before and after the increase for comparison. See Chapter 5
of DB2 SQL Reference for information about using the CREATE TRIGGER
statement to create a trigger.
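A minimal sketch of such a trigger on the sample employee table, assuming a
ten-percent limit and an illustrative trigger name:
-- CHKSAL is an illustrative name; it rejects raises of more than 10 percent
CREATE TRIGGER CHKSAL
NO CASCADE BEFORE UPDATE OF SALARY ON DSN8810.EMP
REFERENCING OLD AS O NEW AS N
FOR EACH ROW MODE DB2SQL
WHEN (N.SALARY > O.SALARY * 1.1)
SIGNAL SQLSTATE '75001' ('SALARY INCREASE EXCEEDS 10 PERCENT');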
A check constraint designates the values that specific columns of a base table can
contain. A check constraint can express simple constraints, such as a required
pattern or a specific range, and rules that refer to other columns of the same table.
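For example, a constraint similar to the following sketch enforces a numeric
range on a hypothetical table T1 with column C1:
ALTER TABLE T1
ADD CONSTRAINT C1RANGE CHECK (C1 BETWEEN 10 AND 20);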
An alternative technique is to create a view with the check option, and then insert
or update values only through that view.
Example: Suppose that, in table T1, data in column C1 must be a number between
10 and 20. Suppose also that data in column C2 is an alphanumeric code that must
begin with A or B. Create view V1 with the following statement:
CREATE VIEW V1 AS
SELECT * FROM T1
WHERE C1 BETWEEN 10 AND 20
AND (C2 LIKE 'A%' OR C2 LIKE 'B%')
WITH CHECK OPTION;
Because of the CHECK OPTION, view V1 allows only data that satisfies the
WHERE clause.
You cannot use the LOAD utility with a view, but that restriction does not apply to
user-written exit routines. Several types of user-written routines are pertinent here:
Validation routines
You can use validation routines to validate data values. Validation routines
access an entire row of data, check the current plan name, and return a
nonzero code to DB2 to indicate an invalid row.
Edit routines
Edit routines have the same access as validation routines, and can also
change the row that is to be inserted. Auditors typically use edit routines
to encrypt data and to substitute codes for lengthy fields. However, edit
routines can also validate data and return nonzero codes.
Field procedures
Field procedures access data that is intended for a single column; they
apply only to short-string columns. However, they accept input
parameters, so generalized procedures are possible. A column that is
defined with a field procedure can be compared only to another column
that uses the same procedure.
See Appendix B, “Writing exit routines,” on page 1015 for information about using
exit routines.
| DB2 does not enforce informational referential constraints across subsystems. For
| information about the means, implications, and limitations of enforcing referential
| integrity, see DB2 Application Programming and SQL Guide.
You can qualify a trigger by providing a list of column names when you define the
trigger. The qualified trigger is activated only when one of the named columns is
changed. A trigger that performs validation for changes that are made in an
UPDATE operation must access column values both before and after the update.
Transition variables (available only to row triggers) contain the column values of
the row change that activated the trigger. The old column values and the column
values from after the triggering operation are both available.
See DB2 SQL Reference for information about when to use triggers.
If you use repeatable read (RR), read stability (RS), or cursor stability (CS) as your
isolation level, DB2 automatically controls access to data by using locks. However,
if you use uncommitted read (UR) as your isolation level, users can access
uncommitted data and introduce inconsistent data. Auditors must know which
applications use UR isolation, and they must know whether these applications can
introduce inconsistent data or create security risks.
For static SQL, you can determine which plans and packages use UR isolation by
querying the catalog.
Example: For static SQL statements, use the following query to determine which
plans use UR isolation:
SELECT DISTINCT Y.PLNAME
FROM SYSIBM.SYSPLAN X, SYSIBM.SYSSTMT Y
WHERE (X.NAME = Y.PLNAME AND X.ISOLATION = 'U')
OR Y.ISOLATION = 'U'
ORDER BY Y.PLNAME;
Example: For static SQL statements, use the following query to determine which
packages use UR isolation:
SELECT DISTINCT Y.COLLID, Y.NAME, Y.VERSION
FROM SYSIBM.SYSPACKAGE X, SYSIBM.SYSPACKSTMT Y
WHERE (X.LOCATION = Y.LOCATION AND
X.LOCATION = ' ' AND
X.COLLID = Y.COLLID AND
X.NAME = Y.NAME AND
X.VERSION = Y.VERSION AND
X.ISOLATION = 'U')
OR Y.ISOLATION = 'U'
ORDER BY Y.COLLID, Y.NAME, Y.VERSION;
For dynamic SQL statements, turn on performance trace class 3 to determine which
plans and packages use UR isolation.
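Example: To turn on performance trace class 3, issue the following command:
-START TRACE (PERFM) CLASS (3)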
For more information about locks and concurrency, see Chapter 30, “Improving
concurrency,” on page 773.
DB2 has no automatic mechanism to calculate control totals and column balances
and compare them with transaction counts and field totals. Therefore, to use
database balancing, you must design these mechanisms into the application
program.
Example: Use your application program to maintain a control table. The control
table contains information to balance the control totals and field balances for
update transactions against a user's view. The control table might contain these
columns:
v View name
v Authorization ID
v Number of logical rows in the view (not the same as the number of physical
rows in the table)
v Number of insert transactions and update transactions
v Opening balances
v Totals of insert transaction amounts and update transaction amounts
v Relevant audit trail information such as date, time, workstation ID, and job
name
The program updates the transaction counts and amounts in the control table each
time it completes an insert or update to the view. To maintain coordination during
recovery, the program commits the work only after it updates the control table.
After the application processes all transactions, the application writes a report that
verifies the control total and balancing information.
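A sketch of the control-table update in SQL, assuming a hypothetical control
table CTLTAB with columns like those described above and host variables
:NEWAMT and :CURID that the program supplies:
UPDATE CTLTAB
SET NUM_UPDATE_TXNS = NUM_UPDATE_TXNS + 1,
UPDATE_AMOUNT_TOTAL = UPDATE_AMOUNT_TOTAL + :NEWAMT
WHERE VIEWNAME = 'PAYDEPT'
AND AUTHID = :CURID;
-- COMMIT only after the control-table update succeeds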
If you suspect that a table contains inconsistent data, you can submit an SQL query
to search for a specific type of error.
Example: Consider the view that is created in “Ensuring that data fits a pattern or
value range” on page 293. The view allows an insert or update to table T1 only if
the value in column C1 is between 10 and 20 and if the value in C2 begins with A
or B. To check that the control has not been bypassed, issue the following
statement:
SELECT * FROM T1
WHERE NOT (C1 BETWEEN 10 AND 20
AND (C2 LIKE 'A%' OR C2 LIKE 'B%'));
If the control has not been bypassed, DB2 returns no rows and thereby confirms
that the contents of the view are valid.
You can also use SQL statements to get information from the DB2 catalog about
referential constraints that exist. For several examples, see DB2 SQL Reference.
You can obtain this information from the system log (SYSLOG), the SMF data set,
or the automated job scheduling system. To obtain the information, use SMF
reporting, job-scheduler reporting, or a user-developed program. You should
review the log report daily and keep a history file for comparison. Because
abnormal DB2 termination can indicate integrity problems, you should implement
an immediate notification procedure to alert the appropriate personnel (DBA,
systems supervisor, and so on) of abnormal DB2 terminations.
For application programs: You should record any DB2 return codes that indicate
possible data integrity problems, such as inconsistency between index and table
information, physical errors on database disk, and so on. All programs must check
the SQLCODE or the SQLSTATE for the return code that is issued after an SQL
statement is run. DB2 records, on SMF, the occurrence (but not the cause) of
physical disk errors and application program abends. The program can retrieve
and report this information; the system log (SYSLOG) and the DB2 job output also
have this information. However, in some cases, only the program can provide
enough detail to identify the exact nature of the problem.
For utilities: When a DB2 utility reorganizes or reconstructs data in the database, it
produces statistics to verify record counts and to report errors. The LOAD and
REORG utilities produce data record counts and index counts to verify that no
records were lost. In addition, keep a history log of any DB2 utility that
updates data, particularly REPAIR. Regularly produce and review these reports,
which you can obtain through SMF customized reporting or a user-developed
program.
Suppose that the Spiffy Computer Company management team determines the
following security objectives:
v Managers can see, but not update, all of the employee data for members of their
own departments. Managers of managers can see all of the data for employees
of departments that report to them.
v The employee table resides at a central location. Managers at remote locations
can query the data in that table.
v The payroll operations department makes changes to the employee table.
Members of the payroll operations department can update any column of the
employee table except for the salary, bonus, and commission columns.
Furthermore, members of payroll operations can update any row except for rows
that are for members of their own department. Because changes to the table are
made only from a central location, distributed access does not affect payroll
operations.
v Changes to the salary, bonus, and commission columns are made through a
process that involves the payroll update table. When an employee’s
compensation changes, a member of the payroll operations department can
insert rows in the payroll update table. For example, a member of the payroll
operations department might insert a row in the payroll update table that lists an
employee ID and an updated salary. Next, the payroll management group can
verify inserted rows and transfer the changes to the employee table.
v No one else can see the employee data. The security plan cannot fully achieve
this objective because some ID must occasionally exercise SYSADM authority.
While exercising SYSADM authority, an ID can retrieve any data in the system.
The security plan uses the trace facility to monitor the use of that power.
The following sections discuss how to implement a security plan that accomplishes
the objectives:
v “Securing manager access”
v “Securing payroll operations access” on page 305
v “Securing administrator, owner, and other access” on page 308
The Spiffy security planners know that the functional approach is usually more
convenient in the following situations:
v Each function, such as the manager function, requires many different privileges.
When functional privileges are revoked from one user, they must be granted to
another user.
v Several users need the same set of privileges.
v The privileges are given with the grant option, or the privileges let users create
objects that must persist after their original owners leave or transfer. In both
cases, revoking the privileges might not be appropriate. The revokes cascade to
other users. To change ownership, you might need to drop objects and re-create
them.
Some of the Spiffy requirements for securing manager access suggest the functional
approach. However, in this case, the function needs only one privilege. The
privilege does not carry the grant option, and the privilege does not allow new
objects to be created.
Therefore, the Spiffy security planners choose the individual approach, and plan to
re-examine their decision later. Spiffy security planners grant all managers the
SELECT privilege on the views for their departments.
Example: To grant the SELECT privilege on the DEPTMGR view to the manager
with ID EMP0060, they use the following GRANT statement:
GRANT SELECT ON DEPTMGR TO EMP0060;
The Spiffy security planners answer those questions with the following decisions:
v IDs that are managed at the central location hold privileges on views for
departments that are at remote locations. For example, the ID MGRD11 has the
SELECT privilege on the view DEPTD11.
v If the manager of Department D11 uses a remote system, the ID at that system
must be translated to MGRD11. Then a request is sent to the central system. All
other IDs are translated to CLERK before they are sent to the central system.
v The communications database (CDB) manages the translated IDs, like MGRD11.
v An ID from a remote system must be authenticated on any request to the central
system.
The following sections describe how the Spiffy security planners plan to implement
their distributed access plan:
v “Actions at the central server location”
v “Actions at remote locations” on page 304.
Exception: For a product other than DB2 UDB for z/OS, the actions at the remote
location might be different. If you use a different product, check the documentation
for that product. The remote product must satisfy the requirements that are
imposed by the central subsystem.
The report highlights any number of accesses outside an expected range. The
Spiffy system operator makes a summary of the reports every two months.
The Spiffy security planners can most easily implement these security objectives
for members of the payroll operations department by using views. The PAYDEPT
view shows all the columns of the employee table except for the salary, bonus, and
commission columns. The view does not show the rows for members of the payroll
operations department.
Example: The WORKDEPT value for the payroll operations department is P013.
The owner of the employee table uses the following statement to create the
PAYDEPT view:
CREATE VIEW PAYDEPT AS
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT,
PHONENO, HIREDATE, JOB, EDLEVEL, SEX, BIRTHDATE
FROM DSN8810.EMP
WHERE WORKDEPT <> 'P013'
WITH CHECK OPTION;
The CHECK OPTION ensures that every row that is inserted or updated through
the view conforms to the definition of the view.
A second view, the PAYMGR view, gives Spiffy payroll managers access to any
record, including records for the members of the payroll operations department.
Example: The owner of the employee table uses the following statement to create
the PAYMGR view:
CREATE VIEW PAYMGR AS
SELECT EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT,
PHONENO, HIREDATE, JOB, EDLEVEL, SEX, BIRTHDATE
FROM DSN8810.EMP
WITH CHECK OPTION;
The Spiffy security plan documents an allowable range of salaries, bonuses, and
commissions for each job level. To keep the values within the allowable ranges, the
Spiffy security planners use table check constraints for the salaries, bonuses, and
commissions. The planners use this approach because it is both simple and easy to
control. See Part 1 of DB2 Application Programming and SQL Guide for information
about using table check constraints.
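A sketch of one such constraint on the sample employee table, with an
illustrative job title and range:
ALTER TABLE DSN8810.EMP
ADD CONSTRAINT SALRANGE
CHECK (JOB <> 'CLERK' OR SALARY BETWEEN 20000.00 AND 50000.00);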
In a similar situation, you might also consider the following ways to ensure that
updates and inserts stay within certain ranges:
v Keep the ranges in a separate DB2 table. To verify changes, query the payroll
update table and the table of ranges. Retrieve any rows for which the planned
update is outside the allowed range.
v Build the ranges into a validation routine. Apply the validation routine to the
payroll update table to automatically reject any insert or update that is outside
the allowed range.
v Embody the ranges in a view of the payroll table, using WITH CHECK
OPTION, and make all updates to the view. The ID that owns the employee
table also owns the view.
v Create a trigger to prevent salaries, bonuses, and commissions from increasing
by more than the percent that is allowed for each job level. See DB2 SQL
Reference for more information about using triggers.
Therefore, the security plan calls for a RACF group for the payroll operations
department. DB2OWNER can define the group, as described in “Add RACF groups”
on page 271. DB2OWNER can retain ownership of the group, or it can assign the
ownership to an ID that is used by payroll management.
The owner of the employee table can grant the privileges that the group requires.
The owner grants all required privileges to the group ID, with the intent not to
revoke them. The primary IDs of new members of the department are connected to
the group ID, which becomes a secondary ID for each of them. The primary IDs of
members who leave the department are disconnected from the group ID.
Example: The following statement grants the SELECT, INSERT, UPDATE, and
DELETE privileges on the PAYDEPT view to the payroll operations group ID
PAYOPS:
GRANT SELECT, INSERT, UPDATE, DELETE ON PAYDEPT TO PAYOPS;
This statement grants the privileges without the GRANT OPTION to keep
members of payroll operations from granting privileges to other users.
The payroll managers require different privileges and a different RACF group ID.
The Spiffy security planners add a RACF group for payroll managers and name it
PAYMGRS. The security planners associate the payroll managers’ primary IDs with
the PAYMGRS secondary ID. Next, privileges on the PAYMGR view, the
compensation application, and the payroll update application are granted to
PAYMGRS. The payroll update application must have the appropriate privileges on
the update table.
Example: The following statement grants the SELECT, INSERT, UPDATE, and
DELETE privileges on the PAYMGR view to the payroll managers’ group ID
PAYMGRS:
GRANT SELECT, INSERT, UPDATE, DELETE ON PAYMGR TO PAYMGRS;
The Spiffy security planners prefer to grant DBCTRL authority on the database
because granting DBCTRL authority does not expose as many security risks as
granting DBADM authority. DBCTRL authority allows an ID to support the
database without allowing the ID to retrieve or change the data in the tables.
However, database DSN8D81A contains several additional tables (see Appendix A,
“DB2 sample tables,” on page 995). These additional tables require some of the
privileges that are included in DBADM authority but not included in DBCTRL
authority.
The Spiffy security planners decide to compromise between the greater security of
granting DBCTRL authority and the greater flexibility of granting DBADM
authority. To balance the benefits of each authority, the Spiffy security planners
create an administrative ID with some, but not all of the DBADM privileges. The
security plan calls for a RACF group ID with the following authorities and
privileges:
v DBCTRL authority over DSN8D81A
v The INDEX privilege on all tables in the database except the employee table and
the payroll update table
v The SELECT, INSERT, UPDATE, and DELETE privileges on certain tables,
excluding the employee table and the payroll update table
An ID with SYSADM authority grants the privileges to the group ID.
In a similar situation, you also might consider putting the employee table and the
payroll update table in a separate database. Then you can grant DBADM authority
on DSN8D81A, and grant DBCTRL authority on the database that contains the
employee table and the payroll update table.
To limit the number of users with SYSADM authority, the Spiffy security plan
grants the authority to DB2OWNER, the ID that is responsible for DB2 security.
That does not mean that only IDs that are connected to DB2OWNER can exercise
privileges that are associated with SYSADM authority. Instead, DB2OWNER can
grant privileges to a group, connect other IDs to the group as needed, and later
disconnect them.
The Spiffy security planners prefer to have multiple IDs with SYSCTRL authority
instead of multiple IDs with SYSADM authority. IDs with SYSCTRL authority can
exercise most of the SYSADM privileges and can assume much of the day-to-day
work. IDs with SYSCTRL authority cannot access data directly or run plans unless
the privileges for those actions are explicitly granted to them. However, they can
run utilities, examine the output data sets, and grant privileges that allow other
IDs to access data. Therefore, IDs with SYSCTRL authority can access some
sensitive data, but they cannot easily access the data. As part of the Spiffy security
plan, DB2OWNER grants SYSCTRL authority to selected IDs.
The Spiffy security planners also use the BINDAGENT privilege to relieve the need
to have SYSADM authority continuously available. IDs with the BINDAGENT
privilege can bind plans and packages on behalf of another ID. However, they
cannot run the plans that they bind without being explicitly granted the EXECUTE
privilege.
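For example, the owner of the plans might grant the privilege with a statement
similar to the following, where BINDER1 is an illustrative ID:
GRANT BINDAGENT TO BINDER1;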
The Spiffy security planners want to limit the number of IDs that have privileges
on the employee table and the payroll update table to the smallest convenient
value. To meet that objective, they decide that the owner of the employee table
should issue all of the CREATE VIEW and GRANT statements. They also decide to
have the owner of the employee table own the plans and packages that are
associated with employee data. The employee table owner implicitly has the
following privileges, which the plans and packages require:
v The owner of the payroll update program must have the SELECT privilege on
the payroll update table and the UPDATE privilege on the employee table.
v The owner of the commission program must have the UPDATE privilege on the
payroll update table and the SELECT privilege on the employee table.
v The owners of several other payroll programs must have the proper privileges to
do payroll processing, such as printing payroll checks, writing summary reports,
and so on.
To bind these plans and packages, an ID must have the BIND or BINDADD
privileges.
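For example, an ID with SYSADM authority can grant the BINDADD privilege to
the illustrative owner ID PAYOWNR:
GRANT BINDADD TO PAYOWNR;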
The list of privileges that are required by the owner of the employee table suggests
the functional approach. The Spiffy security planners create a RACF group for the
owner of the employee table.
The audit report also lists denials of access to the tables. Those denials represent
attempts by unauthorized IDs to use the tables. Some are possibly accidental;
others can be attempts to violate the security system.
After running the periodic reports, the security planners archive the audit records.
The archives provide a complete audit trail of access to the employee data through
DB2.
The simplest elements of operation for DB2 UDB for z/OS are described in this
chapter; they include:
“Entering commands”
“Starting and stopping DB2” on page 322
“Submitting work” on page 325
“Receiving messages” on page 329
Normal operation also requires more complex tasks. They are described in:
v Chapter 16, “Monitoring and controlling DB2 and its connections,” on page 333,
which considers the control of connections to IRLM, to TSO, to IMS, and to
CICS, as well as connections to other database management systems.
v Chapter 17, “Managing the log and the bootstrap data set,” on page 393, which
describes the roles of the log and the bootstrap data set in preparing for restart
and recovery.
v Chapter 18, “Restarting DB2 after termination,” on page 411, which tells what
happens when DB2 terminates normally or abnormally and how to restart it
while maintaining data integrity.
v Chapter 19, “Maintaining consistency across multiple systems,” on page 423,
which explains the two-phase commit process and the resolution of indoubt
units of recovery.
v Chapter 20, “Backing up and recovering databases,” on page 439, which explains
how to prepare for recovery as well as how to recover.
Operating a data sharing group: Although many of the commands and operational
procedures described here are the same in a data sharing environment, some
special considerations are described in Chapter 5 of DB2 Data Sharing: Planning and
Administration. Consider the following issues when you are operating a data
sharing group:
v New commands used for data sharing, and the concept of command scope
v Logging and recovery operations
v Restart after an abnormal termination
v Disaster recovery procedures
v Recovery procedures for coupling facility resources
Entering commands
You can control most of the operational environment by using DB2 commands.
You might need to use other types of commands, including:
v IMS commands that control IMS connections
v CICS commands that control CICS connections
From a z/OS console or z/OS application program: You can enter all DB2
commands from a z/OS console or a z/OS application program. The START DB2
command must be issued from a z/OS console (or from an APF-authorized
program, such as SDSF, that passes the START DB2 command to the z/OS console). The
command group authorization level must be SYS.
More than one DB2 subsystem can run under z/OS. You prefix a DB2 command
with special characters that identify which subsystem to direct the command to.
The 1- to 8-character prefix is called the command prefix. Specify the command
prefix on installation panel DSNTIPM. The default command prefix is -DSN1.
Examples in this book use the hyphen (-) for the command prefix.
From an IMS terminal or program: You can enter all DB2 commands except START
DB2 from either an IMS terminal or program. The terminal or program must be
authorized to enter the /SSR command.
An IMS subsystem can attach to more than one DB2 subsystem, so you must prefix
a command that is directed from IMS to DB2 with a special character that tells
which subsystem to direct the command to. That character is called the command
recognition character (CRC); specify it when you define DB2 to IMS, in the
subsystem member entry in IMS.PROCLIB. (For details, see Part 2 of DB2
Installation Guide.)
The examples in this book assume that both the command prefix and the CRC are
the hyphen (-). However, if you can attach to more than one DB2 subsystem, you
must issue your commands using the appropriate CRC. In the following example,
the CRC is a question mark character:
You enter:
/SSR ?DISPLAY THREAD
From a CICS terminal: You can enter all DB2 commands except START DB2 from a
CICS terminal authorized to enter the DSNC transaction code.
CICS can attach to only one DB2 subsystem at a time; therefore CICS does not use
the DB2 command prefix. Instead, each command entered through the CICS
attachment facility must be preceded by a hyphen (-), as in the previous example.
The CICS attachment facility routes the commands to the connected DB2
subsystem and obtains the command responses.
From a TSO terminal: You can enter all DB2 commands except START DB2 from a
DSN session.
You enter:
DSN SYSTEM (subsystem-name)
You enter:
-DISPLAY THREAD
A TSO session can attach to only one DB2 subsystem at a time; therefore TSO does
not use the DB2 command prefix. Instead, each command that is entered through
the TSO attachment facility must be preceded by a hyphen (-), as the preceding
example demonstrates. The TSO attachment facility routes the command to DB2
and obtains the command response.
All DB2 commands except START DB2 can also be entered from a DB2I panel
using option 7, DB2 Commands. For more information about using DB2I, see
“Using DB2I (DB2 Interactive)” on page 325.
For example, to issue DISPLAY THREAD to the default DB2 subsystem from an
APF-authorized program run as a batch job, code:
MODESUPV DS 0H
MODESET MODE=SUP,KEY=ZERO
SVC34 SR 0,0
MGCR CMDPARM
EJECT
CMDPARM DS 0F
CMDFLG1 DC X'00'
CMDLENG DC AL1(CMDEND-CMDPARM)
CMDFLG2 DC X'0000'
CMDDATA DC C'-DISPLAY THREAD'
CMDEND DS 0C
In CICS, you can direct command responses to another terminal. Name the other
terminal as the destination (dest) in this command:
DSNC dest -START DATABASE
For APF-authorized programs that run in batch jobs, command responses are
returned to the master console and to the system log if hard copy logging is
available. Hard copy logging is controlled by the z/OS system command VARY.
See z/OS MVS System Commands for more information.
The individual authorities are listed in Figure 9 on page 139. Each administrative
authority has the individual authorities shown in its box, and the individual
authorities for all the levels beneath it. For example, DBADM has ALTER, DELETE,
INDEX, INSERT, SELECT, and UPDATE authorities as well as those listed for
DBCTRL and DBMAINT.
Any user with the STOPALL privilege can issue the STOP DB2 command. Besides
those who have been granted STOPALL explicitly, the privilege belongs implicitly
to anyone with SYSOPR authority or higher. When installing DB2, you can choose:
v One or two authorization IDs with installation SYSADM authority
v Zero, one, or two authorization IDs with installation SYSOPR authority
The IDs with those authorizations are contained in the load module for subsystem
parameters (DSNZPxxx).
The START DB2 command can be entered only at a z/OS console authorized to
enter z/OS system commands. The command group authorization level must be
SYS.
| DB2 commands entered from a logged-on z/OS console can be authorized using
secondary authorization IDs. The authorization ID associated with a z/OS console
is SYSOPR, which carries the authority to issue all DB2 commands except:
v RECOVER BSDS
v START DATABASE
v STOP DATABASE
v ARCHIVE LOG
The authority to start or stop any particular database must be specifically granted
to an ID with SYSOPR authority. Likewise, an ID with SYSOPR authority must be
granted specific authority to issue the RECOVER BSDS and ARCHIVE LOG
commands.
The SQL GRANT statement can be used to grant SYSOPR authority to other user
IDs such as the /SIGN user ID or the LTERM of the IMS master terminal.
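For example, where LTERM1 is an illustrative IMS LTERM name:
GRANT SYSOPR TO LTERM1;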
For information about other DB2 authorization levels, see “Establishing RACF
protection for DB2” on page 264. DB2 Command Reference also has authorization
level information for specific commands.
This section describes the START DB2 and STOP DB2 commands, explains how
you can limit access to data at startup, and contains a brief overview of startup
after an abend.
Starting DB2
When installed, DB2 is defined as a formal z/OS subsystem. Afterward, the
following message appears during any IPL of z/OS:
DSN3100I - DSN3UR00 - SUBSYSTEM ssnm READY FOR -START COMMAND
where ssnm is the DB2 subsystem name. At that point, you can start DB2 from a
z/OS console that has been authorized to issue system control commands (z/OS
command group SYS), by entering the command START DB2. The command must
be entered from the authorized console and not submitted through JES or TSO.
It is not possible to start DB2 by a JES batch job or a z/OS START command. The
attempt is likely to start an address space for DB2 that then abends, probably with
reason code X'00E8000F'.
You can also start DB2 from an APF-authorized program by passing a START DB2
command to the MGCR (SVC 34) z/OS service.
Messages at start
The system responds with some or all of the following messages depending on
which parameters you chose:
$HASP373 xxxxMSTR STARTED
DSNZ002I - SUBSYS ssnm SYSTEM PARAMETERS
LOAD MODULE NAME IS dsnzparm-name
DSNY001I - SUBSYSTEM STARTING
DSNJ127I - SYSTEM TIMESTAMP FOR BSDS=87.267 14:24:30.6
DSNJ001I - csect CURRENT COPY n ACTIVE LOG DATA
SET IS DSNAME=...,
STARTRBA=...,ENDRBA=...
DSNJ099I - LOG RECORDING TO COMMENCE WITH
STARTRBA = xxxxxxxxxxxx
If any of the nnnn values in message DSNR004I are not zero, message DSNR007I is
issued to provide the restart status table.
The START DB2 command starts the system services address space, the database
services address space, and, depending upon specifications in the load module for
subsystem parameters (DSNZPARM by default), the distributed data facility
address space and the DB2-established stored procedures address space.
Optionally, another address space, the internal resource lock manager (IRLM), can
be started automatically.
Options at start
Starting invokes the load module for subsystem parameters. This load module
contains information specified when DB2 was installed. For example, the module
contains the name of the IRLM to connect to. In addition, it indicates whether the
distributed data facility (DDF) is available and, if it is, whether it should be
automatically started when DB2 is started. For information about using a
command to start DDF, see “Starting the DDF” on page 371. You can specify
PARM (module-name) on the START DB2 command to provide a parameter module
other than the one specified at installation.
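For example, a command like the following (DSNZPARX is a hypothetical module name) starts DB2 with an alternative parameter module:
-START DB2 PARM(DSNZPARX)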
To accomplish the check, compare the expanded JCL in the SYSOUT output against
the correct JCL provided in z/OS MVS JCL User's Guide or z/OS MVS JCL Reference.
Then, take the member name of the erroneous JCL procedure also provided in the
SYSOUT to the system programmer who maintains your procedure libraries. After
finding out which proclib contains the JCL in question, locate the procedure and
correct it.
When a power failure occurs, DB2 abends without being able to finish its work or
take a shutdown checkpoint. When DB2 is restarted after an abend, it refreshes its
knowledge of its status at termination using information on the recovery log and
notifies the operator of the status of various units of recovery.
You can indicate that you want DB2 to postpone some of the backout work
traditionally performed during system restart. You can delay the backout of long
running units of recovery (URs) using installation options LIMIT BACKOUT and
BACKOUT DURATION on panel DSNTIPL. For a description of these installation
parameters, see Chapter 2 of DB2 Installation Guide.
Normally, the restart process resolves all inconsistent states. In some cases, you
have to take specific steps to resolve inconsistencies. There are steps you can take
to prepare for those actions. For example, you can limit the list of table spaces that
are recovered automatically when DB2 is started. For an explanation of the causes
of database inconsistencies and how you can prepare to recover from them, see
Chapter 18, “Restarting DB2 after termination,” on page 411.
Stopping DB2
Before stopping, all DB2-related write to operator with reply (WTOR) messages
must receive replies. Then one of the following commands terminates the
subsystem:
-STOP DB2 MODE(QUIESCE)
-STOP DB2 MODE(FORCE)
For the effects of the QUIESCE and FORCE options, see “Normal termination” on
page 411. In a data sharing environment, see Data Sharing: Planning and
Administration.
Before DB2 can be restarted, the following message must also be returned to the
z/OS console that is authorized to enter the START DB2 command:
DSN3100I - DSN3EC00 - SUBSYSTEM ssnm READY FOR -START COMMAND
Submitting work
An application program running under TSO, IMS, or CICS can make use of DB2
resources by executing embedded SQL statements. How to run application
programs from those environments is explained under:
“Running TSO application programs” on page 325
“Running IMS application programs” on page 326
“Running CICS application programs” on page 327
“Running batch application programs” on page 328
“Running application programs using CAF” on page 328
“Running application programs using RRSAF” on page 329
In each case, there are some conditions that the application program must meet to
embed SQL statements and to authorize the use of DB2 resources and data.
All application programming defaults, including the subsystem name that the
programming attachments discussed here use, are in the DSNHDECP load module.
Make sure your JCL specifies the proper set of program libraries.
You control each operation by entering the parameters that describe it on the
panels provided. DB2 also provides help panels to:
v Explain how to use each operation
v Provide the syntax for and examples of DSN subcommands, DB2 operator
commands, and DB2 utility control statements
To access the help panels, press the HELP PF key. (The key can be set locally, but
typically is PF1.)
A TSO application program that you run in a DSN session must be link-edited
with the TSO language interface program (DSNELI). The program cannot include
IMS DL/I calls because that requires the IMS language interface module
(DFSLI000).
The DSN command starts a DSN session, which in turn provides a variety of
subcommands and other functions. The DSN subcommands are:
ABEND
Causes the DSN session to terminate with a DB2 X'04E' abend completion
code and with a DB2 abend reason code of X'00C50101'.
BIND PACKAGE
Generates an application package.
BIND PLAN
Generates an application plan.
DCLGEN
Produces SQL and host language declarations.
END Ends the DB2 connection and returns to TSO.
FREE PACKAGE
Deletes a specific version of a package.
FREE PLAN
Deletes an application plan.
REBIND PACKAGE
Regenerates an existing package.
REBIND PLAN
Regenerates an existing plan.
RUN Executes a user application program.
SPUFI Invokes a DB2I facility for executing SQL statements not embedded in an
application program.
You can also issue the following DB2 and TSO commands from a DSN session:
v Any TSO command except TIME, TEST, FREE, and RUN.
v Any DB2 command except START DB2. For a list of those commands, see “DB2
operator commands” on page 316.
DB2 uses the following sources to find an authorization for access by the
application program. DB2 checks the first source listed; if it is unavailable, it
checks the second source, and so on.
1. RACF USER parameter supplied at logon
2. TSO logon user ID
3. Site-chosen default authorization ID
4. IBM-supplied default authorization ID
Either the RACF USER parameter or the TSO user ID can be modified by a locally
defined authorization exit routine.
The program must be link-edited with the IMS language interface module
(DFSLI000). It can write to and read from other database management systems
using the distributed data facility, in addition to accessing DL/I and Fast Path
resources.
Running IMS batch work: You can run batch DL/I jobs to access DB2 resources;
DB2-DL/I batch support uses the IMS attach package.
See Part 5 of DB2 Application Programming and SQL Guide for more information
about application programs and DL/I batch. See IMS Application Programming:
Design Guide for more information about recovery and DL/I batch.
CICS transactions that issue SQL statements must be link-edited with the CICS
attachment facility language interface module, DSNCLI, and the CICS command
language interface module. CICS application programs can issue SQL, DL/I, or
CICS commands. After CICS has connected to DB2, any authorized CICS
transaction can issue SQL requests that can write to and read from multiple DB2
instances using the distributed data facility. The application programs run as CICS
applications.
Batch work is run in the TSO background under the TSO terminal monitor
program (TMP). The input stream can invoke TSO command processors,
particularly the DSN command processor for DB2, and can include DSN
subcommands such as RUN. The following is an example of a TMP job:
//jobname JOB USER=SYSOPR ...
//GO EXEC PGM=IKJEFT01,DYNAMNBR=20
.
user DD statements
.
//SYSTSPRT DD SYSOUT=A
//SYSTSIN DD *
DSN SYSTEM (ssid)
.
subcommand (for example, RUN)
.
END
/*
In the example,
v IKJEFT01 identifies an entry point for TSO TMP invocation. Alternate entry
points defined by TSO are also available to provide additional return code and
ABEND termination processing options. These options permit the user to select
the actions to be taken by the TMP upon completion of command or program
execution.
Because invocation of the TSO TMP using the IKJEFT01 entry point might not be
suitable for all user environments, refer to the TSO publications to determine
which TMP entry point provides the termination processing options best suited
to your batch execution environment.
v USER=SYSOPR identifies the user ID (SYSOPR in this case) for authorization
checks.
v DYNAMNBR=20 indicates the maximum number of data sets (20 in this case)
that can be dynamically allocated concurrently.
v z/OS checkpoint and restart facilities do not support the execution of SQL
statements in batch programs invoked by RUN. If batch programs stop because
of errors, DB2 backs out any changes made since the last commit point. For
information about backup and recovery, see Chapter 20, “Backing up and
recovering databases,” on page 439. For an explanation of backing out changes
to data when a batch program run in the TSO background abends, see Part 5 of
DB2 Application Programming and SQL Guide.
v (ssid) is the subsystem name or group attachment name.
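For example, a minimal job of this general form (MYPROG and MYPLAN are hypothetical names, and your JCL must also specify the proper program libraries) runs one program and ends the session:
//RUNAPP JOB USER=SYSOPR
//GO EXEC PGM=IKJEFT01,DYNAMNBR=20
//SYSTSPRT DD SYSOUT=A
//SYSTSIN DD *
DSN SYSTEM (DSN)
RUN PROGRAM (MYPROG) PLAN (MYPLAN)
END
/*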
The call attachment facility (CAF) allows you to customize and control your
execution environments more extensively than the TSO, CICS, or IMS attachment
facilities. Programs executing in TSO foreground or TSO background can use either
the DSN session or CAF; each has advantages and disadvantages. z/OS batch and
started task programs can use only CAF.
To use CAF, you must first make available a load module known as the call
attachment language interface or DSNALI. When the language interface is
available, your program can use CAF to connect to DB2 in two ways:
v Implicitly, by including SQL statements or IFI calls in your program just as you
would any program
v Explicitly, by writing CALL DSNALI statements
For an explanation of CAF’s capabilities and how to use it, see Part 6 of DB2
Application Programming and SQL Guide.
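For example, an explicit connection request might look like the following sketch (not taken from this guide; it assumes the entry address of DSNALI is in register 15 and follows the CAF field conventions described in that book):
         CALL  (15),(FUNCTN,SSID,TERMECB,STARTECB,RIBPTR)  CONNECT TO DB2
...
FUNCTN   DC    CL12'CONNECT'          CAF FUNCTION NAME
SSID     DC    CL4'DSN '              DB2 SUBSYSTEM NAME
TERMECB  DS    F                      TERMINATION ECB
STARTECB DS    F                      STARTUP ECB
RIBPTR   DS    F                      RECEIVES RIB ADDRESS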
Before you can run an RRSAF application, z/OS RRS must be started. RRS runs in
its own address space and can be started and stopped independently of DB2.
| To use RRSAF, you must first make available a load module known as the RRSAF
| language interface or DSNRLI. When the language interface is available, your
| program can use RRSAF to connect to DB2 in two ways:
| v Implicitly, by including SQL statements or IFI calls in your program just as you
| would any program.
| v Explicitly, by using CALL DSNRLI statements to invoke RRSAF functions. Those
| functions establish a connection between DB2 and RRS and allocate DB2
| resources.
Receiving messages
DB2 message identifiers have the form DSNcxxxt, where:
DSN Is the unique DB2 message prefix.
c Is a 1-character code identifying the DB2 subcomponent that issued the
message. For example:
2 CICS attachment facility that is shipped with CICS
M IMS attachment facility
U Utilities
xxx Is the message number
t Is the message type, with these values and meanings:
A Immediate action
D Immediate decision
E Eventual action
I Information only
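For example, in message DSNU010I, DSN is the prefix, U identifies the utilities subcomponent, 010 is the message number, and I indicates an information-only message.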
A command prefix, identifying the DB2 subsystem, precedes the message identifier,
except in messages from the CICS and IMS attachment facilities. (The CICS
attachment facility issues messages in the form DSN2xxxt, and the IMS attachment
facility issues messages in the form DSNMxxxt.) CICS and IMS attachment facility
messages identify the z/OS subsystem that generated the message.
The IMS attachment facility issues messages that are identified as DSNMxxxt and
as DFSxxxx. The DFSxxxx messages are produced by IMS, under which the IMS
attachment facility operates.
Some DB2 messages sent to the z/OS console are defined as critical using the
WTO descriptor code (11). This code signifies “critical eventual action requested”
by DB2. Preceded by an at sign (@) or an asterisk (*), critical DB2 messages remain
on the screen until specifically deleted. This prevents them from being missed by
the operator, who is required to take a specific action.
Notes:
1. Except START DB2. Commands issued from IMS must have the prefix /SSR. Commands
issued from CICS must have the prefix DSNC.
2. Using outstanding WTOR.
3. “Attachment facility unsolicited output” does not include “DB2 unsolicited output”; for
the latter, see “Receiving unsolicited DB2 messages” on page 330.
4. Use the z/OS command MODIFY jobname, CICS command. The z/OS console must
already be defined as a CICS terminal.
5. Specify the output destination for the unsolicited output of the CICS attachment facility
in the RCT.
Chapter 15, “Basic operation,” on page 315 tells you how to start DB2, submit
work to be processed, and stop DB2. The following operations, described in this
chapter, require more understanding of what DB2 is doing:
“Controlling DB2 databases and buffer pools”
“Controlling user-defined functions” on page 343
“Controlling DB2 utilities” on page 345
“Controlling the IRLM” on page 346
This chapter also introduces the concept of a thread, a DB2 structure that makes the
connection between another subsystem and DB2. A thread describes an
application’s connection, traces its progress, and delimits its accessibility to DB2
resources and services. Most DB2 functions execute under a thread structure. The
use of threads in making, monitoring, and breaking connections is described in the
following sections:
“Monitoring threads” on page 349
“Controlling TSO connections” on page 350
“Controlling CICS connections” on page 353
“Controlling IMS connections” on page 358
“Controlling RRS connections” on page 367
“Controlling connections to remote systems” on page 370
“Controlling traces” on page 388, tells you how to start and stop traces, and points
to other books for help in analyzing their results.
“Controlling the resource limit facility (governor)” on page 391, tells how to start
and stop the governor, and how to display its current status.
A final section, “Changing subsystem parameter values” on page 391, tells how to
change subsystem parameters dynamically even when DB2 is running.
The START and STOP DATABASE commands can be used with the SPACENAM
and PART options to control table spaces, index spaces, or partitions. For example,
the following command starts two partitions of table space DSN8S81E in the
database DSN8D81A:
-START DATABASE (DSN8D81A) SPACENAM (DSN8S81E) PART (1,2)
Starting databases
The command START DATABASE (*) starts all databases for which you have the
STARTDB privilege. The privilege can be explicitly granted, or can belong
implicitly to a level of authority (DBMAINT and above, as shown in Figure 9 on
page 139). The command starts the database, but not necessarily all the objects it
contains. Any table spaces or index spaces in a restricted mode remain in a
restricted mode and are not started.
START DATABASE (*) does not start the DB2 directory (DSNDB01), the DB2
catalog (DSNDB06), or the DB2 work file database (called DSNDB07, except in a
data sharing environment). These databases have to be started explicitly using the
SPACENAM option. Also, START DATABASE (*) does not start table spaces or
index spaces that have been explicitly stopped by the STOP DATABASE command.
The PART keyword of the command START DATABASE can be used to start
individual partitions of a table space. It can also be used to start individual
partitions of a partitioning index or logical partitions of a nonpartitioning index.
The started or stopped state of other partitions is unchanged.
Databases, table spaces, and index spaces are started with RW status when they
are created. You can make any of them unavailable by using the command STOP
DATABASE. DB2 can also make them unavailable when it detects an error.
In cases when the object was explicitly stopped, you can make them available
again using the command START DATABASE. For example, the following
command starts all table spaces and index spaces in database DSN8D81A for
read-only access:
-START DATABASE (DSN8D81A) SPACENAM(*) ACCESS(RO)
These restrictions are a necessary part of protecting the integrity of the data. If you
start an object that has restrictions, the data in the object might not be reliable.
The command releases most restrictions for the named objects. These objects must
be explicitly named in a list following the SPACENAM option.
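For example, assuming that the option described here is ACCESS(FORCE), a command like the following releases restrictions on one explicitly named table space:
-START DATABASE (DSN8D81A) SPACENAM (DSN8S81E) ACCESS(FORCE)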
For more information about resolving postponed units of recovery, see “Resolving
postponed units of recovery” on page 420.
Monitoring databases
You can use the command DISPLAY DATABASE to obtain information about the
status of databases and the table spaces and index spaces within each database. If
applicable, the output also includes information about physical I/O errors for
those objects.
11:44:32 DSNT360I - ****************************************************
11:44:32 DSNT361I - * DISPLAY DATABASE SUMMARY
11:44:32 * report_type_list
11:44:32 DSNT360I - ****************************************************
11:44:32 DSNT362I - DATABASE = dbname STATUS = xx
DBD LENGTH = yyyy
11:44:32 DSNT397I -
Using ONLY: The keyword ONLY can be added to the command DISPLAY
DATABASE. When ONLY is specified with the DATABASE keyword but not the
SPACENAM keyword, all other keywords except RESTRICT, LIMIT, and AFTER
are ignored. Use DISPLAY DATABASE ONLY as follows:
-DISPLAY DATABASE(*S*DB*) ONLY
See Chapter 2 of DB2 Command Reference for detailed descriptions of these and
other options on the DISPLAY DATABASE command.
Using RESTRICT: You can use the RESTRICT option of the DISPLAY DATABASE
command to limit the display to objects that are currently set to a restrictive status.
You can additionally specify one or more keywords to limit the display further to
include only objects that are set to a particular restrictive status. For information
about resetting a restrictive status, see Appendix C of DB2 Utility Guide and
Reference.
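For example, the following command limits the display to objects that are in a restrictive status:
-DISPLAY DATABASE(*) SPACENAM(*) RESTRICT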
Using ADVISORY: You can use the ADVISORY option on the DISPLAY
DATABASE command to limit the display to table spaces or indexes that require
some corrective action. Use the DISPLAY DATABASE ADVISORY command
without the RESTRICT option to determine when:
v An index space is in the informational copy pending (ICOPY) advisory status
v A base table space is in the auxiliary warning (AUXW) advisory status
For information about resetting an advisory status, see Appendix C of DB2 Utility
Guide and Reference.
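For example, the following command limits the display to objects that are in an advisory status:
-DISPLAY DATABASE(*) SPACENAM(*) ADVISORY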
| Using OVERVIEW: To display all objects within a database, you can use the
| OVERVIEW option of the DISPLAY DATABASE command. This option shows each
| object in the database on one line, does not break down an object by partition, and
| does not show exception states. The OVERVIEW option displays only object types
| and the number of data set partitions in each object. OVERVIEW is mutually
| exclusive with all keywords other than SPACENAM, LIMIT, and AFTER. Use
| DISPLAY DATABASE OVERVIEW as follows:
| -DISPLAY DATABASE(DB486A) SPACENAM(*) OVERVIEW
| The display indicates that there are five objects in database DB486A, two table
| spaces and three indexes. Table space TS486A has four parts, and table space
| TS486C is nonpartitioned. Index IX486A is a nonpartitioning index for table space
| TS486A, and index IX486B is a partitioned index with four parts for table space
| TS486A. Index IX486C is a nonpartitioned index for table space TS486C.
Obtaining information about application programs
You can obtain various kinds of information about application programs using
particular databases or table or index spaces with the DISPLAY DATABASE
command. This section describes how you can identify who or what is using the
object and what locks are being held on the objects.
Who and what is using the object? You can obtain the following information:
v The names of the application programs currently using the database or space
v The authorization IDs of the users of these application programs
v The logical unit of work IDs of the database access threads accessing data on
behalf of the remote locations specified.
To obtain this information, issue a command, for example, that names partitions 2,
3, and 4 in table space TPAUGF01 in database DBAUGF01:
-DISPLAY DATABASE (DBAUGF01) SPACENAM (TPAUGF01) PART (2:4) USE
DSNT360I : ***********************************
DSNT361I : * DISPLAY DATABASE SUMMARY
* GLOBAL USE
DSNT360I : ***********************************
DSNT362I : DATABASE = DBAUGF01 STATUS = RW
DBD LENGTH = 8066
DSNT397I :
NAME TYPE PART STATUS CONNID CORRID USERID
-------- ---- ----- ----------------- -------- ------------ --------
TPAUGF01 TS 0002 RW BATCH S3341209 ADMF001
- MEMBER NAME V61A
TPAUGF01 TS 0003 RW BATCH S3341209 ADMF001
- MEMBER NAME V61A
TPAUGF01 TS 0004 RW BATCH S3341209 ADMF001
- MEMBER NAME V61A
******* DISPLAY OF DATABASE DBAUGF01 ENDED **********************
DSN9022I : DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION
Which programs are holding locks on the objects? To determine which application
programs are currently holding locks on the database or space, issue a command
like the following, which names table space TSPART in database DB01:
-DISPLAY DATABASE(DB01) SPACENAM(TSPART) LOCKS
See Chapter 2 of DB2 Command Reference for detailed descriptions of these and
other options of the DISPLAY DATABASE command.
If the cause of the problem is undetermined, the error is first recorded in the LPL.
If recovery from the LPL is unsuccessful, the error is then recorded on the error
page range.
Write errors for large object data type (LOB) table spaces defined with LOG NO
cause the unit of work to be rolled back. Because the pages are written during
normal deferred write processing, they can appear in the LPL and WEPR. The LOB
data pages for a LOB table space with the LOG NO attribute are not written to
LPL or WEPR. The space map pages are written during normal deferred write
processing and can appear in the LPL and WEPR.
A program that tries to read data from a page listed on the LPL or WEPR receives
an SQLCODE for “resource unavailable”. To access the page (or pages in the error
range), you must first recover the data from the existing database copy and the
log.
Displaying the logical page list: You can check the existence of LPL entries by
issuing the DISPLAY DATABASE command with the LPL option. The ONLY
option restricts the output to objects that have LPL pages. For example:
-DISPLAY DATABASE(DBFW8401) SPACENAM(*) LPL ONLY
DSNT360I = ***********************************************************
DSNT361I = * DISPLAY DATABASE SUMMARY
* GLOBAL LPL
DSNT360I = ***********************************************************
DSNT362I = DATABASE = DBFW8401 STATUS = RW,LPL
DBD LENGTH = 8066
DSNT397I =
NAME TYPE PART STATUS LPL PAGES
-------- ---- ----- ----------------- ------------------
TPFW8401 TS 0001 RW,LPL 000000-000004
ICFW8401 IX L0001 RW,LPL 000000,000003
IXFW8402 IX RW,LPL 000000,000003-000005
---- 000007,000008-00000B
---- 000080-000090
******* DISPLAY OF DATABASE DBFW8401 ENDED **********************
DSN9022I = DSNTDDIS 'DISPLAY DATABASE' NORMAL COMPLETION
The display indicates that the pages listed in the LPL PAGES column are
unavailable for access. For the syntax and description of DISPLAY DATABASE, see
Chapter 2 of DB2 Command Reference.
| Removing pages from the LPL: The DB2 subsystem always attempts automated
| recovery of LPL pages when the pages are added to the LPL. Manual recovery can
| also be performed. When an object has pages on the LPL, there are several ways to
| manually remove those pages and make them available for access when DB2 is
running:
v Start the object with access (RW) or (RO); see the example following this list.
That command is valid even if the table space is already started.
When you issue the command START DATABASE, you see message DSNI006I,
indicating that LPL recovery has begun. Message DSNI022I is issued periodically
to give you the progress of the recovery. When recovery is complete, you see
DSNI021I.
When you issue the command START DATABASE for a LOB table space that is
defined as LOG NO, and DB2 detects log records required for LPL recovery are
missing due to the LOG NO attribute, the LOB table space is placed in AUXW
status and the LOB is invalidated.
v Run the RECOVER or REBUILD INDEX utility on the object.
The only exception to this is when a logical partition of a nonpartitioned index
has both LPL and RECP status. If you want to recover the logical partition using
REBUILD INDEX with the PART keyword, you must first use the command
START DATABASE to clear the LPL pages.
v Run the LOAD utility with the REPLACE option on the object.
v Issue an SQL DROP statement for the object.
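For example, the first method could be applied to the objects shown in the earlier LPL display with a command like the following:
-START DATABASE(DBFW8401) SPACENAM(TPFW8401) ACCESS(RW)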
Only the following utilities can be run on an object with pages in the LPL:
LOAD with the REPLACE option
MERGECOPY
REBUILD INDEX
RECOVER, except:
RECOVER...PAGE
RECOVER...ERROR RANGE
REPAIR with the SET statement
REPORT
Displaying a write error page range: Use DISPLAY DATABASE to display the
range of error pages. For example:
-DISPLAY DATABASE (DBPARTS) SPACENAM (TSPART01) WEPR
For additional information about this list, see the description of message DSNT392I
in Part 2 of DB2 Messages.
Stopping databases
Databases, table spaces, and index spaces can be made unavailable with the STOP
DATABASE command. You can also use STOP DATABASE with the PART option
to stop the following types of partitions:
v Physical partitions within a table space
v Physical partitions within an index space
v Logical partitions within a nonpartitioning index associated with a partitioned
table space.
This prevents access to individual partitions within a table or index space while
allowing access to the others. When you specify the PART option with STOP
DATABASE on physically partitioned spaces, the data sets supporting the given
physical partitions are closed and do not affect the remaining partitions. However,
STOP DATABASE with the PART option does not close data sets associated with
logically partitioned spaces. To close these data sets, you must execute STOP
DATABASE without the PART option.
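For example, a command like the following stops partition 1 of table space DSN8S81E in database DSN8D81A and leaves the other partitions available:
-STOP DATABASE (DSN8D81A) SPACENAM (DSN8S81E) PART (1)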
If you specify AT(COMMIT), DB2 takes over access to an object when all jobs
release their claims on it and when all utilities release their drain locks on it. If you
do not specify AT(COMMIT), the objects are not stopped until all existing
applications have deallocated. New transactions continue to be scheduled, but they
receive SQLCODE -904 SQLSTATE '57011' (resource unavailable) on the first SQL
statement that references the object or when the plan is prepared for execution.
STOP DATABASE waits for a lock on an object that it is attempting to stop. If the
wait time limit for locks (15 timeouts) is exceeded, then the STOP DATABASE
command terminates abnormally and leaves the object in stop pending status
(STOPP).
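For example, the following command stops table space DSN8S81E as soon as all jobs release their claims on it and all utilities release their drain locks on it:
-STOP DATABASE (DSN8D81A) SPACENAM (DSN8S81E) AT(COMMIT)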
DB2 subsystem databases (catalog, directory, work file) can also be stopped. After
the directory is stopped, installation SYSADM authority is required to restart it.
The data sets containing a table space are closed and deallocated by the preceding
commands.
For example:
-DISPLAY BUFFERPOOL(BP0)
See Chapter 2 of DB2 Command Reference for descriptions of the options you can
use with this command and the information you find in the summary and detail
reports.
Use the START FUNCTION SPECIFIC command to activate all or a specific set of
stopped external functions. For example, issue a command like the following,
which starts functions USERFN1 and USERFN2 in the PAYROLL schema:
START FUNCTION SPECIFIC(PAYROLL.USERFN1,PAYROLL.USERFN2)
The following output is produced:
DSNX973I START FUNCTION SPECIFIC SUCCESSFUL FOR PAYROLL.USERFN1
DSNX973I START FUNCTION SPECIFIC SUCCESSFUL FOR PAYROLL.USERFN2
Use the DISPLAY FUNCTION SPECIFIC command to list the range of functions
that are stopped because of a STOP FUNCTION SPECIFIC command. For example,
issue a command like the following, which displays information about functions in
the PAYROLL schema and the HRPROD schema:
-DISPLAY FUNCTION SPECIFIC(PAYROLL.*,HRPROD.*)
Use the STOP FUNCTION SPECIFIC command to stop access to all or a specific
set of external functions. For example, issue a command like the following, which
stops functions USERFN1 and USERFN3 in the PAYROLL schema:
STOP FUNCTION SPECIFIC(PAYROLL.USERFN1,PAYROLL.USERFN3)
DB2 utilities are classified into two groups: online and stand-alone. The online
utilities require DB2 to be running and can be invoked in several different ways.
The stand-alone utilities do not require DB2 to be up, and they can be invoked
only by means of JCL. The online utilities are described in Part 2 of DB2 Utility
Guide and Reference, and the stand-alone utilities are described in Part 3 of DB2
Utility Guide and Reference.
If a utility is not running, you need to determine whether the type of utility
access is allowed on an object of a specific status. Table 92 shows the compatibility
of utility types and object status.
Table 92. Compatibility of utility types and object status
Utility type     Object access
Read-only        RO
All              RW (see note 1)
DB2              UT
Note:
1. RW is the default access type for an object.
To change the status of an object, use the ACCESS option of the START
DATABASE command to start the object with a new status. For example:
-START DATABASE (DSN8D61A) ACCESS(RO)
Stand-alone utilities
The following stand-alone utilities can be run only by means of JCL:
DSN1CHKR
DSN1COPY
DSN1COMP
DSN1PRNT
DSN1SDMP
DSN1LOGP
DSNJLOGF
DSNJU003 (change log inventory)
DSNJU004 (print log map)
Most of the stand-alone utilities can be used while DB2 is running. However, for
consistency of output, the table spaces and index spaces must be stopped first
because these utilities do not have access to the DB2 buffer pools. In some cases,
DB2 must be running or stopped before you invoke the utility. See Part 3 of DB2
Utility Guide and Reference for detailed environmental information about these
utilities.
Stand-alone utility job streams require that you code specific data set names in the
JCL. To determine the fifth qualifier in the data set name, you need to query the
DB2 catalog tables SYSIBM.SYSTABLEPART and SYSIBM.SYSINDEXPART to
determine the IPREFIX column that corresponds to the required data set.
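For example, a query of this general form (using the database and table space names from the earlier example) retrieves the IPREFIX value for each partition:
SELECT PARTITION, IPREFIX
FROM SYSIBM.SYSTABLEPART
WHERE DBNAME = 'DBAUGF01'
AND TSNAME = 'TPAUGF01';
A similar query against SYSIBM.SYSINDEXPART provides the value for index spaces.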
The change log inventory utility (DSNJU003) enables you to change the contents of
the bootstrap data set (BSDS). This utility cannot be run while DB2 is running
because inconsistencies could result. Use STOP DB2 MODE(QUIESCE) to stop the
DB2 subsystem, run the utility, and then restart DB2 with the START DB2
command.
The print log map utility (DSNJU004) enables you to print the bootstrap data set
contents. The utility can be run when DB2 is active or inactive; however, when it is
run with DB2 active, the user’s JCL and the DB2 started task must both specify
DISP=SHR for the BSDS data sets.
Data sharing: In a data sharing environment, the IRLM handles global locking,
and each DB2 member has its own corresponding IRLM. See DB2 Data Sharing:
Planning and Administration for more information about configuring IRLM in a data
sharing environment.
You can use the following z/OS commands to control the IRLM. irlmproc is the
IRLM procedure name, and irlmnm is the IRLM subsystem name. See Chapter 2 of
DB2 Command Reference for more information about these commands.
MODIFY irlmproc,ABEND,DUMP
Abends the IRLM and generates a dump.
When DB2 is installed, you normally specify that the IRLM be started
automatically. Then, if the IRLM is not available when DB2 is started, DB2 starts it,
and periodically checks whether it is up before attempting to connect. If the
attempt to start the IRLM fails, DB2 terminates.
If an automatic IRLM start has not been specified, start the IRLM before starting
DB2, using the z/OS START irlmproc command.
When started, the IRLM issues this message to the z/OS console:
DXR117I irlmnm INITIALIZATION COMPLETE
Consider starting the IRLM manually if you are having problems starting DB2 for
either of these reasons:
v An IDENTIFY or CONNECT to a data sharing group fails.
v DB2 experiences a failure that involves the IRLM.
When you start the IRLM manually, you can generate a dump to collect diagnostic
information because IRLM does not stop automatically.
MODIFY irlmproc,SET,DEADLOCK=nnnn
Sets the time for the local deadlock detection cycle.
MODIFY irlmproc,SET,LTE=nnnn
Sets the number of LOCK HASH entries that this IRLM can use on the
next connect to the XCF LOCK structure. Use only for data sharing.
MODIFY irlmproc,SET,TIMEOUT=nnnn,subsystem-name
Sets the timeout value for the specified DB2 subsystem. Display the
subsystem-name by using MODIFY irlmproc,STATUS.
MODIFY irlmproc,SET,TRACE=nnn
Sets the maximum number of trace buffers used for this IRLM.
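For example, a command like the following (DB2A is a hypothetical subsystem name) sets a 60-second timeout for that subsystem:
MODIFY irlmproc,SET,TIMEOUT=60,DB2A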
If you try to stop the IRLM while DB2 or IMS is still using it, you get the
following message:
DXR105E irlmnm STOP COMMAND REJECTED. AN IDENTIFIED SUBSYSTEM
IS STILL ACTIVE
If that happens, issue the STOP irlmproc command again when the subsystems are
finished with the IRLM.
Alternatively, if you must stop the IRLM immediately, enter the following
command to force the stop:
MODIFY irlmproc,ABEND,NODUMP
DB2 abends. An IMS subsystem using the IRLM does not abend and can be
reconnected.
IRLM uses the z/OS Automatic Restart Manager (ARM) services. However, it
de-registers from ARM for normal shutdowns. IRLM registers with ARM during
initialization and provides ARM with an event exit. The event exit must be in the
link list. It is part of the IRLM DXRRL183 load module. The event exit will make
sure that the IRLM name is defined to z/OS when ARM restarts IRLM on a target
z/OS that is different from the failing z/OS. The IRLM element name used for the
ARM registration depends on the IRLM mode. For local mode IRLM, the element
name is a concatenation of the IRLM subsystem name and the IRLM ID. For global
mode IRLM, the element name is a concatenation of the IRLM data sharing group
name, IRLM subsystem name, and the IRLM ID.
IRLM de-registers from ARM when one of the following events occurs:
v PURGE irlmproc is issued.
v MODIFY irlmproc,ABEND,NODUMP is issued.
v DB2 automatically stops IRLM.
The command MODIFY irlmproc,ABEND,NODUMP specifies that IRLM de-register
from ARM before terminating, which prevents ARM from restarting IRLM.
However, it does not prevent ARM from restarting DB2, and, if you set the
automatic restart manager to restart IRLM, DB2 automatically starts IRLM.
Monitoring threads
The DB2 command DISPLAY THREAD displays current information about the
status of threads, including information about:
v Threads that are processing locally
v Threads that are processing distributed requests
v Stored procedures or user-defined functions if the thread is executing one of
those
v Parallel tasks
The output of the command DISPLAY THREAD can also indicate that a system
quiesce is in effect as a result of the ARCHIVE LOG command. For more
information, see “Archiving the log” on page 400.
The command DISPLAY THREAD allows you to select which type of information
you wish to include in the display using one or more of the following criteria:
v Active, indoubt, postponed abort, or pooled threads
v Allied threads associated with the address spaces whose connection-names are
specified
v Allied threads
v Distributed threads
v Distributed threads associated with a specific remote location
v Detailed information about connections with remote locations
v A specific logical unit of work ID (LUWID).
To use the TYPE, LOCATION, DETAIL, and LUWID keywords, you must have
SYSOPR authority or higher. For detailed information, see Chapter 2 of DB2
Command Reference.
Example: The DISPLAY THREAD command displays information about active and
pooled threads in the following format:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS:
DSNV402I - ACTIVE THREADS:
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
conn-name s * req-ct corr-id auth-id pname asid token
conn-name s * req-ct corr-id auth-id pname asid token
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - module_name '-DISPLAY THREAD' NORMAL COMPLETION
More information about how to interpret this output can be found in the sections
describing the individual connections and in the description of message DSNV408I
in Part 2 of DB2 Messages.
When a DSN session is active, you can enter DSN subcommands, DB2 commands,
and TSO commands, as described under “Running TSO application programs” on
page 325.
The DSN command can be given in the foreground or background, when running
under the TSO terminal monitor program (TMP). The full syntax of the command
is:
DSN SYSTEM (subsystemid) RETRY (n1) TEST (n2)
DB2I invokes a DSN session when you select any of these operations:
v SQL statements using SPUFI
v DCLGEN
v BIND/REBIND/FREE
v RUN
v DB2 commands
v Program preparation and execution
In carrying out those operations, the DB2I panels invoke CLISTs, which start the
DSN session and invoke appropriate subcommands.
Table 93. Differences in display thread information for TSO and batch (continued)
Connection Name AUTHID Corr-ID1 Plan1
Notes:
1. After the application has connected to DB2 but before a plan has been allocated, this
field is blank.
The name of the connection can have one of the following values:
Name Connection to
TSO Program running in TSO foreground
BATCH Program running in TSO background
DB2CALL Program using the call attachment facility and running in the same
address space as a program using the TSO attachment facility
The following command displays information about TSO and CAF threads,
including those processing requests to or from remote locations:
-DISPLAY THREAD(BATCH,TSO,DB2CALL)
TSO displays:
READY
You enter:
DSN SYSTEM (DSN)
DSN displays:
DSN
You enter:
RUN PROGRAM (MYPROG)
DSN displays:
DSN
You enter:
END
TSO displays:
READY
| CICS command responses are sent to the terminal from which the corresponding
| command was entered, unless the DSNC DISPLAY command specifies an
| alternative destination. For details on specifying alternate destinations for output,
| see the DSNC DISPLAY command in the DB2 Command Reference.
| For detailed information about controlling CICS connections, see "Defining the CICS
| DB2 connection" in the CICS DB2 Guide.
| ssid specifies a DB2 subsystem ID to override that specified in the CICS INITPARM
| macro.
You can also start the attachment facility automatically at CICS initialization using
a program list table (PLT). For details, see Part 2 of DB2 Installation Guide.
Restarting CICS
One function of the CICS attachment facility is to keep data in synchronization
between the two systems. If DB2 completes phase 1 but does not start phase 2 of
the commit process, the units of recovery being committed are termed indoubt. An
indoubt unit of recovery might occur if DB2 terminates abnormally after
completing phase 1 of the commit process. CICS might commit or roll back work
without DB2's knowledge.
DB2 cannot resolve those indoubt units of recovery (that is, commit or roll back the
changes made to DB2 resources) until the connection to CICS is restarted. This
means that CICS should always be auto-started (START=AUTO in the DFHSIT
table) to get all necessary information for indoubt thread resolution available from
its log. Avoid cold starting. The START option can be specified in the DFHSIT
table, as described in CICS Transaction Server for z/OS Resource Definition Guide.
If there are CICS requests active in DB2 when a DB2 connection terminates, the
corresponding CICS tasks might remain suspended even after CICS is reconnected
to DB2. You should purge those tasks from CICS using a CICS-supplied transaction
such as:
CEMT SET TASK(nn) FORCE
See CICS Transaction Server for z/OS CICS Supplied Transactions for more information
about transactions that CICS supplies.
If any unit of work is indoubt when the failure occurs, the CICS attachment facility
automatically attempts to resolve the unit of work when CICS is reconnected to
DB2. Under some circumstances, however, CICS cannot resolve indoubt units of
| recovery. You must manually recover these indoubt units of recovery (see
“Recovering indoubt units of recovery manually” on page 355 for more
information).
For an explanation of the displayed list, see the description of message DSNV408I
in Part 2 of DB2 Messages.
The default value for connection-name is the connection name from which you
entered the command. Correlation-id is the correlation ID of the thread to be
recovered. It can be determined by issuing the command DISPLAY THREAD. Your
choice for the ACTION parameter tells whether to commit or roll back the
associated unit of recovery. For more details, see “Resolving indoubt units of
recovery” on page 427.
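For example, the RECOVER INDOUBT command takes this general form (shown here with placeholder operands):
-RECOVER INDOUBT (connection-name) ACTION (COMMIT) ID (correlation-id)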
One of the following messages is issued after you use the RECOVER command:
DSNV414I - THREAD correlation-id COMMIT SCHEDULED
DSNV415I - THREAD correlation-id ABORT SCHEDULED
For more information about manually resolving indoubt units of recovery, see
“Manually recovering CICS indoubt units of recovery” on page 495. For
information about the two-phase commit process, as well as indoubt units of
recovery, see “Multiple system consistency” on page 423.
For an explanation of the displayed list, see the description of message DSNV408I
in Part 2 of DB2 Messages.
The DSNC STRT command starts the CICS DB2 attachment facility, which allows
CICS application programs to access DB2 databases.
Threads are created at the first DB2 request from the application if there is not one
already available for the specific DB2 plan.
| For complete information about defining CICS threads with DB2, see CICS DB2
| Guide.
Using CICS attachment facility commands: Any authorized CICS user can monitor the
threads and change the connection parameters as needed. Operators can use the
following CICS attachment facility commands to monitor the threads:
DSNC DISPLAY PLAN plan-name destination
DSNC DISPLAY TRANSACTION transaction-id destination
These commands display the threads that the resource or transaction is using. The
following information is provided for each created thread:
v Authorization ID for the plan associated with the transaction (8 characters).
v PLAN/TRAN name (8 characters).
v A or I (1 character).
If A is displayed, the thread is within a unit of work. If I is displayed, the
thread is waiting for a unit of work, and the authorization ID is blank.
Disconnecting applications
There is no way to disconnect a particular CICS transaction from DB2 without
abending the transaction. Two ways to disconnect an application are described
here:
v The DB2 command CANCEL THREAD can be used to cancel a particular
thread. CANCEL THREAD requires that you know the token for any thread you
want to cancel. Enter the following command to cancel the thread identified by
the token indicated in the display output.
-CANCEL THREAD(46)
When you issue CANCEL THREAD for a thread, that thread is scheduled to be
terminated in DB2.
v The command DSNC DISCONNECT terminates the threads allocated to a plan
ID, but it does not prevent new threads from being created (see the example after this list). This command frees
DB2 resources shared by the CICS transactions and allows exclusive access to
them for special-purpose processes such as utilities or data definition statements.
The thread is not canceled until the application releases it for reuse, either at
SYNCPOINT or end-of-task.
| For complete information about the use of CICS attachment commands with
| DB2, see CICS DB2 Guide.
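For example, assuming a plan named PAYPLAN (a hypothetical name), a command like the following terminates the threads allocated to that plan:
DSNC DISCONNECT PAYPLAN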
Orderly termination
It is recommended that you do orderly termination whenever possible. An orderly
termination of the connection allows each CICS transaction to terminate before
thread subtasks are detached. This means there should be no indoubt units of
recovery at reconnection time. An orderly termination occurs when you:
v Enter the DSNC STOP QUIESCE command. CICS and DB2 remain active.
v Enter the CICS command CEMT PERFORM SHUTDOWN, and the CICS
attachment facility is also named to shut down during program list table (PLT)
processing. DB2 remains active. For information about the CEMT PERFORM
SHUTDOWN command, see CICS for MVS/ESA CICS-Supplied Transactions.
v Enter the DB2 command CANCEL THREAD. The thread is abended.
The following example stops the DB2 subsystem (QUIESCE), allows the currently
identified tasks to continue normal execution, and does not allow new tasks to
identify themselves to DB2:
-STOP DB2 MODE (QUIESCE)
This message appears when the stop process starts and frees the entering terminal
(option QUIESCE):
DSNC012I THE ATTACHMENT FACILITY STOP QUIESCE IS PROCEEDING
When the stop process ends and the connection is terminated, this message is
added to the output from the CICS job:
DSNC025I THE ATTACHMENT FACILITY IS INACTIVE
Forced termination
Although it is not recommended, there might be times when it is necessary to force
the connection to end. A forced termination of the connection can abend CICS
transactions connected to DB2. Therefore, indoubt units of recovery can exist at
reconnect. A forced termination occurs in the following situations:
v You enter the DSNC STOP FORCE command. This command waits 15 seconds
before detaching the thread subtasks, and, in some cases, can achieve an orderly
termination. DB2 and CICS remain active.
v You enter the CICS command CEMT PERFORM SHUTDOWN IMMEDIATE. For
information about this command, see CICS for MVS/ESA CICS-Supplied
Transactions. DB2 remains active.
v You enter the DB2 command STOP DB2 MODE (FORCE). CICS remains active.
v A DB2 abend occurs. CICS remains active.
v A CICS abend occurs. DB2 remains active.
v STOP is issued to the DB2 or CICS attachment facility, and the CICS transaction
overflows to the pool. The transaction issues an intermediate commit. The
thread is terminated at commit time, and further DB2 access is not allowed.
This message appears when the stop process starts and frees the entering terminal
(option FORCE):
DSNC022I THE ATTACHMENT FACILITY STOP FORCE IS PROCEEDING
When the stop process ends and the connection is terminated, this message is
added to the output from the CICS job:
DSNC025I THE ATTACHMENT FACILITY IS INACTIVE
For more information about those commands, see Chapter 2 of DB2 Command
Reference or, in the IMS library, to IMS Command Reference.
IMS command responses are sent to the terminal from which the corresponding
command was entered. Authorization to enter IMS commands is based on IMS
security.
The message is issued regardless of whether DB2 is active and does not imply
that the connection is established.
The order of starting IMS and DB2 is not vital. If IMS is started first, then when
DB2 comes up, it posts the control region modify task, and IMS again tries to
reconnect.
If DB2 is stopped by the STOP DB2 command, the /STOP SUBSYS command, or a
DB2 abend, then IMS cannot reconnect automatically. You must make the
connection by using the /START command.
The following messages can be produced when IMS attempts to connect a DB2
subsystem:
v If DB2 is active, these messages are sent:
imsid is the IMS connection name. RC=00 means that a notify request has been
queued. When DB2 starts, IMS is also notified.
No message goes to the z/OS console.
Thread attachment
Execution of the program’s first SQL statement causes the IMS attachment facility
to create a thread and allocate a plan, whose name is associated with the IMS
application program module name. DB2 sets up control blocks for the thread and
loads the plan.
Using the DB2 command DISPLAY THREAD: The DB2 command DISPLAY
THREAD can be used to display IMS attachment facility threads.
Notes:
1. After the application has connected to DB2 but before sign-on processing has completed,
this field is blank.
2. After sign-on processing has completed but before a plan has been allocated, this field is
blank.
The following command displays information about IMS threads, including those
accessing data at remote locations:
-DISPLAY THREAD(imsid)
DSNV401I -STR DISPLAY THREAD REPORT FOLLOWS -
DSNV402I -STR ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
1SYS3 T * 3 0002BMP255 ADMF001 PROGHR1 0019 99
SYS3 T * 4 0001BMP255 ADMF001 PROGHR2 0018 97
2SYS3 N 5 SYSADM 0065 0
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I -STR DSNVDT '-DIS THD' NORMAL COMPLETION
Key:
1 This is a message-driven BMP.
2 This thread has completed sign-on processing, but a DB2 plan has not been allocated.
Thread termination
When an application terminates, IMS invokes an exit routine to disconnect the
application from DB2. There is no way to terminate a thread without abending the
IMS application with which it is associated. Two ways of terminating an IMS
application are described here:
v Termination of the application
The IMS commands /STOP REGION reg# ABDUMP or /STOP REGION reg#
CANCEL can be used to terminate an application running in an online
environment. For an application running in the DL/I batch environment, the
z/OS command CANCEL can be used. See IMS Command Reference for more
information about terminating IMS applications.
v Use of the DB2 command CANCEL THREAD
CANCEL THREAD can be used to cancel a particular thread or set of threads.
CANCEL THREAD requires that you know the token for any thread you want to
cancel. Enter the following command to cancel the thread identified by a token
in the display output:
-CANCEL THREAD(46)
When you issue CANCEL THREAD for a thread, that thread is scheduled to be
terminated in DB2.
For an explanation of the list displayed, see the description of message DSNV408I
in Part 2 of DB2 Messages.
End of General-use Programming Interface
Here imsid is the connection name and pst#.psbname is the correlation ID listed by
the command DISPLAY THREAD. Your choice of the ACTION parameter tells
whether to commit or roll back the associated unit of recovery. For more details,
see “Resolving indoubt units of recovery” on page 427.
One of the following messages is issued after you issue the RECOVER
command:
DSNV414I - THREAD pst#.psbname COMMIT SCHEDULED
DSNV415I - THREAD pst#.psbname ABORT SCHEDULED
V449-HAS NID= DSN:0001.0 AND ID= RUNP10
BATCH P-ABORT 00017854AA2E ADMF001
V449-HAS NID= DSN:0002.0 AND ID= RUNP90
BATCH P-ABORT 0001785CD711 ADMF001
V449-HAS NID= DSN:0004.0 AND ID= RUNP12
DISPLAY POSTPONED ABORT REPORT COMPLETE
DSN9022I -STR DSNVDT '-DIS THD' NORMAL COMPLETION
For an explanation of the displayed list, see the description of messages in Part 2
of DB2 Messages.
Two threads can have the same correlation ID (pst#.psbname) if all of these
conditions occur:
v Connections have been broken several times.
v Indoubt units of recovery were not recovered.
v Applications were subsequently scheduled in the same region.
The NID is shown in a condensed form on the messages issued by the DB2
DISPLAY THREAD command processor. The IMS subsystem name (imsid) is
displayed as the net_node. The net_node is followed by the 8-byte OASN, displayed
in hexadecimal format (16 characters), with all leading zeros omitted. The net_node
and the OASN are separated by a period.
For example, if the net_node is IMSA, and the OASN is 0003CA670000006E, the
NID is displayed as IMSA.3CA670000006E on the DB2 DISPLAY THREAD
command output.
If two threads have the same corr-id, use the NID instead of corr-id on the
RECOVER INDOUBT command. The NID uniquely identifies the work unit.
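For example, a command of this general form (with placeholder operands) uses the NID to identify the unit of work:
-RECOVER INDOUBT (imsid) ACTION (COMMIT) NID (net_node.OASN)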
The OASN is a 4-byte number that represents the number of IMS schedules since
the last IMS cold start. The OASN is occasionally found in an 8-byte format, where
the first four bytes contain the scheduling number, and the last four bytes contain
the number of IMS sync points (commits) during this schedule. The OASN is part
of the NID.
The NID is a 16-byte network ID that originates from IMS. The NID contains the
4-byte IMS subsystem name, followed by four bytes of blanks, followed by the
8-byte version of the OASN. In communications between IMS and DB2, the NID
serves as the recovery token.
End of General-use Programming Interface
where nnnn is the originating application sequence number listed in the display.
That is the schedule number of the program instance, telling its place in the
sequence of invocations of that program since the last cold start of IMS. IMS
cannot have two indoubt units of recovery with the same schedule number.
Those commands reset the status of IMS; they do not result in any
communication with DB2.
If DB2 is not active, or if resources are not available when the first SQL statement
is issued from an application program, the action taken depends on the error
option specified on the SSM user entry. The options are:
Option Action
R The appropriate return code is sent to the application, and the SQL code is
returned.
Q The application is abended. This is a PSTOP transaction type; the input
transaction is re-queued for processing and new transactions are queued.
A The application is abended. This is a STOP transaction type; the input
transaction is discarded and new transactions are not queued.
The region error option can be overridden at the program level via the resource
translation table (RTT). See Part 2 of DB2 Installation Guide for further details.
From DB2:
-DISPLAY THREAD (imsid)
From IMS:
/SSR -DISPLAY THREAD (imsid)
For an explanation of the DISPLAY THREAD status information displayed, see the
description of message DSNV404I in Part 2 of DB2 Messages. More detailed
information regarding use of this command and the reports it produces is available
in “The command DISPLAY THREAD” on page 375.
The connection between IMS and DB2 is shown as one of the following states:
CONNECTED
NOT CONNECTED
CONNECT IN PROGRESS
STOPPED
The thread status from each dependent region is shown as one of the following
states:
CONN
CONN, ACTIVE (includes LTERM of user)
The following four examples show the output that might be generated when an
IMS /DISPLAY SUBSYS command is issued.
Figure 30 shows the output that is returned for a DSN subsystem that is not
connected. The IMS attachment facility issues message DSNM003I in this example.
Figure 31 shows the output that is returned for a DSN subsystem that is connected.
The IMS attachment facility issues message DSNM001I in this example.
Figure 32 shows the output that is returned for a DSN subsystem that is in a
stopped status. The IMS attachment facility issues message DSNM002I in this
example.
Figure 32. Example of output from the IMS /DISPLAY SUBSYS command
Figure 33 on page 366 shows the output that is returned for a DSN subsystem that
is connected, including region 1. You can use the values from the REGID and the
PROGRAM fields to correlate the output of the command to the LTERM that is
involved.
0000 16.09.35 JOB 56 R 59,/DIS SUBSYS ALL
0000 16.09.35 JOB 56 IEE600I REPLY TO 59 IS;/DIS SUBSYS ALL
0000 16.09.38 JOB 56 DFS000I SUBSYS CRC REGID PROGRAM LTERM STATUS SYS3
0000 16.09.38 JOB 56 DFS000I DSN : CONN SYS3
0000 16.09.38 JOB 56 DFS000I 1 CONN SYS3
0000 16.09.38 JOB 56 DFS000I *83228/160938* SYS3
0000 16.09.38 JOB 56 *60 DFS996I *IMS READY* SYS3
0000 16.09.38 JOB 56
Figure 33. Example of output from IMS /DISPLAY SUBSYS processing for a DSN subsystem that is connected and
the region ID (1) that is included.
That command sends the following message to the terminal that entered it, usually
the master terminal operator (MTO):
DFS058I STOP COMMAND IN PROGRESS
During an implicit or explicit disconnect, the following message is sent to the IMS master terminal:
DSNM002I IMS/TM imsid DISCONNECTED FROM SUBSYSTEM subsystem-name - RC=z
If an application attempts to access DB2 after the connection ended and before a
thread is established, the attempt is handled according to the region error option
specification (R, Q, or A).
For more information about those functions, see Part 6 of DB2 Application
Programming and SQL Guide.
An RRSAF connection can be started or restarted at any time after RRS is started.
If RRS is not started, an IDENTIFY request fails with reason code X'00F30091'.
Restarting DB2 and RRS
If DB2 abnormally terminates but RRS remains active, RRS might commit or roll
back work without DB2's knowledge. In a similar manner, if RRS abnormally
terminates after DB2 has completed phase 1 of commit processing for an
application, then DB2 does not know whether to commit or roll back the work. In
either case, when DB2 restarts, that work is termed indoubt.
DB2 cannot resolve those indoubt units of recovery (that is, commit or roll back the
changes made to DB2 resources) until DB2 restarts with RRS.
If any unit of work is indoubt when a failure occurs, DB2 and RRS automatically
resolve the unit of work when DB2 restarts with RRS.
To resolve an indoubt unit of recovery manually, issue one of the following
commands:
-RECOVER INDOUBT (RRSAF) ACTION (COMMIT) ID (correlation-id)
or
-RECOVER INDOUBT (RRSAF) ACTION (ABORT) ID (correlation-id)
The ACTION parameter indicates whether to commit or roll back the associated
unit of recovery. For more details, see “Resolving indoubt units of recovery” on
page 427.
If you recover a thread that is part of a global transaction, all threads in the global
transaction are recovered.
The following messages can occur when you issue the RECOVER INDOUBT
command:
DSNV414I - THREAD correlation-id COMMIT SCHEDULED
DSNV415I - THREAD correlation-id ABORT SCHEDULED
For information about the two-phase commit process, as well as indoubt units of
recovery, see “Multiple system consistency” on page 423.
DB2 stores information in an RRS CONTEXT about an RRSAF thread so that DB2
can locate the thread later. An application or application monitor can then invoke
CTXSWCH to dissociate the CONTEXT from the current TCB and then associate
the CONTEXT with the same TCB or a different TCB.
DSNV401I = DISPLAY THREAD REPORT FOLLOWS -
DSNV402I = ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
1RRSAF T 4 RRSTEST2-111 ADMF001 ?RRSAF 0024 13
2RRSAF T 6 RRSCDBTEST01 USRT001 TESTDBD 0024 63
3RRSAF DI 3 RRSTEST2-100 USRT002 ?RRSAF 001B 99
4RRSAF TR 9 GT01XP05 SYSADM TESTP05 001B 235
V444-DB2NET.LUND0.AA8007132465=16 ACCESSING DATA AT
V446-SAN_JOSE:LUND1
DISPLAY ACTIVE REPORT COMPLETE
Key:
1 This is an application that used CREATE THREAD to allocate the special plan used by
RRSAF (plan name = ?RRSAF).
2 This is an application that connected to DB2 and allocated a plan with the name TESTDBD.
3 This is an application that is currently not connected to a TCB (shown by status DI).
4 This is an active connection that is running plan TESTP05. The thread is accessing data at a
remote site.
When you issue CANCEL THREAD, DB2 schedules the thread for termination.
You can control connections to remote systems, which use distributed data, by
controlling the threads. Two types of threads are involved with connecting to other
systems: allied threads and database access threads. An allied thread is a thread that is
connected locally to your DB2 subsystem, that is, from TSO, CICS, IMS, or a stored
procedures address space. A database access thread is a thread that is initiated by a
remote DBMS to your DB2 subsystem. The following topics are covered here:
“Starting the DDF” on page 371
“Suspending and resuming DDF server activity” on page 371
“Monitoring connections to other systems” on page 372, which describes the
use of the following commands:
– DISPLAY DDF
– DISPLAY LOCATION
– DISPLAY THREAD
– CANCEL THREAD
– VARY NET,TERM (VTAM command)
“Monitoring and controlling stored procedures” on page 383
“Using NetView to monitor errors” on page 386
“Stopping the DDF” on page 387
When DDF is started and is responsible for indoubt thread resolution with remote
partners, one or both of messages DSNL432I and DSNL433I is generated. These
messages summarize DDF’s responsibility for indoubt thread resolution with
remote partners. See Chapter 19, “Maintaining consistency across multiple
systems,” on page 423 for information about resolving indoubt threads.
Using the START DDF command requires authority of SYSOPR or higher. The
following messages are associated with this command:
DSNL003I - DDF IS STARTING
DSNL004I - DDF START COMPLETE
LOCATION location
LU netname.luname
GENERICLU netname.gluname
DOMAIN domain
TCPPORT tcpport
RESPORT resport
If the distributed data facility has not been properly installed, the START DDF
command fails, and message DSN9032I (REQUESTED FUNCTION IS NOT AVAILABLE)
is issued. If the distributed data facility has already been started, the START DDF
command fails, and message DSNL001I (DDF IS ALREADY STARTED) is issued. Use
the DISPLAY DDF command to display the status of DDF.
When you install DB2, you can request that the distributed data facility start
automatically when DB2 starts. For information about starting the distributed data
facility automatically, see Part 2 of DB2 Installation Guide.
Monitoring connections to other systems
The following DB2 commands give you information about distributed threads:
DISPLAY DDF
Displays information about the status and configuration of the distributed
data facility (DDF), and about the connections or threads controlled by
DDF. For its use, see “The command DISPLAY DDF.”
DISPLAY LOCATION
Displays statistics about threads and conversations between a remote DB2
subsystem and the local subsystem. For its use, see “The command
DISPLAY LOCATION” on page 373.
DISPLAY THREAD
Displays information about DB2, distributed subsystem connections, and
parallel tasks. For its use, see “The command DISPLAY THREAD” on page
375.
To issue the DISPLAY DDF command, you must have SYSOPR authority or higher.
Issue the following command:
-DISPLAY DDF
DB2 returns output similar to this sample when DDF is not started:
DSNL080I - DSNLTDDF DISPLAY DDF REPORT FOLLOWS-
DSNL081I 1STATUS=STOPDQ
DSNL082I 2LOCATION 3LUNAME 4GENERICLU
DSNL083I SVL650A -NONE.SYEC650A -NONE
DSNL084I 5IPADDR 6TCPPORT 7RESPORT
DSNL085I -NONE 447 5002
DSNL086I 8SQL DOMAIN=-NONE
DSNL086I 9RESYNC DOMAIN=-NONE
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
For more DISPLAY DDF message information, see Part 2 of DB2 Messages.
The DISPLAY DDF DETAIL command is especially useful because it reflects the
presence of new inbound connections that are not reflected by other commands.
For example, if DDF is in INACTIVE MODE, as denoted by a DT value of I in the
DSNL090I message, and DDF is stopped, suspended, or the maximum number of
active database access threads has been reached, then new inbound connections
are not yet reflected in the DISPLAY THREAD report. However, the presence of
these new connections is reflected in the DISPLAY DDF DETAIL report, although
specific details regarding the origin of the connections, such as the client system IP
address or LU name, are not available until the connections are actually associated
with a database access thread.
DSNL200I - DISPLAY LOCATION REPORT FOLLOWS -
LOCATION PRDID LINKNAME REQUESTERS SERVERS CONVS
USIBMSTODB23 DSN04010 LUND1 0 0 0
DRDALOC SQL03030 124.63.51.17 3 0 3
124.63.51.17 SQL03030 124.63.51.17 0 15 15
DISPLAY LOCATION REPORT COMPLETE
You can use an asterisk (*) in place of the end characters of a location name. For
example, use DISPLAY LOCATION(SAN*) to display information about all active
connections between your DB2 and a remote location that begins with “SAN”. This
includes the number of conversations and the role for each non-system
conversation, requester or server.
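For example, a minimal form of the wildcard command described above:
-DISPLAY LOCATION(SAN*)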
When DB2 connects with a remote location, information about that location,
including LOCATION, PRDID and LINKNAME (LUNAME or IP address), persists
in the report even if no active connections exist.
DB2 does not receive a location name from non-DB2 requesting DBMSs that are
connected to DB2. In this case, it displays instead the LUNAME of the requesting
DBMS, enclosed in less-than (<) and greater-than (>) symbols.
For example, suppose there are two threads at location USIBMSTODB21. One is a
distributed access thread from a non-DB2 DBMS, and the other is an allied thread
going from USIBMSTODB21 to the non-DB2 DBMS. The DISPLAY LOCATION
command issued at USIBMSTODB21 displays the following output:
DSNL200I - DISPLAY LOCATION REPORT FOLLOWS -
LOCATION PRDID LINKNAME REQUESTERS SERVERS CONVS
NONDB2DBMS LUND1 1 0 1
<LULA> DSN04010 LULA 0 1 1
DISPLAY LOCATION REPORT COMPLETE
The DISPLAY LOCATION command displays information for each remote location
that currently is, or once was, in contact with DB2. If a location is displayed with
zero conversations, one of the following conditions exists:
v Sessions currently exist with the partner location but there are currently no
active conversations allocated to any of the sessions.
v Sessions no longer exist with the partner because contact with the partner has
been lost.
If you use the DETAIL parameter, each line is followed by information about
conversations owned by DB2 system threads, including those used for
resynchronization of indoubt units of work.
You can use an asterisk (*) after the THD and LOCATION keywords just as in the
DISPLAY LOCATION command previously described. For example, enter:
-DISPLAY THREAD(*) LOCATION(*) DETAIL
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV402I - ACTIVE THREADS -
NAME ST1A2 REQ ID AUTHID PLAN ASID TOKEN
SERVER RA * 2923 DB2BP ADMF001 DISTSERV 0036 203
V437-WORKSTATION=ARRAKIS, USERID=ADMF001,
APPLICATION NAME=DB2BP
V436-PGM=NULLID.SQLC27A4, SEC=201, STMNT=210
V445-09707265.01BE.889C28200037=203 ACCESSING DATA FOR 9.112.12.101
V447-LOCATION SESSID A ST TIME
V448-9.112.12.1014 446:13005 W S2 9802812045091
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT ’-DIS THD’ NORMAL COMPLETION
Key Description
1 The ST (status) column contains characters that indicate the connection status of the local
site. The TR indicates that an allied, distributed thread has been established. The RA
indicates that a distributed thread has been established and is in receive mode. The RD
indicates that a distributed thread is performing a remote access on behalf of another
location (R) and is performing an operation involving DCE services (D). Currently, DB2
supports the optional use of DCE services to authenticate remote users.
2 The A (active) column contains an asterisk indicating that the thread is active within DB2. It
is blank when the thread is inactive within DB2 (active or waiting within the application).
3 This LUWID is unique across all connected systems. This thread has a token of 20 (it appears
in two places in the display output).
4 This is the location of the data that the local application is accessing. If the RDBNAME is not
known, the location column contains either a VTAM LUNAME or a dotted decimal IP
address.
5 If the connection uses TCP/IP, the sessid column contains "local:remote", where "local"
specifies DB2's TCP/IP port number and "remote" specifies the partner's TCP/IP port
number.
For more information about this sample output and connection status codes, see
messages DSNV404I, DSNV444I, and DSNV446I in Part 2 of DB2 Messages.
Displaying information for non-DB2 locations: Because DB2 does not receive a
location name from non-DB2 locations, you must enter the LUNAME or IP address
of the location for which you want to display information. The LUNAME is
enclosed by the less-than (<) and greater-than (>) symbols. The IP address is in the
dotted decimal format. For example, if you wanted to display information about a
non-DB2 DBMS with the LUNAME of LUSFOS2, you would enter the following
command:
-DISPLAY THREAD (*) LOCATION (<LUSFOS2>)
DB2 returns output that shows, for example, a local site application that is waiting
for a conversation to be allocated in DB2, and a DB2 server that is accessed by a
DRDA client through TCP/IP.
For more DISPLAY THREAD message information, see messages DSNV447I and
DSNV448I, Part 2 of DB2 Messages.
This output indicates that the application is waiting for data to be returned by the
server at USIBMSTODB22.
This output indicates that the server at USIBMSTODB22 is waiting for data to be
returned by the secondary server at USIBMSTODB24.
This output indicates that the secondary server at USIBMSTODB23 is not currently
active.
The secondary server at USIBMSTODB24 is also accessing data for the primary
server at USIBMSTODB22. If you enter the DISPLAY THREAD command with the
DETAIL keyword from USIBMSTODB24, you receive the output that is shown in Figure 39.
The conversation status might not change for a long time. The conversation could
be hung, or the processing could just be taking a long time. To see whether the
conversation is hung, issue DISPLAY THREAD again and compare the new
timestamp to the timestamps from previous output messages. If the timestamp is
changing, but the status is not, the job is still processing. If you need to terminate a
distributed job, perhaps because it is hung and has been holding database locks for
a long period of time, you can use the CANCEL DDF THREAD command if the
thread is in DB2 (whether active or suspended) or the VARY NET TERM command
if the thread is within VTAM. See “The command CANCEL THREAD” on page
381.
Displaying threads by LUWIDs: Use the LUWID optional keyword, which is only
valid when DDF has been started, to display threads by logical unit of work
identifiers. The LUWIDs are assigned to the thread by the site that originated the
thread.
You can use an asterisk (*) in an LUWID as in a LOCATION name. For example,
use -DISPLAY THREAD TYPE(INDOUBT) LUWID(NET1.*) to display all the
indoubt threads whose LUWID has a network name of NET1. The command
DISPLAY THREAD TYPE(INDOUBT) LUWID(IBM.NEW*) displays all indoubt
threads whose LUWID has a network name of "IBM" and whose LUNAME begins
with "NEW".
The DETAIL keyword can also be used with the DISPLAY THREAD LUWID
command to show the status of every conversation connected to each thread
displayed and to indicate whether a conversation is using DRDA access or DB2
private protocol access.
| You can cancel only dynamic SQL that excludes transaction level statements
| (CONNECT, COMMIT, ROLLBACK) and bind statements from a client application.
| For more information about SQLCancel(), see DB2 ODBC Guide and Reference. For
| more information about the JDBC cancel method, see DB2 Application Programming
| Guide and Reference for Java.
A database access thread can also be in the prepared state waiting for the commit
decision from the coordinator. When you issue CANCEL THREAD for a database
access thread in the prepared state, the thread is converted from active to indoubt.
The conversation with the coordinator, and all conversations with downstream
participants, are terminated and message DSNL450I is returned. The resources held
by the thread are not released until the indoubt state is resolved. This is
accomplished automatically by the coordinator or by using the command
RECOVER INDOUBT. See “Resolving indoubt units of recovery” on page 427 for
more information.
When the command is entered at the DB2 subsystem that has a database access
thread servicing requests from a DB2 subsystem that owns the allied thread, the
database access thread is terminated. Any active SQL request, and all later
requests, from the allied thread result in a "resource not available" return code.
Alternatively, you can use the following version of the command with either the
token or LUW ID:
-CANCEL DDF THREAD (token or luwid)
The token is a 1-character to 5-character number that identifies the thread. When
DB2 schedules the thread for termination, you will see the following message for a
distributed thread:
DSNL010I - DDF THREAD token or luwid HAS BEEN CANCELED
For more information about CANCEL THREAD, see Chapter 2 of DB2 Command
Reference.
For more detailed information about diagnosing DDF failures, see Part 3 of DB2
Diagnosis Guide and Reference.
To do this, you need to know the VTAM session IDs that correspond to the thread.
Follow these steps:
1. Issue the DB2 command DISPLAY THREAD(nnnn) LOC(*) DETAIL.
This gives you the VTAM session IDs that must be canceled. As is shown in the
DISPLAY THREAD output in Figure 41, these sessions are identified by the
column header SESSID.
2. Record positions 3 through 16 of SESSID for the threads to be canceled. (In the
preceding DISPLAY THREAD output, the values are D3590EA1E89701 and
D3590EA1E89822.)
3. Issue the VTAM command DISPLAY NET to display the VTAM session IDs
(SIDs). The ones you want to cancel match the SESSIDs in positions 3 through
16. In Figure 42, the corresponding session IDs (DSD3590EA1E89701 and
D2D3590EA1E89822) are shown in bold.
D NET,ID=LUND0,SCOPE=ACT
4. Issue the VTAM command VARY NET,TERM SID= for each of the VTAM SIDs
associated with the DB2 thread. For more information about VTAM commands,
see VTAM for MVS/ESA Operation.
For more information about stored procedures, see Part 6 of DB2 Application
Programming and SQL Guide.
The DB2 command DISPLAY PROCEDURE: This command can display the
following information about stored procedures:
v Status (started, stop-queue, stop-reject, stop-abend)
v Number of requests currently running and queued
v Maximum number of threads running a stored procedure load module and
queued
v Count of timed-out SQL CALLs
The following command displays information about all stored procedures in all
schemas that have been accessed by DB2 applications:
-DISPLAY PROCEDURE
| DSNX940I csect - DISPLAY PROCEDURE REPORT FOLLOWS-
|
| ------ SCHEMA=PAYROLL
| PROCEDURE STATUS  ACTIVE QUED MAXQ TIMEOUT FAIL WLM_ENV
| PAYRPRC1  STARTED      0    0    1       0    0 PAYROLL
| PAYRPRC2  STOPQUE      0    5    5       3    0 PAYROLL
| PAYRPRC3  STARTED      2    0    6       0    0 PAYROLL
| USERPRC4  STOPREJ      0    0    1       0    1 SANDBOX
|
| ------ SCHEMA=HRPROD
| PROCEDURE STATUS  ACTIVE QUED MAXQ TIMEOUT FAIL WLM_ENV
| HRPRC1    STARTED      0    0    1       0    1 HRPROCS
| HRPRC2    STOPREJ      0    0    1       0    0 HRPROCS
| DISPLAY PROCEDURE REPORT COMPLETE
| DSN9022I = DSNX9COM ’-DISPLAY PROC’ NORMAL COMPLETION
This example shows two schemas (PAYROLL and HRPROD) that have been
accessed by DB2 applications. You can also display information about specific
stored procedures.
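For example, to display one of the stored procedures shown in the preceding report
(names taken from that sample output), you might issue:
-DISPLAY PROCEDURE(PAYROLL.PAYRPRC1)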
The SP status indicates that the thread is executing within the stored procedure. An
SW status indicates that the thread is waiting for the stored procedure to be
scheduled.
The z/OS command DISPLAY WLM: Use the command DISPLAY WLM to
determine the status of an application environment in which a stored procedure
runs. The output from DISPLAY WLM lets you determine whether a stored
procedure can be scheduled in an application environment.
For example, you can issue this command to determine the status of application
environment WLMENV1:
D WLM,APPLENV=WLMENV1
The output tells you that WLMENV1 is available, so WLM can schedule stored
procedures for execution in that environment.
The method that you use to perform these tasks for stored procedures depends on
whether you are using WLM-established or DB2-established address spaces.
For DB2-established address spaces: Use the DB2 commands START PROCEDURE
and STOP PROCEDURE to perform all of these tasks.
For WLM-established address spaces: You use the VARY WLM command to
complete the following tasks:
v To refresh the Language Environment when you need to load a new version of a
stored procedure, use the z/OS command
VARY WLM,APPLENV=name,REFRESH
You also need to use the VARY WLM command with the RESUME option when
WLM puts an application environment in the unavailable state. An application
environment in which stored procedures run becomes unavailable when WLM
detects five abnormal terminations within 10 minutes. When an application
environment is in the unavailable state, WLM does not schedule stored
procedures for execution in it.
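For example, to resume the hypothetical application environment PAYROLL after
WLM has marked it unavailable, you might issue this z/OS command:
VARY WLM,APPLENV=PAYROLL,RESUME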
See z/OS MVS Planning: Workload Management for more information about the
command VARY WLM.
You can obtain the dump information by stopping the stored procedures address
space in which the stored procedure is running. See “Refreshing the environment
for stored procedures or user-defined functions” on page 385 for information about
how to stop and start stored procedures address spaces in the DB2-established and
WLM-established environments.
Alerts for DDF are displayed on NetView’s Hardware Monitor panels and are
logged in the hardware monitor database. Figure 43 is an example of the
Alerts-Static panel in NetView.
Figure 43. Alerts-static panel in NetView. DDF errors are denoted by the resource name AS
(server) and AR (requester). For DB2-only connections, the resource names would be RS
(server) and RQ (requester).
To see the recommended action for solving a particular problem, enter the selection
number, and then press ENTER. This displays the Recommended Action for
Selected Event panel shown in Figure 44 on page 387.
Figure 44. Recommended action for selected event panel in NetView. In this example, the AR
(USIBMSTODB21) is reporting the problem, which is affecting the AS (USIBMSTODB22).
Key Description
1 The system reporting the error. The system reporting the error is always on
the left side of the panel. That system’s name appears first in the messages.
Depending on who is reporting the error, either the LUNAME or the
location name is used.
2 The system affected by the error. The system affected by the error is
always displayed to the right of the system reporting the error. The
affected system’s name appears second in the messages. Depending on
what type of system is reporting the error, either the LUNAME or the
location name is used.
If no other system is affected by the error, then this system will not appear
on the panel.
3 DB2 reason code. For information about DB2 reason codes, see Part 3 of
DB2 Codes. For diagnostic information, see Part 3 of DB2 Diagnosis Guide
and Reference.
For more information about using NetView, see Tivoli NetView for z/OS User's
Guide.
You need SYSOPR authority or higher to stop the distributed data facility. Use one
of the following commands:
-STOP DDF MODE (QUIESCE)
-STOP DDF MODE (FORCE)
Use the QUIESCE option whenever possible; it is the default. With QUIESCE, the
STOP DDF command does not complete until all VTAM or TCP/IP requests have
completed. In this case, no resynchronization work is necessary when you restart
DDF. If there are indoubt units of work that require resynchronization, the
QUIESCE option produces message DSNL035I. Use the FORCE option only when
you must stop DDF quickly. Restart times are longer if you use FORCE.
When DDF is stopped with the FORCE option, and DDF has indoubt thread
responsibilities with remote partners, one or both of messages DSNL432I and
DSNL433I is generated.
DSNL432I shows the number of threads that DDF has coordination responsibility
over with remote participants who could have indoubt threads. At these
participants, database resources that are unavailable because of the indoubt threads
remain unavailable until DDF is started and resolution occurs.
DSNL433I shows the number of threads that are indoubt locally and need
resolution from remote coordinators. At the DDF location, database resources that
are unavailable because of the indoubt threads remain unavailable until DDF is
started and resolution occurs.
To force the completion of outstanding VTAM or TCP/IP requests, use the FORCE
option, which cancels the threads associated with distributed requests.
When the FORCE option is specified with STOP DDF, database access threads in
the prepared state that are waiting for the commit or abort decision from the
coordinator are logically converted to the indoubt state. The conversation with the
coordinator is terminated. If the thread is also a coordinator of downstream
participants, these conversations are terminated. Automatic indoubt resolution is
initiated when DDF is restarted. See “Resolving indoubt units of recovery” on page
427 for more information about this topic.
If the distributed data facility has already been stopped, the STOP DDF command
fails and message DSNL002I - DDF IS ALREADY STOPPED appears.
Stopping DDF using VTAM commands: Another way to force DDF to stop is to
issue the VTAM VARY NET,INACT command. This command makes VTAM
unavailable and terminates DDF. VTAM forces the completion of any outstanding
VTAM requests immediately. Enter the following command:
VARY NET,INACT,ID=db2lu,FORCE
where db2lu is the VTAM LU name for the local DB2 system.
When DDF has stopped, the following command must be issued before START
DDF can be attempted:
VARY NET,ACT,ID=db2lu
Controlling traces
These traces can be used for problem determination:
DB2 trace
IMS attachment facility trace
CICS trace
Three TSO attachment facility traces
CAF trace stream
RRS trace stream
z/OS component trace used for IRLM
DB2 trace allows you to trace and record subsystem data and events. There are five
different types of trace. For classes of events traced by each type see the
description of the START TRACE command in Chapter 2 of DB2 Command
Reference. For more information about the trace output produced, see Appendix D,
“Interpreting DB2 trace output,” on page 1101. In brief, DB2 records the following
types of data:
Statistics
Data that allows you to conduct DB2 capacity planning and to tune the
entire set of DB2 programs.
Accounting
Data that allows you to assign DB2 costs to individual authorization IDs
and to tune individual programs.
Performance
Data about subsystem events, which can be used to do program, resource,
user, and subsystem-related tuning.
Audit Data that can be used to monitor DB2 security and access to data.
Monitor
Data that is available for use by DB2 monitor application programs.
DB2 provides commands for controlling the collection of this data. To use the trace
commands you must have one of the following types of authority:
v SYSADM or SYSOPR authority
v Authorization to issue start and stop trace commands (the TRACE privilege)
v Authorization to issue the display trace command (the DISPLAY privilege).
Several parameters can be specified to further qualify the scope of a trace. Specific
events within a trace type can be traced as well as events within specific DB2
plans, authorization IDs, resource manager IDs, and locations. The destination to
which trace data is sent can also be controlled. For a discussion of trace
commands, see Chapter 2 of DB2 Command Reference.
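For example, the following commands start an accounting trace for class 1 with
SMF as the destination, display it, and then stop it (the class and destination values
are illustrative):
-START TRACE (ACCTG) CLASS (1) DEST (SMF)
-DISPLAY TRACE (ACCTG)
-STOP TRACE (ACCTG)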
When you install DB2, you can request that any trace type and class start
automatically when DB2 starts. For information about starting traces automatically,
see Part 2 of DB2 Installation Guide.
Recommendations:
v Do not use the external component trace writer to write traces to the data set.
v Activate all traces during IRLM startup. Use the command START
irlmproc,TRACE=YES to activate all traces.
The governor allows the system administrator to limit the amount of time
permitted for the execution of the SELECT, UPDATE, DELETE, and INSERT
dynamic SQL statements.
The limits are defined in resource limit specification tables and can vary for
different users. One resource limit specification table is used for each invocation of
the governor and is identified on the START RLIMIT command.
See “Resource limit facility (governor)” on page 683 for more information about
the governor.
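For example, to start the governor with the resource limit specification table whose
name ends in 01 (the two-digit suffix is illustrative), and later stop it:
-START RLIMIT ID=01
-STOP RLIMIT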
When you install DB2, you can request that the governor start automatically when
DB2 starts. For information about starting the governor automatically, see Part 2 of
DB2 Installation Guide.
2. Assemble and link-edit the DSNTIJUZ job produced in Step 1, and then submit
the job to create the new load module with the new subsystem parameter
values.
3. Issue the SET SYSPARM command to change the subsystem parameters
dynamically:
SET SYSPARM LOAD(load-module-name)
where you specify the load-module-name to be the same as the output member
name in Step 1.
If you want to load the subsystem parameter module that was used at DB2
startup, you can issue the following command:
SET SYSPARM RELOAD
For more information, see Part 2 of DB2 Installation Guide and Chapter 2 of DB2
Command Reference.
Chapter 17. Managing the log and the bootstrap data set
DB2 writes each log record to a disk data set called the active log. When the active
log is full, DB2 copies its contents to a disk or tape data set called the archive log.
That process is called offloading. This chapter describes:
“How database changes are made”
“Establishing the logging environment” on page 395
“Managing the bootstrap data set (BSDS)” on page 403
“Discarding archive log records” on page 405
For information about the physical and logical records that make up the log, see
Appendix C, “Reading log records,” on page 1077. That appendix also contains
information about how to write a program to read log records.
Units of recovery
A unit of recovery is the work, done by a single DB2 DBMS for an application, that
changes DB2 data from one point of consistency to another. A point of consistency
(also called a sync point or commit point) is a time when all recoverable data that an
application program accesses is consistent with other data. (For an explanation of
maintaining consistency between DB2 and another subsystem such as IMS or CICS,
see “Multiple system consistency” on page 423.)
A unit of recovery begins with the first change to the data after the beginning of
the job or following the last point of consistency and ends at a later point of
consistency. An example of units of recovery within an application program is
shown in Figure 45.
Figure 45. A time line showing a unit of recovery within an application process, spanning SQL transaction 1 and SQL transaction 2.
For example, a bank transaction might transfer funds from account A to account B.
First, the program subtracts the amount from account A. Next, it adds the amount
to account B. The two accounts are inconsistent from the time the amount is
subtracted from account A until it is added to account B. When both steps are
complete, the program can announce a point of consistency and thereby make the
changes visible to other application programs.
Figure: A time line showing database updates followed by backing out of the updates when the unit of recovery is rolled back.
The effects of inserts, updates, and deletes to large object (LOB) values are backed
out along with all the other changes made during the unit of work being rolled
back, even if the LOB values that were changed reside in a LOB table space with
the LOG NO attribute.
An operator or an application can issue the CANCEL THREAD command with the
NOBACKOUT option to cancel long running threads without backing out data
changes. DB2 backs out changes to catalog and directory tables regardless of the
NOBACKOUT option. As a result, DB2 does not read the log records and does not
write or apply the compensation log records. After CANCEL THREAD
NOBACKOUT processing, DB2 marks all objects associated with the thread as
refresh pending (REFP) and puts the objects in a logical page list (LPL). For
information about how to reset the REFP status, see DB2 Utility Guide and Reference.
The NOBACKOUT request might fail for either of the following two reasons:
v DB2 does not completely back out updates of the catalog or directory (message
DSNI032I with reason 00C900CC).
v The thread is part of a global transaction (message DSNV439I).
If you change or create data that is compressed, the data logged is also
compressed. Changes to compressed rows that result from inserts, updates, and
deletes are also logged as compressed data.
When DB2 is initialized, the active log data sets named in the BSDS are
dynamically allocated for exclusive use by DB2 and remain allocated exclusively to
DB2 (the data sets were allocated as DISP=OLD) until DB2 terminates. Those active
log data sets cannot be replaced, nor can new ones be added, without terminating
and restarting DB2. The size and number of log data sets are determined by the
values that were specified on installation panel DSNTIPL.
Figure: The offload process. A triggering event causes DB2 to write to the active log, start the offload process, write to the archive log, and record the offload in the BSDS.
Triggering offload
An offload of an active log to an archive log can be triggered by several events.
The most common are when:
v An active log data set is full
v DB2 is started and an active log data set is full
v The command ARCHIVE LOG is issued
When all active logs become full, the DB2 subsystem runs an offload and halts
processing until the offload is completed. If the offload processing fails when the
active logs are full, then DB2 cannot continue doing any work that requires writing
to the log. For additional information, see “Active log failure recovery” on page
499.
The operator can respond by canceling the offload. In that case, if the allocation is
for the first copy of dual archive data sets, the offload is merely delayed until the
next active log data set becomes full. If the allocation is for the second copy, the
| archive process switches to single copy mode, but for the one data set only.
| Delay of log offload task: When DB2 switches active logs and finds that the
| offload task has been active since the last log switch, it issues the following
| message to notify the operator that there might be an outstanding tape mount or
| some other problem that prevents the offload of the previous active log data set.
| DSNJ017E - csect-name WARNING - OFFLOAD TASK HAS BEEN ACTIVE SINCE
| date-time AND MAY HAVE STALLED
Messages returned during offloading: The following messages are sent to the
z/OS console by DB2 and the offload process. With the exception of the DSNJ139I
message, these messages can be used to find the RBA ranges in the various log
data sets.
v The following message appears during DB2 initialization when the current active
log data set is found, and after a data set switch. During initialization, the
STARTRBA value in the message does not refer to the beginning of the data set,
but to the position in the log where logging will begin.
DSNJ001I - csect-name CURRENT COPY n ACTIVE LOG DATA SET IS
DSNAME=..., STARTRBA=..., ENDRBA=...
v The following message appears when an active data set is full:
DSNJ002I - FULL ACTIVE LOG DATA SET DSNAME=...,
STARTRBA=..., ENDRBA=...
v The following message appears when offload reaches end-of-volume or
end-of-data-set in an archive log data set:
Non-data sharing version is:
DSNJ003I - FULL ARCHIVE LOG VOLUME DSNAME=..., STARTRBA=..., ENDRBA=...,
STARTTIME=..., ENDTIME=..., UNIT=..., COPYnVOL=...,
VOLSPAN=..., CATLG=...
Data sharing version is:
DSNJ003I - FULL ARCHIVE LOG VOLUME DSNAME=..., STARTRBA=..., ENDRBA=...,
STARTLRSN=..., ENDLRSN=..., UNIT=..., COPYnVOL=...,
VOLSPAN=..., CATLG=...
v The following message appears when one data set of the next pair of active logs
is not available because of a delay in offloading, and logging continues on one
copy only:
DSNJ004I - ACTIVE LOG COPY n INACTIVE, LOG IN SINGLE MODE,
ENDRBA=...
v The following message appears when dual active logging resumes after logging
has been carried on with one copy only:
DSNJ005I - ACTIVE LOG COPY n IS ACTIVE, LOG IN DUAL MODE,
STARTRBA=...
v The following message indicates that the offload task has ended:
DSNJ139I LOG OFFLOAD TASK ENDED
Interruptions and errors while offloading: Here is how DB2 handles the
following interruptions in the offloading process:
v The command STOP DB2 does not take effect until offloading is finished.
v A DB2 failure during offload causes offload to begin again from the previous
start RBA when DB2 is restarted.
v Offload handling of read I/O errors on the active log is described under “Active
log failure recovery” on page 499, or write I/O errors on the archive log, under
“Archive log failure recovery” on page 503.
v An unknown problem that causes the offload task to hang means that DB2
cannot continue processing the log. This problem might be resolved by retrying
the offload, which you can do by using the option CANCEL OFFLOAD of the
command ARCHIVE LOG, described in “Canceling log off-loads” on page 402.
Output archive log data sets are dynamically allocated, with names chosen by DB2.
The data set name prefix, block size, unit name, and disk sizes needed for
allocation are specified when DB2 is installed, and recorded in the DSNZPxxx
module. You can also choose, at installation time, to have DB2 add a date and time
to the archive log data set name. See installation panel DSNTIPH in Part 2 of DB2
Installation Guide for more information.
| Restrictions: Consider the following restrictions for archive log data sets and
| volumes:
| v You cannot specify specific volumes for new archive logs. If allocation errors
| occur, offloading is postponed until the next time offloading is triggered.
| v Do not use partitioned data set extended (PDSE) for archive log data. PDSEs are
| not supported for archive logs.
Using dual archive logging: If you specify dual archive logs at installation time,
each log CI retrieved from the active log is written to two archive log data sets.
The log records that are contained on a pair of dual archive log data sets are
identical, but end-of-volumes are not synchronized for multivolume data sets.
Archiving to disk offers faster recoverability but is more expensive than archiving
to tape. If you use dual logging, you can specify on installation panel DSNTIPA
that the primary copy of the archive log go to disk and the secondary copy go to
tape.
Archiving to tape: If the unit name reflects a tape device, DB2 can extend to a
maximum of twenty volumes. DB2 passes a file sequence number of 1 on the
catalog request for the first file on the next volume. Though that might appear to
be an error in the integrated catalog facility catalog, it causes no problems in DB2
processing.
If you choose to offload to tape, consider adjusting the size of your active log data
sets such that each set contains the amount of space that can be stored on a nearly
full tape volume. That adjustment minimizes tape handling and volume mounts
and maximizes the use of tape resources. However, such an adjustment is not
always necessary.
If you want the active log data set to fit on one tape volume, consider placing a
copy of the BSDS on the same tape volume as the copy of the active log data set.
Adjust the size of the active log data set downward to offset the space required for
the BSDS.
Archiving to disk volumes: All archive log data sets allocated on disk must be
cataloged. If you choose to archive to disk, then the field CATALOG DATA of
installation panel DSNTIPA must contain YES. If this field contains NO, and you
decide to place archive log data sets on disk, you receive message DSNJ072E each
time an archive log data set is allocated, although the DB2 subsystem still catalogs
the data set.
If you use disk storage, be sure that the primary and secondary space quantities
and block size and allocation unit are large enough so that the disk archive log
data set does not attempt to extend beyond 15 volumes. That minimizes the
possibility of unwanted z/OS B37 or E37 abends during the offload process.
Primary space allocation is set with the PRIMARY QUANTITY field of the
DSNTIPA installation panel. The primary space quantity must be less than 64K
tracks because of the DFSMS Direct Access Device Space Management limit of 64K
tracks on a single volume when allocating a sequential disk data set.
| Using SMS to manage archive log data sets: You can use DFSMS (Data Facility
| Storage Management Subsystem) to manage archive log data sets. When archiving
| to disk, DB2 uses the number of online storage volumes for the specified unit to
| determine a count of candidate volumes, up to a maximum of 15 volumes. If you
| are using SMS to direct archive log data set allocation, you should override this
| candidate volume count by specifying YES for the field SINGLE VOLUME on
| installation panel DSNTIPA. This will allow SMS to manage the allocation volume
| count appropriately when creating multi-volume disk archive log data sets.
Because SMS requires disk data sets to be cataloged, you must make sure the field
CATALOG DATA on installation panel DSNTIPA contains YES. Even if it does not,
message DSNJ072E is returned and the data set is forced to be cataloged by DB2.
DB2 uses the basic direct access method (BDAM) to read archive logs from disk.
DFSMS does not support reading of compressed data sets using BDAM. You
should not, therefore, use DFSMS hardware compression on your archive log data
| sets. Also, BDAM does not support extended sequential format data sets (which
| are used for striped or compressed data), so do not have DFSMS assign the
| extended sequential format to your archive log data sets.
Ensure that DFSMS does not alter the LRECL, BLKSIZE, or RECFM of the archive
log data sets. Altering these attributes could result in read errors when DB2
attempts to access the log data.
| Attention: DB2 does not issue an error or a warning if you write or alter archive
| data to an unreadable format. For example, if DB2 successfully writes archive log
| data to an extended format data set, DB2 issues an error message only when you
| attempt to read that data.
A properly authorized operator can archive the current DB2 active log data sets,
whenever required, by issuing the ARCHIVE LOG command. Using ARCHIVE
LOG can help with diagnosis by allowing you to quickly offload the active log to
the archive log where you can use DSN1LOGP to further analyze the problem.
To issue this command, you must have either SYSADM authority, or have been
granted the ARCHIVE privilege.
-ARCHIVE LOG
When you issue the preceding command, DB2 truncates the current active log data
sets, then runs an asynchronous offload, and updates the BSDS with a record of
the offload. The RBA that is recorded in the BSDS is the beginning of the last
complete log record written in the active log data set being truncated.
You could use the ARCHIVE LOG command as follows to capture a point of
consistency for the MSTR01 and XUSR17 databases:
-STOP DATABASE (MSTR01,XUSR17)
-ARCHIVE LOG
-START DATABASE (MSTR01,XUSR17)
In this simple example, the STOP command stops activity for the databases before
archiving the log.
Quiescing activity before offloading: Another method of ensuring that activity has
stopped before the log is archived is the MODE(QUIESCE) option of ARCHIVE
LOG. With this option, DB2 users are quiesced after a commit point, and the
resulting point of consistency is captured in the current active log before it is
offloaded. Unlike the QUIESCE utility, ARCHIVE LOG MODE(QUIESCE) does not
force all changed buffers to be written to disk and does not record the log RBA in
SYSIBM.SYSCOPY. It does record the log RBA in the bootstrap data set.
Consider using MODE(QUIESCE) when planning for offsite recovery. It creates a
system-wide point of consistency, which can minimize the number of data
inconsistencies when the archive log is used with the most current image copy
during recovery.
The MODE(QUIESCE) option suspends all new update activity on DB2 up to the
maximum period of time specified on the installation panel DSNTIPA, described in
Part 2 of DB2 Installation Guide. If the time needed to quiesce is less than the time
specified, then the command completes successfully; otherwise, the command fails
when the time period expires. This time amount can be overridden when you issue
the command, by using the TIME option:
-ARCHIVE LOG MODE(QUIESCE) TIME(60)
Important
Use of this option during prime time, or when time is critical, can cause a
significant disruption in DB2 availability for all jobs and users that use DB2
resources.
By default, the command is processed asynchronously from the time you submit
the command. (To process the command synchronously with other DB2
commands, use the WAIT(YES) option with QUIESCE; the z/OS console is then
locked from DB2 command input for the entire QUIESCE period.)
As shown in the following example, the DISPLAY THREAD output issues message
DSNV400I to indicate that a quiesce is in effect:
DSNV401I - DISPLAY THREAD REPORT FOLLOWS -
DSNV400I - ARCHIVE LOG QUIESCE CURRENTLY ACTIVE
DSNV402I - ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
BATCH T * 20 TEPJOB SYSADM DSNTEP3 0012 12
DISPLAY ACTIVE REPORT COMPLETE
DSN9022I - DSNVDT ’-DISPLAY THREAD’ NORMAL COMPLETION
When all updates are quiesced, the quiesce history record in the BSDS is updated
with the date and time that the active log data sets were truncated, and with the
last-written RBA in the current active log data sets. DB2 truncates the current
active log data sets, switches to the next available active log data sets, and issues
message DSNJ311E, stating that offload started.
If updates cannot be quiesced before the quiesce period expires, DB2 issues
message DSNJ317I, and archive log processing terminates. The current active log
data sets are not truncated and not switched to the next available log data sets,
and offload is not started.
Whether the quiesce was successful or not, all suspended users and jobs are then
resumed, and DB2 issues message DSNJ312I, stating that the quiesce is ended and
update activity is resumed.
If ARCHIVE LOG is issued when the current active log is the last available active
log data set, the command is not processed, and DB2 issues this message:
DSNJ319I - csect-name CURRENT ACTIVE LOG DATA SET IS THE LAST
AVAILABLE ACTIVE LOG DATA SET. ARCHIVE LOG PROCESSING WILL
BE TERMINATED.
If this problem occurs, use the following command to cancel (and then retry) the offload:
-ARCHIVE LOG CANCEL OFFLOAD
When you enter the command, DB2 restarts the offload again, beginning with the
oldest active log data set and proceeding through all active log data sets that need
offloading. If the offload fails again, you must fix the problem that is causing the
failure before the command can work.
For example, during prime shift, your DB2 shop might have a low logging rate,
but require that DB2 restart quickly if it terminates abnormally. To meet this restart
requirement, you can decrease the LOGLOAD value to force a higher checkpoint
frequency. In addition, during off-shift hours the logging rate might increase as
batch updates are processed, but the restart time for DB2 might not be as critical.
In that case, you can increase the LOGLOAD value which lowers the checkpoint
frequency.
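For example, to force more frequent checkpoints during prime shift, you might
issue the following command (the value is illustrative):
-SET LOG LOGLOAD(50000)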
The CHKFREQ value that is altered by the SET LOG command persists only while
DB2 is active. On restart, DB2 uses the CHKFREQ value in the DB2 subsystem
parameter load module. See Chapter 2 of DB2 Command Reference for detailed
information about this command.
| DB2 continues processing. This situation can result in a very long restart if logging
| continues without a system checkpoint. If DB2 continues logging beyond the
| defined checkpoint frequency, you should quiesce activity and terminate DB2 to
| minimize the restart time.
| You can issue the DISPLAY LOG command or run the Print Log Map utility
| (DSNJU004) to display the most recent checkpoint. For additional information, see
| “Displaying log information.”
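A minimal example of displaying the log and checkpoint status:
-DISPLAY LOG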
You can obtain additional information about log data sets and checkpoints from
the Print Log Map utility (DSNJU004). See Part 3 of DB2 Utility Guide and Reference
for more information about utility DSNJU004.
Normally, DB2 keeps duplicate copies of the BSDS. If an I/O error occurs, DB2
deallocates the failing copy and continues with a single BSDS. However, you can
restore the dual mode as follows:
1. Use access method services to rename or delete the failing BSDS.
2. Define a new BSDS with the same name as the deleted BSDS.
3. Issue the DB2 command RECOVER BSDS to make a copy of the good BSDS in
the newly allocated data set.
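For example, after using access method services DELETE and DEFINE to re-create
the failing copy (here hypothetically named DSNCAT.BSDS02, defined with the same
attributes as the surviving copy), issue:
-RECOVER BSDS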
The active logs are first registered in the BSDS by job DSNTIJID, when DB2 is
installed. They cannot be replaced, nor new ones added, without terminating and
restarting DB2.
Archive log data sets are dynamically allocated. When one is allocated, the data set
name is registered in the BSDS in separate entries for each volume on which the
archive log resides. The list of archive log data sets expands as archives are added,
and wraps around when a user-determined number of entries has been reached.
| The maximum number of archive log data sets depends on whether the BSDS
| conversion utility (DSNJCNVB) has been run. The maximum number of data sets
| is 10000 if the BSDS has been converted (20000 for dual logging); otherwise the
| limits are 1000 for single archive logging and 2000 for dual logging. For more
| information about the BSDS conversion utility, see Part 3 of DB2 Utility Guide and
| Reference.
The inventory of archive log data sets can be managed by use of the change log
inventory utility (DSNJU003). For further information, see “Changing the BSDS log
inventory” on page 405.
A wide variety of tape management systems exist, along with the opportunity for
external manual overrides of retention periods. Because of that, DB2 does not have
an automated method to delete the archive log data sets from the BSDS inventory
of archive log data sets. Thus, the information about an archive log data set can be
in the BSDS long after the archive log data set has been scratched by a tape
management system following the expiration of the data set’s retention period.
Conversely, the maximum number of archive log data sets could have been
exceeded, and the data from the BSDS dropped long before the data set has
reached its expiration date. For additional information, refer to “Deleting archive
logs automatically” on page 406.
If you specified at installation that archive log data sets are cataloged when
allocated, the BSDS points to the integrated catalog facility catalog for the
information needed for later allocations. Otherwise, the BSDS entries for each
volume register the volume serial number and unit information that is needed for
later allocation.
For better offload performance and space utilization, it is recommended that you
use the default archive block size of 28672. If required, this value can be changed
in the BLOCK SIZE field on installation panel DSNTIPA.
The data set names of the BSDS copy and the archive log are the same, except that
the first character of the last data set name qualifier in the BSDS name is B instead
of A, as in the following example:
Archive log name
DSNCAT.ARCHLOG1.A0000001
BSDS copy name
DSNCAT.ARCHLOG1.B0000001
If a read error occurs while copying the BSDS, the copy is not created. Message DSNJ125I
is issued, and the offload to the new archive log data set continues without the
BSDS copy.
The utility DSNJU004, print log map, lists the information that is stored in the
BSDS. For instructions on using it, see Part 3 of DB2 Utility Guide and Reference.
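The following JCL is a minimal sketch of a print log map job (the BSDS and library
names are illustrative; adjust them to your installation):
//PRTLOG   EXEC PGM=DSNJU004
//STEPLIB  DD  DISP=SHR,DSN=prefix.SDSNLOAD
//SYSUT1   DD  DISP=SHR,DSN=DSNCAT.BSDS01
//SYSPRINT DD  SYSOUT=*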
You can change the BSDS by running the DB2 batch change log inventory
(DSNJU003) utility. This utility should not be run when DB2 is active. If it is run
when DB2 is active, inconsistent results can be obtained. For instructions on how
to use the change log inventory utility, see Part 3 of DB2 Utility Guide and Reference.
You can copy an active log data set using the access method services IDCAMS
REPRO statement. The copy can only be performed when DB2 is down, because
DB2 allocates the active log data sets as exclusive (DISP=OLD) at DB2 startup. For
more information about the REPRO statement, see DFSMS/MVS: Access Method
Services for the Integrated Catalog and z/OS DFSMS Access Method Services for
Catalogs.
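The following statements are a minimal sketch of such a copy (the data set names
are illustrative, and the output data set must already be defined with attributes
that match the active log):
//COPYLOG  EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//SYSIN    DD  *
  REPRO INDATASET(DSNCAT.LOGCOPY1.DS01) -
        OUTDATASET(BACKUP.LOGCOPY1.DS01)
/*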
To recover units of recovery, you need log records at least until all current actions
are completed. If DB2 abends, restart requires all log records since the previous
checkpoint or the beginning of the oldest UR that was active at the abend,
whichever is first on the log.
To tell whether all units of recovery are complete, read the status counts in the
DB2 restart messages (shown in “Starting DB2” on page 322). If all counts are zero,
no unit of recovery actions are pending. If there are indoubt units of recovery
remaining, identify and recover them by the methods described in Chapter 16,
“Monitoring and controlling DB2 and its connections,” on page 333.
To recover databases, you need log records and image copies of table spaces. How
long you keep log records depends, then, on how often you make those image
copies. Chapter 20, “Backing up and recovering databases,” on page 439 gives
suggestions about recovery cycles; the following sections assume that you know
what records you want to keep and describe only how to delete the records you do
not want.
The default for the retention period keeps archive logs forever. If you use any
other retention period, it must be long enough to contain as many recovery cycles
as you plan for. For example, if your operating procedures call for a full image
copy every sixty days of the least-frequently-copied table space, and you want to
keep two complete image copy cycles on hand at all times, then you need an
archive log retention period of at least 120 days. For more than two cycles, you
need a correspondingly longer retention period.
If archive log data sets or tapes are deleted automatically, the operation does not
update the archive log data set inventory in the BSDS. If you wish, you can update
the BSDS with the change log inventory utility, as described in “Changing the
BSDS log inventory” on page 405. The update is not really necessary; it wastes
space in the BSDS to record old archive logs, but does no other harm as the
archive log data set inventory wraps and automatically deletes the oldest entries.
See “Managing the bootstrap data set (BSDS)” on page 403 for more details.
You need an answer in terms of the log RBA ranges of the archive data sets. The
earliest log record you want is identified by a log RBA. You can discard any
archive log data sets that contain only records with log RBAs less than that.
Step 1: Resolve indoubt units of recovery: If DB2 is running with TSO, continue
with “Find the startup log RBA” on page 407. If DB2 is running with IMS, CICS,
or distributed data, the following procedure applies:
1. The period between one startup and the next must be free of any indoubt units
of recovery. Ensure that no DB2 activity is going on until you finish this
procedure. (You might plan this procedure for a non-prime shift, for minimum
impact on users.) To find out whether indoubt units exist, issue the DB2
command DISPLAY THREAD TYPE(INDOUBT).
Step 2: Find the startup log RBA: Keep at least all log records with log RBAs
greater than the one given in this message, issued at restart:
DSNR003I RESTART...PRIOR CHECKPOINT RBA=xxxxxxxxxxxx
If you suspended DB2 activity while performing step 1, you can restart it now.
Step 3: Find the minimum log RBA needed: Suppose that you have determined to
keep some number of complete image copy cycles of your least-frequently-copied
table space. You now need to find the log RBA of the earliest full image copy you
want to keep.
1. If you have any table spaces so recently created that no full image copies of
them have ever been taken, take full image copies of them. If you do not take
image copies of them, and you discard the archive logs that log their creation,
DB2 can never recover them.
2. Find the START_RBA for the earliest full image copy (ICTYPE=F) that you
intend to keep; a query sketch follows this list. If your least-frequently-copied
table space is partitioned, and you take full image copies by partition, use the
earliest date for all the partitions.
If you are going to discard records from SYSIBM.SYSCOPY and
SYSIBM.SYSLGRNX, note the date of the earliest image copy you want to keep.
Step 4: Copy catalog and directory tables: Take full image copies of the DB2 table
spaces listed in Table 95 to ensure that copies of these table spaces are included in
the range of log records you will keep.
Table 95. Catalog and directory tables to copy
Database name    Table space names
DSNDB01          DBD01, SCT02, SPT01, SYSLGRNX, SYSUTILX
DSNDB06          SYSCOPY, SYSDBASE, SYSDBAUT, SYSGPAUT, SYSGROUP,
                 SYSPKAGE, SYSPLAN, SYSSTATS, SYSSTR, SYSUSER, SYSVIEWS
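A minimal sketch of one such copy follows; SYSCOPY is the conventional output
DD name, and each table space in Table 95 would be copied the same way, each
with its own output data set:
COPY TABLESPACE DSNDB01.DBD01 COPYDDN(SYSCOPY) FULL YES SHRLEVEL REFERENCE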
Step 5: Locate and discard archive log volumes: Now that you know the minimum
LOGRBA, from step 3, suppose that you want to find archive log volumes that
contain only log records earlier than that. Proceed as follows:
1. Execute the print log map utility to print the contents of the BSDS; a JCL
sketch follows this list. For an example of the output, see the description of
print log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference.
2. Find the sections of the output titled “ARCHIVE LOG COPY n DATA SETS”.
(If you use dual logging, there are two sections.) The columns labeled
STARTRBA and ENDRBA show the range of log RBAs contained in each
volume. Find the volumes (two, for dual logging) whose ranges include the
minimum log RBA you found in step 3; these are the earliest volumes you need
to keep.
If no volumes have an appropriate range, one of these cases applies:
v The minimum LOGRBA has not yet been archived, and you can discard all
archive log volumes.
v The list of archive log volumes in the BSDS wrapped around when the
number of volumes exceeded the number allowed by the RECORDING MAX
field of installation panel DSNTIPA. If the BSDS does not register an archive
log volume, it can never be used for recovery. Therefore, you should consider
adding information about existing volumes to the BSDS. For instructions, see
Part 3 of DB2 Utility Guide and Reference.
You should also consider increasing the value of MAXARCH. For
information, see information about installation panel DSNTIPA in Part 2 of
DB2 Installation Guide.
3. Delete any archive log data set or volume (both copies, for dual logging) whose
ENDRBA value is less than the STARTRBA value of the earliest volume you
want to keep.
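For step 1, the print log map job can be as small as the following sketch; the
load library and BSDS data set names are hypothetical:
//PRTMAP   EXEC PGM=DSNJU004
//STEPLIB  DD DSN=DSN810.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DB2A.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*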
Because BSDS entries wrap around, the first few entries in the BSDS archive log
section might be more recent than the entries at the bottom. Compare the
timestamps of the entries to determine which are the most recent.
Chapter 18. Restarting DB2 after termination
This chapter tells what to expect when DB2 terminates normally or abnormally,
and how to start it again. The concepts are important background for Chapter 19,
“Maintaining consistency across multiple systems,” on page 423 and Chapter 20,
“Backing up and recovering databases,” on page 439. This chapter includes the
following topics:
“Termination”
“Normal restart and recovery” on page 412
“Deferring restart processing” on page 418
“Restarting with conditions” on page 419
Termination
DB2 terminates normally in response to the command STOP DB2. If DB2 stops for
any other reason, the termination is considered abnormal.
Normal termination
In a normal termination, DB2 stops all activity in an orderly way. You can use
either STOP DB2 MODE (QUIESCE) or STOP DB2 MODE (FORCE). The effects are
given in Table 96.
Table 96. Termination using QUIESCE and FORCE
Thread type QUIESCE FORCE
Active threads Run to completion Roll back
New threads Permitted Not permitted
New connections Not permitted Not permitted
You can use either command to prevent new applications from connecting to DB2.
When you issue the command STOP DB2 MODE(QUIESCE), current threads can
run to completion, and new threads can be allocated to an application that is
running.
With IMS and CICS, STOP DB2 MODE(QUIESCE) allows a current thread to run
only to the end of the unit of recovery, unless either of the following conditions is
true:
v There are open, held cursors.
v Special registers are not in their original state.
Before DB2 can come down, all held cursors must be closed and all special
registers must be in their original state, or the transaction must complete.
With CICS, QUIESCE mode brings down the CICS attachment facility, so an active
task will not necessarily run to completion.
For example, assume that a CICS transaction opens no cursors declared WITH
HOLD and modifies no special registers. The transaction runs only to the end of
its current unit of recovery; an SQL statement issued after the SYNCPOINT
receives an abend:
EXEC SQL
   .
   .   ← -STOP DB2 MODE(QUIESCE) issued here
   .
SYNCPOINT
   .
   .
   .
EXEC SQL   ← This receives an AETA abend
When you issue the command STOP DB2 MODE(FORCE), no new threads are
allocated, and work on existing threads is rolled back.
During shutdown, use the command DISPLAY THREAD to check the progress of
the shutdown. If shutdown is taking too long, you can issue STOP DB2
MODE(FORCE), but rolling back work can take as much time as, or more time
than, the completion of QUIESCE.
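For example, a shutdown sequence might look like the following sketch; the
MODE(FORCE) command applies only if the quiesce stalls:
-STOP DB2 MODE(QUIESCE)
-DISPLAY THREAD(*)
-STOP DB2 MODE(FORCE)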
A data object could be left in an inconsistent state, even after a shutdown with
mode QUIESCE, if it was made unavailable by the command STOP DATABASE, or
if DB2 recognized a problem with the object. MODE (QUIESCE) does not wait for
asynchronous tasks that are not associated with any thread to complete, before it
stops DB2. This can result in data commands such as STOP DATABASE and
START DATABASE having outstanding units of recovery when DB2 stops. These
will become inflight units of recovery when DB2 is restarted, then be returned to
their original states.
Abends
An abend can leave data in an inconsistent state for any of the following reasons:
v Units of recovery might be interrupted before reaching a point of consistency.
v Committed data might not be written to external media.
v Uncommitted data might be written to external media.
After DB2 is initialized, the restart process goes through four phases, which are
described in the following sections:
v Phase 1: Log initialization
v Phase 2: Current status rebuild
v Phase 3: Forward log recovery
v Phase 4: Backward log recovery
In the descriptions that follow, the terms inflight, indoubt, in-commit, and in-abort
refer to statuses of a unit of work that is coordinated between DB2 and another
system, such as CICS, IMS, or a remote DBMS. For definitions of those terms, see
“Maintaining consistency after termination or failure” on page 425.
At the end of the fourth phase of recovery, a checkpoint is taken, and committed
changes are reflected in the data.
Application programs that do not commit often enough cause long running units
of recovery (URs). These long running URs might be inflight after a DB2 failure.
Inflight URs can extend DB2 restart time. You can restart DB2 more quickly by
postponing the backout of long running URs. Use installation options LIMIT
BACKOUT and BACKOUT DURATION to establish what work to delay during
restart.
If your DB2 subsystem has the UR checkpoint count option enabled, DB2 generates
console message DSNR035I and trace records for IFCID 0313 to inform you about
long running URs. The UR checkpoint count option is enabled at installation time,
through field UR CHECK FREQ on panel DSNTIPL. See Part 2 of DB2 Installation
Guide for more information about enabling this option.
# If your DB2 subsystem has the UR log threshold option enabled, DB2 generates
# console message DSNB260I when an inflight UR writes more than the
# installation-defined number of log records. DB2 also generates trace records for
# IFCID 0313 to inform you about these long running URs. You can enable the UR
# log threshold option at installation time, through field UR LOG WRITE CHECK on
# panel DSNTIPL. See Part 2 of DB2 Installation Guide for more information about
# enabling this option.
You can restart a large object (LOB) table space like other table spaces. LOB table
spaces defined with LOG NO do not log LOB data, but they log enough control
information (and follow a force-at-commit policy) so that they can restart without
loss of data integrity.
During phase 2, no database changes are made, nor are any units of recovery
completed. DB2 determines what processing is required by phase 3 forward log
recovery before access to databases is allowed.
If DB2 encounters a problem while applying log records to an object during phase
3, the affected pages are placed in the logical page list. Message DSNI001I is issued
once per page set or partition, and message DSNB250E is issued once per page.
Restart processing continues.
If DB2 encounters a problem while applying a log record to an object during phase
4, the affected pages are placed in the logical page list. Message DSNI001I is issued
once per page set or partition, and message DSNB250E is issued once per page.
Restart processing continues.
Restarting automatically
If you are running DB2 in a Sysplex, you can have the automatic restart function of
z/OS automatically restart DB2 or IRLM after a failure.
When DB2 or IRLM stops abnormally, z/OS determines whether z/OS failed too,
and where DB2 or IRLM should be restarted. It then restarts DB2 or IRLM.
You must have DB2 installed with a command prefix scope of S to take advantage
of automatic restart. See Part 2 of DB2 Installation Guide for instructions on
specifying command scope.
Using an automatic restart policy: You control how automatic restart works by
using automatic restart policies. When the automatic restart function is active, the
default action is to restart the subsystems when they fail. If this default action is
not what you want, then you must create a policy defining the action you want
taken.
To create a policy, you need the element names of the DB2 and IRLM subsystems:
v For a non-data-sharing DB2, the element name is 'DB2$' concatenated by the
subsystem name (DB2$DB2A, for example). To specify that a DB2 subsystem is
not to be restarted after a failure, include RESTART_ATTEMPTS(0) in the policy
for that DB2 element.
v For local mode IRLM, the element name is a concatenation of the IRLM
subsystem name and the IRLM ID. For global mode IRLM, the element name is
a concatenation of the IRLM data sharing group name, IRLM subsystem name,
and the IRLM ID.
For instructions on defining automatic restart policies, see z/OS MVS Setting Up a
Sysplex.
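As an illustration only, a policy fragment that prevents automatic restart of a
hypothetical subsystem DB2A might look like this sketch, defined with the
IXCMIAPU administrative data utility; the policy and restart group names are
hypothetical:
DATA TYPE(ARM)
DEFINE POLICY NAME(ARMPOL01) REPLACE(YES)
  RESTART_GROUP(DEFAULT)
    ELEMENT(DB2$DB2A)
      RESTART_ATTEMPTS(0)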
If some specific object is causing problems, you should consider deferring its
restart processing by starting DB2 without allowing that object to go through
restart processing. When you defer restart of an object, DB2 puts pages necessary
for the object’s restart in the logical page list (LPL). Only those pages are
inaccessible; the rest of the object can still be accessed after restart.
# There are restrictions to DB2’s activation of restart processing for available objects.
# When you say DEFER ALL at a site that is designated as RECOVERYSITE in
# DSNZPxxx, all pages are placed in the LPL (as a page range, not as a large list of
# individual pages). All pages for the index are also placed in the LPL. The following
# conditions apply:
# v If DB2 cannot open and read the DBD01 table space, DB2 is not put into
# ACCESS(MAINT), and DSNX204I is not issued. Instead, either DSNT500I or
# DSNT501I ’resource unavailable’ is issued.
# v For a deferring restart scenario that needs to recover all DB2 objects after DB2 is
# up, it is recommended that you set the ZPARM DEFER ALL and start DB2 with
# the ACCESS(MAINT) option.
# v If DEFER ALL is specified, DSNX204I is not issued.
# v With DEFER ALL, DB2 will not open any data sets, including SYSLGRNX and
# DSNRTSTS, during any phase of restart, and will not attempt to apply any log
# records.
DB2 can also defer restart processing for particular objects. DB2 puts pages in the
LPL for any object (or specific pages of an object) with certain problems, such as
an open or I/O error during restart. Again, only pages that are affected by the
error are placed on the LPL.
You can defer an object’s restart processing with any of the following actions:
VARY the device (or volume) on which the objects reside OFFLINE. If the data
sets containing an object are not available, and the object requires recovery during
restart, DB2 flags it as stopped and requiring deferred restart. DB2 then restarts
without it.
Delay the backout of a long running UR. On installation panel DSNTIPL, you can
use the following options:
v LIMIT BACKOUT defined as YES or AUTO indicates that some backout
processing will be postponed when restarting DB2. Users must issue the
RECOVER POSTPONED command to complete the backout processing when
the YES option is selected. DB2 does the backout work automatically after DB2
is running and receiving new work when the AUTO option is selected.
v BACKOUT DURATION indicates the number of log records, specified as a
multiplier, to be read during restart’s backward log scan.
Selecting a limited backout affects log processing during restart. The backward
processing of the log proceeds until the oldest inflight or in-abort UR with activity
against the catalog or directory is backed out, and the requested number of log
records have been processed.
Name the object with DEFER when installing DB2. On installation panel
DSNTIPS, you can use the following options:
v DEFER ALL defers restart log apply processing for all objects, including DB2
catalog and directory objects.
v DEFER list_of_objects defers restart processing only for objects in the list.
DEFER does not affect processing of the log during restart. Therefore, even if you
specify DEFER ALL, DB2 still processes the full range of the log for both the
forward and backward log recovery phases of restart. However, logged operations
are not applied to the data set.
In unusual cases, you might choose to make inconsistent objects available for use
without recovering them. For example, the only inconsistent object might be a table
space that is dropped as soon as DB2 is restarted, or the DB2 subsystem might be
used only for testing application programs still under development. In cases like
those, where data consistency is not critical, normal recovery operations can be
partially or fully bypassed by using conditional restart control records in the BSDS.
The procedure is:
1. While DB2 is stopped, run the change log inventory utility using the
CRESTART control statement to create a new conditional restart control record.
2. Restart DB2. The type of recovery operations that take place is governed by the
current conditional restart control record.
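A minimal sketch of step 1 follows. The load library, BSDS data set names, and
the ENDRBA value are hypothetical; CRESTART CREATE,ENDRBA= truncates
the log at that RBA:
//CRESTRT  EXEC PGM=DSNJU003
//STEPLIB  DD DSN=DSN810.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DB2A.BSDS01,DISP=OLD
//SYSUT2   DD DSN=DB2A.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 CRESTART CREATE,ENDRBA=00001C7F4000
/*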
For an example of the messages that are written to the DB2 console during restart
processing, see “Messages at start” on page 322.
| Restart considerations for identity columns: Cold starts and conditional restarts
| that skip forward recovery can cause additional data inconsistency within identity
| columns and sequence objects. After such restarts, DB2 might assign duplicate
| identity column values and create gaps in identity column sequences. For
| information about how to correct this data inconsistency, see “Recovering catalog
| and directory tables” on page 465.
This section gives an overview of the available options for conditional restart. For
more detail, see information about the change log inventory utility (DSNJU003) in
Part 3 of DB2 Utility Guide and Reference. For information about data sharing
considerations, see Chapter 5 of DB2 Data Sharing: Planning and Administration.
When an error prevents logging of a compensation log record, DB2 abends. If
DB2 abends, restart DB2 and reissue the RECOVER POSTPONED command if
automatic backout processing was not specified.
| If the RECOVER POSTPONED processing lasts for an extended period, the output
| includes DSNR047I messages, as shown in Figure 48, to help you monitor backout
| processing. These messages show the current RBA that is being processed and the
| target RBA.
A conditional restart record that specifies left truncation of the log causes any
postponed abort units of recovery that began earlier than the truncation RBA to
end without resolution. The combination of unresolved postponed abort units of
recovery can cause more records than requested by the BACKODUR system
parameter to be processed. The left truncation RBA takes precedence over
BACKODUR in this case.
Be careful about doing a conditional restart that discards log records. If the
discarded log records contain information from an image copy of the DB2
directory, a future execution of the RECOVER utility on the directory will fail. For
more information, see “Recovering the catalog and directory” on page 461.
Use the utility DSN1LOGP to read information about checkpoints and conditional
restart control records. See Part 3 of DB2 Utility Guide and Reference for information
about that utility.
If data in more than one subsystem is to be consistent, then all update operations
at all subsystems for a single logical unit of work must either be committed or
backed out.
Figure 49. Time line illustrating a commit that is coordinated with another subsystem. (The
figure shows coordinator and participant activity across phase 1 and phase 2, with points 1
through 13 marked on the time line.)
There are occasions when the coordinator invokes the participant when no
participant resource has been altered since the completion of the last commit
process. This can happen, for example, when SYNCPOINT is issued after
performance of a series of SELECT statements or when end-of-task is reached
immediately after SYNCPOINT has been issued. When this occurs, the participant
performs both phases of the two-phase commit during the first commit phase and
records that the user or job is read-only at the participant.
The status of a unit of recovery after a termination or failure depends upon the
moment at which the incident occurred. Figure 49 on page 424 helps to illustrate
the following possible statuses:
Status Description and Processing
Inflight
The participant or coordinator failed before finishing phase 1 (period a or
b); during restart, both systems back out the updates.
Indoubt
The participant failed after finishing phase 1 and before starting phase 2
(period c); only the coordinator knows whether the failure happened before
or after the commit (point 9). If it happened before, the participant must
back out its changes; if it happened afterward, it must make its changes
and commit them. After restart, the participant waits for information from
the coordinator before processing this unit of recovery.
In-commit
The participant failed after it began its own phase 2 processing (period d);
it makes committed changes.
In-abort
The participant or coordinator failed after a unit of recovery began to be
rolled back but before the process was completed; during restart, the failed
system continues to back out the changes.
At the end of this phase, indoubt activity is reflected in the database as though the
decision was made to commit the activity, but the activity has not yet been
committed. The data is locked and cannot be used until DB2 recognizes and acts
upon the indoubt decision. (For a description of indoubt units of recovery, see
“Resolving indoubt units of recovery” on page 427.)
If removal of the changes has been postponed, the units of recovery become
known as postponed abort units of recovery. The data with pending backout work is
in a restrictive state (restart pending) which makes the data unavailable. The data
becomes available upon completion of backout work or upon cold or conditional
restart of DB2.
Check the console for message DSNR036I for unresolved units of recovery
encountered during a checkpoint. This message might occur to remind operators of
existing indoubt threads. See Part 2 of DB2 Installation Guide for details.
Important
If the TCP/IP address that is associated with a DRDA server is subject to
change, the domain name of each DRDA server must be defined in the CDB.
This allows DB2 to recover from situations where the server’s IP address
changes prior to successful resynchronization.
During the current status rebuild phase of DB2 restart, the DB2 participant makes
a list of indoubt units of recovery. IMS builds its own list of residual recovery
entries (RREs). The RREs are logged at IMS checkpoints until all entries are
resolved.
The IMS attachment facility writes all the records involved in indoubt processing
to the IMS log tape as type X'5501FE'.
For all resolved units, DB2 updates databases as necessary and releases the
corresponding locks. For threads that access offline databases, the resolution is
logged and acted on when the database is started.
DB2 maintains locks on indoubt work that was not resolved. This can create a
backlog for the system if important locks are being held. Use the DISPLAY
DATABASE LOCKS command to find out which tables and table spaces are locked
by indoubt units of recovery. The connection remains active so you can clean up
the IMS RREs. Recover the indoubt threads by the methods described in
“Controlling IMS connections” on page 358.
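For example (the database name is illustrative):
-DISPLAY DATABASE(DBASE1) SPACENAM(*) LOCKS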
All indoubt work should be resolved unless there are software or operating
problems, such as with an IMS cold start. Resolution of indoubt units of recovery
from IMS can cause delays in SQL processing. Indoubt resolution by the IMS
control region takes place at two times:
v At the start of the connection to DB2, during which resolution is done
synchronously
v When a program fails, during which the resolution is done asynchronously.
In the first case, SQL processing is prevented in all dependent regions until the
indoubt resolution is completed. IMS does not allow connections between IMS
dependent regions and DB2 before the indoubt units are resolved.
For all resolved units, DB2 updates databases as necessary and releases the
corresponding locks. For threads that access offline databases, the resolution is
logged and acted on when the database is started. Unresolved units of work can
remain after restart; resolve them by the methods described in “Manually
recovering CICS indoubt units of recovery” on page 495.
| If a communication failure occurs between the first phase (prepare) and the second
| phase (commit decision) of a commit, an indoubt transaction is created on the
| resource manager that experienced the failure. When an indoubt transaction is
| created, a message is displayed on the console of that resource manager.
Normally, if your subsystem fails while communicating with a remote system, you
should wait until both systems and their communication link become operational.
Your system then automatically recovers its indoubt units of recovery and
continues normal operation. When DB2 restarts while any unit of recovery is
indoubt, the data required for that unit remains locked until the unit of recovery is
resolved.
If automatic recovery is not possible, DB2 alerts you to any indoubt units of
recovery that you need to resolve. If it is imperative that you release locked
resources and bypass the normal recovery process, you can resolve indoubt
situations manually.
In order to make a correct decision, you must be absolutely sure that the action
you take on indoubt units of recovery is the same as the action taken at the
coordinator. Validate your decision with the administrator of the other systems
involved with the logical unit of work.
For example, to purge information on two indoubt threads, the first with an
LUWID=DB2NET.LUNSITE0.A11A7D7B2057.0002 and a resync port number of 123;
and the second with a token of 442, enter:
-RESET INDOUBT LUWID(DB2NET.LUNSITE0.A11A7D7B2057.0002:123,442)
Normally, automatic resolution of indoubt units of recovery occurs when DB2 and
RRS reestablish communication with each other. If something prevents this, then
you can manually resolve an indoubt unit of recovery. This process is not
recommended because it might lead to inconsistencies in recoverable resources.
Both DB2 and RRS can display information about indoubt units of recovery. Both
also provide techniques for manually resolving these indoubt units of recovery.
In DB2, the DISPLAY THREAD command provides information about indoubt DB2
threads. The display output includes RRS unit of recovery IDs for those DB2
threads that have RRS either as a coordinator or as a participant. If DB2 is a
participant, the RRS unit of recovery ID displayed can be used to determine the
outcome of the RRS unit of recovery. If DB2 is the coordinator, you can determine
the outcome of the unit of recovery from the DISPLAY THREAD output.
In DB2, the RECOVER INDOUBT command lets you manually resolve a DB2
indoubt thread. You can use RECOVER INDOUBT to commit or roll back a unit of
recovery after you determine what the correct decision is.
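For example, the following sketch displays the indoubt threads and then commits
one of them; the connection name and correlation ID are hypothetical:
-DISPLAY THREAD(*) TYPE(INDOUBT)
-RECOVER INDOUBT(BATCH) ACTION(COMMIT) ID(CORRID01)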
If even one system indicates that it cannot commit, then the DB2 coordinator sends
out the decision to roll back the unit of work at all systems. This process ensures
that data among multiple DBMSs remains consistent. When DB2 is the participant,
it follows the decision of the coordinator, whether the coordinator is another DB2
or another DBMS.
DB2 is always the participant when interacting with IMS or CICS systems, or
WebSphere Application Server. However, DB2 can also serve as the coordinator for
other DBMSs or DB2 subsystems in the same unit of work. For example, if DB2
receives a request from a coordinating system that also requires data manipulation
on another system, DB2 propagates the unit of work to the other system and
serves as the coordinator for that system.
In Figure 50, DB2A is the participant for an IMS transaction, but becomes the
coordinator for the two database servers (AS1 and AS2), DB2B, and its respective
DB2 servers (DB2C, DB2D, and DB2E).
Figure 50. A DB2 unit of work that spans many servers. (The figure shows IMS or CICS
connected to DB2A; DB2A coordinates the database servers AS1 and AS2 and the
subsystem DB2B, which in turn connects to the DB2 servers DB2C, DB2D, and DB2E.)
If the connection goes down between DB2A and the coordinating IMS system, the
connection becomes an indoubt thread. However, DB2A’s connections to the other
systems are still waiting and are not considered indoubt. Wait for automatic
recovery to occur to resolve the indoubt thread. When the thread is recovered, the
unit of work commits or rolls back and this action is propagated to the other
systems involved in the unit of work.
Figure 51. Illustration of multi-site update. C is the coordinator; P1 and P2 are the
participants. (The figure shows a time line with points 1 through 5 spanning phase 1,
prepare, and phase 2, commit.)
Phase 1:
1. When an application commits a logical unit of work, it signals the DB2
coordinator. The coordinator starts the commit process by sending messages to
the participants to determine whether they can commit.
2. A participant (Participant 1) that is willing to let the logical unit of work be
committed, and which has updated recoverable resources, writes a log record.
It then sends a request commit message to the coordinator and waits for the
final decision (commit or roll back) from the coordinator. The logical unit of
work at the participant is now in the prepared state.
If a participant (Participant 2) has not updated recoverable resources, it sends a
forget message to the coordinator, releases its locks and forgets about the
logical unit of work. A read-only participant writes no log records. As far as
this participant is concerned, it does not matter whether the logical unit of
work ultimately gets rolled back or committed.
If a participant wants to have the logical unit of work rolled back, it writes a
log record and sends a message to the coordinator. Because a message to roll
back acts like a veto, the participant in this case knows that the logical unit of
work will be rolled back by the coordinator. The participant does not need any
more information from the coordinator and therefore rolls back the logical unit
of work, releases its locks, and forgets about the logical unit of work. (This case
is not illustrated in the figure.)
Phase 2:
3. After the coordinator receives request commit or forget messages from all its
participants, it starts the second phase of the commit process. If at least one of
the responses is request commit, the coordinator writes a log record and sends
committed messages to all the participants who responded to the prepare
message with request commit.
Important: If you try to resolve any indoubt threads manually, you need to know
whether the participants committed or rolled back their units of work. With this
information, you can make an appropriate decision regarding processing at your
site.
For all commands and utility statements, the complete syntax and parameter
descriptions can be found in DB2 Command Reference and DB2 Utility Guide and
Reference.
The principal tools for recovery are the utilities QUIESCE, REPORT, COPY,
RECOVER, and MERGECOPY. This section also gives an overview of these utilities
to help you with your backup and recovery planning.
This section covers the following topics, which you should consider when you
plan for backup and recovery:
v “Considerations for recovering distributed data” on page 440
v “Considerations for recovering indexes” on page 441
v “Preparing for recovery” on page 441
v “Maximizing data availability during backup and recovery” on page 445
v “Events that occur during recovery” on page 442
v “How to find recovery information” on page 448
v “Preparing to recover to a prior point of consistency” on page 449
Point in time recovery (to the last image copy or to an RBA) presents other
problems. You cannot control a utility in one subsystem from another subsystem.
In practice, you cannot quiesce two sets of table spaces, or make image copies of
them, in two different subsystems at exactly the same instant. Neither can you
recover them to exactly the same instant, because there are two different logs, and
a relative byte address (RBA) does not mean the same thing for both of them.
In planning, then, the best approach is to consider carefully what the QUIESCE,
COPY, and RECOVER utilities do for you and then plan not to place data that
must be closely coordinated on separate subsystems. After that, recovery planning
is a matter of agreement among database administrators at separate locations.
Because DB2 is responsible for recovering DB2 data only, it does not recover
non-DB2 data. Non-DB2 systems do not always provide equivalent recovery
capabilities.
Care must be taken to prevent DB2 from being started on the alternate processor
until the DB2 system on the active, failing processor terminates. A premature start
can cause severe integrity problems in data, the catalog, and the log. The use of
global resource serialization (GRS) helps avoid the integrity problems by
preventing simultaneous use of DB2 on the two systems.
You can use the REBUILD INDEX utility to recover any index, and you do not
need to prepare image copies of those indexes.
To use the RECOVER utility to recover indexes, you must include the following
actions in your normal database operation:
v Create or alter indexes using the SQL statement ALTER INDEX with the option
COPY YES before you can copy and recover them (a sketch follows this list).
v Create image copies of all indexes that you plan to recover using the RECOVER
utility. The COPY utility makes full image copies or concurrent copies of
indexes. Incremental copies of indexes are not supported. If full image copies of
the index are taken at timely intervals, recovering a large index might be faster
than rebuilding the index.
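A minimal sketch of those two preparation steps follows. The index name
DSN8810.XEMP1 is illustrative; the first statement is SQL, and the second is a
COPY utility control statement:
ALTER INDEX DSN8810.XEMP1 COPY YES;

COPY INDEX DSN8810.XEMP1 COPYDDN(SYSCOPY) SHRLEVEL REFERENCE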
Tip: You can recover indexes and table spaces in a single list when you use the
RECOVER utility. If you use a single list of objects in one RECOVER utility control
statement, the logs for all of the indexes and table spaces are processed in one
pass.
DB2 can recover a page set by using a backup copy or the recovery log or both.
The DB2 recovery log contains a record of all changes made to the page set. If DB2
fails, it can recover the page set by restoring the backup copy and applying the log
changes to it from the point of the backup copy.
The DB2 catalog and directory page sets must be copied at least as frequently as
the most critical user page sets. Moreover, it is your responsibility to periodically
copy the tables in the communications database (CDB), the application registration
table, the object registration table, and the resource limit facility (governor), or to
maintain the information necessary to re-create them. Plan your backup strategy
accordingly.
Imagine that you are the database administrator for DBASE1. Table space TSPACE1
in DBASE1 has been available all week. On Friday, a disk write operation for
TSPACE1 fails. You need to recover the table space to the last consistent point.
Monday morning: You start the DBASE1 database and make a full image copy of
TSPACE1 and all indexes immediately. That gives you a starting point from which
to recover. Use the COPY utility with the SHRLEVEL CHANGE option to improve
availability. See Part 2 of DB2 Utility Guide and Reference for more information
about the COPY utility.
Tuesday morning: You run COPY again. This time you make an incremental image
copy to record only the changes made since the last full image copy that you took
on Monday. You also make a full index copy.
TSPACE1 can be accessed and updated while the image copy is being made. For
maximum efficiency, however, you schedule the image copies when online use is
minimal.
Wednesday morning: You make another incremental image copy, and then create a
full image copy by using the MERGECOPY utility to merge the incremental image
copy with the full image copy.
Thursday and Friday mornings: You make another incremental image copy and a
full index copy each morning.
Friday afternoon: An unsuccessful write operation occurs and you need to recover
the table space. Run the RECOVER utility, as described in Part 2 of DB2 Utility
Guide and Reference. The utility restores the table space from the full image copy
made by MERGECOPY on Wednesday and the incremental image copies made on
Thursday and Friday, and includes all changes made to the recovery log since
Friday morning.
Later Friday afternoon: The RECOVER utility issues a message announcing that it
has successfully recovered TSPACE1 to current point in time.
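Stripped of JCL and output data sets, the week's utility control statements might
look like the following sketch; Thursday and Friday mornings repeat Tuesday's
COPY statement (plus a full index copy):
Monday:    COPY TABLESPACE DBASE1.TSPACE1 COPYDDN(SYSCOPY) FULL YES SHRLEVEL CHANGE
Tuesday:   COPY TABLESPACE DBASE1.TSPACE1 COPYDDN(SYSCOPY) FULL NO SHRLEVEL CHANGE
Wednesday: COPY TABLESPACE DBASE1.TSPACE1 COPYDDN(SYSCOPY) FULL NO SHRLEVEL CHANGE
           MERGECOPY TABLESPACE DBASE1.TSPACE1 NEWCOPY YES COPYDDN(SYSCOPY)
Friday:    RECOVER TABLESPACE DBASE1.TSPACE1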
This imaginary scenario is somewhat simplistic. You might not have taken daily
incremental image copies on just the table space that failed. You might not
ordinarily recover an entire table space. However, it illustrates this important point:
with proper preparation, recovery from a failure is greatly simplified.
If the log has been damaged or discarded, or if data has been changed erroneously
and then committed, you can recover to a particular point in time by limiting the
range of log records to be applied by the RECOVER utility.
Figure 52. Overview of DB2 recovery. The figure shows one complete cycle of image copies;
the SYSIBM.SYSCOPY catalog table can record many complete cycles.
In deciding how often to take image copies, consider the time needed to recover a
table space. It is determined by all of the following factors:
v The amount of log to traverse
v The time it takes an operator to mount and remove archive tape volumes
v The time it takes to read the part of the log needed for recovery
v The time needed to reprocess changed pages
In general, the more often you make image copies, the less time recovery takes;
but, of course, the more time is spent making copies. If you use LOG NO without
the COPYDDN keyword when you run the LOAD or REORG utilities, DB2 places
the table space in copy pending status. You must remove the copy pending status
of the table space by making an image copy before making further changes to the
data. However, if you run REORG or LOAD REPLACE with the COPYDDN
keyword, DB2 creates a full image copy of a table space during execution of the
utility, so DB2 does not place the table space in copy pending status. Inline copies
of indexes during LOAD and REORG are not supported.
If you use LOG YES and log all updates for table spaces, then an image copy of
the table space is not required for data integrity. However, taking an image copy
makes the recovery process more efficient. The process is even more efficient if you
use MERGECOPY to merge incremental image copies with the latest full image
copy. You can schedule the MERGECOPY operation at your own convenience,
whereas the need for a recovery can come upon you unexpectedly. The
MERGECOPY operation does not apply to indexes.
Recommendation: Copy your indexes after the associated utility has run. Indexes
are placed in informational copy pending (ICOPY) status after you run the LOAD
TABLESPACE, REORG TABLESPACE, REBUILD INDEX, or REORG INDEX
utilities.
Use the CHANGELIMIT option of the COPY utility to let DB2 determine when an
image copy should be performed on a table space and whether a full or
incremental copy should be taken. Use the CHANGELIMIT and REPORTONLY
options together to let DB2 recommend what types of image copies to make. When
you specify both CHANGELIMIT and REPORTONLY, DB2 makes no image copies.
The CHANGELIMIT option does not apply to indexes.
In determining how many complete copy and log cycles to keep, you are guarding
against damage to a volume containing an important image copy or a log data set.
A retention period of at least two full cycles is recommended. For further security,
keep records for three or more copy cycles.
In the example, the user’s most critical tables are copied daily. Hence, the DB2
catalog and directory are also copied daily.
Table 97. DB2 log management example
Table space name   Content                                   Update activity   Full image copy period
ORDERINF           Invoice line: part and quantity ordered   Heavy             Daily
SALESINF           Invoice description                       Heavy             Daily
SALESQTA           Quota information for each sales person   Moderate          Weekly
SALESDSC           Customer descriptions                     Moderate          Weekly
PARTSINV           Parts inventory                           Moderate          Weekly
PARTSINF           Parts suppliers                           Light             Monthly
PARTS              Parts descriptions                        Light             Monthly
SALESCOM           Commission rates                          Light             Monthly
EMPLOYEE           Employee descriptive data                 Light             Monthly
EMPSALS            Employee salaries                         Light             Bimonthly
If you do a full recovery, you do not need to recover the indexes unless they are
damaged. If you recover to a prior point in time, you do need to recover the
indexes. See “Considerations for recovering indexes” on page 441 for information
about indexes.
DFSMShsm manages your disk space efficiently by moving data sets that have not
been used recently to less expensive storage. It also makes your data available for
recovery by automatically copying new or changed data sets to tape or disk. It can
delete data sets, or move them to another device. Its operations occur daily, at a
specified time, and allow for keeping a data set for a predetermined period before
deleting or moving it.
DFSMShsm:
v Uses cataloged data sets
v Operates on user tables, image copies, and logs
v Supports VSAM data sets
If a volume has a DB2 storage group specified, recall the volume only to a like
device with the same VOLSER that is defined by CREATE or ALTER STOGROUP.
DB2 can recall user page sets that have been migrated. Whether DFSMShsm recall
occurs automatically is determined by the values of the RECALL DATABASE and
RECALL DELAY fields of installation panel DSNTIPO. If the value of the RECALL
DATABASE field is NO, automatic recall is not performed and the page set is
considered an unavailable resource. It must be recalled explicitly before it can be
used by DB2. If the value of the RECALL DATABASE field is YES, DFSMShsm is
invoked to recall the page sets automatically. The program waits for the recall for
the amount of time specified by the RECALL DELAY parameter. If the recall is not
completed within that time, the program receives an error message indicating the
page set is unavailable but that recall was initiated.
The deletion of DFSMShsm migrated data sets and the DB2 log retention period
must be coordinated with use of the MODIFY utility. If not, you could need
recovery image copies or logs that have been deleted. See “Discarding archive log
records” on page 405 for suggestions.
Decide on the level of availability you need: To do this, start by determining the
primary types of outages you are likely to experience. Then, for each of those types
of outages, decide on the maximum amount of time that you can spend on
recovery. Consider the trade-off between cost and availability. Recovery plans for
continuous availability are very costly, so you need to think about what percentage
of the time your systems really need to be available.
Practice for recovery: You cannot know whether a backup and recovery plan is
workable unless you practice it. In addition, the pressure of a recovery situation
might cause mistakes; practicing your recovery scenarios helps you avoid them.
Minimize preventable outages: One aspect of your backup and recovery plan
should be eliminating the need to recover whenever possible. One way to do that
is to prevent outages caused by errors in DB2. Be sure to check available
maintenance often and apply fixes for problems that are likely to cause outages.
Determine the required backup frequency: Use your recovery criteria to decide how
often to make copies of your databases. For example, if the maximum acceptable
recovery time after you lose a volume of data is two hours, your volumes typically
hold about 4 GB of data, and you can read about 2 GB of data per hour, then you
should make copies after every 4 GB of data written. You can use the COPY option
SHRLEVEL CHANGE or DFSMSdss concurrent copy to make copies while
transactions and batch jobs are running. You should also make a copy after
running jobs that make large numbers of changes. In addition to copying your
table spaces, you should also consider copying your indexes.
You can make additional backup image copies from a primary image copy by
using the COPYTOCOPY utility. This capability is especially useful when the
backup image is copied to a remote site that is to be used as a disaster recovery
site for the local site. Applications can run concurrently with the COPYTOCOPY
utility. Only utilities that write to the SYSCOPY catalog table cannot run
concurrently with COPYTOCOPY.
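For example, the following sketch makes an additional copy, registered for the
recovery site, from the most recent full image copy; the output DD name is
hypothetical:
COPYTOCOPY TABLESPACE DBASE1.TSPACE1 FROMLASTFULLCOPY RECOVERYDDN(RCPYDD)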
Minimize the elapsed time of RECOVER jobs: The RECOVER utility supports the
recovery of a list of objects in parallel. For those objects in the list that can be
processed independently, multiple subtasks are created to restore the image copies
| for the objects. The parallel function can be used for either disk or tape.
Minimize the elapsed time for copy jobs: You can use the COPY utility to make
| image copies of a list of objects in parallel. Image copies can be made to either disk
| or tape.
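For example, a list of objects can be restored in parallel with a sketch like this
(TSPACE2 is a hypothetical second table space):
RECOVER TABLESPACE DBASE1.TSPACE1
        TABLESPACE DBASE1.TSPACE2
        PARALLEL(2)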
Minimize DB2 restart time: Many recovery processes involve restart of DB2. You
need to minimize the time that DB2 shutdown and startup take.
For non-data-sharing systems, you can limit the backout activity during DB2
system restart. You can postpone the backout of long running URs until after the
DB2 system is operational. See “Deferring restart processing” on page 418 for an
explanation of how to use the installation options LIMIT BACKOUT and
BACKOUT DURATION to determine what backout work will be delayed during
restart processing.
These are some major factors that influence the speed of DB2 shutdown:
You can also use REPORT to obtain recovery information about the catalog and
directory.
Details about the REPORT utility and examples showing the results obtained when
using the RECOVERY option are contained in Part 2 of DB2 Utility Guide and
Reference.
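For example, the following statement reports recovery information for one
catalog table space:
REPORT RECOVERY TABLESPACE DSNDB06.SYSCOPY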
With that preparation, recovery to the point of consistency is as quick and simple
as possible. DB2 begins recovery with the copy you made and reads the log only
up to the point of consistency. At that point, there are no indoubt units of recovery
to hinder restarting.
For indexes that have been set to REBUILD-pending status at any time before the
point of consistency, see “Recovering indexes on altered tables” on page 465.
You can use the CONCURRENT option of the COPY utility to make a backup,
with DFSMSdss concurrent copy, that is recorded in the DB2 catalog. For more
information about using this option, see DB2 Utility Guide and Reference.
If you allow updates while copying, then step 3 is essential. With concurrent
updates, the copy can include uncommitted changes. Those might be backed out
after copying ends. Thus, the copy does not necessarily contain consistent data,
and recovery to the copy alone is not sufficient; you must also recover to an
established point of consistency.
QUIESCE writes changed pages from the page set to disk. The catalog table
SYSIBM.SYSCOPY records the current RBA and the timestamp of the quiesce point.
At that point, neither page set contains any uncommitted data. A row with ICTYPE
Q is inserted into SYSCOPY for each table space quiesced. Page sets
DSNDB06.SYSCOPY, DSNDB01.DBD01, and DSNDB01.SYSUTILX, are an
exception: their information is written to the log. Indexes are quiesced
automatically when you specify WRITE(YES) on the QUIESCE statement. A
SYSIBM.SYSCOPY row with ICTYPE Q is inserted for indexes that have the COPY
YES attribute.
QUIESCE allows concurrency with many other utilities; however, it does not allow
concurrent updates until it has quiesced all specified page sets. Depending upon
the amount of activity, that can take considerable time. Try to run QUIESCE when
system activity is low.
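For example, the following sketch establishes a quiesce point for the scenario's
table space; indexes defined with COPY YES are quiesced with it:
QUIESCE TABLESPACE DBASE1.TSPACE1 WRITE(YES)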
Also, consider using the MODE(QUIESCE) option of the ARCHIVE LOG command
when planning for offsite recovery. It creates a system-wide point of consistency,
which can minimize the number of data inconsistencies when the archive log is
used with the most current image copy during recovery. See “Archiving the log”
on page 400 for more information about using the MODE(QUIESCE) option of the
ARCHIVE LOG command.
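For example, the following command creates such a point of consistency; the
TIME value, the maximum number of seconds allowed for the quiesce, is
illustrative:
-ARCHIVE LOG MODE(QUIESCE) TIME(60)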
You can provide shorter restart times after system failures by using the installation
options LIMIT BACKOUT and BACKOUT DURATION. These options postpone
the backout processing of long running URs during DB2 restart. See Part 2 of DB2
Installation Guide for the details on how to use these parameters.
Data sharing
In a data sharing environment, you can use the LIGHT(YES) parameter to
quickly bring up a DB2 member to recover retained locks. Restart light is not
recommended for a restart in place and is intended only for a cross-system
restart for a system that does not have adequate capacity to sustain the DB2
IRLM pair. Restart light can be used for normal restart and recovery. See
Chapter 5 of DB2 Data Sharing: Planning and Administration for more details.
For data sharing, you need to consider whether you want the DB2 group to
use light mode at the recovery site. A light start might be desirable if you
have configured only minimal resources at the remote site. If this is the case,
you might run a subset of the members permanently at the remote site. The
other members are restarted and then immediately shut down. The procedure for a
light start at the remote site is:
1. Start the members that run permanently with the LIGHT(NO) option. This
is the default.
2. Start other members with LIGHT(YES). The members started with
LIGHT(YES) use a smaller storage footprint. After their restart processing
completes, they automatically shut down. If ARM is in use, ARM does not
automatically restart the members with LIGHT(YES) again.
3. Members started with LIGHT(NO) remain active and are available to run
new work.
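For example, a member that is brought up only to free its retained locks might
be started with:
-START DB2 LIGHT(YES)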
Figure 53. Preparing for disaster recovery. The information you need to recover is contained
in the copies of data (including the DB2 catalog and directory) and the archive log data sets.
6. You can also use these copies on a subsystem installed with the LOCALSITE option if you run RECOVER with the
RECOVERYSITE option. Or you can use copies prepared for the local site on a recovery site, if you run RECOVER with the
option LOCALSITE.
For disaster recovery to be successful, all copies and reports must be updated and
sent to the recovery site regularly. Data will be up to date through the last archive
sent. For disaster recovery start up procedures, see “Remote site recovery from a
disaster at the local site” on page 525.
Actions to take
To aid in successful recovery of inconsistent data:
v During the installation of, or migration to, Version 8, make a full image copy
of the DB2 directory and catalog using installation job DSNTIJIC.
See Part 2 of DB2 Installation Guide for DSNTIJIC information. If you did not do
this during installation or migration, use the COPY utility, described in Part 2 of
DB2 Utility Guide and Reference, to make a full image copy of the DB2 catalog
and directory. If you do not do this and you subsequently have a problem with
inconsistent data in the DB2 catalog or directory, you will not be able to use the
RECOVER utility to resolve the problem.
v Periodically make an image copy of the catalog, directory, and user databases.
This minimizes the time the RECOVER utility requires to perform recovery. In
addition, this increases the probability that the necessary archive log data sets
will still be available. You should keep two copies of each level of image copy
data set. This reduces the risk involved if one image copy data set is lost or
damaged. See Part 2 of DB2 Utility Guide and Reference for more information
about using the COPY utility.
Actions to avoid
v Do not discard archive logs you might need.
The RECOVER utility might need an archive log to recover from an inconsistent
data problem. If you have discarded it, you cannot use the RECOVER utility and
must resolve the problem manually. For information about determining when
you can discard archive logs, see “Discarding archive log records” on page 405.
v Do not make an image copy of a page set that contains inconsistent data.
If you use the COPY utility to make an image copy of a page set containing
inconsistent data, the RECOVER utility cannot recover a problem involving that
page set unless you have an older image copy of that page set taken before the
problem occurred. You can run DSN1COPY with the CHECK option to
determine whether intra-page data inconsistency problems exist on page sets.
Data written to the log for propagation to IMS uses an expanded format that is
much longer than the DB2 internal format. Using DATA CAPTURE(CHANGES)
can greatly increase the size of your log.
A full image copy is required for indexes. For information about copying indexes,
see “Considerations for recovering indexes” on page 441.
You can use the CONCURRENT option of the COPY utility to make a copy, with
DFSMSdss concurrent copy, that is recorded in the DB2 catalog. For more
information about using this option, see DB2 Utility Guide and Reference.
Use the MERGECOPY utility to merge several image copies. MERGECOPY does
not apply to indexes.
The CHANGELIMIT option of the COPY utility causes DB2 to make an image
copy automatically when a table space has changed past a default limit or a limit
you specify. DB2 determines whether to make a full or incremental image copy.
DB2 makes an incremental image copy if the percent of changed pages is greater
than the low CHANGELIMIT value and less than the high CHANGELIMIT value.
DB2 makes a full image copy if the percent of changed pages is greater than or
equal to the high CHANGELIMIT value. The CHANGELIMIT option does not
apply to indexes.
If you want DB2 to recommend what image copies should be made but not make
the image copies, use the CHANGELIMIT and REPORTONLY options of the COPY
utility.
You can add conditional code to your jobs so that an incremental or full image
copy, or some other step is performed depending on how much the table space has
changed. When you use the COPY utility with the CHANGELIMIT option to
display image copy statistics, the COPY utility uses the following return codes to
indicate the degree that a table space or list of table spaces has changed:
Code Meaning
1     Successful; no CHANGELIMIT value is met. No image copy is
      recommended or taken.
2     Successful; the percentage of changed pages is greater than the low
      CHANGELIMIT value and less than the high CHANGELIMIT value. An
      incremental image copy is recommended or taken.
3     Successful; the percentage of changed pages is greater than or equal to
      the high CHANGELIMIT value. A full image copy is recommended or
      taken.
When you use generation data groups (GDGs) and need to make an incremental
image copy, there are new steps you can take to prevent an empty image copy
output data set from being created if no pages have been changed. You can
perform one of the following:
1. Make a copy of your image copy step, but add the REPORTONLY and
CHANGELIMIT options to the new COPY utility statement. The REPORTONLY
keyword specifies that you only want image copy information displayed.
Change the SYSCOPY DD card to DD DUMMY so that no output data set is
allocated. Run this step to visually determine the change status of your table
space.
2. Add step 1 before your existing image copy step, and add a JCL conditional
statement to examine the return code and execute the image copy step if the
table space changes meet either of the CHANGELIMIT values, as shown in the
JCL sketch after this list.
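A minimal JCL sketch of option 2 follows. The step names, data set names, and
utility IDs are hypothetical; DSNUPROC's single step is named DSNUPROC,
which is why the COND parameter qualifies the step name:
//CHKCHG   EXEC DSNUPROC,SYSTEM='DB2A',UID='CHKCHG'
//SYSCOPY  DD DUMMY
//SYSIN    DD *
  COPY TABLESPACE DBASE1.TSPACE1 CHANGELIMIT REPORTONLY
/*
//*  Run the copy step only if CHKCHG ended with return code 2 or
//*  higher (an incremental or full image copy is recommended).
//COPYIT   EXEC DSNUPROC,SYSTEM='DB2A',UID='COPYIT',
//         COND=(2,GT,CHKCHG.DSNUPROC)
//SYSCOPY  DD DSN=DB2A.TSPACE1.COPY(+1),DISP=(NEW,CATLG,DELETE),
//         UNIT=SYSDA,SPACE=(CYL,(15,5))
//SYSIN    DD *
  COPY TABLESPACE DBASE1.TSPACE1 CHANGELIMIT
/*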
You can also use the COPY utility with the CHANGELIMIT option to determine
whether any space map pages are broken, or to identify any other problems that
might prevent an image copy from being taken, such as the object being in recover
pending status. You need to correct these problems before you run the image copy
job.
You can also make a full image copy when you run the LOAD or REORG utility.
This technique is better than running the COPY utility after the LOAD or REORG
utility because it decreases the time that your table spaces are unavailable.
However, only the COPY utility makes image copies of indexes.
Related information: For guidance in using COPY and MERGECOPY and making
image copies during LOAD and REORG, see Part 2 of DB2 Utility Guide and
Reference.
There are two ways to use the concurrent copy function of Data Facility Storage
Management Subsystem (DFSMS):
v Run the COPY utility with the CONCURRENT option. DB2 records the resulting
image copies in SYSIBM.SYSCOPY. To recover with these DFSMS copies, you
can run the RECOVER utility to restore those image copies and apply the
necessary log records to them to complete recovery.
v Make copies using DFSMS outside of DB2’s control. To recover with these
copies, you must manually restore the data sets, and then run RECOVER with
the LOGONLY option to apply the necessary log records.
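For example (a sketch of each approach, using the scenario's table space name):
COPY TABLESPACE DBASE1.TSPACE1 CONCURRENT COPYDDN(SYSCOPY)

RECOVER TABLESPACE DBASE1.TSPACE1 LOGONLY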
To use RVA SnapShot or Enterprise Storage Server FlashCopy for a DB2 backup
requires a method of suspending all update activity for a DB2 subsystem to make
a remote copy of the entire subsystem without quiescing the update activity at the
primary site. Use the SUSPEND option on the SET LOG command to suspend all
logging activity at the primary site which also prevents any database updates.
After the remote copy has been created, use the RESUME option on the SET LOG
command to return to normal logging activities. See the DB2 Command Reference for
more details on using the SET LOG command.
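For example:
-SET LOG SUSPEND
   (make the remote copy of the subsystem)
-SET LOG RESUME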
For more information about RVA, see IBM RAMAC Virtual Array. For more
information about using PPRC, see RAMAC Virtual Array: Implementing Peer-to-Peer
Remote Copy. For more information about Enterprise Storage Server and the
FlashCopy function, see Enterprise Storage Server Introduction and Planning.
The RECOVER utility can use image copies for the local site or the recovery site,
regardless of where you invoke the utility. The RECOVER utility locates all full
and incremental image copies.
The RECOVER utility first attempts to use the primary image copy data set. If an
error is encountered (allocation, open, or I/O), RECOVER attempts to use the
backup image copy. If DB2 encounters an error in the backup image copy or no
backup image copy exists, RECOVER falls back to an earlier full copy and
attempts to apply incremental copies and log records. If an earlier full copy is not
available, RECOVER attempts to apply log records only.
For guidance in using RECOVER and REBUILD INDEX, see Part 2 of DB2 Utility
Guide and Reference.
Important: Be very careful when using disk dump and restore for recovering a
data set. Disk dump and restore can make one data set inconsistent with DB2
subsystem tables in some other data set. Use disk dump and restore only to restore
the entire subsystem to a previous point of consistency, and prepare that point as
described in the alternative in step 2 under “Preparing to recover to a prior point
of consistency” on page 449.
Also, DB2 always resets any error ranges when the work file table space is
initialized, regardless of whether the disk error has really been corrected. Work file
table spaces are initialized when:
v The work file table space is stopped and then started
v The work file database is stopped and then started, and the work file table space
was not previously stopped
v DB2 is started and the work file table space was not previously stopped
If the error range is reset while the disk error still exists, and if DB2 has an I/O
error when using the work file table space again, then DB2 sets the error range
again.
You can use the REPORT utility to report on recovery information about the
catalog and directory.
To avoid restart processing of any page sets before attempts are made to recover
any of the members of the list of catalog and directory objects, use the DEFER
option when installing DB2 followed by the option ALL. For more information
about DEFER, see “Deferring restart processing” on page 418.
Point in time recovery: Recovering the DB2 catalog and directory to a prior point
in time is strongly discouraged. For more information about recovering the catalog
and directory to a prior point in time see “Considerations for recovering to a prior
point of consistency” on page 463.
You cannot recover to some points in time. See Part 2 of DB2 Utility Guide and
Reference for more information about these restrictions.
Recovering partitioned table spaces: You cannot recover a table space to a point
in time prior to rotating partitions. After you rotate a partition, you cannot
recover the contents of that partition to a point in time before the rotation.
| If you recover to a point in time prior to the addition of a partition, DB2 cannot
| roll back the definition of the partition. In such a recovery, DB2 clears all data from
| the partition, and the partition remains part of the database.
| If you recover a table space partition to a point in time before the table space
| partitions were rebalanced, you must include all partitions that are affected by that
| rebalance in your recovery list.
If you use the DB2 RECOVER utility, the database descriptor (DBD) is updated
dynamically to match the restored table space on the next non-index access of the
table. The table space must be in write access mode.
If you use a method outside of DB2’s control, such as DSN1COPY to restore a table
space to a prior point in time, run the REPAIR utility with the LEVELID option to
force DB2 to accept the down-level data. Then run the REORG utility on the table
space to correct the DBD.
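For example, the two utility statements might look like the following sketch, run
as separate job steps (dbname.tsname is a placeholder for your table space):
REPAIR LEVELID TABLESPACE dbname.tsname
REORG TABLESPACE dbname.tsname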
Recovering LOB table spaces: When you recover tables with LOB columns,
recover the entire set of objects, including the base table space, the LOB table
spaces, and index spaces for the auxiliary indexes.
If you use the RECOVER utility to recover a LOB table space to a prior point of
consistency, RECOVER might place the table space in a pending state. For more
details about the particular pending states that the RECOVER utility sets, see
“Using RECOVER to restore data to a previous point in time” on page 468.
Recovering table space sets: If you restore a page set to a prior state, restore all
related tables and indexes to the same point to avoid inconsistencies. The table
spaces that contain referentially related tables are called a table space set. Similarly, a
LOB table space and its associated base table space are also part of a table space
set. For example, in the DB2 sample application, a column in the EMPLOYEE table
identifies the department to which each employee belongs. The departments are
described by records in the DEPARTMENT table, which is in a different table space.
You can use the REPORT TABLESPACESET utility to determine all the page sets
that belong to a single table space set and then restore those page sets that are
related. However, if page sets are logically related outside of DB2 in application
programs, you are responsible for identifying all the page sets on your own.
To determine a valid quiesce point for the table space set, use the procedure for
determining a RECOVER TOLOGPOINT value. See RECOVER in Part 2 of DB2
Utility Guide and Reference for more information.
Tip: To determine the last value in an identity column, use the MAX column
function for ascending sequences of identity column values, or the MIN column
function for descending sequences of identity column values. This method
works only if the identity column does not use CYCLE.
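For example, for a hypothetical table ORDERS with an ascending identity
column ORDER_ID, the following query returns the last assigned value:
SELECT MAX(ORDER_ID) FROM ORDERS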
v If you recover to a point in time at which the identity column was not yet
defined, that identity column remains part of the table. The resulting identity
column no longer contains values.
To regenerate missing identity column values, perform the following steps:
1. Choose a starting value for the identity column with the following ALTER
TABLE statement:
ALTER TABLE table-name
ALTER COLUMN identity-column-name
RESTART WITH starting-identity-value
2. Run the REORG utility to regenerate lost sequence values.
If you do not choose a starting value, the REORG utility generates a sequence of
identity column values that starts with the next value that DB2 would have
assigned before the recovery.
Recovering indexes
When you recover indexes to a prior point of consistency, the following general
rules apply:
v If an image copy exists for an index, use the RECOVER utility.
v If indexes do not have image copies, you must use REBUILD INDEX to
re-create the indexes after the data has been recovered.
More specifically, you must consider how indexes on altered tables and indexes on
tables in partitioned table spaces can restrict recovery.
| Recovering indexes on altered tables: You cannot use the RECOVER utility to
| recover an index to a point in time that existed before you issued any of the
| following ALTER statements on that index. These statements place the index in
| REBUILD-pending (RBDP) status:
| v ALTER INDEX PADDED
| v ALTER INDEX NOT PADDED
| v ALTER TABLE SET DATA TYPE on an indexed column for numeric data type
| changes
| v ALTER TABLE ADD COLUMN and ALTER INDEX ADD COLUMN that are not
| issued in the same commit scope
| When you recover a table space to a prior point in time and the table space uses
| indexes that were set to RBDP at any time after the recovery point, you must use
| the REBUILD INDEX utility to rebuild these indexes.
For more information about the RECOVER and REBUILD INDEX utilities, see Part
2 of DB2 Utility Guide and Reference.
The output from the DISPLAY DATABASE RESTRICT command shows the
restrictive states for index spaces. See DB2 Command Reference for descriptions of
status codes displayed by the DISPLAY DATABASE command.
| You cannot recover an index space to a point in time prior to rotating partitions.
| After you rotate a partition, you cannot recover the contents of that partition to a
| point in time before the rotation.
| If you recover to a point in time prior to the addition of a partition, DB2 cannot
| roll back the addition of that partition. In such a recovery, DB2 clears all data from
| the partition, and it remains part of the database.
It is much easier to avoid point in time recovery of the catalog than to attempt to
correct the inconsistencies that the recovery causes. If you recover catalog tables to
a prior point in time, you must perform the following actions to make catalog
definitions consistent with your data:
1. Run the DSN1PRNT utility with the PARM=(FORMAT, NODATA) option on all
data sets that might contain user table spaces. The NODATA option suppresses
all row data, which reduces the output volume that you receive. Data sets that
contain user tables are of the following form, where y can be either I or J:
catname.DSNDBC.dbname.tsname.y0001.A00n
2. Execute the following SELECT statements to find a list of table space and table
definitions in the DB2 catalog:
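For example, queries of the following general form (an illustrative sketch, not
the exact statements) list the table space and table definitions:
SELECT NAME, DBNAME, DBID, PSID FROM SYSIBM.SYSTABLESPACE
SELECT NAME, CREATOR, DBNAME, TSNAME, OBID FROM SYSIBM.SYSTABLES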
See Part 3 of DB2 Utility Guide and Reference for more information about
DSN1COPY and DSN1PRNT.
If the image copy data set is cataloged when the image copy is made, the entry for
that copy in SYSIBM.SYSCOPY does not record the volume serial numbers of the
data set. You can identify that copy by its name by using TOCOPY data-set-name.
If the image copy data set was not cataloged when it was created, you can
identify the copy by its volume serial identifier by using TOVOLUME volser.
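For example, on the RECOVER utility statement (a sketch; the object names and
data set name are placeholders, and TOVOLUME is specified together with
TOCOPY for an uncataloged copy):
RECOVER TABLESPACE dbname.tsname
TOCOPY copy-data-set-name
RECOVER TABLESPACE dbname.tsname
TOCOPY copy-data-set-name TOVOLUME volser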
You can use the TOLOGPOINT option in both data sharing and non-data-sharing
environments. In a non-data-sharing environment, TOLOGPOINT and TORBA are
interchangeable keywords that identify an RBA on the log at which recovery stops.
TORBA can be used in a data sharing environment only if the TORBA value is
before the point at which data sharing was enabled.
An inline copy that is made during LOAD REPLACE can produce unpredictable
results if that copy is used later in a RECOVER TOCOPY operation. DB2 makes
the copy during the RELOAD phase of the LOAD operation. Therefore, the copy
does not contain corrections for unique index violations and referential constraint
violations because those corrections occur during the INDEXVAL and ENFORCE
phases.
To improve the performance of the recovery, take a full image copy of the page
sets, and then quiesce them using the QUIESCE utility. This allows RECOVER
TOLOGPOINT to recover the page sets to the quiesce point with minimal use of
the log.
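For example (a sketch; object names are placeholders, and a SYSCOPY DD
statement for the image copy output is assumed):
COPY TABLESPACE dbname.tsname FULL YES SHRLEVEL REFERENCE
QUIESCE TABLESPACE dbname.tsname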
If you are working with partitioned table spaces, image copies that were taken
prior to resetting the REORG-pending status of any partition of a partitioned table
space cannot be used for recovery to a current point in time. Avoid performing a
point in time recovery for a partitioned table space to a point in time that is after
the REORG-pending status was set, but before a rebalancing REORG was
performed. See information about RECOVER in Part 2 of DB2 Utility Guide and
Reference for details on determining an appropriate point in time and creating a
new recovery point.
If you use the REORG TABLESPACE utility with the FASTSWITCH YES option on
only some partitions of a table space, you must recover that table space at the
partition level. When you take an image copy of such a table space, the COPY
utility issues the informational message DSNU429I. For a complete description of
DSNU429I, see Part 2 of DB2 Messages.
A table space and all of its indexes (or a table space set and all related indexes)
should be recovered in a single RECOVER utility statement that specifies
TOLOGPOINT. The log point should identify a quiesce point or a common
SHRLEVEL REFERENCE copy point. This action avoids placing indexes in the
CHECK-pending or RECOVER-pending status. If the log point is not a common
quiesce point or SHRLEVEL REFERENCE copy point for all objects, use the
following procedure:
1. RECOVER table spaces to the log point.
2. Use concurrent REBUILD INDEX jobs to rebuild the indexes for each table
space.
This procedure ensures that the table spaces and indexes are synchronized and
eliminates the need to run the CHECK INDEX utility.
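For example, the statements might look like the following sketch (the object
names and log point value are placeholders):
RECOVER TABLESPACE dbname.tsname TOLOGPOINT X'byte-string'
REBUILD INDEX (ALL) TABLESPACE dbname.tsname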
The RECOVER utility sets various states on table spaces. The following point in
time recoveries set such states on table spaces:
v When the RECOVER utility finds an invalid column during the LOGAPPLY
phase on a LOB table space, it sets the table space to auxiliary-warning (AUXW)
status.
v When you recover a LOB table space to a point in time that is not a quiesce
point or to an image copy that is produced with SHRLEVEL CHANGE, the LOB
table space is placed in CHECK-pending (CHKP) status.
To remove various pending states from a table space, run the following utilities in
this order:
1. Use the REORG TABLESPACE utility to remove the REORP status.
2. If the table space status is auxiliary CHECK-pending status:
v Use CHECK LOB for all associated LOB table spaces.
v Use CHECK INDEX for all indexes on the LOB table spaces.
3. Use the CHECK DATA utility to remove the CHECK-pending status.
The RECOVER utility does not reset the values that DB2 generates for identity
columns.
| Important: The RECOVER utility does not back out CREATE or ALTER statements.
| After a recovery to a previous point in time, all previous alterations to identity
| column attributes remain unchanged. Because these alterations are not backed out,
| a recovery to a point in time might put identity column tables out of sync with the
| SYSIBM.SYSSEQUENCES catalog table. You might need to modify identity column
| attributes after a recovery to resynchronize identity columns with the catalog table.
Keep access method services LISTCAT listings of table space data sets that
correspond to each level of retained backup data.
For more information about using DSN1COPY, see Part 3 of DB2 Utility Guide and
Reference.
Even though DB2 data sets are defined as VSAM data sets, DB2 data cannot be
read or written by VSAM record processing because it has a different internal
format. The data can be accessed by VSAM control interval (CI) processing. If you
manage your own data sets, you can define them as VSAM linear data sets (LDSs),
and access them through services that support data sets of that type.
Access method services for CI and LDS processing are available in z/OS. IMPORT
and EXPORT use CI processing; PRINT and REPRO do not, but they do support
LDSs.
DFSMS Data Set Services (DFSMSdss) is available on z/OS and provides dump
and restore services that can be used on DB2 data sets. Those services use VSAM
CI processing.
Recommendation: To prepare for this procedure, run regular catalog reports that
include a list of all OBIDs in the subsystem. In addition, create catalog reports that
list dependencies on the table (such as referential constraints, indexes, and so on).
After a table is dropped, this information disappears from the catalog.
Important: When you recover a dropped object, you essentially recover a table
space to a point in time. If you want to use log records to perform forward
recovery on the table space, you need the IBM DB2 UDB Log Analysis Tool for
z/OS.
To perform this procedure, you need a full image copy or a DSN1COPY file that
contains the data from the dropped table.
For segmented table spaces, the image copy or DSN1COPY file must contain the
table when it was active (that is, created). Because of the way space is reused for
segmented table spaces, this procedure cannot be used if the table was not active
when the image copy or DSN1COPY was made. For nonsegmented table spaces,
the image copy or DSN1COPY file can contain the table when it was active or not
active.
When a table space is dropped, DB2 loses all information about the image copies
of that table space. Although the image copy data set is not lost, locating it might
require examination of image copy job listings or manually recorded information
about the image copies.
This section provides two separate procedures: one for user-managed data sets and
one for DB2-managed data sets.
User-managed data sets: The following procedure recovers dropped table spaces
that are not part of the catalog. In this procedure, you copy the data sets
containing data from the dropped table space to redefined data sets using the
OBID-translate function of DSN1COPY.
1. Find the DBID for the database, the PSID for the dropped table space, and the
OBIDs for the tables contained in the dropped table space. For information
about how to do this, see step 1 of “Recovery of an accidentally dropped
table” on page 473.
2. Rename the data set containing the dropped table space by using the
IDCAMS ALTER command. Rename both the CLUSTER and DATA portion of
the data set with a name that begins with the integrated catalog facility
catalog name or alias.
3. Redefine the original DB2 VSAM data sets.
Use the access method services LISTCAT command to obtain a list of data set
attributes. The data set attributes on the redefined data sets must be the same
as they were on the original data sets.
4. Use SQL CREATE statements to re-create the table space, tables, and any
indexes on the tables.
5. To allow DSN1COPY to access the DB2 data sets, stop the table space using
the following command:
-STOP DATABASE(database-name) SPACENAM(tablespace-name)
This step is necessary to prevent updates to the table space during this
procedure in the event that the table space has been left open.
6. Find the target identifiers of the objects you created in step 4 (which consist of
a PSID for the table space and the OBIDs for the tables within that table
space) by querying the SYSIBM.SYSTABLESPACE and SYSIBM.SYSTABLES
catalog tables.
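For example, queries of the following general form retrieve those identifiers (a
sketch; substitute your own database and table space names):
SELECT DBID, PSID FROM SYSIBM.SYSTABLESPACE
WHERE NAME = 'tsname' AND DBNAME = 'dbname'
SELECT NAME, OBID FROM SYSIBM.SYSTABLES
WHERE DBNAME = 'dbname' AND TSNAME = 'tsname'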
DB2-managed data sets: The following procedure recovers dropped table spaces
that are part of the catalog. If a consistent full image copy or DSN1COPY file is
available, you can use DSN1COPY to recover the data from the dropped table space.
| The online utilities BACKUP SYSTEM and RESTORE SYSTEM are described in
| Part 2 of DB2 Utility Guide and Reference, and the stand-alone utility DSNJU003 is
| described in Part 3 of DB2 Utility Guide and Reference.
| BACKUP SYSTEM online utility: This utility invokes z/OS Version 1 Release 5
| DFSMShsm services to take volume copies of the data in a non-data sharing DB2
| system. All DB2 data sets that are to be copied (and then recovered) must be
| managed by SMS.
| The BACKUP SYSTEM utility requires z/OS Version 1 Release 5 data structures
| called copy pools. Because these data structures are implemented in z/OS, DB2
| cannot generate copy pools automatically. Before you invoke the BACKUP
| SYSTEM utility, copy pools must be allocated in z/OS. For a more detailed
| description of how DB2 uses copy pools, see “Using DFSMShsm with the BACKUP
| SYSTEM utility” on page 36. For information about how to allocate a copy pool in
| z/OS, see z/OS DFSMSdfp Storage Administration Reference.
| You can use the BACKUP SYSTEM utility to ease the task of managing data
| recovery. Choose either DATA ONLY or FULL (data and logs), depending on
| whether your objective is to minimize the amount of data loss or to minimize the
| time for the recovery.
| Using data-only system backups: The BACKUP SYSTEM DATA ONLY utility control
| statement creates copies that contain only databases. The RESTORE SYSTEM utility
| uses these copies to restore databases to an arbitrary point in time. (The RESTORE
| SYSTEM utility can also use full system backup copies as input, but it does not
| restore the log volumes that the BACKUP SYSTEM FULL utility control statement
| creates.)
| Using full system backups: The BACKUP SYSTEM FULL utility control statement
| creates copies that contain both logs and databases. With these copies, you can
| recover your DB2 system to the point in time of a backup using normal DB2
| restart recovery, or to an arbitrary point in time by using the RECOVER SYSTEM
| utility.
| When you recover your DB2 system to the point in time of a full system backup,
| you could lose a few hours of data, but your recovery time would be only a few
| minutes (because restart is fast). For more information about this type of recovery,
| see “Recovery to the point in time of a backup” on page 481.
| RESTORE SYSTEM online utility: This utility invokes z/OS Version 1 Release 5
| DFSMShsm services to recover a DB2 system to a prior point in time by restoring
| the databases in the volume copies that have been provided by the BACKUP
| SYSTEM utility. After restoring the data, this utility can then recover to an
| arbitrary point in time.
| If you restore the system data by some other means, use the RESTORE SYSTEM
| utility with the LOGONLY option to skip the restore phase, and use the
| conditional restart control record (CRCR) to apply the logs to the restored
| databases.
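For example, the utility control statement might be as simple as the following
sketch:
RESTORE SYSTEM LOGONLY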
| Backup: You must perform the following procedure before an event occurs that
| creates a need to recover your DB2 system:
| 1. Use BACKUP SYSTEM DATA ONLY to take the backup. You can also take a
| FULL backup, although it is not needed.
| Recovery: If you have performed the appropriate backup procedures, you can use
| the following procedure to recover your DB2 system to an arbitrary point in time:
| 1. Stop the DB2 subsystem. If your system is a data sharing group, stop all
| members of the group.
| 2. Run DSNJU003 (change log inventory) with the CRESTART SYSPITR option
| specifying the log truncation point that corresponds to the point in time to
| which you want to recover the system. For data sharing systems, run
| DSNJU003 on all active members of the data-sharing group, and specify the
| same LRSN truncation point for each member. If the point in time that you
| specify for recovery is prior to the oldest system backup, you must manually
| restore the volume backup from tape. (A sample DSNJU003 job appears after
| this procedure.)
| 3. For data sharing systems, delete all CF structures that the data-sharing group
| owns.
| 4. Start DB2. For data sharing systems, start all active members.
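A sample DSNJU003 job for step 2 follows (a sketch; the library and BSDS data
set names are hypothetical, and log-truncation-point is the RBA or LRSN that
you choose):
//DSNJU3 EXEC PGM=DSNJU003
//STEPLIB DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1 DD DSN=prefix.BSDS01,DISP=OLD
//SYSUT2 DD DSN=prefix.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
CRESTART CREATE,SYSPITR=log-truncation-point
/*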
| Using backups from BACKUP SYSTEM: To recover a DB2 system to the point in
| time of a backup that the BACKUP SYSTEM utility creates, you must perform the
| following backup and recovery procedures. For more information about the
| BACKUP SYSTEM utility, see “BACKUP SYSTEM online utility” on page 479.
| Backup: You must perform the following procedure before an event occurs that
| creates a need to recover your DB2 system.
| 1. Use BACKUP SYSTEM FULL to take the system backup. DFSMShsm maintains
| up to 85 versions of system backups on disk at any given time.
| Recovery: If you have performed the appropriate backup procedures, you can use
| the following procedure to recover your DB2 system to the point in time of a
| backup:
| 1. Stop the DB2 subsystem. For data sharing systems, stop all members of the
| group.
| 2. Use the DFSMShsm command FRRECOV * COPYPOOL(cpname)
| GENERATION(gen) to restore the database and log copy pools that the
| BACKUP SYSTEM utility creates. In this command, cpname specifies the name
| of the copy pool, and gen specifies which version of the copy pool is restored.
| 3. For data sharing systems, delete all CF structures that are owned by this group.
| 4. Start DB2. For data sharing systems, start all active members.
| 5. For data sharing systems, execute the GRECP and LPL recovery, which recovers
| the changed data that was stored in the coupling facility at the time of the
| backup.
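For example, if the DB2 location name were DSNDB0G and the copy pools
followed the DSN$location-name$DB and DSN$location-name$LG naming
convention, the commands in step 2 might look like this sketch:
FRRECOV * COPYPOOL(DSN$DSNDB0G$DB) GENERATION(1)
FRRECOV * COPYPOOL(DSN$DSNDB0G$LG) GENERATION(1)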
| Using backups from FlashCopy: To recover a DB2 system to the point in time of
| a backup that FlashCopy creates, you must perform the following backup and
| recovery procedures. For more information about the FlashCopy function, see z/OS
| DFSMS Advanced Copy Services.
| Backup: You must perform the following procedure before an event occurs that
| creates a need to recover your DB2 system:
| 1. Issue the DB2 command SET LOG SUSPEND to suspend logging and update
| activity, and to quiesce 32K page writes and data set extensions. For data
| sharing systems, issue the command to each member of the group.
| 2. Use the FlashCopy function to copy all DB2 volumes. Include any ICF catalogs
| that are used by DB2, as well as active logs and BSDSs.
| 3. Issue the DB2 command SET LOG RESUME to resume normal DB2 update
| activity. To save disk space, you can use DFSMSdss to dump the disk copies
| you just created to a lower-cost medium, such as tape.
| Recovery: If you have performed the appropriate backup procedures, you can use
| the following procedure to recover your DB2 system to the point in time of a
| backup:
| 1. Stop the DB2 subsystem. For data sharing systems, stop all members of the
| group.
| Recovering with BACKUP SYSTEM: The following procedures use the BACKUP
| SYSTEM and RESTORE SYSTEM utilities to recover your DB2 system:
| Preparation:
| 1. Use BACKUP SYSTEM FULL to take the system backup.
| 2. Transport the system backups to the remote site.
| Recovery:
| 1. Run the DSNJU003 utility using the control statement CRESTART CREATE,
| SYSPITR=log-truncation-point, where log-truncation-point is the RBA or LRSN
| of the point to which you want to recover.
| 2. Start DB2.
| 3. Run the RESTORE SYSTEM utility using the control statement RESTORE SYSTEM
| to recover to the current time (or to the time of the last log transmission from
| the local site).
| Preparation:
| 1. Issue the DB2 command SET LOG SUSPEND to suspend logging and update
| activity, and to quiesce 32K page writes and data set extensions. For data
| sharing systems, issue the command to each member of the data sharing group.
| 2. Use the FlashCopy function to copy all DB2 volumes. Include any ICF catalogs
| that are used by DB2, as well as active logs and BSDSs.
| 3. Issue the DB2 command SET LOG RESUME to resume normal DB2 update
| activity.
| 4. Use DFSMSdss to dump the disk copies that you just created to tape, and then
| transport this tape to the remote site. You can also use other methods to
| transmit the copies you make to the remote site.
| Recovery:
| 1. Use DFSMSdss to restore the FlashCopy data sets to disk.
| 2. Run the DSNJU003 utility using the control statement CRESTART CREATE,
| SYSPITR=log-truncation-point, where log-truncation-point is the RBA or LRSN
| of the point to which you want to recover.
System action: If the IRLM abends, DB2 terminates. If the IRLM waits or loops,
then terminate the IRLM, and DB2 terminates automatically.
Operator action:
1. IPL z/OS and initialize the job entry subsystem.
2. If you normally run VTAM with DB2, start VTAM at this point.
3. Start the IRLM if you did not set it for automatic start when you installed DB2.
(See “Starting the IRLM” on page 347.)
4. Start DB2. (See “Starting DB2” on page 322.)
5. Use the RECOVER POSTPONED command if postponed-abort units of
recovery were reported after restarting DB2, and the AUTO option of the
LIMIT BACKOUT field on installation panel DSNTIPL was not specified.
6. Restart IMS or CICS.
a. IMS automatically connects and resynchronizes when it is restarted. (See
“Connecting to the IMS control region” on page 358.)
b. CICS automatically connects to DB2 if the CICS PLT contains an entry for
the attach module DSNCCOM0. Alternatively, use the command DSNC
STRT to connect the CICS attachment facility to DB2. (See “Connecting from
CICS” on page 354.)
If you know that a power failure is imminent, it is a good idea to issue STOP DB2
MODE(FORCE) to allow DB2 to come down cleanly before the power is
interrupted. If DB2 is unable to stop completely before the power failure, the
situation is no worse than if DB2 were still up.
Symptom: No I/O activity for the affected disk address. Databases and tables
residing on the affected unit are unavailable.
System action: The system returns SQLCODE 0 for the SELECT statement, because
the error was not in SQL or DB2, but in the application program. That error can be
identified and corrected, but the data in the table is now inaccurate.
If you use this procedure, you will lose any updates to the database that occurred
after the last checkpoint before the application error occurred.
1. Run the DSN1LOGP stand-alone utility on the log scope available at DB2
restart, using the SUMMARY(ONLY) option. For instructions on running
DSN1LOGP, see Part 3 of DB2 Utility Guide and Reference.
2. Determine the RBA of the most recent checkpoint before the first bad update
occurred, from one of the following sources:
v Message DSNR003I on the operator’s console. It looks (in part) like this:
DSNR003I RESTART ..... PRIOR CHECKPOINT
RBA=000007425468
Symptom:
v IMS waits, loops, or abends.
v DB2 attempts to send the following message to the IMS master terminal during
an abend:
DSNM002I IMS/TM xxxx DISCONNECTED FROM SUBSYSTEM
yyyy RC=RC
This message cannot be sent if the failure prevents messages from being
displayed.
v DB2 does not send any messages related to this problem to the z/OS console.
System action:
v DB2 detects that IMS has failed.
v DB2 either backs out or commits work in process.
v DB2 saves indoubt units of recovery. (These must be resolved at reconnection
time.)
Operator action:
1. Use normal IMS restart procedures, which include starting IMS by issuing the
z/OS START IMS command.
2. The following results occur:
v All DL/I and DB2 updates that have not been committed are backed out.
v IMS is automatically reconnected to DB2.
v IMS passes the recovery information for each entry to DB2 through the IMS
attachment facility. (IMS indicates whether to commit or roll back.)
v DB2 resolves the entries according to IMS instructions.
Problem 1
There are unresolved indoubt units of recovery. When IMS connects to DB2, DB2
has one or more indoubt units of recovery that have not been resolved.
Symptom: If DB2 has indoubt units of recovery that IMS did not resolve, the
following message is issued at the IMS master terminal:
DSNM004I RESOLVE INDOUBT ENTRY(S) ARE OUTSTANDING FOR
SUBSYSTEM xxxx
When this message is issued, IMS was either cold started or it was started with an
incomplete log tape. This message could also be issued if DB2 or IMS had an
abend due to a software error or other subsystem failure.
System action:
v The connection remains active.
v IMS applications can still access DB2 databases.
v Some DB2 resources remain locked out.
If the command is rejected because more network IDs are associated with the
unit of recovery, use the same command again, substituting the recovery ID for
the network ID.
(For a description of the OASN and the NID, see “Duplicate correlation IDs” on
page 362.)
Problem 2
Committed units of recovery should be aborted. At the time IMS connects to DB2,
DB2 has committed one or more indoubt units of recovery that IMS says should be
rolled back.
Symptom: By DB2 restart time, DB2 has committed and rolled back those units of
recovery about which DB2 was not indoubt. DB2 records those decisions, and at
connect time, verifies that they are consistent with the IMS/TM decisions.
An inconsistency can occur when the DB2 RECOVER INDOUBT command is used
before IMS attempted to reconnect. If this happens, the following message is issued
at the IMS master terminal:
DSNM005I IMS/TM RESOLVE INDOUBT PROTOCOL PROBLEM WITH
SUBSYSTEM xxxx
Because DB2 tells IMS to retain the inconsistent entries, the following message is
issued when the resolution attempt ends:
DFS3602I xxxx SUBSYSTEM RESOLVE-INDOUBT FAILURE,
RC=yyyy
System programmer action: Do not use the DB2 command RECOVER INDOUBT.
The problem is that DB2 was not indoubt but should have been.
Database updates have most likely been committed on one side (IMS or DB2) and
rolled back on the other side. (For a description of the OASN and the NID, see
“Duplicate correlation IDs” on page 362.)
1. Enter the IMS command /DISPLAY OASN SUBSYS DB2 to display the IMS list
of units of recovery that need to be resolved. The /DISPLAY OASN SUBSYS
DB2 command produces the OASNs in a decimal format, not a hexadecimal
format.
2. Issue the IMS command /CHANGE SUBSYS DB2 RESET to reset all the entries
in the list. (No entries are passed to DB2.)
3. Use DFSERA10 to print the log records recorded at the time of failure and
during restart. Look at the X'37', X'56', and X'5501FE' records at reconnect time.
Notify the IBM support center about the problem.
4. Determine what the inconsistent unit of recovery was doing by using the log
information, and manually make the DL/I and DB2 databases consistent.
Problem 1
An IMS application abends.
Symptom: The following messages appear at the IMS master terminal and at the
LTERM that entered the transaction involved:
DFS555 - TRAN tttttttt ABEND (SYSIDssss);
MSG IN PROCESS: xxxx (up to 78 bytes of data) timestamp
DFS555A - SUBSYSTEM xxxx OASN yyyyyyyyyyyyyyyy STATUS COMMIT|ABORT
System action:
The failing unit of recovery is backed out by both DL/I and DB2.
The connection between IMS and DB2 remains active.
Operator action: If you think the problem was caused by a user error, refer to Part
2 of DB2 Application Programming and SQL Guide. For procedures to diagnose DB2
problems, rather than user errors, refer to Part 3 of DB2 Diagnosis Guide and
Reference. If necessary, contact the IBM support center for assistance.
Problem 2
DB2 has failed or is not running.
In both cases, the master terminal receives a message (IMS message number
DFS554), and the terminal involved also receives a message (DFS555).
Operator action:
1. Restart DB2.
2. Follow the standard IMS procedures for handling application abends.
tranid can represent any abending CICS transaction and abcode is the abend code.
System action:
The failing unit of recovery is backed out in both CICS and DB2.
The connection remains.
Operator action:
1. For information about the CICS attachment facility abend, refer to Part 2 of
DB2 Messages.
2. For an AEY9 abend, start the CICS attachment facility.
3. For an ASP7 abend, determine why the CICS SYNCPOINT was unsuccessful.
4. For other abends, see DB2 Diagnosis Guide and Reference or CICS Transaction
Server for z/OS Problem Determination Guide for diagnostic procedures.
Operator action:
1. Correct the problem that caused CICS to terminate abnormally.
2. Do an emergency restart of CICS. The emergency restart performs each of the
following actions:
v Backs out inflight transactions that changed CICS resources
v Remembers the transactions with access to DB2 that might be indoubt.
3. Start the CICS attachment facility by entering the appropriate command for
your release of CICS. See “Connecting from CICS” on page 354. The CICS
attachment facility performs the following actions:
v Initializes and reconnects to DB2.
v Requests information from DB2 about the indoubt units of recovery and
passes the information to CICS.
v Allows CICS to resolve the indoubt units of recovery.
Symptom:
v CICS remains operative, but the CICS attachment facility abends.
v The CICS attachment facility issues a message giving the reason for the
connection failure, or it requests an X'04E' dump.
v The reason code in the X'04E' dump gives the reason for failure.
v CICS issues message DFH2206 indicating that the CICS attach facility has
terminated abnormally with the DSNC abend code.
v CICS application programs trying to access DB2 while the CICS attachment
facility is inactive are abnormally terminated. The code AEY9 is issued.
System action: CICS backs out the abnormally terminated transaction and treats it
like an application abend.
Operator action:
1. Start the CICS attachment facility by entering the appropriate command for
your release of CICS. See “Connecting from CICS” on page 354.
For CICS, a DB2 unit of recovery could be indoubt if the forget entry (X'FD59') of
the task-related installation exit is absent from the CICS system journal. The
indoubt condition applies only to the DB2 UR, because CICS will have already
committed or backed out any changes to its resources.
A DB2 unit of recovery is indoubt for DB2 if an End Phase 1 is present and the
Begin Phase 2 is absent.
Problem: When CICS connects to DB2, there are one or more indoubt units of
recovery that have not been resolved.
CICS retains details of indoubt units of recovery that were not resolved during
connection start up. An entry is purged when it no longer appears on the list
that DB2 presents or, if it is still present, when DB2 resolves it.
System programmer action: Any indoubt unit of recovery that CICS cannot resolve
must be resolved manually by using DB2 commands. This manual procedure
should be used rarely within an installation, because it is required only where
operational errors or software problems have prevented automatic resolution. Any
inconsistencies found during indoubt resolution must be investigated.
The corr_id (correlation ID) for CICS Transaction Server for z/OS 1.1 and previous
releases of CICS consists of:
Byte 1 Connection type: G = group, P = pool
Byte 2 Thread type: T = transaction (TYPE=ENTRY), G = group, C = command
(TYPE=COMD)
Bytes 3, 4
Thread number
Bytes 5 - 8
Transaction ID
The corr_id (correlation ID) for CICS Transaction Server for z/OS 1.2 and
subsequent releases of CICS consists of:
Bytes 1 - 4
Thread type: COMD, POOL, or ENTR
Bytes 5 - 8
Transaction ID
Bytes 9 - 12
Unique thread number
Two threads can sometimes have the same correlation ID when the connection has
been broken several times and the indoubt units of recovery have not been
resolved. In this case, the network ID (NID) must be used instead of the correlation
ID to uniquely identify indoubt units of recovery.
The network ID consists of the CICS connection name and a unique number
provided by CICS at the time the syncpoint log entries are written. This unique
number is an 8-byte store clock value that is stored in records written to both the
CICS system log and to the DB2 log at syncpoint processing time. This value is
referred to in CICS as the recovery token.
Step 2: Scan the CICS log for entries related to a particular unit of recovery: To
do this, search the CICS log for a PREPARE record (JCRSTRID X'F959') for the
task-related installation exit, in which the recovery token field (JCSRMTKN)
equals the value obtained from the network ID. The network ID is supplied by
DB2 in the
DISPLAY THREAD command output.
CICS journal print utility DFHJUP can be used when scanning the log. See CICS
Transaction Server for z/OS Operations and Utilities Guide for details on how to use
this program.
Step 3: Scan the DB2 log for entries related to a particular unit of recovery: To
do this, scan the DB2 log to locate the End Phase 1 record with the network ID
required. Then use the URID from this record to obtain the rest of the log records
for this unit of recovery.
When scanning the DB2 log, note that the DB2 start up message DSNJ099I
provides the start log RBA for this session.
The DSN1LOGP utility can be used for that purpose. See Part 3 of DB2 Utility
Guide and Reference for details on how to use this program.
Step 4: If needed, do indoubt resolution in DB2: DB2 can be directed to take the
recovery action for an indoubt unit of recovery using a DB2 RECOVER INDOUBT
command. Where the correlation ID is unique, use the following command:
DSNC -RECOVER INDOUBT (connection-name)
ACTION (COMMIT/ABORT)
ID (correlation-id)
If the transaction is a pool thread, use the value of the correlation ID (corr_id)
returned by DISPLAY THREAD for thread#.tranid in the command RECOVER
INDOUBT. In this case, the first letter of the correlation ID is P. The transaction ID
is in characters five through eight of the correlation ID.
When two threads have the same correlation ID, use the NID keyword instead of
the ID keyword. The NID value uniquely identifies the work unit.
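For example (a sketch; the connection name and network ID are placeholders):
DSNC -RECOVER INDOUBT (connection-name)
ACTION (COMMIT)
NID (network-id)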
The command results in one of the following messages, which indicate whether
the thread is committed or rolled back:
DSNV414I - THREAD thread#.tranid COMMIT SCHEDULED
DSNV415I - THREAD thread#.tranid ABORT SCHEDULED
When performing indoubt resolution, note that CICS and the attachment facility
are not aware of the commands to DB2 to commit or abort indoubt units of
recovery, because only DB2 resources are affected. However, CICS keeps details of
the indoubt units of recovery that it could not resolve.
Symptom:
v If the main subtask abends, an abend dump is requested. The contents of the
dump indicate the cause of the abend. When the dump is issued, shutdown of
the CICS attachment facility begins.
v If a thread subtask terminates abnormally, an X'04E' dump is issued and the
CICS application abends with a DSNC dump code. The X'04E' dump should
show the cause of the abend. The CICS attachment facility remains active.
System action:
v The CICS attachment facility shuts down if there is a main subtask abend.
v The matching CICS application abends with a DSNC dump code if a thread
subtask abends.
Operator action: Correct the problem that caused the abend by analyzing the CICS
formatted transaction dump or subtask SNAP dump. For more information about
analyzing these dumps, see Part 2 of DB2 Messages. If the CICS attachment facility
shuts down, use CICS commands to stop the execution of any CICS-DB2
applications.
System action:
v IMS and CICS continue.
v In-process CICS and IMS applications receive SQLCODE -923 (SQLSTATE
'57015') when accessing DB2.
Operator action:
1. Restart DB2 by issuing the command START DB2.
2. Reestablish the IMS connection by issuing the IMS command /START SUBSYS
DB2.
3. Reestablish the CICS connection by issuing the CICS attachment facility
command DSNC STRT.
Symptom: An out of space condition on the active log has very serious
consequences. When the active log becomes full, the DB2 subsystem cannot do any
work that requires writing to the log until an offload is completed.
Due to the serious implications of this event, the DB2 subsystem issues the
following warning message when the last available active log data set is 5% full
and reissues the message after each additional 5% of the data set space is filled.
Each time the message is issued, the offload process is started. IFCID trace record
0330 is also issued if statistics class 3 is active.
DSNJ110E - LAST COPYn ACTIVE LOG DATA SET IS nnn PERCENT FULL
If the active log fills to capacity, after having switched to single logging, the
following message is issued, and an offload is started. The DB2 subsystem then
halts processing until an offload has completed.
DSNJ111E - OUT OF SPACE IN ACTIVE LOG DATA SETS
System action: DB2 waits for an available active log data set before resuming
normal DB2 processing. Normal shutdown, with either QUIESCE or FORCE, is not
possible because the shutdown sequence requires log space to record system events
related to shutdown (for example, checkpoint records).
Operator action: Make sure offload is not waiting for a tape drive. If it is, mount a
tape and DB2 will process the offload command.
If you are uncertain about what is causing the problem, enter the following
command:
-ARCHIVE LOG CANCEL OFFLOAD
This command causes DB2 to restart the offload task. This might solve the
problem.
If this command does not solve the problem, you must determine the cause of the
problem and then issue the command again. If the problem cannot be solved
quickly, have the system programmer define additional active logs.
System programmer action: Additional active log data sets can permit DB2 to
continue its normal operation while the problem causing the offload failures is
corrected.
1. Use the z/OS CANCEL command to bring DB2 down.
2. Use the access method services DEFINE command to define new active log
data sets. Run utility DSNJLOGF to initialize the new active log data sets.
To minimize the number of offloads taken per day in your installation, consider
increasing the size of the active log data sets.
3. Define the new active log data sets in the BSDS by using the change log
inventory utility (DSNJU003). For additional details, see Part 3 of DB2 Utility
Guide and Reference.
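For example, the DSNJU003 control statements for step 3 might look like the
following sketch, assuming dual logging and hypothetical data set names:
NEWLOG DSNAME=DSNC810.LOGCOPY1.DS05,COPY1
NEWLOG DSNAME=DSNC810.LOGCOPY2.DS05,COPY2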
System action:
v Marks the failing log data set TRUNCATED in the BSDS.
v Goes on to the next available data set.
v If dual active logging is used, truncates the other copy at the same point.
v The data in the truncated data set is offloaded later, as usual.
v The data set is not stopped. It is reused on the next cycle. However, if there is a
DSNJ104 message indicating that there is a CATUPDT failure, then the data set
is marked “stopped”.
System programmer action: If you get the DSNJ104 message indicating CATUPDT
failure, you must use access method services and the change log inventory utility
(DSNJU003) to add a replacement data set. This requires that you bring DB2 down.
When you do this depends on how widespread the problem is.
v If the problem is localized and does not affect your ability to recover from any
further problems, you can wait until the earliest convenient time.
v If the problem is widespread (perhaps affecting an entire set of active log data
sets), take DB2 down after the next offload.
For instructions on using the change log inventory utility, see Part 3 of DB2 Utility
Guide and Reference.
Having completed one active log data set, DB2 found that the subsequent (COPY
n) data sets were not offloaded or were marked “stopped”.
System action: Continues in single mode until offloading completes, then returns
to dual mode. If the data set is marked “stopped”, however, then intervention is
required.
System programmer action: Check that offload is proceeding and is not waiting for
a tape mount. It might be necessary to run the print log map utility to determine
the status of all data sets.
If there are “stopped” data sets, you must use IDCAMS to delete the data sets, and
then re-add them using the change log inventory utility (DSNJU003). See Part 3 of
DB2 Utility Guide and Reference for information about using the change log
inventory utility.
System action:
Also, you can use IDCAMS REPRO to archive as much of the stopped active log
data set as possible. Then run the change log inventory utility to notify the BSDS
of the new archive log and its log RBA range. Repairing the active log does not
solve the problem, because offload does not go back to unload it.
If the active log data set has been stopped, it is not used for logging. The data set
is not deallocated; it is still used for reading.
Even if the data set is not stopped, you should replace an active log data set if
persistent errors occur. The operator is not told explicitly whether the
data set has been stopped. To determine the status of the active log data set, run
the print log map utility (DSNJU004). For more information about the print log
map utility, see Part 3 of DB2 Utility Guide and Reference.
z/OS dynamic allocation provides the ERROR STATUS. If the allocation was for
offload processing, the following message is also displayed:
DSNJ115I - OFFLOAD FAILED, COULD NOT ALLOCATE AN ARCHIVE DATA SET
Operator action: Check the allocation error code for the cause of the problem and
correct it. Ensure that drives are available and run the recovery job again. Caution
must be exercised if a DFSMSdfp ACS user-exit filter has been written for an
archive log data set, because this can cause the DB2 subsystem to fail on a device
allocation error attempting to read the archive log data set.
System action:
v Offload abandons that output data set (no entry in BSDS).
v Offload dynamically allocates a new archive and restarts offloading from the
point at which it was previously triggered. If there is dual archiving, the second
copy waits.
v If an error occurs on the new data set, the following additional actions occur:
– If in dual archive mode, message DSNJ114I is generated and the offload
processing changes to single mode.
DSNJ114I - ERROR ON ARCHIVE DATA SET, OFFLOAD CONTINUING
WITH ONLY ONE ARCHIVE DATA SET BEING GENERATED
– If in single mode, it abandons the output data set. Another attempt to offload
this RBA range is made the next time offload is triggered.
– The active log does not wrap around; if there are no more active logs, data is
not lost.
Operator action: Ensure that offload is allocated on a good drive and control unit.
System action:
v If a second copy exists, it is allocated and used.
v If a second copy does not exist, recovery fails.
Operator action: If you are recovering from tape, try recovering using a different
drive. If this does not work, contact the system programmer.
System programmer action: The only option is to recover to the last image copy or
the last quiesce point RBA. See Part 2 of DB2 Utility Guide and Reference for more
information about using the RECOVER utility.
System action: DB2 deallocates the data set on which the error occurred. If in dual
archive mode, DB2 changes to single archive mode and continues the offload. If
the offload cannot complete in single archive mode, the active log data sets cannot
be offloaded, and the status of the active log data sets remains NOTREUSEABLE.
Another attempt to offload the RBA range of the active log data sets is made the
next time offload is invoked.
System programmer action: If DB2 is operating with restricted active log resources
(see message DSNJ110E), quiesce the DB2 subsystem to restrict logging activity
until the z/OS ABEND is resolved.
In these cases, DB2 issues messages for the access failure for each log data set.
These messages provide information needed to resolve the access error. For
example:
DSNJ104I ( DSNJR206 RECEIVED ERROR STATUS 00000004
FROM DSNPCLOC FOR DSNAME=DSNC710.ARCHLOG1.A0000049
DSNJ104I ( DSNJR206 RECEIVED ERROR STATUS 00000004
FROM DSNPCLOC FOR DSNAME=DSNC710.ARCHLOG2.A0000049
*DSNJ153E ( DSNJR006 CRITICAL LOG READ ERROR
CONNECTION-ID = TEST0001
CORRELATION-ID = CTHDCORID001
LUWID = V71A.SYEC1DB2.B3943707629D=10
REASON-CODE = 00D10345
You can attempt to recover from temporary failures by issuing a positive reply to
message:
*26 DSNJ154I ( DSNJR126 REPLY Y TO RETRY LOG READ REQUEST, N TO ABEND
This section covers some of the BSDS problems that can occur. Problems not
covered here include:
v RECOVER BSDS command failure (messages DSNJ301I through DSNJ307I)
v Change log inventory utility failure (message DSNJ123E)
v Errors in the BSDS backup being dumped by offload (message DSNJ125I).
The error status is VSAM return code/feedback. For information about VSAM
codes, refer to z/OS DFSMS: Macro Instructions for Data Sets.
System action: DB2 attempts to resynchronize the BSDS data sets and restore dual
BSDS mode. If this attempt succeeds, DB2 restart continues automatically.
where rrrr is a z/OS dynamic allocation reason code. For information about these
reason codes, see z/OS MVS Programming: Authorized Assembler Services Guide.
where:
rc Is a return code
sfi Is subfunction information (sfi only appears with certain return codes)
ccc Is a function code
iii Is a job name
sss Is a step name
ddn Is a ddname
ddd Is a device number (if the error is related to a specific device)
ser Is a volume serial number (if the error is related to a specific volume)
xxx Is a VSAM cluster name
dsn Is a data set name
cat Is a catalog name.
For information about these codes, see z/OS MVS System Messages Volumes 1-10.
DSNB204I - OPEN OF DATA SET FAILED. DSNAME = dsn
System action:
v The table space is automatically stopped.
v Programs receive a -904 SQLCODE (SQLSTATE '57011').
v If the problem occurs during restart, the table space is marked for deferred
restart, and restart continues. The changes are applied later when the table space
is started.
Operator action:
DB2 associates a level ID with every page set or partition. Most operations detect a
down-level ID, and return an error condition, when the page set or partition is first
opened for mainline or restart processing. The exceptions are operations that do
not use the data:
LOAD REPLACE
RECOVER
REBUILD INDEX
DSN1COPY
DSN1PRNT
The RESET option of DSN1COPY resets the level ID on its output to a neutral
value that passes any level check. Hence, you can still use DSN1COPY to move
data from one system or table space to another.
The LEVELID option of the REPAIR utility marks a down-level table space or
index as current. See DB2 Utility Guide and Reference for details on using REPAIR.
For directory page sets, and the page sets for SYSIBM.SYSCOPY and
SYSIBM.SYSGROUP, a down-level ID is detected only at restart and not during
mainline operations.
The message also contains the level ID of the data set, the level ID that DB2
expects, and the name of the data set.
System action:
v If the error was reported during mainline processing, DB2 sends back a
“resource unavailable” SQLCODE to the application and a reason code that
explains the error.
v If the error was detected while a utility was processing, the utility ends with
return code 8.
System programmer action: You can recover by using any of the following
methods:
If the message occurs during normal operation, use any of the preceding methods
in addition to one of the following actions:
v Accept the down-level data set by changing its level ID.
The REPAIR utility contains a statement for that purpose. Run a utility job with
the statement REPAIR LEVELID. The LEVELID statement cannot be used in the
same job step with any other REPAIR statement.
Important
If you accept a down-level data set or disable down-level detection, your
data might be inconsistent.
For more information about using the utilities, see DB2 Utility Guide and Reference.
You can control down-level detection. Use the LEVELID UPDATE FREQ field of
panel DSNTIPL to either disable down-level detection or control how often the
level ID of a page set or partition is updated. DB2 accepts any value between 0
and 32767.
To control how often level ID updates are taken, specify a value between 1 and
32767. See Part 2 of DB2 Installation Guide for more information about choosing the
frequency of level ID updates.
If you need to recover LOB data that changed after your last image copy, follow
this procedure:
1. Run the RECOVER utility as you do for other table spaces:
RECOVER TABLESPACE dbname.lobts
If changes were made after the image copy, DB2 puts the table space in Aux
Warning status. The purpose of this status is to let you know that some of your
LOBs are invalid. Applications that try to retrieve the values of those LOBs will
receive SQLCODE -904. Applications can still access other LOBs in the LOB
table space.
2. Get a report of the invalid LOBs by running CHECK LOB on the LOB table
space:
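For example, the statement might take the following form, reusing the
dbname.lobts placeholder from step 1:
CHECK LOB TABLESPACE dbname.lobts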
Any table spaces identified in DSNU086I messages must be recovered using one of
the procedures in this section listed under “Operator Action”.
If error range recovery fails: If the error range recovery of the table space failed
because of a hardware problem, proceed as follows:
1. Use the command STOP DATABASE to stop the table space or table space
partition that contains the error range. This causes all the in-storage data
buffers associated with the data set to be externalized to ensure data
consistency during the subsequent steps.
2. Use the INSPECT function of the IBM Device Support Facility, ICKDSF, to
check for track defects and to assign alternate tracks as necessary. The physical
location of the defects can be determined by analyzing the output of messages
DSNB224I, DSNU086I, IOS000I, which were displayed on the system operator’s
console at the time the error range was created. If damaged storage media is
suspected, contact your hardware support before continuing.
where dddddddd is a table space name from the catalog or directory. dddddddd is the
table space that failed (for example, SYSCOPY, abbreviation for SYSIBM.SYSCOPY,
or SYSLGRNX, abbreviation for DSNDB01.SYSLGRNX). This message can indicate
either read or write errors. You can also get a DSNB224I or DSNB225I message,
which could indicate an input or output error for the catalog or directory.
Any catalog or directory table spaces that are identified in DSNU086I messages
must be recovered with this procedure.
If the DB2 directory or any catalog table is damaged, only user IDs with the
RECOVERDB privilege in DSNDB06, or an authority that includes that privilege,
can do the recovery. Furthermore, until the recovery takes place, only those IDs
can do anything with the subsystem. If an ID without proper authorization
attempts to recover the catalog or directory, message DSNU060I is displayed. If the
authorization tables are unavailable, message DSNT500I is displayed indicating the
resource is unavailable.
Operator action: Take the following steps for each table space in the DB2 catalog
and directory that has failed. If there is more than one, refer to the description of
RECOVER in Part 2 of DB2 Utility Guide and Reference for more information about
the specific order of recovery.
1. Stop the failing table spaces.
2. Determine the name of the data set that failed. There are two ways to do this:
v Check prefix.SDSNSAMP (DSNTIJIN), which contains the JCL for installing
DB2. Find the fully qualified name of the data set that failed by searching for
the name of the table space that failed (the one identified in the message as
SPACE = dddddddd).
v Construct the data set name by performing one of the following actions:
– If the table space is in the DB2 catalog, the data set name format is:
DSNC810.DSNDBC.DSNDB06.dddddddd.I0001.A001
where dddddddd is the name of the table space that failed.
– If the table space is in the DB2 directory, the data set name format is:
In this VSAM message, yy is 28, 30, or 32 for an out-of-space condition. Any other
values for yy indicate a damaged VVDS.
System action: Your program is terminated abnormally and one or more messages
are issued.
Operator action: For information about recovering the VVDS, consult z/OS
DFSMS: Managing Catalogs.
System action: For a demand request failure during restart, the object supported
by the data set (an index space or a table space) is stopped with deferred restart
pending. Otherwise, the state of the object remains unchanged. Programs receive a
-904 SQL return code (SQLSTATE '57011').
If the database qualifier of the data set name is DSNDB07, then the condition is on
| your work file database. Use “Procedure 7. Enlarge a fully extended data set for
| the work file database” on page 519.
In all other cases, if the data set has not reached its maximum DB2 size, then you
can enlarge it. (The maximum size is 2 GB for a data set of a simple space, and 1,
2, or 4 GB for a data set containing a partition. Large partitioned table spaces and
indexes on large partitioned table spaces have a maximum data set size of 4 GB.)
v If the data set has not reached the maximum number of VSAM extents, use
“Procedure 1. Extend a data set” on page 518.
v If the data set has reached the maximum number of VSAM extents, use either
“Procedure 2. Enlarge a fully extended data set (user-managed)” on page 518 or
“Procedure 3. Enlarge a fully extended data set (in a DB2 storage group)” on
page 518, depending on whether the data set is user-managed or DB2-managed.
User-managed data sets include essential data sets such as the catalog and the
directory.
Procedure 1. Extend a data set: If the data set is user-defined, provide more VSAM
space. You can add volumes with the access method services command ALTER
ADDVOLUMES or make room on the current volume.
If the data set is defined in a DB2 storage group, add more volumes to the storage
group by using the SQL ALTER STOGROUP statement.
For more information about DB2 data set extension, refer to “Extending
DB2-managed data sets” on page 31.
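For example (a sketch; the storage group and volume serial are hypothetical):
ALTER STOGROUP SGDATA01 ADD VOLUMES ('VOL002')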
Procedure 3. Enlarge a fully extended data set (in a DB2 storage group):
1. Use ALTER TABLESPACE or ALTER INDEX with a USING clause. (You do not
have to stop the table space before you use ALTER TABLESPACE.) You can
give new values of PRIQTY and SECQTY in either the same or a new DB2
storage group.
2. Use one of the following procedures. Keep in mind that no movement of data
occurs until this step is completed.
Procedure 4. Add a data set: If the object supported is user-defined, use the access
method services to define another data set. The name of the new data set must
continue the sequence begun by the names of the existing data sets that support
the object. The last four characters of each name are a relative data set number: If
the last name ended with A001, the next must end with A002, and so on. Also, be
sure to add either I or J in the name of the data set.
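For example, if the existing data sets of a hypothetical user-defined table space
end at A001, the next data set might be defined as follows; the catalog alias,
database, space name, volume, and space quantities are placeholders, and the I
(or J) qualifier must match the existing data sets:
DEFINE CLUSTER -
  ( NAME(DSNC810.DSNDBC.DSN8D81A.MYSPACE.I0001.A002) -
    LINEAR -
    REUSE -
    VOLUMES(VOL003) -
    CYLINDERS(100 10) -
    SHAREOPTIONS(3 3) ) -
  DATA -
    ( NAME(DSNC810.DSNDBD.DSN8D81A.MYSPACE.I0001.A002) )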
If the object is defined in a DB2 storage group, DB2 automatically tries to create an
additional data set. If that fails, access method services messages are sent to an
operator indicating the cause of the problem. Correcting that problem allows DB2
to get the additional space.
Procedure 7. Enlarge a fully extended data set for the work file database:
Use one of the following methods to add space for extension to the DB2 storage
group:
v Use SQL to create more table spaces in database DSNDB07.
Or,
v Execute these steps:
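For the first method, a minimal sketch of the SQL (the table space name, storage
group, and quantities are placeholders):
CREATE TABLESPACE DSN4K02
  IN DSNDB07
  USING STOGROUP DSN8G810
  PRIQTY 20480 SECQTY 2048
  BUFFERPOOL BP0
  CLOSE NO;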
Symptom: One of the following messages is issued at the end of utility processing,
depending upon whether or not the table space is partitioned.
DSNU561I csect-name - TABLESPACE= tablespace-name PARTITION= partnum
IS IN CHECK PENDING
DSNU563I csect-name - TABLESPACE= tablespace-name IS IN CHECK PENDING
System action: None. The table space is still available; however, it is not available
to the COPY, REORG, and QUIESCE utilities, or to SQL select, insert, delete, or
update operations that involve tables in the table space.
Operator action:
1. Use the START DATABASE ACCESS (UT) command to start the table space for
utility-only access.
2. Run the CHECK DATA utility on the table space. Take the following
recommendations into consideration (a hedged example follows this list):
v If you do not believe that violations exist, specify DELETE NO. If
violations do not in fact exist, this resets the check-pending status;
however, if violations do exist, the status is not reset.
v If you believe that violations exist, specify the DELETE YES option and an
appropriate exception table (see Part 2 of DB2 Utility Guide and Reference for
the syntax of this utility). This deletes all rows in violation, copies them to an
exception table, and resets the check-pending status.
v If the check-pending status was set during execution of the LOAD utility,
specify the SCOPE PENDING option. This checks only those rows added to
the table space by LOAD, rather than every row in the table space.
3. Correct the rows in the exception table, if necessary, and use the SQL INSERT
statement to insert them into the original table.
4. Issue the command START DATABASE to start the table space for RO or RW
access, whichever is appropriate. The table space is no longer in check-pending
status and is available for use. If you use the ACCESS (FORCE) option of this
command, the check-pending status is reset. However, this is not recommended
because it does not correct violations of referential constraints.
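A hedged illustration of this sequence, using the DB2 sample database, table
space, and table names (the exception table DSN8810.EDEPT is hypothetical):
-START DATABASE(DSN8D81A) SPACENAM(DSN8S81E) ACCESS(UT)
CHECK DATA TABLESPACE DSN8D81A.DSN8S81E
  SCOPE PENDING
  FOR EXCEPTION IN DSN8810.DEPT USE DSN8810.EDEPT
  DELETE YES
-START DATABASE(DSN8D81A) SPACENAM(DSN8S81E) ACCESS(RW)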
System action: When the error is detected, it is reported by a console message and
the application receives an SQL return code. For DB2 private protocol access,
SQLCODE -904 (SQLSTATE '57011') is returned with resource type 1001, 1002, or
1003. The resource name in the SQLCA contains VTAM return codes such as
RTNCD, FDBK2, RCPRI, and RCSEC, and any SNA SENSE information. See VTAM
for MVS/ESA Messages and Codes for more information.
System programmer action: The system programmer needs to review the VTAM or
TCP/IP return codes and might need to discuss the problem with a
communications expert. Many VTAM or TCP/IP errors, other than an inactive
remote LU, require someone with knowledge of VTAM or TCP/IP and the
network configuration to diagnose them.
Operator action: Correct the cause of the unavailable resource condition by taking
action required by the diagnostic messages appearing on the console.
System action: The distributed data facility (DDF) does not terminate if it has
already started and an individual CDB table becomes unavailable. Depending on
the severity of the failure, threads will either receive a -904 SQL return code
(SQLSTATE '57011') with resource type 1004 (CDB), or continue using VTAM
defaults. Only the threads that access locations that have not had any prior threads
will receive a -904 SQL return code. DB2 and DDF remain up.
Operator action: Correct the error based on the messages received, then stop and
restart DDF.
Problem 2
The DB2 CDB is not defined correctly. This occurs when DDF is started and the
DB2 catalog is accessed to verify the CDB definitions.
Symptom: A DSNL701I, 702I, 703I, 704I, or 705I message is issued to identify the
problem. Other messages describing the cause of the failure are also sent to the
console.
Operator action: Correct the error based on the messages received and restart DDF.
Symptom: In the event of a failure of a database access thread, the DB2 server
terminates the database access thread only if a unit of recovery exists. The server
deallocates the database access thread and then deallocates the conversation with
an abnormal indication (a negative SQL code), which is subsequently returned to
the requesting application. The returned SQL code depends on the type of remote
access:
v DB2 private protocol access
The application program receives a -904 SQL return code (SQLSTATE '57011')
with a resource type 1005 at the requesting site. The SNA sense in the resource
name contains the DB2 reason code describing the failure.
v DRDA access
For a database access thread or non-DB2 server, a DDM error message is sent to
the requesting site and the conversation is deallocated normally. The SQL error
status code is a -30020 with a resource type '1232' (agent permanent error
received from the server).
System action: Normal DB2 error recovery mechanisms apply with the following
exceptions:
v Errors caught in the functional recovery routine are automatically converted to
rollback situations. The allied thread sees conversation failures.
System programmer action: All diagnostic information related to the failure must
be collected at the serving site. For a DB2 DBAT, a dump is produced at the server.
Operator action: Communicate with the operator at the other site to determine
the appropriate corrective action, based on the messages that appear on the
consoles at both the requesting and responding sites. Operators at both sites
should gather the appropriate diagnostic information and give it to the
programmer for diagnosis.
Symptom: VTAM messages and DB2 messages are issued indicating that DDF is
terminating and explaining why.
Operator action: Correct the condition described in the messages received at the
console, and restart VTAM and DDF.
Symptom: TCP/IP messages and DB2 messages are issued indicating that TCP/IP
is unavailable.
Operator action: Correct the condition described in the messages received at the
console, and restart TCP/IP. You do not need to restart DDF after a TCP/IP failure.
Message DSNL502I is issued for system conversations that are active to the remote
LU at the time of the failure. This message contains the VTAM diagnostic
information about the cause of the failure.
Operator action: Communicate with the other sites involved regarding the
unavailable resource condition, and request that appropriate corrective action be
taken. If a DSNL502 message is received, the operator should activate the remote
LU.
Operator action: Use the DISPLAY THREAD command with the LOCATION and
DETAIL options to identify the LUWID and the session’s allocation for the waiting
thread. Then use the CANCEL DDF THREAD command to cancel the waiting
thread. If the CANCEL DDF THREAD command fails to break the wait (because
the thread is not suspended in DB2), try using VTAM commands such as VARY
TERM,SID=xxx. For additional information concerning canceling DDF threads, see
“The command CANCEL THREAD” on page 381 and “Using VTAM commands to
cancel threads” on page 382.
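For example (the token is a placeholder for the one in your display output):
-DISPLAY THREAD(*) LOCATION(*) DETAIL
-CANCEL DDF THREAD(123)
Here 123 is the token (or the LUWID) that the display report shows for the
waiting thread.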
To check for very long waits, look to see if the conversation timestamp is changing
from the last time used. If it is changing, the conversation thread is not hung, but
is taking more time for a long query. Also, look for conversation state changes and
determine what they mean.
Symptom: Message DSNL500I is issued at the requester for VTAM conversations (if
it is a DB2 subsystem) with return codes RTNCD=0, FDBK2=B, RCPRI=4,
RCSEC=5, meaning “Security Not Valid.” The server has deallocated the
conversation because the user is not allowed to access the server. For conversations
using DRDA access, LU 6.2 communications protocols present specific reasons for
Operator action: If it is a DB2 database access thread, the operator should provide
the DSNL030I message to the system programmer. If it is not a DB2 server, the
operator needs to work with the operator or programmer at the server to get
diagnostic information needed by the system programmer.
Symptom: Your local system hardware has suffered physical damage and is not
operational.
Operator action (at the recovery site): The procedures in this scenario differ from
other recovery procedures because you cannot use the hardware at your local DB2
site to recover data. You use hardware at a remote site to recover after a disaster
using one of the following three methods:
v “Restoring data from image copies and archive logs”
v “Using a tracker site for disaster recovery” on page 535
v “Using data mirroring” on page 544
2. If an integrated catalog facility catalog does not already exist, run job
DSNTIJCA to create a user catalog.
3. Use the access method services IMPORT command to import the integrated
catalog facility catalog.
4. Restore DB2 libraries, such as DB2 reslibs, SMP libraries, user program
libraries, user DBRM libraries, CLISTs, SDSNSAMP (or wherever the
installation jobs reside), JCL for user-defined table spaces, and so on.
5. Use IDCAMS DELETE NOSCRATCH to delete all catalog and user objects.
(Because step 3 imports a user ICF catalog, the catalog reflects data sets that
do not exist on disk.)
# Obtain a copy of installation job DSNTIJIN. This job creates DB2 VSAM and
# non-VSAM data sets. Change the volume serial numbers in the job to volume
# serial numbers that exist at the recovery site. Comment out the steps that
# create DB2 non-VSAM data sets, if these data sets already exist. Run
# DSNTIJIN. However, do not run DSNTIJID.
Data sharing
Obtain a copy of the installation job DSNTIJIN for the first data sharing
member to be migrated. Run DSNTIJIN on the first data sharing
member. For subsequent members of the data sharing group, run the
DSNTIJIN that defines the BSDS and logs.
b. To determine the RBA range for this archive log, use the print log map
utility (DSNJU004) to list the current BSDS contents. Find the most recent
archive log in the BSDS listing and add 1 to its ENDRBA value. Use this as
the STARTRBA. Find the active log in the BSDS listing that starts with this
RBA and use its ENDRBA as the ENDRBA.
Data Sharing
The LRSNs are also required.
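As an illustration of step 6b, a minimal print log map job has this general shape
(the library and BSDS data set names are placeholders):
//PRTLOG   EXEC PGM=DSNJU004
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=prefix.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*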
c. Use the change log inventory utility (DSNJU003) to register this latest
archive log tape data set in the archive log inventory of the BSDS just
restored. This is necessary because the BSDS image on an archive log tape
does not reflect the archive log data set residing on that tape.
Data sharing
Running DSNJU003 is critical for data sharing groups. Group buffer
pool checkpoint information is stored in the BSDS and needs to be
included from the most recent archive log.
After these archive logs are registered, use the print log map utility
(DSNJU004) with the GROUP option to list the contents of all the
BSDSs. You receive output that includes the start and end LRSN and
RBA values for the latest active log data sets (shown as
NOTREUSABLE). If you did not save the values from the DSNJ003I
message, you can get those values by running DSNJU004, which
creates output similar to the output that is shown in Figure 54 and
Figure 55 on page 528.
Data sharing
Do all other preparatory activities as you would for a single system.
Do these activities for each member of the data sharing group.
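As a hedged illustration of step 6c, a change log inventory (DSNJU003) control
statement of this general form registers the archive log; the data set name,
volume, unit, and RBA values are placeholders:
NEWLOG DSNAME=prefix.ARCHLOG1.A0000135,COPY1VOL=TAPE01,
  UNIT=TAPE,STARTRBA=000007429000,ENDRBA=000007629FFF,CATALOG=NO
For data sharing, also supply the STARTLRSN and ENDLRSN values.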
d. Use the change log inventory utility to adjust the active logs:
1) Use the DELETE option of the change log inventory utility (DSNJU003)
to delete all active logs in the BSDS. Use the BSDS listing produced in
step 6c on page 527 to determine the active log data set names.
2) Use the NEWLOG statement of the change log inventory utility
(DSNJU003) to add the active log data sets to the BSDS. Do not specify
a STARTRBA or ENDRBA value in the NEWLOG statement. This
indicates to DB2 that the new active logs are empty.
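For example, for two hypothetical copy-1 active log data sets:
DELETE DSNAME=prefix.LOGCOPY1.DS01
DELETE DSNAME=prefix.LOGCOPY1.DS02
NEWLOG DSNAME=prefix.LOGCOPY1.DS01,COPY1
NEWLOG DSNAME=prefix.LOGCOPY1.DS02,COPY1
Omitting STARTRBA and ENDRBA on the NEWLOG statements tells DB2 that the
new active logs are empty.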
e. If you are using the DB2 distributed data facility, run the change log
inventory utility with the DDF statement to update the LOCATION and
the LUNAME values in the BSDS.
f. Use the print log map utility (DSNJU004) to list the new BSDS contents
and ensure that the BSDS correctly reflects the active and archive log data
set inventories. In particular, ensure that:
v All active logs show a status of NEW and REUSABLE
v The archive log inventory is complete and correct (for example, the start
and end RBAs should be correct).
g. If you are using dual BSDSs, make a copy of the newly restored BSDS data
set to the second BSDS data set.
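A hedged sketch of step 6g using access method services (both data set names
are placeholders):
REPRO INDATASET(prefix.BSDS01) OUTDATASET(prefix.BSDS02)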
7. Optionally, you can restore archive logs to disk. Archive logs are typically
stored on tape, but restoring them to disk could speed later steps. If you elect
this option, and the archive log data sets are not cataloged in the primary
integrated catalog facility catalog, use the change log inventory utility to
update the BSDS. If the archive logs are listed as cataloged in the BSDS, DB2
allocates them using the integrated catalog and not the unit or volser specified
in the BSDS. If you are using dual BSDSs, remember to update both copies.
8. Use the DSN1LOGP utility to determine which transactions were in process at
the end of the last archive log. Use the following job control language where
yyyyyyyyyyyy is the STARTRBA of the last complete checkpoint within the
RBA range on the last archive log from the previous print log map:
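The JCL itself is not reproduced here; as a sketch of its general form only (data
set names are placeholders):
//SAMP     EXEC PGM=DSN1LOGP
//SYSPRINT DD SYSOUT=*
//SYSSUMRY DD SYSOUT=*
//ARCHIVE  DD DSN=prefix.ARCHLOG1.A0000135,DISP=SHR
//SYSIN    DD *
 STARTRBA(yyyyyyyyyyyy) SUMMARY(ONLY)
/*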
DB2 discards any log information in the bootstrap data set and the active logs
with an RBA greater than or equal to nnnnnnnnn000 or an LRSN greater than
nnnnnnnnnnnn as listed in the preceding CRESTART statements.
Use the print log map utility to verify that the conditional restart control
record that you created in the previous step is active.
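The control statement that creates that conditional restart control record is not
shown here; as a rough sketch only, a DSNJU003 statement of this general shape
creates it for a non-data-sharing system (nnnnnnnnn000 is the truncation point
described above):
CRESTART CREATE,ENDRBA=nnnnnnnnn000
For data sharing, ENDLRSN=nnnnnnnnnnnn is used instead.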
11. Enter the command START DB2 ACCESS(MAINT).
Data Sharing
If a discrepancy exists among the print log map reports as to the number
of members in the group, record the one that shows the highest number
of members. (This is an unlikely occurrence.) Start this DB2 subsystem
first using ACCESS(MAINT). DB2 prompts you to start each additional
DB2 subsystem in the group.
After all additional members are successfully restarted, and if you are
going to run single-system data sharing at the recovery site, stop all DB2
subsystems but one by using the STOP DB2 command with
MODE(QUIESCE).
If you planned to use the light mode when starting the DB2 group, add
the LIGHT parameter to the START command. Start the members that
run in LIGHT(NO) mode first, followed by the LIGHT(YES) members.
See “Preparing for disaster recovery” on page 451 for details on using
restart light at a recovery site.
What to do about utilities in progress: If any utility jobs were running after the
last time that the log was offloaded before the disaster, you might need to take
some additional steps. After restarting DB2, the following utilities need only be
terminated with the TERM UTILITY command:
v CHECK INDEX
v MERGECOPY
v MODIFY
v QUIESCE
v RECOVER
v RUNSTATS
v STOSPACE
Note: You must specify a value that is greater than zero for integer. If you
specify zero for integer, the SORTKEYS option does not apply.
If you have image copies from immediately before REORG failed, run
RECOVER with the option TOCOPY to recover the catalog and directory,
in the order shown in Part 2 of DB2 Utility Guide and Reference.
From the primary site, you transfer the BSDS and the archive logs, and that tracker
site runs periodic LOGONLY recoveries to keep the shadow data up-to-date. If a
disaster occurs at the primary site, the tracker site becomes the takeover site.
Because the tracker site has been shadowing the activity on the primary site, you
do not have to constantly ship image copies; the takeover time for the tracker site
might be faster because DB2 recovery does not have to use image copies.
Important: Do not attempt to start the tracker site when you are setting it up. You
must follow the procedure described in “Using the RECOVER utility to establish a
recovery cycle” on page 539.
| Complete the following procedure to use the RESTORE SYSTEM utility to establish
| a recovery cycle at your tracker site:
| 1. While your primary site continues its usual workload, send a copy of the
| primary site’s active log, archive logs, and BSDS to the tracker site.
| Send full image copies for the following objects:
| v Table spaces or partitions that are reorganized, loaded, or repaired with the
| LOG NO option after the latest recovery cycle.
| v Objects that, after the latest recovery cycle, have been recovered to a point
| in time
| Recommendation: If you are taking incremental image copies, run the
| MERGECOPY utility at the primary site before sending the copy to the tracker
| site.
| 2. At the tracker site, restore the BSDS that was received from the primary site.
| v Locate the BSDS in the latest archive log that is now at the tracker site.
| v Use the change log inventory utility (DSNJU003) to register this archive log
| in the archive log inventory of the new BSDS.
| In this control statement, nnnnnnnnn000 equals the RBA at which the latest
| archive log record ends, plus one. You must not specify the RBA at which an
| archive log begins, because you cannot cold start or skip logs in tracker
| mode.
|
| Data sharing
| If you are recovering a data sharing group, you must use the following
| CRESTART control statement on all members of the data sharing group.
| The ENDLRSN value must be the same for all members.
| CRESTART CREATE,ENDLRSN=nnnnnnnnnnnn,FORWARD=NO,BACKOUT=NO
| The ENDLRSN or ENDRBA value indicates the end log point for data
| recovery and for truncating the archive log. With ENDLRSN, the missing log
| records between the lowest and highest ENDLRSN values for all the members
| are applied during the next recovery cycle.
| 4. If the tracker site is a data sharing group, delete all DB2 coupling facility
| structures before restarting the tracker members.
| 5. If you used DSN1COPY to create a copy of SYSUTILX during the last tracker
| cycle, restore this copy with DSN1COPY.
| 6. At the tracker site, restart DB2 to begin a tracker site recovery cycle.
|
| Data sharing
| For data sharing, restart every member of the data sharing group.
|
|
| 7. At the tracker site, run the RESTORE SYSTEM utility with the LOGONLY
| option to apply the logs (both archive and active) to the data at the tracker
| site. See “Media failures during LOGONLY recovery” on page 541 for
| information about what to do if a media failure occurs during LOGONLY
| recovery.
| 8. If the RESTORE SYSTEM utility issues a return code of 4, use DSN1COPY to
| make a copy of SYSUTILX and indexes that are associated with SYSUTILX
| before you recover or rebuild those objects. The RESTORE SYSTEM utility
| issues a return code of 4 if applying the log marks one or more DB2 objects
| as RECP or RBDP.
The ENDLRSN or ENDRBA value indicates the end log point for data
recovery and for truncating the archive log. With ENDLRSN, the missing log
records between the lowest and highest ENDLRSN values for all the members
are applied during the next recovery cycle.
4. If the tracker site is a data sharing group, delete all DB2 coupling facility
structures before restarting the tracker members.
5. At the tracker site, restart DB2 to begin a tracker site recovery cycle.
Data sharing
For data sharing, restart every member of the data sharing group.
6. At the tracker site, submit RECOVER jobs to recover database objects. Run the
RECOVER utility with the LOGONLY option on all database objects that do
not require recovery from an image copy. See “Media failures during
LOGONLY recovery” on page 541 for information about what to do if a media
failure occurs during LOGONLY recovery.
You must recover database objects as the following procedure specifies:
a. Restore the full image copy or DSN1COPY of SYSUTILX.
Recovering SYSUTILX: If you are doing a LOGONLY recovery on
SYSUTILX from a previous DSN1COPY backup, make another DSN1COPY
copy of that table space after the LOGONLY recovery is complete and
before recovering any other catalog or directory objects.
After you recover SYSUTILX and either recover or rebuild its indexes, and
before recovering other system and user table spaces, find out what
utilities were running at the primary site.
b. Recover the catalog and directory.
See DB2 Utility Guide and Reference for information about the order of
recovery for the catalog and directory objects.
User-defined catalog indexes: Unless you require them for catalog query
performance, it is not necessary to rebuild user-defined catalog indexes
until the tracker DB2 becomes the takeover DB2. However, if you are
recovering user-defined catalog indexes, do the recover in this step.
c. If needed, recover other system data such as the data definition control
support table spaces and the resource limit facility table spaces.
Data sharing
If read/write shared data (GBP-dependent data) is in the advisory
recovery pending state, the tracker DB2 performs recovery processing.
Because the tracker DB2 always performs a conditional restart, the
postponed indoubt units of recovery are not recognized after the
tracker DB2 restarts.
10. After all recovery has completed at the tracker site, shut down the tracker site
DB2. This is the end of the tracker site recovery cycle. If you choose to, you
can stop and start the tracker DB2 several times before completing a recovery
cycle.
If an entire volume is corrupted and you are using DB2 storage groups, you cannot
use the ALTER STOGROUP statement to remove the corrupted volume and add
another as is documented for a non-tracker system. Instead, you must remove the
corrupted volume and reinitialize another volume with the same volume serial
number before you invoke the RECOVER utility for all table spaces and indexes on
that volume.
Because bringing up your tracker site as the takeover site destroys the tracker site
environment, you should save your complete tracker site prior to takeover site
testing. The tracker site can then be restored after the takeover site testing, and the
tracker site recovery cycles can be resumed.
| Recovering at a tracker site that uses RESTORE SYSTEM LOGONLY: If you use
| RESTORE SYSTEM LOGONLY in the recovery cycles at the tracker site, use the
| following procedure after a disaster to make the tracker site the takeover site:
| 1. If log data for a recovery cycle is en route or is available but has not yet
| been used in a recovery cycle, perform the procedure in “Using RESTORE SYSTEM
| LOGONLY to establish a recovery cycle” on page 537.
| 2. Ensure that the TRKSITE NO subsystem parameter is specified.
| 3. For scenarios other than data sharing, continue with the next step.
|
| Data sharing
| If this is a data sharing system, delete the coupling facility structures.
|
|
| 4. Start DB2 at the same RBA or ENDLRSN that you used in the most recent
| tracker site recovery cycle. Specify FORWARD=YES and BACKOUT=YES in the
| CRESTART statement (this takes care of uncommitted work).
| 5. Restart the objects that are in GRECP or LPL status using the following START
| DATABASE command:
| START DATABASE(*) SPACENAM(*)
| 6. If you used DSN1COPY to create a copy of SYSUTILX in the last recovery
| cycle, use DSN1COPY to restore that copy.
| 7. Terminate any in-progress utilities by using the following procedure:
| a. Enter the command DISPLAY UTIL(*).
| b. Run the DIAGNOSE utility with DISPLAY SYSUTIL to get the names of
| objects on which utilities are being run.
| c. Terminate in-progress utilities using the command TERM UTIL(*).
| See “What to do about utilities in progress” on page 533 for more
| information about how to terminate in-progress utilities and how to recover
| an object on which a utility was running.
| 8. Rebuild indexes, including IBM and user-defined indexes on the DB2 catalog
| and user-defined indexes on table spaces.
Recovering at a tracker site that uses the RECOVER utility: If you use the
RECOVER utility in the recovery cycles at your tracker site, use the following
procedure after a disaster to make the tracker site the takeover site:
1. Restore the BSDS and register the archive log from the last archive you received
from the primary site.
2. For scenarios other than data sharing, continue with the next step.
Data sharing
If this is a data sharing system, delete the coupling facility structures.
3. Ensure that the DEFER ALL and TRKSITE NO subsystem parameters are
specified.
4. If this is a non-data-sharing DB2, the log truncation point varies depending on
whether you have received more logs from the primary site since the last
recovery cycle:
v If you received no more logs from the primary site:
Start DB2 using the same ENDRBA that you used on the last tracker cycle.
Specify FORWARD=YES and BACKOUT=YES (this takes care of
uncommitted work). If you have fully recovered the objects during the
previous cycle, then they are current except for any objects that had
outstanding units of recovery during restart. Because the previous cycle
specified NO for FORWARD and BACKOUT and you have now specified
YES, affected data sets are placed in LPL. Restart the objects that are in LPL
status using the following START DATABASE command:
START DATABASE(*) SPACENAM(*)
After you issue the command, all table spaces and indexes that were
previously recovered are now current. Remember to rebuild any indexes that
were not recovered during the previous tracker cycle, including user-defined
indexes on the DB2 catalog.
v If you received more logs from the primary site:
Start DB2 using the truncated RBA nnnnnnnnn000, which is the ENDRBA + 1
of the latest archive log. Specify FORWARD=YES and BACKOUT=YES. Run
your recoveries as you did during recovery cycles.
Important: The scenarios and procedures for data mirroring are intended for
environments that mirror an entire DB2 subsystem or data sharing group, which
includes the catalog, directory, user data, BSDS, and active logs. You must mirror
all volumes in such a way that they terminate at exactly the same point. You can
achieve this final condition using consistency groups, which are described in
“Consistency groups” on page 545.
To use data mirroring for disaster recovery, you must mirror data from your local
site with a method that does not reproduce a rolling disaster at your recovery site.
To recover DB2 with data integrity, you must use volumes that end at a consistent
point in time for each DB2 subsystem or data sharing group. Mirroring a rolling
disaster causes volumes at your recovery site to end over a span of time rather
than at one single point.
Figure 56. A rolling disaster (primary and secondary database devices; a disk at the
primary site fails at 12:00)
Example: In a rolling disaster, the following events at the primary site cause data
inconsistency at your recovery site. This data inconsistency example follows the
same scenario that Figure 56 depicts.
1. A table space is updated in the buffer pool (11:58).
2. The log record is written to disk on logical storage subsystem 1 (12:00).
3. Logical storage subsystem 2 fails (12:01).
4. The update to the table space is externalized to logical storage subsystem 2 but
is not written because subsystem 2 failed (12:02).
5. The log record that indicates that the table space update was made is written to
disk on logical storage subsystem 1 (12:03).
6. Logical storage subsystem 1 fails (12:04).
Because the logical storage subsystems do not fail at the same point in time, they
contain inconsistent data. In this scenario, the log indicates that the update is
applied to the table space, but the update is not applied to the data volume that
holds this table space.
Attention: Any disaster recovery solution that uses data mirroring must
guarantee that all volumes at the recovery site contain data for the same point in
time.
Consistency groups
Generally, a consistency group is a collection of volumes that contain consistent,
related data. This data can span logical storage subsystems and disk subsystems.
For DB2 specifically, a consistency group contains an entire DB2 subsystem or a
DB2 data sharing group. The following DB2 elements comprise a consistency
group:
v Catalog tables
v Directory tables
You can use various methods to create consistency groups. The following DB2
services enable you to create consistency groups:
v XRC I/O timestamping and system data mover
v FlashCopy consistency groups
v GDPS freeze policies
v The DB2 SET LOG SUSPEND command
v The DB2 BACKUP SYSTEM utility
When a rolling disaster strikes your primary site, consistency groups guarantee
that all volumes at the recovery site contain data for the same point in time. In a
data mirroring environment, you must perform both of the following actions for
each consistency group that you maintain:
v Mirror data to the secondary volumes in the same sequence that DB2 writes data
to the primary volumes.
In many processing situations, DB2 must complete one write operation before it
begins another write operation on a different disk group or a different storage
server. A write operation that depends on a previous write operation is called a
dependent write. Do not mirror a dependent write if you have not mirrored the
write operation on which the dependent write depends. If you mirror data out
of sequence, your recovery site will contain inconsistent data that you cannot
use for disaster recovery.
v Temporarily suspend and queue write operations to create a group point of
consistency when an error occurs between any pair of primary and secondary
volumes.
When an error occurs that prevents the update of a secondary volume in a
single-volume pair, this error might mark the beginning of a rolling disaster. To
prevent your secondary site from mirroring a rolling disaster, you must suspend
and queue data mirroring with the following steps after a write error between
any pairs:
1. Suspend and queue all write operations in the volume pair that experiences
a write error.
2. Invoke automation that temporarily suspends and queues data mirroring to
all your secondary volumes.
3. Save data at the secondary site at a point of consistency.
4. If a rolling disaster does not strike your primary site, resume normal data
mirroring after some amount of time that you have defined. If a rolling
disaster does strike your primary site, follow the recovery procedure in
“Recovering in a data mirroring environment.”
You do not need to restore DB2 image copies or apply DB2 logs to bring DB2 data
to the current point in time when you use data mirroring. However, you might
need image copies at the recovery site if the LOAD or RECOVER utilities were
active at the time of the disaster.
To recover at the secondary site after a disaster, use the following procedure:
1. IPL all z/OS images at your recovery site that correspond to the z/OS images
that you lost at your primary site.
2. For scenarios other than data sharing, continue with the next step.
Data sharing
For data sharing groups, you must remove old information from the
coupling facility.
a. Enter the following z/OS command to display the structures for this
data sharing group:
D XCF,STRUCTURE,STRNAME=grpname*
b. For group buffer pools and the lock structure, enter the following
command to force off the connections in those structures:
SETXCF FORCE,CONNECTION,STRNAME=strname,CONNAME=ALL
c. Delete all the DB2 coupling facility structures by using the following
command for each structure:
SETXCF FORCE,STRUCTURE,STRNAME=strname
3. If you are using the distributed data facility, set LOCATION and LUNAME in
the BSDS to values that are specific to your new primary site. To set
LOCATION and LUNAME run the stand-alone utility DSNJU003 (change log
inventory) with the following control statement:
DDF LOCATION=locname, LUNAME=luname
4. Start all DB2 members using local DSNZPARM data sets and perform a
normal restart.
Data sharing
For data sharing groups, DB2 performs group restart. Shared data sets
are set to GRECP (group buffer pool RECOVER-pending) status, and
pages are added to the LPL (logical page list).
5. For scenarios other than data sharing, continue to the next step.
6. Use the following DB2 command to display all utilities that the failure
interrupted:
-DISPLAY UTILITY(*)
If utilities are pending, record the output that this command produces and
continue to step 7. You cannot restart utilities at a recovery site. You will
terminate these utilities in step 8. If no utilities are pending, continue to
step 9.
7. Use the DIAGNOSE utility to access the SYSUTILX directory table. This
directory table is not listed in the DB2 SQL Reference because you can access it
only when you use the DIAGNOSE utility. (The DIAGNOSE utility is
normally intended to be used under the direction of IBM Software Support.)
Use the following control statement when you run the DIAGNOSE utility:
DIAGNOSE DISPLAY SYSUTILX
END DIAGNOSE
Record the phase in which each pending utility was interrupted, and record
the object on which each utility was operating.
8. Terminate all pending utilities with the following command:
-TERM UTILITY(*)
9. For scenarios other than data sharing, continue to the next step.
Data sharing
For data sharing groups, use the following START DATABASE command
on each database that contains objects in GRECP or LPL status:
-START DATABASE(database) SPACENAM(*)
When you use the START DATABASE command to recover objects, you
do not need to provide DB2 with image copies.
10. Start all remaining database objects with the following START DATABASE
command:
-START DATABASE(*) SPACENAM(*)
To recover at an XRC secondary site after a disaster, use the following procedure:
1. Issue the XEND XRC TSO command to end the XRC session.
2. Issue the XRECOVER XRC TSO command. This command changes your
secondary site to your primary site and applies the XRC journals to recover
data that was in transit when your primary site failed.
3. Complete the procedure in “Recovering in a data mirroring environment” on
page 546.
Applications
The following IMS and TSO applications are running at Seattle and accessing both
local and remote data.
v IMS application, IMSAPP01, at Seattle, accessing local data and remote data by
DRDA access at San Jose, which is accessing remote data on behalf of Seattle by
DB2 private protocol access at Los Angeles.
v TSO application, TSOAPP01, at Seattle, accessing data by DRDA access at San
Jose and at Los Angeles.
Threads
The following threads are described and keyed to Figure 57 on page 551. Data base
access threads (DBAT) access data on behalf of a thread (either allied or DBAT) at a
remote requester.
v Allied IMS thread A at Seattle accessing data at San Jose by DRDA access.
– DBAT at San Jose accessing data for Seattle by DRDA access 1 and
requesting data at Los Angeles by DB2 private protocol access 2.
– DBAT at Los Angeles accessing data for San Jose by DB2 private protocol
access 2.
v Allied TSO thread B at Seattle accessing local data and remote data at San Jose
and Los Angeles, by DRDA access.
– DBAT at San Jose accessing data for Seattle by DRDA access 3.
– DBAT at Los Angeles accessing data for Seattle by DRDA access 4.
Figure 57. Resolving indoubt threads. Results of issuing DIS THD TYPE(ACTIVE) at each
DB2 system. The report shows the following threads:
v DB2 at SEA (IBMSEADB20001):
– Allied thread A (IMS): CONNID=SEAIMS01, CORRID=xyz, PLAN=IMSAPP01,
NID=A5, LUWID=15, TOKEN=1
– Allied thread B (TSO): CONNID=BATCH, CORRID=abc, PLAN=TSOAPP01,
LUWID=16, TOKEN=2
v DB2 at SJ (IBMSJ0DB20001):
– DBAT 1: CONNID=SEAINS01, CORRID=xyz, PLAN=IMSAPP01, LUWID=15, TOKEN=8
– DBAT 3: CONNID=BATCH, CORRID=abc, PLAN=TSOAPP01, LUWID=16, TOKEN=6
v DB2 at LA (IBMLA0DB20001):
– DBAT 2: CONNID=SERVER, CORRID=xyz, PLAN=IMSAPP01, LUWID=15, TOKEN=4
– DBAT 4: CONNID=BATCH, CORRID=abc, PLAN=TSOAPP01, LUWID=16, TOKEN=5
For the purposes of this section, assume that both applications have updated data
at all DB2 locations. In the following problem scenarios, the error occurs after the
coordinator has recorded the commit decision, but before the affected participants
have recorded the commit decision. These participants are therefore indoubt.
The TSO application is told that the commit succeeded. If the application continues
and processes another SQL request, it is rejected with an SQL code indicating it
must roll back before any more SQL requests can be processed. This is to ensure
that the application does not proceed with an assumption based upon data
retrieved from LA, or with the expectation that cursor positioning at LA is still
intact.
At LA, an IFCID 209 trace record is written. After the alert is generated and the
message is displayed, DBAT 4 is placed in the indoubt state. All locks remain
held until resolution occurs. The thread appears in a display thread report for
indoubt threads.
The DB2 systems at both SEA and LA periodically attempt to reconnect and
automatically resolve the indoubt thread. If the communication failure only
affects the session being used by the TSO application, and other sessions are
available, automatic resolution occurs in a relatively short time. At that time,
message DSNL407 is displayed by both DB2 subsystems.
System programmer action: Determine and correct the cause of the communication
failure. When corrected, automatic resolution of the indoubt thread occurs within a
short time. If the failure cannot be corrected for a long time, call the database
administrator. The database administrator might want to make a heuristic decision
to release the database resources held for the indoubt thread. See “Making a
heuristic decision.”
After determining the correct action to take, issue the RECOVER INDOUBT
command at the LA DB2 subsystem, specifying the LUWID and the correct action.
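For example, if the decision is commit, the command has this general shape (the
LUWID value is a placeholder for the one from the display thread report):
-RECOVER INDOUBT ACTION(COMMIT) LUWID(16)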
Symptom: When IMS is cold started, and later reconnects with the SEA DB2
subsystem, IMS is not able to resolve the indoubt thread with DB2. Message
DSNM004I is displayed at the IMS master terminal. This is the same process as
described in “Resolution of indoubt units of recovery” on page 490.
When the indoubt thread at the SEA DB2 subsystem is resolved by issuing the
RECOVER INDOUBT command, completion of the two-phase commit process with
the DB2 subsystems at SJ and LA occurs, and the unit of work commits or aborts.
The IMS subsystem at SEA is operational and has the responsibility of resolving
indoubt units with the SEA DB2.
Symptom: The DB2 subsystem at SEA is started with a conditional restart record in
the BSDS indicating a cold start:
v When the IMS subsystem reconnects, it attempts to resolve the indoubt thread
identified in IMS as NID=A5. IMS has a resource recovery element (RRE) for this
thread. The SEA DB2 informs IMS that it has no knowledge of this thread. IMS
does not delete the RRE and it can be displayed by using the IMS DISPLAY
OASN command. The SEA DB2 also:
– Generates message DSN3005 for each IMS RRE for which DB2 has no
knowledge.
– Generates an IFCID 234 trace event.
v When the DB2 subsystems at SJ and LA reconnect with SEA, each detects that
the SEA DB2 has cold started. Both the SJ DB2 and the LA DB2:
– Display message DSNL411.
– Generate alert A001.
– Generate an IFCID 204 trace event.
v A display thread report of indoubt threads at both the SJ and LA DB2
subsystems shows the indoubt threads and indicates that the coordinator has
cold started.
System action: The DB2 subsystems at both SJ and LA accept the cold start
connection from SEA. Processing continues, waiting for a heuristic decision to
resolve the indoubt threads.
Also not known to the administrator at LA is the fact that SEA distributed the
LUWID=16 thread to SJ where it is also indoubt. Likewise, the administrator at SJ
does not know that LA has an indoubt thread for the LUWID=16 thread. It is
important that both SJ and LA make the same heuristic decision. It is also
important that the administrators at SJ and LA determine the originator of the
two-phase commit.
The recovery log of the originator indicates whether the decision was commit or
abort. The originator might have more accessible functions to determine the
decision. Even though the SEA DB2 cold started, you might be able to determine
the decision from its recovery log. Or, if the failure occurred before the decision
was recorded, you might be able to determine the name of the coordinator, if the
The LUWID contains the name of the logical unit (LU) where the distributed
logical unit of work originated. This logical unit is most likely in the system that
originated the two-phase commit.
The administrator must determine if the LU name contained in the LUWID is the
same as the LU name of the SEA DB2 subsystem. If this is not the case (it is the
case in this example), then the SEA DB2 is a participant in the logical unit of work,
and is being coordinated by a remote system. You must communicate with that
system and request that facilities of that system be used to determine if the logical
unit of work is to be committed or aborted.
If the LUWID contains the LU name of the SEA DB2 subsystem, then the logical
unit of work originated at SEA and is either an IMS, CICS, TSO, or BATCH allied
thread of the SEA DB2. The display thread report for indoubt threads at a DB2
participant includes message DSNV458 if the coordinator is remote. This line
provides external information provided by the coordinator to assist in identifying
the thread. A DB2 coordinator provides the following identifier:
connection-name.correlation-id
Anything else represents an IMS or CICS connection name. The thread represents a
local application and the commit coordinator is the IMS or CICS system using this
connection name.
In our example, the administrator at SJ sees that both indoubt threads have a
LUWID with the LU name the same as the SEA DB2 LU name, and furthermore,
that one thread (LUWID=15) is an IMS thread and the other thread (LUWID=16) is
a batch thread. The LA administrator sees that the LA indoubt thread (LUWID=16)
originates at SEA DB2 and is a batch thread.
The originator of a DB2 batch thread is DB2. To determine the commit or abort
decision for the LUWID=16 indoubt threads, the SEA DB2 recovery log must be
analyzed, if it can be. The DSN1LOGP utility must be executed against the SEA
DB2 recovery log, looking for the LUWID=16 entry. There are three possibilities:
1. No entry is found. That portion of the DB2 recovery log was not available.
2. An entry is found but incomplete.
3. An entry is found and the status is committed or aborted.
In the third case, the heuristic decision at SJ and LA for indoubt thread LUWID=16
is indicated by the status indicated in the SEA DB2 recovery log. In the other two
cases, the recovery procedure used when cold starting DB2 is important. If
The recovery logs at SJ and LA can help determine what activity took place. If it
can be determined that updates were performed at either SJ, LA, or both (but not
at SEA), and if both SJ and LA take the same heuristic action, there should be no
data inconsistency. If updates were also performed at SEA, then looking at the SEA
data might help determine what action to take. In any case, both SJ and LA should
make the same decision.
For the indoubt thread with LUWID=15 (the IMS coordinator), there are several
alternative paths to recovery. The SEA DB2 has been restarted. When it reconnects
with IMS, message DSN3005 is issued for each thread that IMS is trying to resolve
with DB2. The message indicates that DB2 has no knowledge of the thread that is
identified by the IMS assigned NID. The outcome for the thread, commit or abort,
is included in the message. Trace event IFCID=234 is also written to statistics class
4 containing the same information.
If there is only one such message, or one such entry in statistics class 4, then the
decision for indoubt thread LUWID=15 is known and can be communicated to the
administrator at SJ. If there are multiple such messages, or multiple such trace
events, you must match the IMS NID with the network LUWID. Again,
DSN1LOGP should be used to analyze the SEA DB2 recovery log if possible. There
are now four possibilities:
1. No entry is found. That portion of the DB2 recovery log was not available.
2. An entry is found but incomplete because of lost recovery log.
3. An entry is found and the status is indoubt.
4. An entry is found and the status is committed or aborted.
In the fourth case, the heuristic decision at SJ for the indoubt thread LUWID=15 is
determined by the status indicated in the SEA DB2 recovery log. If an entry is
found whose status is indoubt, DSN1LOGP also reports the IMS NID value. The
NID is the unique identifier for the logical unit of work in IMS and CICS.
Knowing the NID allows correlation to the DSN3005 message, or to the 234 trace
event, which provides the correct decision.
If an incomplete entry is found, the NID may or may not have been reported by
DSN1LOGP. If it was, use it as previously discussed. If no NID is found, or the
SEA DB2 has not been started, or reconnecting to IMS has not occurred, then the
correlation-id used by IMS to correlate the IMS logical unit of work to the DB2
thread must be used in a search of the IMS recovery log. The SEA DB2 provided
this value to the SJ DB2 when distributing the thread to SJ. The SJ DB2 displays
this value in the report generated by DISPLAY THREAD TYPE(INDOUBT).
Symptom: When the DB2 at SEA reconnects with the DB2 at LA, indoubt
resolution occurs for LUWID=16. Both systems detect heuristic damage and both
generate alert A004; each writes an IFCID 207 trace record. Message DSNL400 is
displayed at LA and message DSNL403 is displayed at SEA.
This is not an easy task. Since the time of the heuristic action, the data at LA might
have been read or written by many applications. Correcting the damage can
involve reversing the effects of these applications as well. The tools available are:
v DSN1LOGP. The summary report of this utility identifies the table spaces
modified by the LUWID=16 thread.
v The statistics trace class 4 contains an IFCID 207 entry. This entry identifies the
recovery log RBA for the LUWID=16 thread.
When DB2 recovery log damage terminates restart processing, DB2 issues messages
to the console identifying the damage and giving an abend reason code. (The SVC
dump title includes a more specific abend reason code to assist in problem
diagnosis.) If the explanations in Part 2 of DB2 Messages indicate that restart failed
because of some problem not related to a log error, refer to Part 2 of DB2 Diagnosis
Guide and Reference and contact the IBM support center.
To minimize log problems during restart, the system requires two copies of the
BSDS. Dual logging is also recommended.
Basic approaches to recovery: There are two basic approaches to recovery from
problems with the log:
v Restart DB2, bypassing the inaccessible portion of the log and rendering some
data inconsistent. Then recover the inconsistent objects by using the RECOVER
utility, or re-create the data using REPAIR. Use the methods that are described
following this procedure to recover the inconsistent data.
v Restore the entire DB2 subsystem to a prior point of consistency. The method
requires that you have first prepared such a point; for suggestions, see
“Preparing to recover to a prior point of consistency” on page 449. Methods of
recovery are described under “Unresolvable BSDS or log data set problem
during restart” on page 578.
Bypassing the damaged log: Even if the log is damaged, and DB2 is started by
circumventing the damaged portion, the log is the most important source for
determining what work was lost and what data is inconsistent. For information
about data sharing considerations, see Chapter 5 of DB2 Data Sharing: Planning and
Administration.
Bypassing a damaged portion of the log generally proceeds with the following
steps:
1. DB2 restart fails. A problem exists on the log, and a message identifies the
location of the error. The following abend reason codes, which appear only in
the dump title, can be issued for this type of problem. This is not an exhaustive
list; other codes might occur.
Figure 58. Damaged portion of the log (time line; the log RBA range from X to Y is
inaccessible)
2. DB2 cannot skip over the damaged portion of the log and continue restart
processing. Instead, you restrict processing to only a part of the log that is error
free. For example, the damage shown in Figure 58 occurs in the log RBA range
from X to Y. You can restrict restart to all of the log before X; then changes later
than X are not made. Or you can restrict restart to all of the log after Y; then
changes between X and Y are not made. In either case, some amount of data is
inconsistent.
3. You identify the data that is made inconsistent by your restart decision. With
the SUMMARY option, the DSN1LOGP utility scans the accessible portion of
the log and identifies work that must be done at restart, namely, the units of
recovery to be completed and the page sets that they modified. (For
instructions on using DSN1LOGP, see Part 3 of DB2 Utility Guide and Reference.)
Because a portion of the log is inaccessible, the summary information might not
be complete. In some circumstances, your knowledge of work in progress is
needed to identify potential inconsistencies.
4. You use the CHANGE LOG INVENTORY utility to identify the portion of the
log to be used at restart, and to tell whether to bypass any phase of recovery.
You can choose to do a cold start and bypass the entire log.
5. You restart DB2. Data that is unaffected by omitted portions of the log is
available for immediate access.
6. Before you allow access to any data that is affected by the log damage, you
resolve all data inconsistencies. That process is described under “Resolving
inconsistencies resulting from a conditional restart” on page 584.
Where to start: The specific procedure depends on the phase of restart that was in
control when the log problem was detected. On completion, each phase of restart
writes a message to the console. You must find the last of those messages in the
console log. The next phase after the one identified is the one that was in control
when the log problem was detected. Accordingly, start at:
v “Log initialization or current status rebuild failure recovery” on page 561
v “Failure during forward log recovery” on page 570
v “Failure during backward log recovery” on page 575
Another scenario ( “Failure resulting from total or excessive loss of log data” on
page 581) provides information to use if you determine (by using Log initialization
or current status rebuild failure recovery) that an excessive amount (or all) of DB2
log information (BSDS, active, and archive logs) has been lost.
Symptom: An abend was issued indicating that restart failed. In addition, the last
restart message received was a DSNJ001I message indicating a failure during
current status rebuild, or none of the following messages was issued:
DSNJ001I
DSNR004I
DSNR005I
If none of the preceding messages were issued, the failure occurred during the log
initialization phase of restart.
System action: The action depends on whether the failure occurred during log
initialization or during current status rebuild.
v Failure during log initialization: DB2 terminates because a portion of the log is
inaccessible, and DB2 cannot locate the end of the log during restart.
v Failure during current status rebuild: DB2 terminates because a portion of the
log is inaccessible, and DB2 cannot determine the state of the subsystem (such as
outstanding units of recovery, outstanding database writes, or exception
database conditions) that existed at the prior DB2 termination.
v Restore the DB2 log and all data to a prior consistent point and start DB2. This
procedure is described in “Unresolvable BSDS or log data set problem during
restart” on page 578.
v Start DB2 without completing some database changes. Using a combination of
DB2 services and your own knowledge, determine what work will be lost by
truncating the log. The procedure for determining the page sets that contain
incomplete changes is described in “Restart by truncating the log” on page 563.
In order to obtain a better idea of what the problem is, read one of the following
sections, depending on when the failure occurred.
Figure 59. Log error during the log initialization phase (time line; the log between
RBA X and RBA Y is inaccessible)
The portion of the log between log RBAs X and Y is inaccessible. For failures that
occur during the log initialization phase, the following activities occur:
1. DB2 allocates and opens each active log data set that is not in a stopped state.
2. DB2 reads the log until the last log record is located.
3. During this process, a problem with the log is encountered, preventing DB2
from locating the end of the log. DB2 terminates and issues one of the abend
reason codes listed in Table 101 on page 564.
During its operations, DB2 periodically records in the BSDS the RBA of the last log
record written. This value is displayed in the print log map report as follows:
HIGHEST RBA WRITTEN: 00000742989E
Because this field is updated frequently in the BSDS, the highest RBA written can be
interpreted as an approximation of the end of the log. The field is updated in the
BSDS when any one of a variety of internal events occurs. In the absence of these
internal events, the field is updated each time a complete cycle of log buffers is
written. A complete cycle of log buffers occurs when the number of log buffers
written equals the value of the OUTPUT BUFFER field of installation panel
DSNTIPL. The value in the BSDS is, therefore, relatively close to the end of the log.
To find the actual end of the log at restart, DB2 reads the log forward sequentially,
starting at the log RBA that approximates the end of the log and continuing until
the actual end of the log is located.
Because the end of the log is inaccessible in this case, some information has been
lost. Units of recovery might have successfully committed or modified additional
page sets past point X. Additional data might have been written, including those
that are identified with writes pending in the accessible portion of the log. New
units of recovery might have been created, and these might have modified data.
Because of the log error, DB2 cannot perceive these events.
How to restart DB2 is described under “Restart by truncating the log” on page 563.
Figure 60. Log error during the current status rebuild phase (time line: Log Start,
Begin URID1, Begin URID3, log error between RBA X and RBA Y, Log End)
The portion of the log between log RBAs X and Y is inaccessible. For failures that
occur during the current status rebuild phase, the following activities occur:
1. Log initialization completes successfully.
2. DB2 locates the last checkpoint. (The BSDS contains a record of its location on
the log.)
3. DB2 reads the log, beginning at the checkpoint and continuing to the end of the
log.
4. DB2 reconstructs the subsystem’s state as it existed at the prior termination of
DB2.
5. During this process, a problem with the log is encountered, preventing DB2
from reading all required log information. DB2 terminates with one of the
abend reason codes listed in Table 101 on page 564.
Because the end of the log is inaccessible in this case, some information has been
lost. Units of recovery might have successfully committed or modified additional
page sets past point X. Additional data might have been written, including those
that are identified with writes pending in the accessible portion of the log. New
units of recovery might have been created, and these might have modified data.
Because of the log error, DB2 cannot perceive these events.
Step 1: Find the log RBA after the inaccessible part of the log
The log damage is illustrated in Figure 59 on page 562 and in Figure 60. The range
of the log between RBAs X and Y is inaccessible to all DB2 processes.
Use the abend reason code accompanying the X'04E' abend and the message on the
title of the accompanying dump at the operator’s console, to find the name and
location of a procedure in Table 101 on page 564. Use that procedure to find X and
Y.
Table 101. Abend reason codes and messages

Abend reason code    Message    Procedure               General error description
00D10261 - 00D10268  DSNJ012I   “RBA 1”                 Log record is logically damaged
00D10329             DSNJ106I   “RBA 2”                 I/O error occurred while log record
                                                        was being read
00D1032A             DSNJ113E   “RBA 3” on page 565     Log RBA could not be found in BSDS
00D1032B             DSNJ103I   “RBA 4” on page 565     Allocation error occurred for an
                                                        archive log data set
00D1032B             DSNJ007I   “RBA 5” on page 566     The operator canceled a request for
                                                        archive mount
00D1032C             DSNJ104I   “RBA 4” on page 565     Open error occurred for an archive
                                                        and active log data set
00E80084             DSNJ103I   “RBA 4” on page 565     Active log data set named in the
                                                        BSDS could not be allocated during
                                                        log initialization
Procedure RBA 1: The message accompanying the abend identifies the log RBA of
the first inaccessible log record that DB2 detects. For example, the following
message indicates a logical error in the log record at log RBA X'7429ABA'.
DSNJ012I ERROR D10265 READING RBA 000007429ABA
IN DATA SET DSNCAT.LOGCOPY2.DS01
CONNECTION-ID=DSN,
CORRELATION-ID=DSN
Figure 152 on page 1083 shows that a given physical log record is actually a set of
logical log records (the log records generally spoken of) and the log control
interval definition (LCID). DB2 stores logical records in blocks of physical records
to improve efficiency. When this type of error on the log occurs during log
initialization or current status rebuild, all log records within the physical log record
are inaccessible. Therefore, the value of X is the log RBA that was reported in the
message rounded down to a 4 KB boundary (X'7429000').
Continue with “Step 2: Identify lost work and inconsistent data” on page 566.
Procedure RBA 2: The message accompanying the abend identifies the log RBA of
the first inaccessible log record that DB2 detects. For example, the following
message indicates an I/O error in the log at RBA X'7429ABA'.
DSNJ106I LOG READ ERROR DSNAME=DSNCAT.LOGCOPY2.DS01,
LOGRBA=000007429ABA,ERROR STATUS=0108320C
Figure 152 on page 1083 shows that a given physical log record is actually a set of
logical log records (the log records generally spoken of) and the LCID. When this
type of error on the log occurs during log initialization or current status rebuild,
all log records within the physical log record, and beyond it to the end of the log
data set, are inaccessible to the log initialization or current status rebuild phase of
restart. Therefore, the value of X is the log RBA that was reported in the message,
rounded down to a 4 KB boundary (that is, X'7429000').
Continue with “Step 2: Identify lost work and inconsistent data” on page 566.
Procedure RBA 3: The message accompanying the abend identifies the log RBA of
the inaccessible log record. This log RBA is not registered in the BSDS.
For example, the following message indicates that the log RBA X'7429ABA' is not
registered in the BSDS:
DSNJ113E RBA 000007429ABA NOT IN ANY ACTIVE OR ARCHIVE
LOG DATA SET. CONNECTION-ID=DSN, CORRELATION-ID=DSN
The print log map utility can be used to list the contents of the BSDS. For an
example of the output, see the description of print log map (DSNJU004) in Part 3
of DB2 Utility Guide and Reference.
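The print log map utility runs as a batch job. A minimal sketch of such a job
follows; the STEPLIB and BSDS data set names are illustrative and must match
your subsystem:
//PRTLOG   EXEC PGM=DSNJU004
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNCAT.BSDS01,DISP=SHR
//SYSPRINT DD SYSOUT=*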
Figure 152 on page 1083 shows that a given physical log record is actually a set of
logical log records (the log records generally spoken of) and the LCID. When this
type of error on the log occurs during log initialization or current status rebuild,
all log records within the physical log record are inaccessible.
Using the print log map output, locate the RBA closest to, but less than,
X'7429ABA'; this is the value of X. If no RBA is less than X'7429ABA',
a considerable amount of log information has been lost. If this is the case, continue
with “Failure resulting from total or excessive loss of log data” on page 581.
If X has a value, continue with “Step 2: Identify lost work and inconsistent data”
on page 566.
Procedure RBA 4: The message accompanying the abend identifies an entire data
set that is inaccessible. For example, the following message indicates that the
archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible, and the
STATUS field identifies the code that is associated with the reason for the data set
being inaccessible. For an explanation of the STATUS codes, see the explanation for
the message in Part 2 of DB2 Messages.
DSNJ103I - csect-name LOG ALLOCATION ERROR
DSNAME=DSNCAT.ARCHLOG1.A0000009,ERROR
STATUS=04980004
SMS REASON CODE=00000000
To determine the value of X, run the print log map utility to list the log inventory
information. For an example of the output, see the description of print log map
(DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each
log data set name and its associated log RBA range—the values of X and Y.
Verify the accuracy of the information in the print log map utility output for the
active log data set with the lowest RBA range. For this active log data set only, the
information in the BSDS is potentially inaccurate for the following reasons:
v When an active log data set is full, archiving is started. DB2 then selects another
active log data set, usually the data set with the lowest RBA. This selection is
made so that units of recovery do not have to wait for the archive operation to
complete before logging can continue. However, if a data set has not been
archived, nothing beyond it has been archived, and the procedure is ended.
v When logging has begun on a reusable data set, DB2 updates the BSDS with the
new log RBA range for the active log data set, and marks it as Not Reusable. The
process of writing the new information to the BSDS can be delayed by other
processing. It is therefore possible for a failure to occur between the time that
logging to a new active log data set begins and the time that the BSDS is
updated. In this case, the BSDS information is not correct.
The log RBA that appears for the active log data set with the lowest RBA range in
the print log map utility output is valid, provided that the data set is marked Not
Reusable. If the data set is marked Reusable, it can be assumed for the purposes of
this restart that the starting log RBA (X) for this data set is one greater than the
highest log RBA listed in the BSDS for all other active log data sets.
Procedure RBA 5: The message accompanying the abend identifies an entire data
set that is inaccessible. For example, the following message indicates that the
archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible. The operator
canceled a request for archive mount, resulting in the following message:
DSNJ007I OPERATOR CANCELED MOUNT OF ARCHIVE
DSNCAT.ARCHLOG1.A0000009 VOLSER=5B225.
To determine the value of X, run the print log map utility to list the log inventory
information. For an example of the output, see the description of print log map
(DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output provides each
log data set name and its associated log RBA range: the values of X and Y.
Figure 61. Sample JCL for obtaining DSN1LOGP summary output for restart
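A minimal sketch of a DSN1LOGP job of the kind that Figure 61 illustrates
follows; the data set names and the RBA value are illustrative:
//LOGPRT   EXEC PGM=DSN1LOGP
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//SYSSUMRY DD SYSOUT=*
//BSDS     DD DSN=DSNCAT.BSDS01,DISP=SHR
//SYSIN    DD *
 RBASTART(7429000) SUMMARY(ONLY)
/*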
DSN1157I RESTART SUMMARY
DATA MODIFIED:
DATABASE=0101=STVDB02 PAGESET=0002=STVTS02
DATA MODIFIED:
DATABASE=0104=STVDB05 PAGESET=0002=STVTS05
The following heading message is followed by messages that identify the units
of recovery that have not yet completed and the page sets that they modified:
DSN1157I RESTART SUMMARY
Following the summary of outstanding units of recovery is a summary of page
sets with database writes pending.
In each case (units of recovery or databases with pending writes), the earliest
required log record is identified by the START information. In this context,
START information is the log RBA of the earliest log record required in order to
complete outstanding writes for this page set.
Those units of recovery with a START log RBA equal to, or prior to, the point Y
cannot be completed at restart. All page sets modified by such units of recovery
are inconsistent after completion of restart using this procedure.
All page sets identified in message DSN1160I with a START log RBA value
equal to, or prior to, the point Y have database changes that cannot be written
to disk. As in the case previously described, all such page sets are inconsistent
after completion of restart using this procedure.
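For example, if point Y is X'7429FFF', a unit of recovery whose START value is
X'7420000' cannot be completed at restart, and the page sets it modified are
inconsistent; a unit of recovery whose START value is X'742A000' begins after the
damage and can be completed.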
At this point, it is only necessary to identify the page sets in preparation for
restart. After restart, the problems in the page sets that are inconsistent must be
resolved.
Because the end of the log is inaccessible, some information has been lost;
therefore, the information is inaccurate. Some of the units of recovery that
appear to be inflight might have successfully committed, or they could have
modified additional page sets beyond point X. Additional data could have been
written, including those page sets that are identified as having writes pending
in the accessible portion of the log. New units of recovery could have been
created, and these might have modified data. DB2 cannot detect that these events
occurred.
From this and other information (such as system accounting information and
console messages), it might be possible to determine what work was actually
outstanding and which page sets will be inconsistent after starting DB2,
because the record of each event contains the date and time to help determine
how recent the information is. In addition, the information is displayed in
chronological sequence.
Use the change log inventory utility to create a conditional restart control record
(CRCR) in the BSDS, identifying the end of the log (X) to use on a subsequent
restart. The value is the RBA at which DB2 begins writing new log records. If point
X is X'7429000', on the CRESTART control statement, specify ENDRBA=7429000.
At restart, DB2 discards the portion of the log beyond X'7429000' before processing
the log for completing work (such as units of recovery and database writes).
Unless otherwise directed, normal restart processing is performed within the scope
of the log. Because log information has been lost, DB2 errors can occur. For
example, a unit of recovery that has actually been committed can be rolled back.
Also, some changes made by that unit of recovery might not be rolled back
because information about data changes has been lost.
To minimize such errors, use this change log inventory control statement:
CRESTART CREATE,ENDRBA=7429000,FORWARD=NO,BACKOUT=NO
Recovering and backing out units of recovery with lost information can introduce
more inconsistencies than the incomplete units of recovery.
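For illustration, the control statement can be run in a change log inventory
(DSNJU003) job like the following sketch; the STEPLIB and BSDS data set names
are illustrative:
//CHGLOG   EXEC PGM=DSNJU003
//STEPLIB  DD DSN=prefix.SDSNLOAD,DISP=SHR
//SYSUT1   DD DSN=DSNCAT.BSDS01,DISP=OLD
//SYSUT2   DD DSN=DSNCAT.BSDS02,DISP=OLD
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
 CRESTART CREATE,ENDRBA=7429000,FORWARD=NO,BACKOUT=NO
/*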
When you start DB2 after you run DSNJU003 with the FORWARD=NO and the
BACKOUT=NO change log inventory options, DB2 performs the following actions
(in “Step 5: Start DB2” on page 570):
1. Discards from the checkpoint queue any entries with RBAs beyond the
ENDRBA value in the CRCR (which is X'7429000' in the previous example).
2. Reconstructs the system status up to the point of log truncation.
3. Performs pending database writes that the truncated log specifies and that have
not already been applied to the data. You can use the DSN1LOGP utility to
identify these writes. No forward recovery processing occurs for units of work
in a FORWARD=NO conditional restart. All pending writes for in-commit and
indoubt units of recovery are applied to the data. The processing for the
different unit-of-work states that is described in “Phase 3: Forward log
recovery” on page 426 does not occur.
4. Marks all units of recovery that have committed or are indoubt as complete on
the log.
5. Leaves inflight and in-abort units of recovery incomplete. Inconsistent data is
left in tables that were modified by inflight or in-abort units of recovery. When
you specify a BACKOUT=NO conditional restart, inflight and in-abort units of
recovery are not backed out.
In a conditional restart that truncates the log, BACKOUT=NO minimizes DB2
errors for the following reasons:
v Inflight units of recovery might have been committed in the portion of the
log that the conditional restart discarded. If these units of recovery are
backed out as “Phase 4: Backward log recovery” on page 427 describes, DB2
can back out database changes incompletely, which introduces additional
errors.
v Data modified by in-abort units of recovery could have been modified again
after the point of damage on the log. For in-abort units of recovery, DB2
could have written the results of backout processing to disk after the point of log
truncation. If these units of recovery are backed out as “Phase 4: Backward
log recovery” on page 427 describes, DB2 can introduce additional data
inconsistencies by backing out units of recovery that are already partially or
fully backed out.
Failure during forward log recovery
Symptom: An abend was issued, indicating that restart had failed. In addition, the
last restart message received was a DSNR004I message, which indicates that log
initialization completed; the failure therefore occurred during forward log
recovery.
System action: DB2 terminates because a portion of the log is inaccessible, and
DB2 is therefore unable to guarantee the consistency of the data after restart.
Figure 63. Time line of the log, showing the inaccessible portion between log RBAs X and Y
The portion of the log between log RBA X and Y is inaccessible. The log
initialization and current status rebuild phases of restart completed successfully.
Restart processing was reading the log in a forward direction beginning at some
point prior to X and continuing to the end of the log. Because of the inaccessibility
of log data (between points X and Y), restart processing cannot guarantee the
completion of any work that was outstanding at restart prior to point Y.
For purposes of discussion, assume the following work was outstanding at restart:
v The unit of recovery identified as URID1 was in-commit.
v The unit of recovery identified as URID2 was inflight.
v The unit of recovery identified as URID3 was in-commit.
v The unit of recovery identified as URID4 was inflight.
v Page set A had writes pending prior to the error on the log, continuing to the
end of the log.
v Page set B had writes pending after the error on the log, continuing to the end
of the log.
The earliest log record for each unit of recovery is identified on the log line in
Figure 63. In order for DB2 to complete each unit of recovery, DB2 requires access
to all log records from the beginning point for each unit of recovery to the end of
the log.
The error on the log prevents DB2 from guaranteeing the completion of any
outstanding work that began prior to point Y on the log. Consequently, database
changes made by URID1 and URID2 might not be fully committed or backed out.
Writes pending for page set A (from points in the log prior to Y) will be lost.
It is important to determine which page sets are involved, because after this
procedure is used, the page sets will contain inconsistencies that must be
resolved. In addition, using this procedure
results in the completion of all database writes that are pending. For a description
of this process of writing database pages to disk, see “Tuning database buffer
pools” on page 633.
Step 1: Find the log RBA after the inaccessible part of the log
The log damage is shown in Figure 63 on page 571. The range of the log between
RBA X and RBA Y is inaccessible to all DB2 processes.
Use the abend reason code that accompanies the X'04E' abend, and the message
shown in the title of the accompanying dump at the operator’s console, to find the
name and location of a procedure in Table 102. Use that procedure to find X and Y.
Table 102. Abend reason codes and messages

Abend reason code      Message    Procedure              General error description
00D10261 through       DSNJ012I   “RBA 1”                Log record is logically damaged
00D10268
00D10329               DSNJ106I   “RBA 2” on page 573    I/O error occurred while log record
                                                         was being read
00D1032A               DSNJ113E   “RBA 3” on page 573    Log RBA could not be found in BSDS
00D1032B               DSNJ103I   “RBA 4” on page 574    Allocation error occurred for an
                                                         archive log data set
00D1032B               DSNJ007I   “RBA 5” on page 574    The operator canceled a request for
                                                         archive mount
00D1032C               DSNJ104I   “RBA 4” on page 574    Open error occurred for an archive
                                                         log data set
00E80084               DSNJ103I   “RBA 4” on page 574    Active log data set named in the
                                                         BSDS could not be allocated during
                                                         log initialization
Procedure RBA 1: The message accompanying the abend identifies the log RBA of
the first inaccessible log record that DB2 detects. For example, the following
message indicates a logical error in the log record at log RBA X'7429ABA':
DSNJ012I ERROR D10265 READING RBA 000007429ABA
IN DATA SET DSNCAT.LOGCOPY2.DS01
CONNECTION-ID=DSN
CORRELATION-ID=DSN
Figure 152 on page 1083 shows that a given physical log record is actually a set of
logical log records (the log records generally spoken of) and the log control
interval definition (LCID). When this type of error on the log occurs during
forward log recovery, all log records within the physical log record are
inaccessible. Therefore, the value of X is the log RBA that was reported in the
message, rounded down to a 4K boundary (that is, X'7429000').
Continue with “Step 2: Identify incomplete units of recovery and inconsistent page
sets” on page 574.
Procedure RBA 2: The message accompanying the abend identifies the log RBA of
the first inaccessible log record that DB2 detects. For example, the following
message indicates an I/O error in the log at RBA X'7429ABA':
DSNJ106I LOG READ ERROR DSNAME=DSNCAT.LOGCOPY2.DS01,
LOGRBA=000007429ABA, ERROR STATUS=0108320C
Figure 152 on page 1083 shows that a given physical log record is actually a set of
logical log records (the log records generally spoken of) and the LCID. When this
type of error on the log occurs during forward log recovery, all log records
within the physical log record and beyond it to the end of the log data set are
inaccessible to the forward recovery phase of restart. Therefore, the value of X is
the log RBA that was reported in the message, rounded down to a 4K boundary
(that is, X'7429000').
To determine the value of Y, run the print log map utility to list the log inventory
information. For an example of this output, see the description of print log map
(DSNJU004) in Part 3 of DB2 Utility Guide and Reference. Locate the data set name
and its associated log RBA range. The RBA of the end of the range is the value Y.
Continue with “Step 2: Identify incomplete units of recovery and inconsistent page
sets” on page 574.
Procedure RBA 3: The message accompanying the abend identifies the log RBA of
the inaccessible log record. This log RBA is not registered in the BSDS.
For example, the following message indicates that the log RBA X'7429ABA' is not
registered in the BSDS:
DSNJ113E RBA 000007429ABA NOT IN ANY ACTIVE OR ARCHIVE
LOG DATA SET. CONNECTION-ID=DSN, CORRELATION-ID=DSN
Use the print log map utility to list the contents of the BSDS. For an example of
this output, see the description of print log map (DSNJU004) in Part 3 of DB2
Utility Guide and Reference.
Figure 152 on page 1083 shows that a given physical log record is actually a set of
logical log records (the log records generally spoken of) and the LCID. When this
type of error on the log occurs during forward log recovery, all log records within
the physical log record are inaccessible.
Using the print log map output, locate the RBA closest to, but less than,
X'7429ABA'. This is the value of X. If an RBA less than X'7429ABA' cannot be
found, the value of X is zero. Locate the RBA closest to, but greater than,
X'7429ABA'. This is the value of Y.
Continue with “Step 2: Identify incomplete units of recovery and inconsistent page
sets” on page 574.
Procedure RBA 4: The message accompanying the abend identifies an entire data
set that is inaccessible. For example, the following message indicates that the
archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible. The STATUS
field identifies the code that is associated with the reason for the data set being
inaccessible. For an explanation of the STATUS codes, see the explanation for the
message in DB2 Messages.
DSNJ103I LOG ALLOCATION ERROR
DSNAME=DSNCAT.ARCHLOG1.A0000009, ERROR
STATUS=04980004
SMS REASON CODE=00000000
To determine the values of X and Y, run the print log map utility to list the log
inventory information. For an example of this output, see the description of print
log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output
provides each log data set name and its associated log RBA range: the values of X
and Y.
Continue with “Step 2: Identify incomplete units of recovery and inconsistent page
sets.”
Procedure RBA 5: The message accompanying the abend identifies an entire data
set that is inaccessible. For example, the following message indicates that the
archive log data set DSNCAT.ARCHLOG1.A0000009 is not accessible. The operator
canceled a request for archive mount resulting in the following message:
DSNJ007I OPERATOR CANCELED MOUNT OF ARCHIVE
DSNCAT.ARCHLOG1.A0000009 VOLSER=5B225.
To determine the values of X and Y, run the print log map utility to list the log
inventory information. For an example of the output, see the description of print
log map (DSNJU004) in Part 3 of DB2 Utility Guide and Reference. The output
provides each log data set name and its associated log RBA range: the values of X
and Y. Continue with “Step 2: Identify incomplete units of recovery and
inconsistent page sets.”
Step 3: Restrict restart processing to the part of the log after the damage
Use the change log inventory utility to create a conditional restart control record
(CRCR) in the BSDS. Identify the accessible portion of the log beyond the damage
by using the STARTRBA specification, which will be used at the next restart.
Specify the value Y+1 (that is, if Y is X'7429FFF', specify STARTRBA=742A000).
Restart will restrict its processing to the portion of the log beginning with the
specified STARTRBA and continuing to the end of the log. A sample change log
inventory utility control statement is:
CRESTART CREATE,STARTRBA=742A000
Failure during backward log recovery
Symptom: An abend was issued that indicated that restart failed because of a log
problem. In addition, the last restart message received was a DSNR005I message,
which indicates that forward log recovery completed; the failure therefore
occurred during backward log recovery.
System action: DB2 terminates because a portion of the log that it needs is
inaccessible, and DB2 is therefore unable to roll back some database changes during
restart.
If the problem cannot be corrected, you can restart DB2 by using the method
described in “Bypassing backout before restarting.” Continue reading this
chapter to obtain a better idea of how to fix the problem.
Figure 64. Time line of the log, showing log RBAs X and Y and a checkpoint
The portion of the log between log RBA X and Y is inaccessible. Restart was
reading the log in a backward direction beginning at the end of the log and
continuing backward to the point marked by Begin URID5 in order to back out the
changes made by URID5, URID6, and URID7. You can assume that DB2
determined that these units of recovery were inflight or in-abort. The portion of the
log from point Y to the end has been processed. However, the portion of the log
from Begin URID5 to point Y has not been processed and cannot be processed by
restart. Consequently, database changes made by URID5 and URID6 might not be
fully backed out. All database changes made by URID7 have been fully backed
out, but these database changes might not have been written to disk. A subsequent
restart of DB2 causes these changes to be written to disk during forward recovery.
Failure during a log RBA read request
Symptom: Abend code 00D1032A and message DSNJ113E are displayed:
DSNJ113E RBA log-rba NOT IN ANY ACTIVE OR ARCHIVE
LOG DATA SET. CONNECTION-ID=aaaaaaaa, CORRELATION-ID=aaaaaaaa
For instructions about adding an old archive data set, refer to “Changing the BSDS
log inventory” on page 405. Also, see Part 3 of DB2 Utility Guide and Reference for
additional information about the change log inventory utility.
Problem: During restart of DB2, serious problems with the BSDS or log data sets
were detected and cannot be resolved.
If too much log information has been lost, use the alternative approach described
in “Failure resulting from total or excessive loss of log data” on page 581.
v For table spaces and indexes that might have been changed after the
shutdown point, use the DB2 RECOVER utility to recover these table spaces
and indexes. They must be recovered in the order indicated in Part 2 of
DB2 Utility Guide and Reference.
v For data that has not been changed after the shutdown point (data used
with RO access), it is not necessary to use RECOVER or DROP.
v For table spaces that were deleted after the shutdown point, issue the
DROP statement. These table spaces will not be recovered.
v Any objects created after the shutdown point should be re-created.
You must recover all data that has potentially been modified after the
shutdown point. If the RECOVER utility is not used to recover modified data,
serious problems can occur because of data inconsistency.
If an attempt is made to access data that is inconsistent, any of the following
events can occur (and the list is not comprehensive):
v It is possible to successfully access the correct data.
v Data can be accessed without DB2 recognizing any problem, but it might
not be the data you want (the index might be pointing to the wrong data).
v DB2 might recognize that a page is logically incorrect and, as a result,
abend the subsystem with an X'04E' abend completion code and an abend
reason code of X'00C90102'.
v DB2 might notice that a page was updated after the shutdown point and, as
a result, abend the requester with an X'04E' abend completion code and an
abend reason code of X'00C200C1'.
7. Analyze the CICS log and the IMS log to determine the work that must be
redone (work that was lost because of the shutdown at the previous point).
Inform all TSO users, QMF users, and batch users for whom no transaction
log tracking has been performed about the decision to fall back to a previous
point.
8. When DB2 is started after being shut down, indoubt units of recovery can
exist. This occurs if transactions are indoubt when the command STOP DB2
MODE(QUIESCE) is given. When DB2 is started again, these transactions are
still indoubt to DB2. IMS and CICS cannot know the disposition of these
units of recovery.
To resolve these indoubt units of recovery, use the command RECOVER
INDOUBT (see the sketch after this list).
9. If a table space was dropped and re-created after the shutdown point, it
should be dropped and re-created again after DB2 is restarted. To do this, use
SQL DROP and SQL CREATE statements (see the sketch after this list).
Do not use the RECOVER utility to accomplish this, because doing so recovers
the old version, which can contain inconsistent data.
10. If any table spaces and indexes were created after the shutdown point, these
must be re-created after DB2 is restarted. There are two ways to accomplish
this:
v For data sets defined in DB2 storage groups, use the CREATE TABLESPACE
statement and specify the appropriate storage group names. DB2
automatically deletes the old data set and redefines a new one.
v For user-defined data sets, use access method services DELETE to delete the
old data sets. After these data sets have been deleted, use access method
services DEFINE to redefine them; then use the CREATE TABLESPACE
statement.
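The following is a minimal sketch of steps 8 and 9. The correlation ID, database,
table space, and storage group names (CORRID01, DSN8D81A, DSN8S81E,
DSN8G810) are illustrative placeholders; substitute your own identifiers:
-RECOVER INDOUBT ACTION(COMMIT) ID(CORRID01)
DROP TABLESPACE DSN8D81A.DSN8S81E;
COMMIT;
CREATE TABLESPACE DSN8S81E
  IN DSN8D81A
  USING STOGROUP DSN8G810;
Whether you specify ACTION(COMMIT) or ACTION(ABORT) depends on the
disposition of each indoubt unit of recovery.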
Failure resulting from total or excessive loss of log data
Operations management action: Restart DB2 without any log data by using either
the procedure in “Total loss of the log” or “Excessive loss of data in the active
log” on page 582.
For example, you might know that DB2 was dedicated to a few processes (such as
utilities) during the DB2 session, and you might be able to identify the page sets
they modified. If you cannot identify the page sets that are inconsistent, you must
decide whether you are willing to assume the risk involved in restarting DB2
under those conditions. If you decide to restart, take the following steps:
1. Define and initialize the BSDSs. See “Recovering the BSDS from a backup
copy” on page 507.
2. Define the active log data sets using the access method services DEFINE
function. Run utility DSNJLOGF to initialize the new active log data sets.
3. Prepare to restart DB2 using no log data. See “Deferring restart processing” on
page 418.
Each data and index page contains the log RBA of the last log record applied
against the page. Safeguards within DB2 disallow a modification to a page that
contains a log RBA that is higher than the current end of the log. There are two
choices.
a. Run the DSN1COPY utility specifying the RESET option to reset the log
RBA in every data and index page. Depending on the amount of data in the
subsystem, this process can take quite a long time. Because the BSDS has
been redefined and reinitialized, logging begins at log RBA 0 when DB2
starts.
If the BSDS is not reinitialized, logging can be forced to begin at log RBA 0
by constructing a conditional restart control record (CRCR) that specifies a
STARTRBA and ENDRBA that are both equal to 0, as the following
command shows:
CRESTART CREATE,STARTRBA=0,ENDRBA=0
Continue with step 4.
b. Determine the highest possible log RBA of the prior log. From previous
console logs written when DB2 was operational, locate the last DSNJ001I
message. When DB2 switches to a new active log data set, this message is
written to the console, identifying the data set name and the highest
potential log RBA that can be written for that data set. Assume that this is
the value X'8BFFF'. Add one to this value (X'8C000'), and create a
conditional restart control record specifying the following change log
inventory control statement:
CRESTART CREATE,STARTRBA=8C000,ENDRBA=8C000
When DB2 starts, all phases of restart are bypassed and logging begins at
log RBA X'8C000'. If this method is chosen, it is not necessary to use the
DSN1COPY RESET option, and considerable time is saved.
4. Start DB2. Use START DB2 ACCESS(MAINT) until data is consistent or page
sets are stopped.
5. After restart, resolve all inconsistent data as described in “Resolving
inconsistencies resulting from a conditional restart” on page 584.
This procedure will cause all phases of restart to be bypassed and logging to begin
at log RBA X'8C000'. It will create a gap in the log between the highest RBA kept
in the BSDS and X'8C000', and that portion of the log will be inaccessible.
No DB2 process can tolerate a gap, including RECOVER. Therefore, all data must
be image copied after a cold start. Even data that is known to be consistent must
be image copied again when a gap is created in the log.
There is another approach to doing a cold start that does not create a gap in the
log. This is only a method for eliminating the gap in the physical record. It does
not mean that you can use a cold start to resolve the logical inconsistencies. The
procedure is as follows:
1. Locate the last valid log record by using DSN1LOGP to scan the log. (Message
DSN1213I identifies the last valid log RBA.)
2. Begin at an RBA that is known to be valid. If message DSN1213I indicated that
the last valid log RBA is at X'89158', round this value up to the next 4K
boundary (X'8A000').
3. Create a CRCR similar to the CRCR that the following command specifies:
CRESTART CREATE,STARTRBA=8A000,ENDRBA=8A000
4. Use START DB2 ACCESS(MAINT) until data is consistent or page sets are
stopped.
5. Now, take image copies of all data for which data modifications were recorded
beyond log RBA X'8A000', as shown in the sketch after this list. If you do not
know what data was modified, take image copies of all data.
If image copies are not taken of data that has been modified beyond the log
RBA used in the CRESTART statement, future RECOVER operations can fail or
result in inconsistent data.
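A minimal sketch of such an image copy follows; the table space name is
illustrative, and you repeat the statement for each affected table space:
COPY TABLESPACE DSN8D81A.DSN8S81E
     COPYDDN(SYSCOPY)
     FULL YES
     SHRLEVEL REFERENCE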
After restart, resolve all inconsistent data as described in “Resolving
inconsistencies resulting from a conditional restart” on page 584.
A cold status for restart means that the DB2 system has no memory of previous
connections with its partner, and therefore has no memory of indoubt logical units
of work. All postponed-abort units of recovery end without resolution, and any
page sets and partitions that are in restart-pending state are removed from that
state.
The partner accepts the cold start connection and remembers the recovery log
name of the cold starting DB2. If the partner has indoubt thread resolution
requirements with the cold starting DB2, those requirements cannot be achieved.
The partner terminates its indoubt resolution responsibility with the cold starting
DB2. However, as a participant, the partner has indoubt logical units of work that
must be resolved manually.
A warm status for restart means the DB2 system does have memory of previous
connections with the partner and therefore does have memory of indoubt logical
units of work. The exchange of recovery log names validates that the correct
recovery logs are being used for indoubt resolution. Each partner indicates its
recovery log name and the recovery log name it believes to be the one the other
partner is using. A warm start connection in which one system specifies a recovery
log name that is different from the name remembered by the other system is
rejected if indoubt resolution is required between the two partners.
The following three methods describe one or more steps that must be taken to
resolve inconsistencies in the DB2 subsystem. Before using these methods,
however, complete the following procedure:
1. Obtain image copies of all DB2 table spaces. You will need these image copies
if any of the following conditions apply:
v You did a cold start.
Method 2. Re-create the table space
Take the following steps to drop the table space and reconstruct the data using the
CREATE statement. This procedure is simple relative to “Method 3. Use the
REPAIR utility on the data.” However, if you want to use this procedure, you need
to plan ahead, because, when a table space is dropped, all tables in that table
space, as well as related indexes, authorities, and views, are implicitly dropped. Be
prepared to reestablish indexes, views, and authorizations, as well as the data
content itself.
DB2 subsystem tables, such as the catalog and directory, cannot be dropped.
Follow either “Method 1. Recover to a prior point of consistency” on page 585 or
“Method 3. Use the REPAIR utility on the data” for these tables.
1. Issue an SQL DROP TABLESPACE statement for all table spaces that are
suspected of being involved in the problem.
2. Re-create the table spaces, tables, indexes, synonyms, and views using SQL
CREATE statements.
3. Grant access to these objects as it was granted prior to the time of the error.
4. Reconstruct the data in the tables.
5. Run the RUNSTATS utility on the data.
6. Use COPY to acquire a full image copy of all data.
7. Use the REBIND process on all plans that use the tables or views involved in
this activity. (A sketch of steps 5 through 7 follows this list.)
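A minimal sketch of steps 5 through 7 follows; the table space name
DSN8D81A.DSN8S81E and plan name DSN8BH81 are illustrative placeholders.
The first two statements run as DB2 utility control statements; REBIND is a DSN
subcommand:
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E TABLE(ALL) INDEX(ALL)
COPY TABLESPACE DSN8D81A.DSN8S81E COPYDDN(SYSCOPY) FULL YES SHRLEVEL REFERENCE
REBIND PLAN(DSN8BH81)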
If you decide to use this method to resolve data inconsistencies, be sure to read the
following section carefully, because it contains information that is important to the
successful resolution of the inconsistencies.
Part 5. Performance monitoring and tuning
Chapter 23. Planning your performance strategy . . . 595
Managing performance in general . . . 595
Setting reasonable performance objectives . . . 596
Defining the workload . . . 596
Initial performance planning . . . 597
Translating resource requirements into objectives . . . 598
External design . . . 598
Internal design . . . 598
Coding and testing . . . 599
Post-development review . . . 599
Planning for performance monitoring . . . 599
Continuous performance monitoring . . . 600
Periodic performance monitoring . . . 601
Detailed performance monitoring . . . 601
Exception performance monitoring . . . 602
A performance monitoring strategy . . . 602
Reviewing performance data . . . 602
Typical review questions . . . 603
Are your performance objectives reasonable? . . . 604

Chapter 24. Analyzing performance data . . . 607
Investigating the problem overall . . . 607
Looking at the entire system . . . 607
Beginning to look at DB2 . . . 607
Reading accounting reports from OMEGAMON . . . 608
The accounting report (short format) . . . 608
The accounting report (long format) . . . 609
A general approach to problem analysis in DB2 . . . 614

Chapter 25. Improving response time and throughput . . . 619
Reducing I/O operations . . . 619
Using RUNSTATS to keep access path statistics current . . . 619
Reserving free space in table spaces and indexes . . . 620
Specifying free space on pages . . . 620
Determining pages of free space . . . 621
Recommendations for allocating free space . . . 621
Making buffer pools large enough for the workload . . . 622
Reducing the time needed to perform I/O operations . . . 622
Distributing data sets efficiently . . . 622
Putting frequently used data sets on fast devices . . . 622
Distributing the I/O . . . 623
Creating additional work file table spaces . . . 625
Managing space for I/O performance . . . 626
Formatting early and speed up formatting . . . 626
Avoiding excessive extents . . . 627
Reducing processor resource consumption . . . 628
Reusing threads for your high-volume transactions . . . 628
Minimizing the use of DB2 traces . . . 628
Global trace . . . 628
Accounting and statistics traces . . . 629
Audit trace . . . 629
Performance trace . . . 629
Using fixed-length records . . . 629
Response time reporting . . . 630

Chapter 26. Tuning DB2 buffer, EDM, RID, and sort pools . . . 633
Tuning database buffer pools . . . 633
Terminology: Types of buffer pool pages . . . 634
Read operations . . . 634
Write operations . . . 634
Assigning a table space or index to a buffer pool . . . 635
Assigning data to default buffer pools . . . 635
Assigning data to particular buffer pools . . . 635
Buffer pool thresholds . . . 635
Fixed thresholds . . . 635
Thresholds you can change . . . 636
Guidelines for setting buffer pool thresholds . . . 638
Determining size and number of buffer pools . . . 639
Buffer pool sizes . . . 639
The buffer-pool hit ratio . . . 639
Buffer pool size guidelines . . . 640
Advantages of large buffer pools . . . 641
Choosing one or many buffer pools . . . 641
Choosing a page-stealing algorithm . . . 642
| Long-term page fix option for buffer pools . . . 643
Monitoring and tuning buffer pools using online commands . . . 643
Using OMEGAMON to monitor buffer pool statistics . . . 645
| Tuning EDM storage . . . 647
| EDM storage space handling . . . 648
Implications for database design . . . 648
Monitoring and EDM storage . . . 649
Tips for managing EDM storage . . . 650
Use packages . . . 650
Use RELEASE(COMMIT) when appropriate . . . 651
| Be aware of large DBDs . . . 651
Understand the impact of using DEGREE(ANY) . . . 651
Increasing RID pool size . . . 651
Controlling sort pool size and sort processing . . . 652
Estimating the maximum size of the sort pool . . . 652
How sort work files are allocated . . . 652
Improving the performance of sort processing . . . 653

Chapter 27. Improving resource utilization . . . 655
Managing the opening and closing of data sets . . . 655
Determining the maximum number of open data sets . . . 655
How DB2 determines DSMAX . . . 656
Modifying DSMAX . . . 656
Recommendations . . . 657
Example: A service-level agreement might require that 90% of all response times
sampled on a local network in the prime shift be under 2 seconds, or that the
average response time not exceed 6 seconds even during peak periods. (For a
network of remote terminals, consider substantially higher response times.)
Performance objectives must reflect not only elapsed time, but also the amount of
processing expected. Consider whether to define your criteria in terms of the
average, the ninetieth percentile, or even the worst-case response time. Your choice
can depend on your site’s audit controls and the nature of the workloads.
Before installing DB2, gather design data during the phases of initial planning,
external design, internal design, and coding and testing. Keep reevaluating your
performance objectives with that information.
For transactions:
| v Availability of transaction managers, such as IMS, CICS, or WebSphere
| v Number of message pairs for each user function, either to and from a terminal
| or to and from a distributed application
| v Network bandwidth and network device latencies
| v Average and maximum number of concurrent users, either terminal operators or
| distributed application requesters
v Maximum rate of workloads per second, minute, hour, day, or week
v Number of disk I/O operations per user workload
v Average and maximum processor usage per workload type and total workload
v Size of tables
v Effects of objectives on operations and system programming
External design
During the external design phase, you must:
| 1. Estimate the network, Web server, application server, processor, and disk
| subsystem workload.
2. Refine your estimates of logical disk accesses. Ignore physical accesses at this
stage; one of the major difficulties will be determining the number of I/Os per
statement.
Internal design
During the internal design phase, you must:
1. Refine your estimated workload against the actual workload.
2. Refine disk access estimates against database design. After internal design, you
can define physical data accesses for application-oriented processes and
estimate buffer hit ratios.
3. Add the accesses for DB2 work file database, DB2 log, program library, and
DB2 sorts.
4. Consider whether additional processor loads will cause a significant constraint.
5. Refine estimates of processor usage.
Post-development review
When you are ready to test the complete system, review its performance in detail.
Take the following steps to complete your performance review:
1. Validate system performance and response times against the objectives.
2. Identify resources whose usage requires regular monitoring.
3. Incorporate the observed figures into future estimates. This step requires:
a. Identifying discrepancies from the estimated resource usage
b. Identifying the cause of the discrepancies
c. Assigning priorities to remedial actions
d. Identifying resources that are consistently heavily used
e. Setting up utilities to provide graphic representation of those resources
f. Projecting the processor usage against the planned future system growth to
ensure that adequate capacity will be available
g. Updating the design document with the observed performance figures
h. Modifying the estimation procedures to reflect what you have learned
You need feedback from users and might have to solicit it. Establish reporting
procedures and teach your users how to use them. Consider logging incidents such
as these:
v System, line and transaction or query failures
v System unavailable time
v Response times that are outside the specified limits
v Incidents that imply performance constraints, such as deadlocks, deadlock
abends, and insufficient storage
v Situations, such as recoveries, that use additional system resources
The data logged should include the time, date, location, duration, cause (if it can
be determined), and the action taken to resolve the problem.
“A performance monitoring strategy” on page 602 describes a plan that includes all
of these levels.
Running accounting class 2 as well as class 1 allows you to separate DB2 times
from application times.
| Running with CICS without the open transaction environment (OTE) entails less
| need to run with accounting class 2. Application and non-DB2 processing take
place under the CICS main TCB. Because SQL activity takes place under the SQL
TCB, the class 1 and class 2 times are generally close. The CICS attachment work is
spread across class 1, class 2, and time spent processing outside of DB2. Class 1
time thus reports on the SQL TCB time and some of the CICS attachment. If you
are concerned about class 2 overhead and you use CICS, you can generally run
without turning on accounting class 2.
| Statistics and accounting information can be very helpful for application and
| database designers. Consider putting this information into a performance
| warehouse so that the data can be analyzed more easily by all the personnel who
| need it.
| The data in the performance warehouse can be accessed by any member of the
| DB2 Universal Database family or by any product that supports Distributed
| Relational Database Architecture (DRDA).
The current peak is also a good indicator of the future average. You might have to
monitor more frequently at first to confirm that expected peaks correspond with
actual ones. Do not base conclusions on one or two monitoring periods, but on
data from several days representing different periods.
| For periodic monitoring, gather information from z/OS, the transaction manager
| (IMS, CICS, or WebSphere), the network, the distributed application platforms
| (such as Windows, UNIX, or Linux), and DB2 itself. To compare the different
results from each source, monitor each for the same period of time. Because the
monitoring tools require resources, you need to consider the overhead for using
these tools. See “Minimizing the use of DB2 traces” on page 628 for information on
DB2 trace overhead.
If you have a performance problem, first verify that it is not caused by faulty
design of an application or database. If you suspect a problem in application
design, consult Part 4 of DB2 Application Programming and SQL Guide; for
information about database design, see Part 2, “Designing a database: advanced
topics,” on page 23.
If you have access path problems, refer to Chapter 33, “Using EXPLAIN to
improve SQL performance,” on page 891 for information.
“Minimizing the use of DB2 traces” on page 628 discusses overhead for global,
accounting, statistics, audit, and performance traces.
Plan to review the performance data systematically. Review daily data weekly and
weekly data monthly; review data more often if a report raises questions that
require checking. Depending on your system, the weekly review might require
about an hour, particularly after you have had some experience with the process
and are able to locate quickly any items that require special attention. The monthly
review might take half a day at first, less time later on. But when new applications
are installed, workload volumes increase, or terminals are added, allow more time for
review.
Review the data on a gross level, looking for problem areas. Review details only if
a problem arises or if you need to verify measurements.
When reviewing performance data, try to identify the basic pattern in the
workload, and then identify variations of the pattern. After a certain period,
discard most of the data you have collected, but keep a representative sample.
When you measure performance against initial objectives and report the results to
users, identify any systematic differences between the measured data and what the
user sees. This means investigating the differences between internal response time
(seen by DB2) and external response time (seen by the end user). If the
measurements differ greatly from the estimates, revise the response-time objectives.
In such situations, the system shows heavy use of all its resources. However, it is
actually experiencing typical system stress, with a constraint that is yet to be
found.
| First, turn on classes 1, 2, and 3 of the accounting trace to get a picture of task
| activity and determine whether the performance problem is within DB2. Although
| accounting trace class 2 has a high overhead cost for fetch-intensive applications, it
| is generally recommended that you keep it on in all cases. For other applications,
| accounting trace class 2 has an overhead of less than 5%. Accounting trace class 3
| can help you determine which resources DB2 is waiting on.
| For information about packages or DBRMs, run accounting trace classes 7 and 8.
| To determine which packages are consuming excessive resources, compare
| accounting trace classes 7 and 8 to the elapsed time for the whole plan on
| accounting classes 1 and 2.
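For example, the following commands start these trace classes; SMF is shown as
the destination, which is also the default:
-START TRACE(ACCTG) CLASS(1,2,3) DEST(SMF)
-START TRACE(ACCTG) CLASS(7,8) DEST(SMF)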
A number greater than 1 in the QXMAXDEG field of the accounting trace indicates
that parallelism was used. There are special considerations for interpreting such
records, as described in “Monitoring parallel operations” on page 961.
The easiest way to read and interpret the trace data is through the reports
produced by OMEGAMON. If you do not have OMEGAMON or an equivalent
program, refer to Appendix D, “Interpreting DB2 trace output,” on page 1101 for
information about the format of data from DB2 traces.
You can also use the tools for performance measurement described in Appendix F,
“Using tools to monitor performance,” on page 1151 to diagnose system problems.
Also see Appendix F, “Using tools to monitor performance,” on page 1151 for
information on analyzing the DB2 catalog and directory.
| Although this section talks about OMEGAMON reports, you can also use DB2 PM
| to obtain the same reports.
Monitoring application distribution helps you to identify the most frequently used
transactions or queries, and is intended to cover the 20% of the transactions or
queries that represent about 80% of the total workload. The TOP list function of
OMEGAMON lets you identify the report entries that represent the largest user of
a given resource.
An accounting report in the short format can list results in order by package. Thus
you can summarize package or DBRM activity independently of the plan under
which the package or DBRM executed.
Only class 1 of the accounting trace is needed for a report of information by plan.
Classes 2 and 3 are recommended for additional information. Classes 7 and 8 are
needed to give information by package or DBRM.
SQL DML AVERAGE TOTAL SQL DCL TOTAL SQL DDL CREATE DROP ALTER LOCKING AVERAGE TOTAL
-------- -------- -------- -------------- -------- ---------- ------ ------ ------ ---------------------- -------- --------
SELECT 20.00 3860 LOCK TABLE 0 TABLE 0 0 0 TIMEOUTS 0.00 0
INSERT 0.00 0 GRANT 0 CRT TTABLE 0 N/A N/A DEADLOCKS 0.00 0
UPDATE 30.00 5790 REVOKE 0 DCL TTABLE 0 N/A N/A ESCAL.(SHARED) 0.00 0
DELETE 10.00 1930 SET CURR.SQLID 0 AUX TABLE 0 N/A N/A ESCAL.(EXCLUS) 0.00 0
SET HOST VAR. 0 INDEX 0 0 0 MAX PG/ROW LOCKS HELD 43.34 47
DESCRIBE 0.00 0 SET CUR.DEGREE 0 TABLESPACE 0 0 0 LOCK REQUEST 63.82 12318
DESC.TBL 0.00 0 SET RULES 0 DATABASE 0 0 0 UNLOCK REQUEST 14.48 2794
PREPARE 0.00 0 SET CURR.PATH 0 STOGROUP 0 0 0 QUERY REQUEST 0.00 0
OPEN 10.00 1930 SET CURR.PREC. 0 SYNONYM 0 0 N/A CHANGE REQUEST 33.35 6436
FETCH 10.00 1930 CONNECT TYPE 1 0 VIEW 0 0 N/A OTHER REQUEST 0.00 0
CLOSE 10.00 1930 CONNECT TYPE 2 0 ALIAS 0 0 N/A LOCK SUSPENSIONS 0.00 0
SET CONNECTION 0 PACKAGE N/A 0 N/A IRLM LATCH SUSPENSIONS 0.03 5
RELEASE 0 PROCEDURE 0 0 0 OTHER SUSPENSIONS 0.00 0
DML-ALL 90.00 17370 CALL 0 FUNCTION 0 0 0 TOTAL SUSPENSIONS 0.03 5
ASSOC LOCATORS 0 TRIGGER 0 0 N/A
ALLOC CURSOR 0 DIST TYPE 0 0 N/A
HOLD LOCATOR 0 SEQUENCE 0 0 0
FREE LOCATOR 0
DCL-ALL 0 TOTAL 0 0 0
RENAME TBL 0
COMMENT ON 0
LABEL ON 0
| Class 1 elapsed time: Compare this with the CICS, IMS, WebSphere, or distributed
| application transit times:
v In CICS, you can use CICS Performance Analyzer to find the attach and detach
times; use this time as the transit time.
v In IMS, use the PROGRAM EXECUTION time reported in IMS Performance
Analyzer.
Differences can also arise from thread reuse in CICS, IMS, WebSphere, or
distributed application processing or through multiple commits in CICS. If the
class 1 elapsed time is significantly less than the CICS or IMS time, check the
report from Tivoli Decision Support for OS/390, IMS Performance Analyzer, or an
equivalent reporting tool to find out why. Elapsed time can occur:
v In DB2, during sign-on, create thread, or terminate thread processing
v Outside DB2, during CICS or IMS processing
Not-in-DB2 time: This is time calculated as the difference between the class 1 and
the class 2 elapsed time. It is time spent outside of DB2, but within the DB2
accounting interval. A lengthy time can be caused by thread reuse, which can
increase class 1 elapsed time, or a problem in the application program, CICS, IMS,
or the overall system.
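As a formula:
not-in-DB2 time = class 1 elapsed time - class 2 elapsed time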
| The calculated not-in-DB2 time might be zero. Furthermore, this time calculation is
| only an estimate. A primary factor that is not included in the equation is the
| amount of time that requests wait for CPU resources while executing within the
| DDF address space. To determine how long requests wait for CPU resources, look
| at the NOT ACCOUNT field. The NOT ACCOUNT field shows the time that
| requests wait for CPU resources while a distributed task is inside DB2.
Lock/latch suspension time: This shows contention for DB2 and IRLM resources. If
contention is high, check the locking summary section of the report, and then
proceed with the locking reports. For more information, see “Scenario for
analyzing concurrency” on page 835.
Synchronous I/O suspension time: This is the total application wait time for
synchronous I/Os. It is the total of database I/O and log write I/O. In the
OMEGAMON accounting report, check the number reported for SYNCHRON. I/O
(B).
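In other words:
synchronous I/O suspension time = database I/O wait time + log write I/O wait time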
If the number of synchronous read or write I/Os is higher than expected, check
for:
v A change in the access path to data. If you have data from accounting trace class
8, the number of synchronous and asynchronous read I/Os is available for
individual packages. Determine which package or packages have unacceptable
counts of read I/Os.
If I/O time is greater than expected, and not caused by more read I/Os, check for:
v Synchronous write I/Os. See “Using OMEGAMON to monitor buffer pool
statistics” on page 645. (You can also use OMEGAMON, which contains the
function of DB2 BPA, to manage and analyze buffer pool activity.)
v I/O contention. In general, each synchronous read I/O typically takes from 5 to
20 milliseconds, depending on the disk device. This estimate assumes that there
are no prefetch or deferred write I/Os on the same device as the synchronous
I/Os. Refer to “Monitoring I/O activity of data sets” on page 661.
| v An increase in the number of users, or an increase in the amount of data. Also,
| drastic changes to the distribution of the data can cause the problem.
Other read suspensions: The accumulated wait time for read I/O done under a
thread other than this one. It includes time for:
v Sequential prefetch
v List prefetch
v Sequential detection
v Synchronous read I/O performed by a thread other than the one being reported
In the OMEGAMON accounting report, other read suspensions are reported in the
field OTHER READ I/O (D).
Other write suspensions: The accumulated wait time for write I/O done under a
thread other than this one. It includes time for:
v Asynchronous write I/O
v Synchronous write I/O performed by a thread other than the one being reported
Service task suspensions: The accumulated wait time from switching synchronous
execution units, by which DB2 switches from one execution unit to another. The
most common contributors to service task suspensions are:
| v Wait for phase 2 commit processing for updates, inserts, and deletes (UPDATE
| COMMIT). You can reduce this wait time by allocating the DB2 primary log on a
| faster disk. You can also help to reduce the wait time by reducing the number of
| commits per unit of work.
| v Wait for OPEN/CLOSE service task (including HSM recall). You can minimize
| this wait time by using two strategies. If DSMAX is frequently reached, increase
| DSMAX. If DSMAX is not frequently reached, change CLOSE YES to CLOSE NO
| on data sets that are used by critical applications.
v Wait for SYSLGRNG recording service task.
| v Wait for data set extend/delete/define service task (EXT/DEL/DEF). You can
| minimize this wait time by defining larger primary and secondary disk space
| allocation for the table space.
v Wait for other service tasks (OTHER SERVICE).
Archive log mode (QUIESCE): The accumulated time the thread was suspended
while processing ARCHIVE LOG MODE(QUIESCE). In the OMEGAMON
accounting report, this information is reported in the field ARCH.LOG (QUIES)
(G).
Archive log read suspension: This is the accumulated wait time the thread was
suspended while waiting for a read from an archive log on tape. In the
OMEGAMON accounting report, this information is reported in the field
ARCHIVE LOG READ (H).
Drain lock suspension: The accumulated wait time the thread was suspended
while waiting for a drain lock. If this value is high, see “Installation options for
wait times” on page 796.
Claim release suspension: The accumulated wait time the drainer was suspended
while waiting for all claim holders to release the object. If this value is high, see
“Installation options for wait times” on page 796, and consider running the
OMEGAMON locking reports for additional details.
Page latch suspension: This field shows the accumulated wait time because of page
latch contention. As an example, when the RUNSTATS and COPY utilities are run
with the SHRLEVEL(CHANGE) option, they use a page latch to serialize the
collection of statistics or the copying of a page. The page latch is a short duration
“lock”. If this value is high, the OMEGAMON locking reports can provide
additional data to help you determine which object is the source of the contention.
Not-accounted-for DB2 time: The DB2 accounting class 2 elapsed time that is not
recorded as class 2 CPU time or class 3 suspensions. The most common
contributors to this category are:
v z/OS paging
v Processor wait time
v On DB2 requester systems, the amount of time waiting for requests to be
returned from either VTAM or TCP/IP, including time spent on the network and
time spent handling the request in the target or server systems
v Time spent waiting for parallel tasks to complete (when query parallelism is
used for the query)
| v Some online performance monitoring
Figure 66 on page 617 shows which reports you might use, depending on the
nature of the problem, and the order in which to look at them.
(Figure 66, a report-selection flowchart that is not reproduced here, includes the
record trace, RMF reports, and the console log among its paths.)
If you suspect that the problem is in DB2, it is often possible to discover its general
nature from the accounting reports. You can then analyze the problem in detail
based on one of the branches shown in Figure 66:
v Follow the first branch, Application or data problem, when you suspect that the
problem is in the application itself or in the related data. Also use this path for a
further breakdown of the response time when no reason can be identified.
v The second branch, Concurrency problem, shows the reports required to
investigate a lock contention problem. This is illustrated in “Scenario for
analyzing concurrency” on page 835.
v Follow the third branch for a Global problem, such as an excessive average
elapsed time per I/O. A wide variety of transactions could suffer similar
problems.
Before starting the analysis in any of the branches, start the DB2 trace to support
the corresponding reports. When starting the DB2 trace:
v Refer to OMEGAMON Report Reference for the types and classes needed for each
report.
v To make the trace data available as soon as an experiment has been carried out,
| and to avoid flooding the SMF data sets with trace data, use a GTF data set as
the destination for DB2 performance trace data.
Alternatively, use the Collect Report Data function in OMEGAMON to collect
performance data. You specify only the report set, not the DB2 trace types or
classes you need for a specific report. Collect Report Data lets you collect data in
a TSO data set that is readily available for further processing. No SMF or GTF
handling is required.
v To limit the amount of trace data collected, you can restrict the trace to
particular plans or users in the reports for SQL activity or locking. However, you
cannot so restrict the records for performance class 4, which traces asynchronous
I/O for specific page sets. You might want to consider turning on selective traces
and be aware of the added costs incurred by tracing.
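For example, a qualified performance trace that writes to a GTF data set might be
started as follows. This is only a sketch: the plan name PLANX is a placeholder,
and the classes you need depend on the report you plan to produce.
START TRACE(PERFM) CLASS(2,3) PLAN(PLANX) DEST(GTF)
Stop the trace with the STOP TRACE command as soon as the measurement is
complete, to limit the added overhead.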
When CICS or IMS reports identify a commit, the time stamp can help you locate
the corresponding OMEGAMON accounting trace report.
| You can match DB2 accounting records with CICS accounting records. If you
| specify ACCOUNTREC(UOW) or ACCOUNTREC(TASK) on the DB2ENTRY RDO
| definition, the CICS LU 6.2 token is included in the DB2 trace records, in field
| QWHCTOKN of the correlation header, and DB2 writes an accounting record after
| every transaction.
As an alternative, you can produce OMEGAMON accounting reports that
summarize accounting records by CICS transaction ID. Use the OMEGAMON
function Correlation Translation to select the subfield containing the CICS
transaction ID for reporting.
| You can synchronize the statistics recording interval with the RMF reporting
| interval, using the STATIME and SYNCVAL subsystem parameters. STATIME
| specifies the length of the statistics interval, and SYNCVAL specifies that the
| recording interval is synchronized with some part of the hour.
| Example: If the RMF reporting interval is 15 minutes, you can set the STATIME to
| 15 minutes and SYNCVAL to 0 to synchronize with RMF at the beginning of the
| hour. These values cause DB2 statistics to be recorded at 15, 30, 45, and 60 minutes
| past the hour, matching the RMF report interval. Alternatively, because the RMF
| reporting interval is typically a multiple of 5 minutes, you could specify 5 for
| STATIME and 0 for SYNCVAL to synchronize the reporting times. These values
| produce synchronized statistics reports with more granularity and better
| synchronization.
| Synchronizing the statistics recording interval across data sharing members and
| with the RMF reporting interval is helpful because having the DB2 statistics, RMF,
| and CF data for identical time periods makes the problem analysis more accurate.
In general, you can improve the response time and throughput of your DB2
applications and queries by:
v “Reducing I/O operations”
v “Reducing the time needed to perform I/O operations” on page 622
v “Reducing processor resource consumption” on page 628
v “Response time reporting” on page 630
The chapter concludes with an overview of how various DB2 response times are
reported.
Using indexes can also minimize I/O operations. For information on indexes and
access path selection see “Overview of index access” on page 912.
Run RUNSTATS at least once against each table and its associated indexes. How
often you rerun the utility depends on how current you need the catalog data to be.
| For some tables, you will find no good time to run RUNSTATS. For example, you
| might use some tables for work that is in process. The tables might have only a
| few rows in the evening when it is convenient to run RUNSTATS, but they might
| have thousands or millions of rows in them during the day. For such tables,
| consider these possible approaches:
| v Set the statistics to a relatively high number and hope your estimates are
| appropriate.
| v Use volatile tables. For information about defining a table as volatile, see DB2
| SQL Reference.
| Whichever approach you choose, monitor the tables, because optimization is
| adversely affected by incorrect information.
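As a point of reference, a minimal RUNSTATS invocation looks like the following
sketch; the database and table space names are those of the DB2 sample objects
and are only illustrative:
RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
  TABLE(ALL) INDEX(ALL)
This collects statistics for all tables in the table space and for all of their indexes.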
You can use the PCTFREE and FREEPAGE clauses of the CREATE and ALTER
TABLESPACE statements and CREATE and ALTER INDEX statements to improve
the performance of INSERT and UPDATE operations. The table spaces and indexes
for the DB2 catalog can also be altered to modify FREEPAGE and PCTFREE. These
options are not applicable for LOB table spaces.
You can change the values of PCTFREE and FREEPAGE for existing indexes and
table spaces using the ALTER INDEX and ALTER TABLESPACE statements, but
the change has no effect until you load or reorganize the index or table space.
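For example, the following sketch (using a hypothetical sample table space name)
reserves 10% of each page and one free page after every 15 pages:
CREATE TABLESPACE examples are shown elsewhere in this chapter; to alter an
existing table space, you might enter:
ALTER TABLESPACE DSN8D81A.DSN8S81D
  PCTFREE 10 FREEPAGE 15;
Run REORG or LOAD REPLACE afterward so that the new values actually take
effect.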
When you specify a sufficient amount of free space, the advantages during normal
processing are:
Better clustering of rows (giving faster access)
Fewer overflows
Less frequent reorganizations needed
Less information locked by a page lock
Fewer index page splits
The default for PCTFREE for table spaces is 5 (5% of the page is free). If you have
previously used a large PCTFREE to force one row per page, you should instead
use MAXROWS 1 on the CREATE or ALTER TABLESPACE statement. MAXROWS
has the advantage of maintaining the free space even when new data is inserted.
The default for indexes is 10. The maximum amount of space that is left free in
index nonleaf pages is 10%, even if you specify a value higher than 10 for
PCTFREE.
To determine the amount of free space currently on a page, run the RUNSTATS
utility and examine the PERCACTIVE column of SYSIBM.SYSTABLEPART. See
Part 2 of DB2 Utility Guide and Reference for information about using RUNSTATS.
The maximum value you can specify for FREEPAGE is 255; however, in a
segmented table space, the maximum value is 1 less than the number of pages
specified for SEGSIZE.
Use of PCTFREE or FREEPAGE depends on the type of SQL and the distribution
of that activity across the table space or index. When deciding whether to allocate
free space, consider the data and each index separately and assess the insert and
update activity on the data and indexes.
When not to use free space: Free space is not necessary if:
v The object is read-only.
If you do not plan to insert or update data in a table, there is no need to leave
free space for either the table or its indexes.
v The object is not read-only, inserts are at the end, and updates that lengthen
varying-length columns are few.
For example, if inserts are in ascending order by key of the clustering index or
are caused by LOAD RESUME SHRLEVEL NONE and update activity is only
on fixed-length columns with non-compressed data, the free space for both the
table and clustering index should be zero. Generally, free space is beneficial for a
non-clustering index because inserts are usually random. However, if the
non-clustering index contains a column with a timestamp value that causes the
inserts into the index to be in sequence, the free space should be zero.
| If update activity on compressed data, which often results in longer rows, is heavy
| or insert volume is heavy, use a PCTFREE value greater than the default.
Additional recommendations:
v For concurrency, use MAXROWS or larger PCTFREE values for small tables and
shared table spaces that use page locking. This reduces the number of rows per
page, thus reducing the frequency that any given page is accessed.
v For the DB2 catalog table spaces and indexes, use the defaults for PCTFREE. If
additional free space is needed, use FREEPAGE.
However, many factors affect how you determine the number of buffer pools to
have and how big they should be. See “Determining size and number of buffer
pools” on page 639 for more information.
For information on parallel operations, see Chapter 34, “Parallel operations and
query performance,” on page 951.
For information on I/O scheduling priority, see “z/OS performance options for
DB2” on page 679.
Consider isolating data sets with characteristics that do not complement other data
sets. For example, do not put high volume transaction work that uses synchronous
reads on the same volume as something of lower importance that uses list
prefetch.
If you increase the number of data sets that are used for an index and spread those
data sets across the available I/O paths, you can reduce the physical contention on
the index. Using data-partitioned secondary indexes or making the piece size of a
nonpartitioned index smaller increases the number of data sets that are used for
the index.
| For a single query, the recommended number of work file disk volumes to have is
| one-fifth the maximum number of data partitions, with 5 as a minimum and 50 as
| a maximum. For concurrently running queries, multiply this value by the number
of concurrent queries.
Place these volumes on different channel or control unit paths. Monitor the I/O
activity for the work file table spaces, because you might need to further separate
this work file activity to avoid contention. As the amount of work file activity
increases, consider increasing the size of the buffer pool for work files to support
concurrent activities more efficiently. The general recommendation for the work file
buffer pool is to increase the size to minimize the following buffer pool statistics:
v MERGE PASSES DEGRADED, which should be less than 1% of MERGE PASS
REQUESTED
v WORKFILE REQUESTS REJECTED, which should be less than 1% of
WORKFILE REQUEST ALL MERGE PASSES
v Synchronous read I/O, which should be less than 1% of pages read by prefetch
v Occurrences of a prefetch quantity of 4 pages or less; the prefetch quantity
should normally be near 8
| During the installation or migration process, you allocated table spaces for 4-KB,
| 8-KB, 16-KB, and 32-KB buffering.
Steps to create an additional work file table space: Use the following steps to
create a new work file table space, xyz. (If you are using DB2-managed data sets,
omit the step to create the data sets.)
1. Define the required data sets using the VSAM DEFINE CLUSTER statement.
| You might want to use the definitions in the edited, installation job DSNTIJTM
| as a model. For more information about job DSNTIJTM and the number of
| work files, see DB2 Installation Guide.
2. Create the work file table space by entering the following SQL statement:
CREATE TABLESPACE xyz IN DSNDB07
BUFFERPOOL BP7
CLOSE NO
USING VCAT DSNC810;
When inserting records, DB2 preformats space within a page set as needed. The
allocation amount, which is either CYLINDER or TRACK, determines the amount
of space that is preformatted at any one time. See “Formatting early and speeding
up formatting” for a way you can preformat data using LOAD or REORG.
Because less space is preformatted at one time for the TRACK allocation amount, a
mass insert can take longer when the allocation amount is TRACK than the same
| insert when the allocation amount is CYLINDER. However, smart secondary space
| allocation minimizes the difference between TRACK and CYLINDER. Refer to
| “Secondary space allocation” on page 32 for information about smart secondary
| space allocation.
The allocation amount is dependent on device type and the number of bytes you
specify for PRIQTY and SECQTY when you define table spaces and indexes. The
default SECQTY is 10% of the PRIQTY, or 3 times the page size, whichever is
larger. This default quantity is an efficient use of storage allocation. Choosing a
SECQTY value that is too small in relation to the PRIQTY value results in track
allocation.
For more information about how space allocation amounts are determined, see the
description of the DEFINE CLUSTER command in DFSMS/MVS: Access Method
Services for the Integrated Catalog.
| Specifying sufficient primary and secondary allocations for frequently used data
| sets minimizes I/O time, because the data is not located at different places on the
| disks. Listing the catalog or VTOC occasionally to determine the number of
| secondary allocations that have been made for your more frequently used data sets
| can also be helpful. Alternatively, you can use IFCID 0258 in the statistics class 3
| trace to monitor data set extensions. OMEGAMON monitors IFCID 0258. You can
| define a threshold for the number of extents, and receive an alert when a data set
| exceeds that number of extents.
If you discover that the data sets backing frequently used table spaces or indexes
have an excessive number of extents, and if the data sets are user-defined, you can
use access method services to reallocate the affected data sets using a larger
primary allocation quantity. If the data sets were created using STOGROUPs, you
can use the procedure for modifying the definition of table spaces presented in
“Altering table spaces” on page 69.
Specify primary quantity for nonpartitioned indexes: To prevent wasted space for
nonpartitioned indexes, do one of the following things:
v Let DB2 use the default primary quantity and calculate the secondary quantities.
Do this by specifying 0 for the IXQTY subsystem parameter, and by omitting a
PRIQTY and SECQTY value in the CREATE INDEX statement or ALTER INDEX
statement. If a primary and secondary quantity were previously specified for an
index, you can specify PRIQTY -1 and SECQTY -1 to change to the default
primary quantity and calculated secondary quantity, as in the example after this
list.
v If the MGEXTSZ subsystem parameter is set to NO, so that you control
secondary space allocations, make sure that the value of PRIQTY + (N ×
SECQTY) is a value that evenly divides into PIECESIZE. For more information
about PIECESIZE, see Chapter 5 of DB2 SQL Reference.
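For example, to switch an existing nonpartitioned index to the default primary
quantity and a calculated secondary quantity, a sketch (the index name is a DB2
sample index and is only illustrative):
ALTER INDEX DSN8810.XEMP1
  PRIQTY -1 SECQTY -1;
The new quantities are used when the data sets for the index are next redefined,
for example by REORG.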
Global trace
Global trace requires 2% to 100% additional processor utilization. If conditions
permit at your site, the DB2 global trace should be turned off. You can do this by
specifying NO for the field TRACE AUTO START on panel DSNTIPN at
installation. Then, if the global trace is needed for serviceability, you can start it
using the START TRACE command.
| Exception: If you are using CICS Transaction Server for z/OS 2.2 with the Open
| Transaction Environment (OTE), activate and run class 2.
If you have very light DB2 usage and you are using Measured Usage, then you
need the SMF 89 records. In other situations, be sure that SMF 89 records are not
recorded to avoid this overhead.
Audit trace
The performance impact of auditing is directly dependent on the amount of audit
data produced. When the audit trace is active, the more tables that are audited and
the more transactions that access them, the greater the performance impact. The
overhead of audit trace is typically less than 5%.
When estimating the performance impact of the audit trace, consider the frequency
of certain events. For example, security violations are not as frequent as table
accesses. The frequency of utility runs is likely to be measured in executions per
day. In contrast, authorization changes can be numerous in a transaction
environment.
Performance trace
The combined overhead of all performance classes runs from about 20% to 100%.
The overhead for performance trace classes 1 through 3 is typically in the range of
| 5% to 30%. Therefore, turn on only the performance trace classes required to
| address a specific performance problem and qualify the trace as much as possible
| to limit the data that is gathered to only the data that you need. For example,
| qualify the trace by the plan name and IFCID.
Suppressing other trace options, such as the TSO, IRLM, z/OS, IMS, and CICS
traces, can also reduce overhead.
If you use ALTER to add a fixed-length column to a table, that column is treated as
variable-length until the table has been reorganized.
(Figure 67, not reproduced here, charts the components of transaction response
time: CICS/IMS elapsed time from the end of the transaction through commit
phase 1, commit phase 2, and thread termination, plus TP monitor and application
code time and line transmit, until the user receives the response.)
Figure 67. Transaction response times. Class 1 is standard accounting data. Class 2 is elapsed and processor time in
DB2. Class 3 is elapsed wait time in DB2. Standard accounting data is provided in IFCID 0003, which is turned on
with accounting class 1. When accounting classes 2 and 3 are turned on as well, IFCID 0003 contains additional
information about DB2 times and wait times.
If the row is changed, the data in the buffer must be written back to disk
eventually. But that write operation might be delayed until DB2 takes a checkpoint,
or until one of the related write thresholds is reached. (In a data sharing
environment, however, the writing mechanism is somewhat different. See Chapter
6 of DB2 Data Sharing: Planning and Administration for more information.)
The data remains in the buffer until DB2 decides to use the space for another page.
Until that time, the data can be read or changed without a disk I/O operation.
DB2 allows you to use up to 50 buffer pools that contain 4-KB buffers and up to 10
buffer pools each for 8-KB, 16-KB, and 32-KB buffers. You can set the size of each
of those buffer pools separately when installing DB2. You can change the sizes and
other characteristics of a buffer pool at any time while DB2 is running, by using
the ALTER BUFFERPOOL command.
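For example, the following command (the buffer pool name and size are
illustrative) changes the size of BP1 to 4000 buffers while DB2 remains active:
ALTER BUFFERPOOL(BP1) VPSIZE(4000)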
| Buffer Pool Analyzer: You can use the Buffer Pool Analyzer for z/OS to
| recommend buffer pool allocation changes and to do “what if” analysis of your
| buffer pools.
In-use pages: These are pages that are currently being read or updated. The data
they contain is available for use by other applications.
Updated pages: These are pages whose data has been changed but have not yet
| been written to disk.
Available pages: These pages can be considered for new use, to be overwritten by
an incoming page of new data. Both in-use pages and updated pages are
unavailable in this sense; they are not considered for new use.
Read operations
DB2 uses three read mechanisms: normal read, sequential prefetch, and list sequential
prefetch.
Normal read: Normal read is used when just one or a few consecutive pages are
retrieved. The unit of transfer for a normal read is one page.
Sequential prefetch: Sequential prefetch can be used to read data pages, by table
space scans or index scans with clustered data reference. It can also be used to
read index pages in an index scan. Sequential prefetch allows CP and I/O
operations to be overlapped.
List sequential prefetch: List sequential prefetch is used to prefetch data pages that
are not contiguous (such as through non-clustered indexes). List prefetch can also
be used by incremental image copy. For a complete description of the mechanism,
see “List prefetch (PREFETCH=L)” on page 934.
Write operations
Write operations are usually performed concurrently with user requests. Updated
pages are queued by data set until they are written when:
v A checkpoint is taken.
v The percentage of updated pages in a buffer pool for a single data set exceeds a
preset limit called the vertical deferred write threshold (VDWQT). For more
information on this threshold, see “Buffer pool thresholds” on page 635.
v The percentage of unavailable pages in a buffer pool exceeds a preset limit
called the deferred write threshold (DWQT). For more information on this
threshold, see “Buffer pool thresholds” on page 635.
Table 103 lists how many pages DB2 can write in a single I/O operation.
Table 103. Number of pages that DB2 can write in a single I/O operation
Page size Number of pages
| 2 KB 64
4 KB 32
You cannot use the ALTER statement to change the assignment of the catalog and
the directory. BP0 is the default buffer pool for sorting, but you can change that by
| assigning the work file table spaces to another buffer pool. BP0 has a default size
| of 20000 and a minimum size of 2000. As with any other buffer pool, you can
change the size using the ALTER BUFFERPOOL command.
DB2’s use of a buffer pool is governed by several preset values called thresholds.
Each threshold is a level of use which, when exceeded, causes DB2 to take some
action. Certain thresholds might indicate a buffer pool shortage problem, while
other thresholds merely report normal buffer management by DB2. The level of
use is usually expressed as a percentage of the total size of the buffer pool. For
example, the “immediate write threshold” of a buffer pool (described in more
detail later) is set at 97.5%. When the percentage of unavailable pages in a buffer
pool exceeds that value, DB2 writes pages to disk when updates are completed.
Thresholds for very small buffer pools: This section describes fixed and variable
thresholds that are in effect for buffer pools that are sized for the best performance;
that is, for buffer pools of 1000 buffers or more. For very small buffer pools, some
of the thresholds are lower to prevent “buffer pool full” conditions, but those
thresholds are not described.
Fixed thresholds
You cannot change some thresholds, such as the immediate write threshold.
Monitoring buffer pool usage includes noting how often those thresholds are
reached. If they are reached too often, the remedy is to increase the size of the
buffer pool, which you can do with the ALTER BUFFERPOOL command.
Increasing the size, though, can affect other buffer pools, depending on the total
amount of real storage available for your buffers.
The fixed thresholds are more critical for performance than the variable thresholds.
Generally, you want to set buffer pool sizes large enough to avoid reaching any of
these thresholds, except occasionally.
Each of the fixed thresholds is expressed as a percentage of the buffer pool that
might be occupied by unavailable pages.
From the highest value to the lowest value, the fixed thresholds are:
v Immediate write threshold (IWTH): 97.5%
This threshold is checked whenever a page is to be updated. If the threshold has
been exceeded, the updated page is written to disk as soon as the update
completes. The write is synchronous with the SQL request; that is, the request
waits until the write is completed. The two operations do not occur concurrently.
Reaching this threshold has a significant effect on processor usage and I/O
resource consumption. For example, updating three rows per page in 10
sequential pages ordinarily requires one or two write operations. However,
when IWTH has been exceeded, the updates require 30 synchronous writes.
Sometimes DB2 uses synchronous writes even when the IWTH has not been
exceeded. For example, when more than two checkpoints pass without a page
being written, DB2 uses synchronous writes. Situations such as these do not
indicate a buffer shortage.
v Data management threshold (DMTH): 95%
This threshold is checked before a page is read or updated. If the threshold is
not exceeded, DB2 accesses the page in the buffer pool once for each page, no
matter how many rows are retrieved or updated in that page. If the threshold is
exceeded, DB2 accesses the page in the buffer pool once for each row that is
retrieved or updated in that page.
Recommendation: Avoid reaching the DMTH because it has a significant effect
on processor usage.
The DMTH is maintained for each individual buffer pool. When the DMTH is
reached in one buffer pool, DB2 does not release pages from other buffer pools.
v Sequential prefetch threshold (SPTH): 90%
This threshold is checked at two different times:
– Before scheduling a prefetch operation. If the threshold has been exceeded,
the prefetch is not scheduled.
– During buffer allocation for an already-scheduled prefetch operation. If the
threshold has been exceeded, the prefetch is canceled.
When the sequential prefetch threshold is reached, sequential prefetch is
inhibited until more buffers become available. Operations that use sequential
prefetch, such as those using large and frequent scans, are adversely affected.
v Vertical deferred write threshold (VDWQT): This threshold applies to the
updated pages from a single page set. When the percentage or number of
updated pages for the data set exceeds the threshold, writes are scheduled for
that data set, up to 128 pages.
You can specify this threshold in one of two ways:
– As a percentage of the buffer pool that might be occupied by updated pages
from a single page set.
The default value for this threshold is 10%. You can change the percentage to
any value from 0% to 90%.
– As the total number of buffers in the buffer pool that might be occupied by
updated pages from a single page set.
You can specify the number of buffers from 0 to 9999. If you want to use the
number of buffers as your threshold, you must set the percentage threshold to
0.
Changing the threshold: Change the percent or number of buffers by using the
VDWQT keyword on the ALTER BUFFERPOOL command.
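For example, the following sketch (buffer pool name and values are illustrative)
sets the threshold to a fixed count of 128 buffers; the percentage must be 0 when
you use a buffer count:
ALTER BUFFERPOOL(BP6) VDWQT(0,128)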
Because any buffers that count toward VDWQT also count toward DWQT,
setting the VDWQT percentage higher than DWQT has no effect: DWQT is
reached first, write operations are scheduled, and VDWQT is never reached.
Therefore, the ALTER BUFFERPOOL command does not allow you to set the
VDWQT percentage to a value greater than DWQT. You can specify a number of
buffers for VDWQT that is higher than DWQT, but again, with no effect.
This threshold is overridden by certain DB2 utilities, which use a constant limit
of 64 pages rather than a percentage of the buffer pool size. LOAD, REORG, and
RECOVER use a constant limit of 128 pages.
| Setting VDWQT to 0: If you set VDWQT to zero, DB2 implicitly uses the
| smaller of 1% of the buffer pool or a fixed number of pages to avoid
| synchronous writes to disk. The fixed number of pages is determined by the
| buffer pool page size, as shown in Table 104:
| Table 104. Number of changed pages based on buffer pool page size
| Buffer pool page size Number of changed pages
| 4 KB 40
| 8 KB 24
| 16 KB 16
| 32 KB 12
For help in tuning your buffer pools, try the Buffer Pool Analyzer for z/OS.
Pages are frequently re-referenced and updated: Suppose that you have a
workload such as a branch table in a bank that contains a few hundred rows and
is updated by every transaction. For such a workload, you want a high value for
the deferred write and vertical deferred write threshold (90%). The result is that
I/O is deferred until DB2 checkpoint and you have a lower I/O rate to disk.
However, if the set of pages updated exceeds the size of the buffer pool, setting
both DWQT and VDWQT to 90% might cause the sequential prefetch threshold to
be reached, which inhibits sequential prefetch.
Pages are rarely referenced: Suppose that you have a customer table in a bank
that has millions of rows that are accessed randomly or are updated sequentially in
batch. In this case, lowering the DWQT or VDWQT thresholds (perhaps down to
0) can avoid a surge of write I/Os caused by DB2 checkpoint. Lowering those
thresholds causes the write I/Os to be distributed more evenly over time. This
also improves performance for the storage controller cache by avoiding the
problem of flooding the device at DB2 checkpoint.
Query-only buffer pools: For a buffer pool used exclusively for query processing,
setting VPSEQT to 100% is reasonable. If parallel query processing is a large part
of the workload, set VPPSEQT and, if applicable, VPXPSEQT, to a very high value.
Mixed workloads: For a buffer pool used for both query and transaction
processing, the value you set for VPSEQT should depend on the respective priority
of the two types of processing. The higher you set VPSEQT, the better queries tend
| to perform, at the expense of transactions. If you are not sure what value to set for
| VPSEQT, use the default setting.
Buffer pools containing LOBs: Put LOB data in buffer pools that are not shared
with other data. For both LOG YES and LOG NO LOBs, use a deferred write
threshold (DWQT) of 0. LOBs specified with LOG NO have their changed pages
written at commit time (force-at-commit processing). If you set DWQT to 0, those
writes happen continuously in the background rather than in a large surge at
commit.
LOBs defined with LOG YES can use deferred write, but by setting DWQT to 0,
you can avoid massive writes at DB2 checkpoints.
Accounting reports, which are application related, show the hit ratio for specific
applications. An accounting trace report shows the ratio for single threads. The
OMEGAMON buffer pool statistics report shows the hit ratio for the subsystem as
a whole. For example, the buffer-pool hit ratio is shown in field A in Figure 69 on
page 645. The buffer hit ratio uses the following formula to determine how many
getpage operations did not require an I/O operation:
Hit ratio = (getpages - pages_read_from_disk) / getpages
where pages_read_from_disk is the sum of the following fields:
v Number of synchronous reads (field B in Figure 69 on page 645)
v Number of pages read via sequential prefetch (field C)
v Number of pages read via list prefetch (field D)
v Number of pages read via dynamic prefetch (field E)
Example: If you have 1000 getpages and 100 pages were read from disk, the
equation would be as follows:
Hit ratio = (1000-100)/1000
The hit ratio in this case is 0.9.
Highest hit ratio: The highest possible value for the hit ratio is 1.0, which is
achieved when every page requested is always in the buffer pool. Index non-leaf
pages tend to have a very high hit ratio because they are frequently re-referenced
and thus tend to stay in the buffer pool.
Lowest hit ratio: The lowest hit ratio occurs when the requested page is not in the
buffer pool; in this case, the hit ratio is 0 or less. A negative hit ratio means that
prefetch has brought pages into the buffer pool that are not subsequently
referenced. The pages are not referenced because either the query stops before it
reaches the end of the table space or DB2 must take the pages away to make room
for newer ones before the query can access them.
A low hit ratio is not always bad: While it might seem desirable to make the
buffer hit ratio as close to 1.0 as possible, do not automatically assume a low
buffer-pool hit ratio is bad. The hit ratio is a relative value, based on the type of
application. For example, an application that browses huge amounts of data using
table space scans might very well have a buffer-pool hit ratio of 0. What you want
to watch for is those cases where the hit ratio drops significantly for the same
application. In those cases, it might be helpful to investigate further.
Hit ratios for additional processes: The hit ratio measurement becomes less
meaningful if the buffer pool is being used by additional processes, such as work
files or utilities. Some utilities and SQL statements use a special type of getpage
request that reserves an empty buffer without requiring that the page be read from
disk.
A getpage is issued for each empty work file page without read I/O during sort
input processing. The hit ratio can be calculated if the work files are isolated in
their own buffer pools. If they are, then the number of getpages used for the hit
ratio formula is divided in half as follows:
Hit ratio = ((getpages / 2) - pages_read_from_disk) / (getpages / 2)
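For example, with 2000 getpages against an isolated work file buffer pool and 100
pages read from disk (illustrative numbers):
Hit ratio = ((2000 / 2) - 100) / (2000 / 2) = 0.9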
| If insufficient real storage exists to back the buffer pool storage, the resulting
| paging activity may cause performance degradation. If you see significant paging
| activity, increase the amount of real storage or decrease the size of the buffer pools.
| Important: Sufficient real and auxiliary storage must exist to support the combined
| size of all the buffer pools that are defined. Insufficient storage might cause the
| system to enter into a wait state and to require an IPL.
| Allocating buffer pool storage: DB2 limits the total amount of storage that is
| allocated for virtual buffer pools to approximately twice the amount of real
| storage. However, to avoid paging, it is strongly recommended that you set the
| total buffer pool size to less than the real storage that is available to DB2.
| Recommendation: The total buffer pool storage should not exceed the available
| real storage.
| DB2 allocates the minimum buffer pool storage as shown in Table 105.
| Table 105. Buffer pool storage allocation
| Buffer pool page size Minimum number of pages allocated
| 4 KB 2000
| 8 KB 1000
| 16 KB 500
| 32 KB 250
| If the amount of virtual storage that is allocated to buffer pools is more than twice
| the amount of real storage, you cannot increase the buffer pool size.
Buffer Pool Analyzer: You can use the Buffer Pool Analyzer for z/OS to
recommend buffer pool allocation changes and to do “what if” analysis of your
buffer pools. Specifically, the “what if” analysis can help you compare the
performance of a single buffer pool and multiple buffer pools. Additionally, the
Buffer Pool Analyzer can recommend how to split buffer pools and where to place
| table space and index space objects.
Reasons to choose a single buffer pool: If your system has any or all of the
following conditions, it is probably best to choose a single 4 KB buffer pool:
| v It is already storage constrained.
v You have no one with the application knowledge necessary to do more
specialized tuning.
v It is a test system.
Reasons to choose more than one buffer pool: You can benefit from the
following advantages if you use more than one buffer pool:
| v You can isolate data in separate buffer pools to favor certain applications, data,
| and indexes. This benefit is twofold:
| – You can favor certain data and indexes by assigning more buffers. For
| example, you might improve the performance of large buffer pools by putting
| indexes into separate pools from data.
| – You can customize buffer pool tuning parameters to match the characteristics
| of the data. For example, you might want to put tables and indexes that are
| updated frequently into a buffer pool with different characteristics from those
| that are frequently accessed but infrequently updated.
v You can put work files into a separate buffer pool. This can provide better
performance for sort-intensive queries. Applications that use created temporary
tables use work files for those tables. Keeping work files separate allows you to
monitor temporary table activity more easily.
v This process of segregating different activities and data into separate buffer
pools has the advantage of providing good and relatively inexpensive
performance diagnosis data from statistics and accounting traces.
However, using the ALTER BUFFERPOOL command, you can also choose to have
DB2 use a first-in, first-out (FIFO) algorithm. With this simple algorithm, DB2 does
not keep track of how often a page is referenced—the pages that are oldest are
moved out, no matter how frequently they are referenced. This simple approach to
page stealing results in a small decrease in the cost of doing a getpage operation,
and it can reduce internal DB2 latch contention in environments that require very
high concurrency.
Recommendations:
v In most cases, keep the default, LRU.
v Use FIFO for buffer pools that have no I/O; that is, the table space or index
remains in the buffer pool. Because all the pages are there, the additional cost of
| a more complicated page management algorithm is not required. FIFO is also
| beneficial if buffer hit ratio is low, for example, less than 1%.
v Keep objects that can benefit from the FIFO algorithm in different buffer pools
from those that benefit from the LRU algorithm. See options for PGSTEAL in
ALTER BUFFERPOOL command in DB2 Command Reference.
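For example, to switch a buffer pool whose objects are entirely resident to FIFO
(the buffer pool name is illustrative):
ALTER BUFFERPOOL(BP3) PGSTEAL(FIFO)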
| Recommendation: Use PGFIX(YES) for buffer pools with a high I/O rate, that is, a
| high number of pages read or written. For buffer pools with zero I/O, such as
| some read-only data or some indexes with a nearly 100% hit ratio, PGFIX(YES) is
| not recommended. In these cases, PGFIX(YES) does not provide a performance
| advantage.
| To prevent PGFIX(YES) buffer pools from exceeding the real storage capacity, DB2
| uses an 80% threshold when allocating PGFIX(YES) buffer pools. If the threshold is
| exceeded, DB2 overrides the PGFIX(YES) option with PGFIX(NO).
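For example (the buffer pool name is illustrative):
ALTER BUFFERPOOL(BP1) PGFIX(YES)
The new PGFIX attribute generally takes effect the next time the buffer pool is
allocated.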
You can use the ALTER BUFFERPOOL command to change the following
attributes:
v Size
v Thresholds
v Page stealing algorithm
You can use the DISPLAY BUFFERPOOL command to display the current status of
one or more active or inactive buffer pools. For example, the following command
produces a detailed report of the status of BP0, as shown in Figure 68 on page 644.
DISPLAY BUFFERPOOL(BP0) DETAIL
+DISPLAY BPOOL(BP0) DETAIL
DSNB401I + BUFFERPOOL NAME BP0, BUFFERPOOL ID 0, USE COUNT 47
DSNB402I + BUFFERPOOL SIZE = 2000 BUFFERS
ALLOCATED = 2000 TO BE DELETED = 0
IN-USE/UPDATED = 0
DSNB406I + PAGE STEALING METHOD = LRU
DSNB404I + THRESHOLDS -
VP SEQUENTIAL = 80
DEFERRED WRITE = 85 VERTICAL DEFERRED WRT = 80, 0
PARALLEL SEQUENTIAL = 50 ASSISTING PARALLEL SEQ = 0
DSNB409I + INCREMENTAL STATISTICS SINCE 14:57:55 JAN 22, yyyy
DSNB411I + RANDOM GETPAGE = 491222 SYNC READ I/O (R) A = 18193
SEQ. GETPAGE = 1378500 SYNC READ I/O (S) B = 0
DMTH HIT = 0 PAGE-INS REQUIRED = 460400
DSNB412I + SEQUENTIAL PREFETCH -
REQUESTS D = 41800 PREFETCH I/O C = 14473
PAGES READ E = 444030
DSNB413I + LIST PREFETCH -
REQUESTS = 9046 PREFETCH I/O = 2263
PAGES READ = 3046
DSNB414I + DYNAMIC PREFETCH -
REQUESTS = 6680 PREFETCH I/O = 142
PAGES READ = 1333
DSNB415I + PREFETCH DISABLED -
NO BUFFER = 0 NO READ ENGINE = 0
DSNB420I + SYS PAGE UPDATES F = 220425 SYS PAGES WRITTEN G = 35169
ASYNC WRITE I/O = 5084 SYNC WRITE I/O = 3
PAGE-INS REQUIRED = 45
DSNB421I + DWT HIT H = 2 VERTICAL DWT HIT I = 0
NO WRITE ENGINE = 0
DSNB440I + PARALLEL ACTIVITY -
PARALLEL REQUEST = 0 DEGRADED PARALLEL = 0
DSNB441I + LPL ACTIVITY -
PAGES ADDED = 0
DSN9022I + DSNB1CMD ’+DISPLAY BPOOL’ NORMAL COMPLETION
| Because the number of synchronous read I/Os (A) and the number of SYS PAGE
| UPDATES (F) are relatively high, you would want to tune the buffer pools by
| changing the buffer pool specifications. For example, you could increase the buffer
| pool size.
| To obtain buffer pool information on a specific data set, you can use the LSTATS
option of the DISPLAY BUFFERPOOL command. For example, you can use the
LSTATS option to:
v Provide page count statistics for a certain index. With this information, you
could determine whether a query used the index in question, and perhaps drop
the index if it was not used.
v Monitor the response times on a particular data set. If you determine that I/O
contention is occurring, you could redistribute the data sets across your available
disks.
This same information is available with IFCID 0199 (statistics class 8).
The formula for the buffer-pool hit ratio (fields A through E) is explained in
“The buffer-pool hit ratio” on page 639.
Increase the buffer pool size or reduce the workload if:
v Sequential prefetch is inhibited. PREF.DISABLED-NO BUFFER (F) shows how
many times sequential prefetch is disabled because the sequential prefetch
threshold (90% of the pages in the buffer pool are unavailable) has been reached.
v You detect poor update efficiency. You can determine update efficiency by
checking the values in both of the following fields:
– BUFF.UPDATES/PAGES WRITTEN (H)
– PAGES WRITTEN PER WRITE I/O (J)
In evaluating the values you see in these fields, remember that no values are
absolutely acceptable or absolutely unacceptable. Each installation’s workload is
a special case. To assess the update efficiency of your system, monitor for overall
trends rather than for absolute high values for these ratios.
The following factors impact buffer updates per pages written and pages written
per write I/O:
– Sequential nature of updates
– Number of rows per page
– Row update frequency
For example, a batch program that processes a table in skip sequential mode
with a high row update frequency in a dedicated environment can achieve very
good update efficiency. In contrast, update efficiency tends to be lower for
transaction processing applications, because transaction processing tends to be
random.
The following factors affect the ratio of pages written per write I/O:
| – Checkpoint frequency. The CHECKPOINT FREQ field on installation panel
| DSNTIPN specifies either the number of consecutive log records written
| between DB2 system checkpoints or the number of minutes between DB2
| system checkpoints. You can use a large value to specify CHECKPOINT
| FREQ in number of log records; the default value is 500 000. You can use a
| small value to specify CHECKPOINT FREQ in minutes. If you specify
| CHECKPOINT FREQ in minutes, the recommended setting is 2 to 5 minutes.
| At checkpoint time, I/Os are scheduled to write all updated pages on the
| deferred write queue to disk. If system checkpoints occur too frequently, the
| deferred write queue does not grow large enough to achieve a high ratio of
| pages written per write I/O.
– Frequency of active log switch. DB2 takes a system checkpoint each time the
active log is switched. If the active log data sets are too small, checkpoints
occur often, which prevents the deferred write queue from growing large
enough to achieve a high ratio of pages written per write I/O. For
recommendations on active log data set size, see “Log capacity” on page 665.
– Buffer pool size. The deferred write thresholds (VDWQT and DWQT) are a
function of buffer pool size. If the buffer pool size is decreased, these
thresholds are reached more frequently, causing I/Os to be scheduled more
often to write some of the pages on the deferred write queue to disk. This
prevents the deferred write queue from growing large enough to achieve a
high ratio of pages written per write I/O.
| – Number of data sets, and the spread of updated pages across them. The maximum
| number of pages written per write I/O is 32, subject to a limiting scope of 180
| pages (roughly one cylinder).
| Example: If your application updates page 2 and page 179 in a series of
| pages, the two changed pages could potentially be written with one write
| I/O. But if your application updates page 2 and page 185 within a series of
| pages, writing the two changed pages would require two write I/Os because
| of the 180-page limit. Updated pages are placed in a deferred write queue.
| During the installation process, DSNTINST CLIST calculates the size of the EDM
| pool and the EDM DBD cache. You can check the calculated sizes on installation
| panel DSNTIPC. For more information on estimating and specifying the sizes, see
| DB2 Installation Guide.
| For data sharing, you might need to increase the EDM DBD cache storage estimate.
For more information, see Chapter 2 of DB2 Data Sharing: Planning and
Administration.
| Because of an internal process that changes the size of plans that are initially
| bound in one release and then rebound in a later release, you should carefully
| monitor the
| size of the EDM pool, the EDM DBD cache, and the EDM statement cache and
| increase their sizes, if necessary.
| By designing the EDM storage pools this way, you can avoid allocation I/Os,
which can represent a significant part of the total number of I/Os for a transaction.
You can also reduce the processing time necessary to check whether users
attempting to execute a plan are authorized to do so.
| Monitor and manage DBDs to prevent them from becoming too large. Very large
| DBDs can reduce concurrency and degrade the performance of SQL operations that
| create or alter objects because of increased I/O and logging. DBDs that are created
| or altered in DB2 Version 6 or later do not need contiguous storage, but can use
| pieces of approximately 32 KB. Older DBDs require contiguous storage.
Efficiency of the EDM pool: You can measure the efficiency of the EDM pool by
using the following ratios:
DBD HIT RATIO (%) C
CT HIT RATIO (%) D
PT HIT RATIO (%) E
These ratios for the EDM pool depend upon your location’s work load. In most
DB2 subsystems, a value of 80% or more is acceptable. This value means that at
least 80% of the requests were satisfied without I/O.
The number of free pages is shown in FREE PAGES (B) in Figure 70. If this value
is more than 20% of PAGES IN EDM STORAGE (A) during peak periods, the
EDM pool size is probably too large. In this case, you can reduce its size without
affecting the efficiency ratios significantly.
EDM statement cache hit ratio: If you have caching turned on for dynamic SQL,
the EDM storage statistics have information that can help you determine how
successful your applications are at finding statements in the cache. See mapping
macro DSNDQISE for descriptions of these fields.
EDM pool space utilization and performance: For smaller EDM pools, space
utilization or fragmentation is normally more critical than for larger EDM pools.
For larger EDM pools, performance is normally more critical. DB2 emphasizes
performance and uses less optimum EDM storage allocation when the EDM pool
size exceeds 40 MB. To have a system with an EDM pool larger than 40 MB
continue to use optimum EDM storage allocation at the cost of performance, you
can set the keyword EDMBFIT in the DSNTIJUZ job to YES. The EDMBFIT
keyword adjusts the search algorithm on systems with EDM pools that are larger
than 40 MB. The default NO tells DB2 to use a first-fit algorithm while YES tells
DB2 to use a better-fit algorithm.
| Recommendation: Set EDMBFIT to YES when EDMPOOL full conditions occur for
| an EDM pool size that exceeds 40 MB.
Use packages
By using multiple packages you can increase the effectiveness of EDM pool storage
management by having smaller objects in the pool.
To favor the selection and efficient completion of those access paths, increase the
maximum RID pool size. However, if there is not enough RID pool storage, it is
possible that the statement might revert to a table space scan.
A RID pool size of 0 disables those access paths. If you specify a RID pool size of
0, plans or packages that were previously bound with a non-zero RID pool size
might experience significant performance degradation. Rebind any plans or
packages that include SQL statements that use RID processing.
| The default RID pool size is 8 MB. You can override this value on installation
panel DSNTIPC.
To determine if a transaction used the RID pool, see the RID Pool Processing
section of the OMEGAMON accounting trace record.
| The RID pool, which all concurrent work shares, is limited to a maximum of 10 000
| MB. The RID pool is created at system initialization, but no space is allocated until
| RID storage is needed. It is then allocated in 32-KB blocks as needed, until the
| maximum size that you specified on installation panel DSNTIPC is reached.
Example: Three concurrent RID processing activities, with an average of 4000 RIDs
each, would require 120 KB of storage, because:
3 × 4000 × 2 × 5 bytes = 120 KB
Whether your SQL statements that use RID processing complete efficiently or not
depends on other concurrent work using the RID pool.
Controlling sort pool size and sort processing
Sort is invoked when a cursor is opened for a SELECT statement that requires
sorting. The maximum size of the sort work area allocated for each concurrent sort
user depends on the value you specified for the SORT POOL SIZE field on
| installation panel DSNTIPC. The default value is 2 MB.
For sort key length and sort data length, use values that represent the maximum
values for the queries you run. To determine these values, refer to fields
QW0096KL (key length) and QW0096DL (data length) in IFCID 0096, as mapped
by macro DSNDQW01. You can also determine these values from an SQL activity
trace.
If a column is in the ORDER BY clause that is not in the select clause, that column
should be included in the sort data length and the sort key length as shown in the
following example:
SELECT C1, C2, C3
FROM tablex
ORDER BY C1, C4;
If C1, C2, C3, and C4 are each 10 bytes in length, you could estimate the sort pool
size as follows:
| 32000 × (16 + 20 + (10 + 10 + 10 + 10)) = 2432000 bytes
The work files that are used in sort are logical work files, which reside in work file
table spaces in your work file database (which is DSNDB07 in a non data-sharing
| environment). DB2 uses the buffer pool when writing to the logical work file. Only
| the buffer pool size limits the number of work files that can be used for sorting.
A sort can complete in the buffer pool without I/Os. This is the ideal situation, but
it might be unlikely, especially if the amount of data being sorted is large. The sort
row size is actually made up of the columns being sorted (the sort key length) and
the columns the user selects (the sort data length). Having a very large buffer pool
for sort activity can help avoid disk I/Os.
To support large sorts, DB2 can allocate a single logical work file to several
physical work file table spaces.
v If I/Os occur in the sorting process, in the merge phase DB2 uses sequential
prefetch to bring pages into the buffer pool with a prefetch quantity of eight
pages. However, if the buffer pool is constrained, then DB2 uses a prefetch
quantity of four pages or less, or disables prefetch entirely because of the
unavailability of enough pages.
For any SQL statement that initiates sort activity, the OMEGAMON SQL activity
reports provide information on the efficiency of the sort that is involved.
Choose the controls that best match your goals. You may, for example, want to
minimize resource usage, maximize throughput or response time, ensure a certain
level of service to some users, or avoid conflicts between users. Your goal might be
to favor a certain class of users or to achieve the best overall system performance.
Many of the things you currently do for a single DB2 to improve response time or
reduce processor consumption also hold true in the data sharing environment.
Thus, most of the information in this chapter holds true for data sharing as well.
For more information about tuning in a data sharing environment, see Chapter 6 of
DB2 Data Sharing: Planning and Administration. For more information about
reducing resource usage in distributed applications, see “Tuning distributed
applications” on page 968. For more information about setting thread limits, see
Chapter 28, “Managing DB2 threads,” on page 695.
DB2 calculates the number of open data sets with the following formula:
concdb × {(tables × indexes) + tblspaces}
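For example, substituting illustrative values of concdb = 25, tables = 40,
indexes = 2, and tblspaces = 40:
25 × {(40 × 2) + 40} = 3000 open data sets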
Modifying DSMAX
The formula used by DB2 does not take partitioned or LOB table spaces into
account. Those table spaces can have many data sets. If you have many partitioned
table spaces or LOB table spaces, you might need to increase DSMAX. Don’t forget
to consider the data sets for nonpartitioned indexes defined on partitioned table
spaces. If those indexes are defined with a small PIECESIZE, there could be many
data sets. You can modify DSMAX by updating field DSMAX - MAXIMUM OPEN
DATA SETS on installation panel DSNTIPC.
Calculating the size of DSMAX: DSMAX should be larger than the maximum
number of data sets that are open and in use at one time. For the most accurate
count of open data sets, refer to the OPEN/CLOSE ACTIVITY section of the
OMEGAMON statistics report. Make sure the statistics trace was run at a peak
period, so that you can obtain the most accurate maximum figure.
| The best indicator of when to increase DSMAX is when the open and close activity
| of data sets is high, 1 per second as a general guideline. Refer to the
| OPEN/CLOSE value under the SER.TASK SWITCH section of the OMEGAMON
| accounting report. Consider increasing DSMAX when this value shows more than
| 1 event per second.
To calculate the total number of data sets (rather than the number that are open
during peak periods), you can do the following:
1. To find the number of simple and segmented table spaces, use the following
query. The calculation assumes that you have one data set for each simple,
segmented, and LOB table space.
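The query is along the following lines; this is a sketch only, and the DSNTESP
member mentioned below is the authoritative version:
SELECT CLOSERULE, COUNT(*)
  FROM SYSIBM.SYSTABLESPACE
  WHERE PARTITIONS = 0
  GROUP BY CLOSERULE;
Grouping by CLOSERULE separates the CLOSE YES counts from the CLOSE NO
counts.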
These catalog queries are included in DSNTESP in SDSNSAMP. You can use
them as input to SPUFI.
| 5. To find the total number of data sets, add the numbers that result from the four
| queries. (For Query 2, use the sum of the partitions that was obtained.)
Recommendations
As with many recommendations in DB2, you must weigh the cost of performance
versus availability when choosing a value for DSMAX. Consider the following
factors:
v For best performance, you should leave enough margin in your specification of
DSMAX so that frequently used data sets can remain open after they are no
longer in use.
This section describes how DB2 manages data set closing and how the CLOSE
value for a table space or index affects the process of closing an object’s data sets.
The term page set refers to a table space or index.
Logical close: This occurs when the application has been deallocated from that
page set. This is at either commit or deallocation time, depending on the
RELEASE(COMMIT/DEALLOCATE) option of the BIND command, and is driven
by the use count. When a page set is logically closed, the page set use count is
decremented. When the page set use count is zero, the page set is considered not
in use; this makes it a candidate for physical close.
Physical close: This happens when DB2 closes and deallocates the data sets for the
page set.
The value you specify for CLOSE determines the order in which page sets that are
not in use are closed. When the open data set count becomes greater than 99% of
DSMAX, DB2 first closes page sets defined with CLOSE YES. The least recently
used page sets are closed first.
If the number of open data sets cannot be limited by closing page sets or partitions
defined with CLOSE YES, DB2 then must close page sets or partitions defined with
CLOSE NO. The least recently used CLOSE NO data sets are closed first.
Delaying the physical closure of page sets or partitions until necessary is called
deferred close. Deferred closing of a page set or partition that is no longer being
used means that another application or user can access the table space and employ
the accompanying indexes without DB2 reopening the data sets.
You could find CLOSE NO appropriate for page sets that contain data you use
infrequently but that is so performance-critical that you cannot afford the delay of
opening the data sets.
If the number of open data sets is a concern, choose CLOSE YES for page sets with
many partitions or data sets.
# With data sharing, DB2 uses the CLOSE YES or CLOSE NO attribute of the table
# space or index space to determine whether to physically close infrequently
# accessed page sets that have had global buffer pool dependencies. Infrequently
# accessed CLOSE YES page sets are physically closed. Infrequently accessed CLOSE
# NO page sets remain open.
Updating SYSLGRNX: For both CLOSE YES and CLOSE NO page sets,
SYSLGRNX entries are updated when the page set is converted from read-write
state to read-only state. When this conversion occurs for table spaces, the
SYSLGRNX entry is closed and any updated pages are externalized to disk. For
indexes defined as COPY NO, there is no SYSLGRNX entry, but the updated pages
are externalized to disk.
This figure is important when you calculate the number of data paths for your
DB2 subsystem.
For transactions:
v DSNDB01.SCT02 and its index
v DSNDB01.SPT01 and its index
v DSNDB01.DBD01
v DSNDB06.SYSPLAN table space and indexes on SYSPLANAUTH table
v DSNDB06.SYSPKAGE
| v Active and archive logs
v Most frequently used user table spaces and indexes
For queries:
v DSNDB01.DBD01
v DSNDB06.SYSPLAN table space and indexes on SYSPLANAUTH
v DSNDB06.SYSPKAGE
v DSNDB06.SYSDBASE table space and its indexes
v DSNDB06.SYSVIEWS table space and the index on SYSVTREE
v Work file table spaces
v QMF system table data sets
v Most frequently used user table spaces and indexes
To change the size or location of DB2 catalog or directory data sets, you must run
the RECOVER utility on the appropriate database, or the REORG utility on the
appropriate table space. A hierarchy of recovery dependencies determines the
order in which you should try to recover data sets. This order is discussed in the
description of the RECOVER utility in Part 2 of DB2 Utility Guide and Reference.
More detailed information is available, at the cost of more overhead, from the I/O
trace (performance class 4). If you want information about I/O activity to the log
and BSDS data sets, use performance class 5.
You can also use RMF to monitor data set activity. SMF record type 42-6 provides
activity information and response information for a data set over a time interval.
These time intervals include the components of I/O time, such as IOS queue time.
Using RMF incurs about the same overhead as statistics class 8.
More about temporary work file result tables: Part 2 of DB2 Installation Guide
contains information about how to estimate the disk storage required for
temporary work file result tables. The storage is similar to that required for regular
tables. When a temporary work file result table is populated using an INSERT
statement, it uses work file space.
No other process can use the same work file space as that temporary work file
table until the table goes away. The space is reclaimed when the application
process commits or rolls back, or when it is deallocated, depending on which
RELEASE option was used when the plan or package was bound.
DB2 logging
DB2 logs changes made to data, and other significant events, as they occur. You
can find background information on the DB2 log in Chapter 17, “Managing the log
and the bootstrap data set,” on page 393. When you focus on logging performance
issues, remember that the characteristics of your workload have a direct effect on
log write performance. Long-running tasks that commit infrequently have a lot
| more data to write at commit than a typical transaction. These tasks can cause
| subsystem impact because of the excess storage consumption, locking contention,
| and resources that are consumed for a rollback.
Don’t forget to consider the cost of reading the log as well. The cost of reading the
log directly affects how long a restart or a recovery takes, because DB2 must read
the log data before applying the log records back to the table space.
Log writes
Log writes are divided into two categories: synchronous and asynchronous.
If there are two logs (recommended for availability), the write to the first log, in
general, must complete before the write to the second log begins. The first time a
log control interval is written to disk, the write I/Os to the log data sets are
performed in parallel. However, if the same 4 KB log control interval is again
written to disk, then the write I/Os to the log data sets must be done serially to
prevent any possibility of losing log data in case of I/O errors on both copies
simultaneously.
Two-phase commit log writes: Because they use two-phase commit, applications
that use the CICS, IMS, and RRS attachment facilities force writes to the log twice,
as shown in Figure 71. The first write forces all the log records of changes to be
written (if they have not been written previously because of the write threshold
being reached). The second write writes a log record that takes the unit of recovery
into an in-commit state.
v Choose fast devices for log data sets: The devices assigned to the active log
data sets must be fast ones. Because of its very high sequential performance, ESS
is particularly recommended for environments in which write activity is high,
because it helps avoid logging bottlenecks.
v Avoid device contention: Place the copy of the bootstrap data set and, if using
dual active logging, the copy of the active log data sets, on volumes that are
accessible on a path different than that of their primary counterparts.
v Preformat new active log data sets: Whenever you allocate new active log data
sets, preformat them using the DSNJLOGF utility described in Part 3 of DB2
Utility Guide and Reference. This action avoids the overhead of preformatting the
log, which normally occurs at unpredictable times.
Log reads
During rollback, restart, and database recovery, the performance impact of log
reads is evident. DB2 must read from the log and apply changes to the data on
disk. Every process that requests a log read has an input buffer dedicated to that
process. DB2 searches for log records in the following order:
1. Output buffer
2. Active log data set
3. Archive log data set
If the log records are in the output buffer, DB2 reads the records directly from that
buffer. If the log records are in the active or archive log, DB2 moves those log
records into the input buffer used by the reading process (such as a recovery job or
a rollback).
It is always fastest for DB2 to read the log records from the active log rather than
the archive log. Access to archived information can be delayed for a considerable
length of time if a unit is unavailable or if a volume mount is required (for
example, a tape mount).
Log capacity
The capacity that you specify for the active log affects DB2 performance
significantly. If you specify a capacity that is too small, DB2 might need to access
data in the archive log during rollback, restart, and recovery. Accessing an archive
takes a considerable amount of time.
The following subsystem parameters affect the capacity of the active log. In each
case, increasing the value you specify for the parameter increases the capacity of
the active log. See Part 2 of DB2 Installation Guide for more information on
updating the active log parameters. The parameters are:
v The NUMBER OF LOGS field on installation panel DSNTIPL controls the
number of active log data sets you create.
v The ARCHIVE LOG FREQ field on installation panel DSNTIPL is where you
provide an estimate of how often active log data sets are copied to the archive
log.
v The UPDATE RATE field on installation panel DSNTIPL is where you provide
an estimate of how many database changes (inserts, updates, and deletes) you
expect per hour.
The DB2 installation CLIST uses UPDATE RATE and ARCHIVE LOG FREQ to
calculate the data set size of each active log data set.
v The CHECKPOINT FREQ field on installation panel DSNTIPN specifies the
number of log records that DB2 writes between checkpoints or the number of
minutes between checkpoints.
Part 2 of DB2 Installation Guide goes into more detail on the relationships among
these parameters and their effects on operations and performance.
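In addition to setting CHECKPOINT FREQ on the installation panel, you can
change the checkpoint frequency without restarting DB2 by using the SET LOG
command; the value shown here is purely illustrative:
-SET LOG LOGLOAD(500000)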
| For more information about the change log inventory utility, see DB2 Utility Guide
| and Reference. For more information about the BSDS conversion utility, see Part 2 of
| DB2 Installation Guide.
Utilities
The utility operations REORG and LOAD LOG(YES) cause all reorganized or
loaded data to be logged. For example, if a table space contains 200 million rows
of data, this data, along with control information, is logged when this table space
is the object of a REORG utility job. If you use REORG with the DISCARD option
to eliminate old data in a table and run CHECK DATA to delete rows that are no
longer valid in dependent tables, you can use LOG(NO) to control log volume.
SQL
The amount of logging performed for applications depends on how much data is
changed. Certain SQL statements are quite powerful, making it easy to modify a
large amount of data with a single statement. These statements include:
v INSERT with a fullselect
v Mass deletes and mass updates (except for deleting all rows for a table in a
segmented table space)
v Data definition statements, which log the entire database descriptor for which
the change was made. For very large DBDs, this can be a significant amount of
logging.
v Modifications to a row that contains a LOB column defined as LOG YES.
For nonsegmented table spaces, each of these statements results in the logging of
all database data that change. For example, if a table contains 200 million rows of
data, that data and control information are logged if all of the rows are deleted in
a table using the SQL DELETE statement. No intermediate commit points are taken
during this operation.
For segmented table spaces, a mass delete results in the logging of the data of the
deleted records when any of the following conditions are true:
v The table is the parent table of a referential constraint.
v The table is defined as DATA CAPTURE(CHANGES), which causes additional
information to be logged for certain SQL operations.
v A delete trigger is defined on the table.
Recommendations:
v For mass delete operations, consider using segmented table spaces. If segmented
table spaces are not an option, create one table per table space and use LOAD
REPLACE with no rows in the input data set to empty the entire table space.
v For inserting a large amount of data, instead of using an SQL INSERT
statement, use the LOAD utility with LOG(NO) and take an inline copy (see the
sketch after this list).
v For updates, consider your workload when defining a table’s columns. The
amount of data that is logged for update depends on whether the row contains
all fixed-length columns or not. For fixed-length non-compressed rows, changes
are logged only from the beginning of the first updated column to the end of the
last updated column.
For varying-length rows, data is logged from the first changed byte to the end of
the last updated column. (A varying-length row contains one or more
varying-length columns.)
To determine your workload type, read-intensive or update-intensive, check the
log data rate. Use the formula in “Calculating average log record size” on page
666 to determine the average log size and divide that by 60 to get the average
number of log bytes written per second.
– If you log less than 5 MB per second, the workload is read-intensive.
– If you log more than 5 MB per second, the workload is update-intensive.
Table 108 on page 669 summarizes the recommendations for the type of row and
type of workload you run.
v If you have many data definition statements (CREATE, ALTER, DROP) for a
single database, issue them within a single unit of work to avoid logging the
changed DBD for each data definition statement. However, be aware that the
DBD is locked until the COMMIT is issued.
v Use LOG NO for any LOBs that require frequent updating and for which the
tradeoff of nonrecoverability of LOB data from the log is acceptable. (You can
still use the RECOVER utility on LOB table spaces to recover control information
that ensures physical consistency of the LOB table space.)
Because LOB table spaces defined as LOG NO are nonrecoverable from the DB2
log, make a recovery plan for that data. For example, if you run batch updates,
be sure to take an image copy after the updates are complete.
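The following is a minimal sketch of the LOAD approach recommended in the list
above; the table name and the SYSCOPY ddname for the inline copy are
hypothetical, and the SYSCOPY DD statement must exist in the utility job JCL:
LOAD DATA REPLACE LOG NO
  COPYDDN(SYSCOPY)
  INTO TABLE MYSCHEMA.MYTABLE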
To manage the use of disk, you can use RMF to monitor how your devices are
used. Watch for usage rates that are higher than 30% to 35%, and for disk devices
| with high activity rates. Log devices can have more than 50% utilization without
| performance problems.
In general, the primary allocation must be large enough to handle the storage
needs that you anticipate. The secondary allocation must be large enough for your
applications to continue operating until the data set is reorganized.
IFCID 0258 allows you to monitor data set extension activities by providing
information, such as the primary allocation quantity, maximum data set size, high
allocated space before and after extension activity, number of extents before and
after the extend, maximum volumes of a VSAM data set, and number of volumes
before and after the extend. Access IFCID 0258 in Statistics Class 3 (SC03) through
an IFI READA request.
See Chapter 3, “Creating storage groups and managing DB2 data sets,” on page 27
for more information about extending data sets.
With compressed data, you might see some of the following performance benefits,
depending on the SQL work load and the amount of compression:
v Higher buffer pool hit ratios
v Fewer I/Os
v Fewer getpage operations
Tuning recommendation
In some cases, using compressed data results in an increase in the number of
getpages, lock requests, and synchronous read I/Os. Sometimes, updated
compressed rows cannot fit in the home page, and they must be stored in the
overflow page. This can cause additional getpage and lock requests. If a page
contains compressed fixed-length rows with no free space, an updated row
probably has to be stored in the overflow page.
To avoid the potential problem of more getpage and lock requests, add more free
space within the page. Start with 10% additional free space and adjust further, as
needed. If, for example, 10% free space was used without compression, then start
with 20% free space with compression for most cases. This recommendation is
especially important for data that is heavily updated.
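A minimal sketch follows, with hypothetical database and table space names; the
new value takes effect when the table space is next loaded or reorganized:
ALTER TABLESPACE DSN8D81A.DSN8S81E PCTFREE 20;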
| Having an ascending index on C1 would not have prevented a sort to order the
| data. To avoid the sort, you needed a descending index on C1. Beginning in
| Version 8, DB2 can scan an index either forwards or backwards, which can
| eliminate the need to have indexes with the same columns but with different
| ascending and descending characteristics.
| However, DB2 would need to sort for either of these two ORDER BY clauses:
| v ORDER BY C1 ASC, C2 ASC
| v ORDER BY C1 DESC, C2 DESC
| For example, assume that you have an index key that is defined on a
| VARCHAR(128) column and the actual length of the key is 8 bytes. An index that
| is defined as NOT PADDED would require approximately 9 times less storage than
| an index that is defined as PADDED, as shown by the following calculation:
| (128 + 4) / (8 + 2 + 4) = 9
Improving real storage utilization
This section provides specific information for both real and virtual storage tuning.
With DB2, the amount of real storage often needs to be close to the amount of
virtual storage. For a general overview of some factors relating to virtual storage
planning, see Part 2 of DB2 Installation Guide.
Minimize storage needed for locks: You can save real storage by using the
LOCKSIZE TABLESPACE option on the CREATE TABLESPACE statements for
large tables, which affects concurrency. This option is most practical when
concurrent read activity without a write intent, or a single write process, is used.
You can use LOCKSIZE PAGE or LOCKSIZE ROW more efficiently when you
commit your data more frequently or when you use cursor stability with
CURRENTDATA NO. For more information on specifying LOCKSIZE
TABLESPACE, see “Monitoring of DB2 locking” on page 832.
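A minimal sketch, with hypothetical table space and database names:
CREATE TABLESPACE HISTTS
  IN DSN8D81A
  LOCKSIZE TABLESPACE;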
Reduce the number of open data sets: You can reduce the number of open data sets
by:
v Including multiple tables in segmented table spaces
v Using fewer indexes
v Reducing the value you use for DSMAX
Improve the performance for sorting: The highest performance sort is the sort that
is avoided. However, because some sorting cannot be avoided, make sorting as
efficient as possible. For example, assign your work file table spaces in database
DSNDB07, which are used in sorting, to a buffer pool other than BP0, such as
BP07.
# Release unused thread storage: If you experience virtual storage constraints due to
# thread storage, consider having DB2 periodically free unused thread storage. To
# release unused thread storage, specify YES for the CONTRACT THREAD STG
# field on installation panel DSNTIPE. However, you should use this option only
# when your DB2 subsystem has many long-running threads and your virtual
# storage is constrained. To determine the level of thread storage on your subsystem,
# see IFCID 0225 or QW0225AL in statistics trace class 6.
Ensure ECSA size is adequate: The extended common service area (ECSA) is a
system area that DB2 shares with other programs. Shortage of ECSA at the system
level leads to use of the common service area.
DB2 places some load modules and data into the common service area. These
modules require primary addressability to any address space, including the
application’s address space. Some control blocks are obtained from common
storage and require global addressability. For more information, see DB2 Installation
Guide.
Ensure EDM pool space is being used efficiently: Monitor your use of EDM pool
storage using DB2 statistics and see “Tips for managing EDM storage” on page
650.
Use less buffer pool storage: Using fewer and smaller buffer pools reduces the
amount of real storage space DB2 requires. Buffer pool size can also affect the
number of I/O operations performed; the smaller the buffer pool, the more I/O
operations needed. Also, some SQL operations, such as joins, can create a result
row that will not fit on a 4-KB page. For information about this, see “Making
buffer pools large enough for the workload” on page 622.
On completion of the call to LE, the token is returned to the pool. The MAXIMUM
LE TOKENS (LEMAX) field on installation panel DSNTIP7 controls the maximum
number of LE tokens that are active at any time. The LEMAX default value is 20
with a range of 0 to 50. If the value is zero, no tokens are available. If a large
number of functions are executing at the same time, all the tokens may be used.
Thus, if a statement needs a token and none is available, the statement is queued.
If the statistics trace QLENTRDY is very large, indicating a delay for an application
because an LE token is not immediately available, the LEMAX may be too small. If
the statistics trace QLETIMEW for cumulative time spent is very large, the LEMAX
may be too small. Increase the number of tokens for the MAXIMUM LE TOKENS
field on installation panel DSNTIP7.
| The levels in the DB2 storage hierarchy include real storage, storage controller
| cache, disk, and auxiliary storage. Minimizing the frequency of disk access to
| move data between real storage and storage devices is key to good performance.
Real storage
Real storage refers to the processor storage where program instructions reside
| while they are executing. It also refers to where data is held, for example, data in
| DB2 buffer pools that has not been paged out to auxiliary storage, the EDM pools,
| and the sort pool. To be used, data must either reside or be brought into processor
| storage or processor special registers. The maximum amount of real storage that
| one DB2 subsystem can use is the real storage of the processor, although other
| limitations may be encountered first.
DB2’s large capacity for buffers in real storage and its write avoidance and
sequential access techniques allow applications to avoid a substantial amount of
read and write I/O, combining single accesses into sequential access, so that the
disk devices are used more effectively.
| Storage servers
| An I/O subsystem typically consists of many storage disks, which are housed in
| storage servers, for example, the IBM Enterprise Storage Server (ESS). Storage
| servers provide increased functionality and performance over that of Just a Bunch
| of Disks (JBOD) technology.
| Cache is one of the additional functions. Cache acts as a secondary buffer as data
| is moved between real storage and disk. Storing the same data in processor storage
| and the cache is not useful. To be useful, the cache must be significantly larger
| than the buffers in real storage, store different data, or provide another
| performance advantage. ESS and many other new storage servers use large caches
| and always prestage the data in the cache. You do not need to actively manage the
| cache in the newer storage servers as you must do with older storage device types.
| With ESS and other new storage servers, disk performance does not generally
| affect sequential I/O performance. The measure of disk speed in terms of RPM
| (revolutions per minute) is relevant only if the cache hit ratio is low and the I/O
| rate is very high. If the I/O rate per disk is proportional to the disk size, small
| disks perform better than large disks. Large disks are very efficient for storing
| infrequently accessed data. As with cache, spreading the data across more disks is
| always better.
| However, not all channels are alike. ESCON channels, which used to be the
| predominant channel type, have a maximum instantaneous data transfer rate of
| approximately 17 MB per second. FICON channels currently have a speed of 200
| MB per second. FICON is the z/OS equivalent of Open Systems Fibre Channel
| Protocol (FCP). The FICON speed is bidirectional, theoretically allowing 200 MB
| per second to be sustained in both directions. Channel adaptors in the host
| processor and the storage server limit the actual speed. The FICON channels in the
| zSeries 900 servers are faster than those in the prior processors.
| Parallel Access Volumes (PAV): The Parallel Access Volumes (PAV) feature allows
| multiple concurrent I/Os on a given device when the I/O requests originate from
| the same system. PAVs make it possible to store multiple partitions on the same
| volume with almost no loss of performance. In older disk subsystems, if more than one
| partition is placed on the same volume (intentionally or otherwise), attempts to
| read the partitions result in contention, which shows up as I/O subsystem queue
| (IOSQ) time. Without PAVs, poor placement of a single data set can almost double
| the elapsed time of a parallel query.
| Flashcopy: The Flashcopy feature provides for fast copying of full volumes. After
| an initialization period is complete, the logical copy is considered complete but the
| physical movement of the data is deferred.
| Peer-to-Peer Remote Copy (PPRC): PPRC and PPRC XD (Extended Distance)
| provide a faster method for recovering DB2 subsystems at a remote site in the
| event of a disaster at the local site. For more information about using PPRC and
| PPRC XD, see “Backing up with RVA storage control or Enterprise Storage Server”
| on page 459.
The amount of storage controller cache: The amount of cache to use for DB2
depends primarily on the relative importance of price and performance. It is not
often effective to have large memory resources for both DB2 buffers and storage
controller cache. If you decide to concentrate on the storage controller cache for
performance gains, then use the maximum available cache size. If the cache is
substantially larger than the DB2 buffer pools, DB2 can make effective use of the
cache to reduce I/O times for random I/O. For sequential I/O, the improvement
the cache provides is generally small.
Sort work files: Sort work files can have a large number of concurrent processes
that can overload a storage controller with a small cache and thereby degrade
system performance. For example, one large sort could use 100 sequential files,
needing 60 MB of storage. Unless the cache sizes are large, you might need to
specify BYPASS on installation panel DSNTIPE or use DFSMS controls to prevent
the use of the cache during sort processing. Separate units for sort work can give
better performance.
WLM controls the dispatching priority based on the goals you supply. WLM raises
or lowers the priority as needed to meet the specified goal. Thus, you do not need
to fine-tune the exact priorities of every piece of work in the system and can focus
instead on business objectives.
Response times are appropriate goals for “end user” applications, such as QMF
users running under the TSO address space goals, or users of CICS using the CICS
work load goals. You can also set response time goals for distributed users, as
described in “Using z/OS workload management to set performance objectives” on
page 705.
For DB2 address spaces, velocity goals are more appropriate. A small amount of
the work done in DB2 is counted toward this velocity goal. Most of the work done
in DB2 applies to the end user goal.
This section describes ways to set performance options for DB2 address spaces:
v “Determining z/OS workload management velocity goals”
v “How DB2 assigns I/O priorities” on page 680
For information about WLM and defining goals through the service definition, see
z/OS MVS Planning: Workload Management.
The velocity goals for CICS and IMS regions are only important during startup or
restart. After transactions begin running, WLM ignores the CICS or IMS velocity
goals and assigns priorities based on the goals of the transactions that are running
in the regions. A high velocity goal is good for ensuring that startups and restarts
are performed as quickly as possible.
Similarly, when you set response time goals for DDF threads or for stored
procedures in a WLM-established address space, the only work controlled by the
DDF or stored procedure velocity goals is the DB2 service tasks (work performed
for DB2 that cannot be attributed to a single user). The user work runs under
separate goals for the enclave, as described in “Using z/OS workload management
to set performance objectives” on page 705.
For the DB2-established stored procedures address space, use a velocity goal that
reflects the requirements of the stored procedures in comparison to other
application work. Depending on what type of distributed work you do, this might
be equal to or lower than the goal for PRODREGN.
IMS BMPs can be treated along with other batch jobs or given a velocity goal,
depending on what business and functional requirements you have at your site.
Each of the objectives presented in Table 111 is matched with a control facility that
you can use to achieve the objective.
Table 111. Controlling the use of resources
v Objective: Prioritizing resources
  How to accomplish it: z/OS workload management
  Where it is described: “Prioritizing resources” on page 682, “z/OS performance
  options for DB2” on page 679, and “Using z/OS workload management to set
  performance objectives” on page 705
v Objective: Limiting resources for each job
  How to accomplish it: Time limit on job or step (through z/OS system settings
  or JCL)
  Where it is described: “Limiting resources for each job” on page 682
v Objective: Limiting resources for TSO sessions
  How to accomplish it: Time limit for TSO logon
  Where it is described: “Limiting resources for TSO sessions” on page 682
v Objective: Limiting resources for IMS and CICS
  How to accomplish it: IMS and CICS controls
  Where it is described: “Limiting resources for IMS and CICS” on page 682
v Objective: Limiting resources for a stored procedure
  How to accomplish it: ASUTIME column of the SYSIBM.SYSROUTINES catalog
  table
  Where it is described: “Limiting resources for a stored procedure” on page 683
v Objective: Limiting dynamic statement execution time
  How to accomplish it: QMF governor and DB2 resource limit facility
  Where it is described: “Resource limit facility (governor)” on page 683
v Objective: Reducing locking contention
  How to accomplish it: DB2 locking parameters, DISPLAY DB LOCKS, lock trace
  data, database design
  Where it is described: Chapter 30, “Improving concurrency,” on page 773
v Objective: Evaluating long-term resource usage
  How to accomplish it: Accounting trace data, OMEGAMON reports
  Where it is described: “OMEGAMON” on page 1162
v Objective: Predicting resource consumption
  How to accomplish it: DB2 EXPLAIN statement, Visual Explain, DB2 Estimator,
  predictive governing capability
  Where it is described: Chapter 33, “Using EXPLAIN to improve SQL
  performance,” on page 891 and “Predictive governing” on page 690
v Objective: Controlling use of parallelism
  How to accomplish it: DB2 resource limit facility, SET CURRENT DEGREE
  statement
  Where it is described: “Disabling query parallelism” on page 965
In other environments such as batch and TSO, which typically have a single task
requesting DB2 services, the task-level processor dispatching priority is irrelevant.
Access to processor and I/O resources for synchronous portions of the request is
governed solely by WLM.
Refer to the z/OS MVS JCL User's Guide for more information on setting resource
limits.
You can find more information about setting the resource limit for a TSO session in
these manuals:
v z/OS TSO/E Programming Guide
v z/OS TSO/E Customization
For information about controlling the amount of storage used by stored procedures
address spaces, see “Controlling address space storage” on page 983.
Data sharing: See DB2 Data Sharing: Planning and Administration for information
about special considerations for using the resource limit facility in a data sharing
group.
Creating an RLST
Resource limit specification tables can reside in any database; however, because a
database has some special attributes while the resource limit facility is active, it is
best to put RLSTs in their own database.
When you install DB2, installation job DSNTIJSG creates a database, table space,
table, and descending index for the resource limit specification. You can tailor
those statements. For more information about job DSNTIJSG, see Part 2 of DB2
Installation Guide.
To create a new resource limit specification table, use the following statements, also
included in installation job DSNTIJSG. You must have sufficient authority to define
objects in the DSNRLST database and to specify authid, which is the authorization
ID specified on field RESOURCE AUTHID of installation panel DSNTIPP.
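As a sketch of the 11-column format (the authoritative statement is in DSNTIJSG,
and the exact column attributes and table space name there may differ):
CREATE TABLE authid.DSNRLST01
  (AUTHID         CHAR(8)  NOT NULL WITH DEFAULT,
   PLANNAME       CHAR(8)  NOT NULL WITH DEFAULT,
   ASUTIME        INTEGER,
   LUNAME         CHAR(8)  NOT NULL WITH DEFAULT,
   RLFFUNC        CHAR(1)  NOT NULL WITH DEFAULT,
   RLFBIND        CHAR(1)  NOT NULL WITH DEFAULT,
   RLFCOLLN       CHAR(18) NOT NULL WITH DEFAULT,
   RLFPKG         CHAR(8)  NOT NULL WITH DEFAULT,
   RLFASUERR      INTEGER,
   RLFASUWARN     INTEGER,
   RLF_CATEGORY_B CHAR(1)  NOT NULL WITH DEFAULT)
  IN DSNRLST.DSNRLS01;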
All future column names defined by IBM will appear as RLFxxxxx. To avoid future
naming conflicts, begin your own column names with characters other than RLF.
Creating the index: To create an index for the 11-column format, use the
following SQL:
CREATE UNIQUE INDEX authid.DSNARLxx
ON authid.DSNRLSTxx
(RLFFUNC, AUTHID DESC, PLANNAME DESC,
RLFCOLLN DESC, RLFPKG DESC, LUNAME DESC)
CLUSTER CLOSE NO;
The xx in the index name (DSNARLxx) must match the xx in the table name
(DSNRLSTxx) and it must be a descending index.
To insert, update, or delete from the resource limit specification table, you need
only the usual table privileges on the RLST. No higher authority is required.
Starting and stopping the RLST: Activate any particular RLST by using the DB2
command START RLIMIT ID=xx where xx is the two-character identifier that you
specified on the name DSNRLSTxx. This command gives you the flexibility to use
a different RLST at different times; however, only one RLST can be active at a time.
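For example, to activate the RLST that was created as DSNRLST01 (a hypothetical
identifier) and later deactivate the facility, you might issue:
-START RLIMIT ID=01
-STOP RLIMIT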
Example: You can use different RLSTs for the day shift and the evening shift, as
shown in Table 112 and Table 113.
Table 112. Example of RLST for the day shift
AUTHID    PLANNAME    ASUTIME    LUNAME
BADUSER   (blank)     0          LUDBD1
ROBYN     (blank)     100000     LUDBD1
(blank)   PLANA       300000     LUDBD1
(blank)   (blank)     50000      LUDBD1
Table 113. Example of RLST for the night shift. During the night shift, AUTHID ROBYN and
all PLANA users from LUDBD1 run without limit.
AUTHID    PLANNAME    ASUTIME    LUNAME
BADUSER   (blank)     0          LUDBD1
ROBYN     (blank)     NULL       LUDBD1
(blank)   PLANA       NULL       LUDBD1
(blank)   (blank)     50000      LUDBD1
At installation time, you can specify a default RLST to be used each time that DB2
is restarted. For more information on resource limit facility subsystem parameters,
see Part 2 of DB2 Installation Guide.
If the governor is active and you restart it without stopping it, any jobs that are
active continue to use their original limits, and all new jobs use the limits in the
new table.
If you stop the governor while a job is executing, the job runs with no limit, but its
processing time continues to accumulate. If you later restart the governor, the new
limit takes effect for an active job only when the job passes one of several internal
checkpoints. A typical dynamic statement, which builds a result table and fetches
from it, passes those checkpoints at intervals that can range from moments to hours.
As a result, your change to the governor might not stop an active job within the
time you expect.
Use the DB2 command CANCEL THREAD to stop an active job that does not pick
up the new limit when you restart the governor.
You cannot stop a database or table space that contains an active RLST; nor can
you start the database or table space with ACCESS(UT).
Search order: DB2 tries to find the most exact match when it determines which row
to use for a particular function. The search order depends on which function is
being requested (type of governing, bind operations, or parallelism restrictions).
The search order is described under each of those functions.
AUTHID
The resource specification limits apply to this primary authorization ID. A
blank means that the limit specifications in this row apply to all
authorization IDs for the location that is specified in LUNAME.
PLANNAME
The resource specification limits apply to this plan. If you are specifying a
function that applies to plans (RLFFUNC=blank or '6'), a blank means that
the limit specifications in this row apply to all plans for the location that is
specified in LUNAME. Qualify by plan name only if the dynamic
statement is issued from a DBRM bound in a plan, not a package;
otherwise, DB2 does not find this row. If the RLFFUNC column contains a
function for packages ('1,' '2,' or '7'), this column must be blank; if it is not
blank, the row is ignored.
ASUTIME
The number of processor service units allowed for any single dynamic
SELECT, INSERT, UPDATE, or DELETE statement. Use this column for
reactive governing.
Other possible values and their meanings are:
null No limit
0 (zero) or a negative value
No dynamic SELECT, INSERT, UPDATE, or DELETE statements
are permitted.
The governor samples the processing time in service units. Service units
are independent of processor changes. The processing time for a particular
SQL statement varies according to the processor on which it is executed,
but the number of service units required remains roughly constant. The
service units consumed are not identical across different processors because the
Important: Make sure the value for RLFASUWARN is less than that for
RLFASUERR. If the warning value is higher, the warning is never reported.
The error takes precedence over the warning.
RLF_CATEGORY_B
Used for predictive governing (RLFFUNC='6' or '7'). Tells the governor the
default action to take when the cost estimate for a given statement falls
into cost category B, which means that the predicted cost is indeterminate
and probably too low. You can tell if a statement is in cost category B by
running EXPLAIN and checking the COST_CATEGORY column of the
DSN_STATEMNT_TABLE.
The acceptable values are:
blank By default, prepare and execute the SQL statement.
Y Prepare and execute the SQL statement.
N Do not prepare or execute the SQL statement. Return SQLCODE
-495 to the application.
W Complete the prepare, return SQLCODE +495, and allow the
application logic to decide whether to execute the SQL statement
or not.
Any statement that exceeds a limit you set in the RLST terminates with a -905
SQLCODE and a corresponding '57014' SQLSTATE. You can establish a single limit
for all users, different limits for individual users, or both. Limits do not apply to
primary or secondary authorization IDs with installation SYSADM or installation
SYSOPR authority. For queries entering DB2 from a remote site, the local site limits
are used.
See “Qualifying rows in the RLST” for more information about how to qualify
rows in the RLST. See “Predictive governing” on page 690 for more information
about using predictive governing.
The first row in Table 114 shows that when Joe runs PLANA at the local
location, there are no limits for any dynamic statements in that plan.
The second row shows that if anyone runs WSPLAN from SAN_JOSE, the
dynamic statements in that plan are restricted to 15 000 SUs each.
The first row in Table 115 shows that when Joe runs any package in collection
# DSNESPCS from the local location, dynamic statements are restricted to 40 000
SUs.
The second row indicates that if anyone from any location (including the local
# location) runs SPUFI package DSNESM68, dynamic statements are limited to
15 000 SUs.
Setting a default for when no row matches: If no row in the RLST matches the
currently executing statement, then DB2 uses the default set on the RLST ACCESS
ERROR field of installation panel DSNTIPO (for queries that originate locally) or
DSNTIPR (for queries that originate remotely). This default applies to reactive
governing only.
Predictive governing
DB2’s predictive governing capability has an advantage over the reactive governor
in that it avoids wasting processing resources: it gives you the ability to prevent a
query from running when the query appears likely to exceed processing limits.
With the reactive governor, those resources are already used before the query is
stopped.
See Figure 73 on page 691 for an overview of how predictive governing works.
The figure shows the following decision logic. For a statement in cost category A,
a cost estimate greater than RLFASUERR returns SQLCODE -495 and the statement
is not executed; a cost estimate greater than RLFASUWARN but not greater than
RLFASUERR returns SQLCODE +495 and the application decides whether to
execute; otherwise the statement is executed. For a statement in cost category B,
the RLF_CATEGORY_B value determines the action: ’Y’ executes the statement,
’W’ returns SQLCODE +495 and lets the application decide, and ’N’ returns
SQLCODE -495 and the statement is not executed.
Example: Table 116 is an RLST with two rows that use predictive governing.
Table 116. Predictive governing example
RLFFUNC AUTHID RLFCOLLN RLFPKG RLFASUWARN RLFASUERR RLF_CATEGORY_B
7 (blank) COLL1 C1PKG1 900 1500 Y
7 (blank) COLL2 C2PKG1 900 1500 W
The rows in the RLST for this example cause DB2 to act as follows for all dynamic
INSERT, UPDATE, DELETE, and SELECT statements in the packages listed in this
table (C1PKG1 and C2PKG1):
Cost category B: The two rows differ only in how statements in cost category B are
treated. For C1PKG1, the statement is executed. For C2PKG1, the statements
receive a +495 SQLCODE, and the user or application must decide whether to
execute the statement.
The rows in the RLST for this example cause DB2 to act as follows for a dynamic
SQL statement that runs under PLANA:
Predictive mode:
v If the statement is in COST_CATEGORY A and the cost estimate is greater than
1000 SUs, USER1 receives SQLCODE -495 and the statement is not executed.
v If the statement is in COST_CATEGORY A and the cost estimate is greater than
800 SUs but less than 1000 SUs, USER1 receives SQLCODE +495.
v If the statement is in COST_CATEGORY B, USER1 receives SQLCODE +495.
Reactive mode: In either of the following cases, a statement is limited to 1100 SUs:
v The cost estimate for a statement in COST_CATEGORY A is less than 800 SUs.
v The cost estimate for a statement in COST_CATEGORY A is greater than 800 SUs
and less than 1000 SUs, or the statement is in COST_CATEGORY B, and the user
chooses to execute the statement.
However, if you don’t have statement table information or if there are ad hoc
queries for which you have no information, you can use the following formula to
calculate SU time:
SU time = processor time × service units per second value
The value for service units per second depends on the processor model. You can
find this value for your processor model in z/OS MVS Initialization and Tuning
Guide, where SRM is discussed.
For example, if processor A is rated at 900 service units per second and you do not
want any single dynamic SQL statement to use more than 10 seconds of processor
time, you could set ASUTIME as follows:
ASUTIME = 10 seconds × 900 service units/second = 9000 service units
Later, you could upgrade to processor B, which is rated at 1000 service units per
second. If the value you set for ASUTIME remains the same (9000 service units),
your dynamic SQL is only allowed 9 seconds for processing time but an equivalent
number of processor service units:
ASUTIME = 9 seconds × 1000 service units/second = 9000 service units
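To put such a limit into effect for reactive governing, you insert a row into the
active RLST. A sketch follows, reusing names from the earlier examples (the RLST
identifier is hypothetical); blank RLFFUNC and PLANNAME values mean reactive
governing of dynamic statements for all plans run by this AUTHID:
INSERT INTO authid.DSNRLST01
  (RLFFUNC, AUTHID, PLANNAME, ASUTIME, LUNAME)
  VALUES(' ', 'ROBYN', ' ', 9000, 'LUDBD1');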
When writing an application, you should know when threads are created and
terminated and when they can be reused, because thread allocation can be a
significant part of the cost in a short transaction.
This chapter provides a general introduction on how DB2 uses threads. It includes
the following sections:
v A discussion of how to choose the maximum number of concurrent threads, in
“Setting thread limits”
v A description of the steps in creating and terminating an allied thread, in “Allied
thread allocation” on page 696
v An explanation of the differences between allied threads and database access
threads (DBATs) and a description of how DBATs are created, including how
they become active or pooled and how to set performance goals for individual
DDF threads, under “Database access threads” on page 701
v Design options for reducing thread allocations and improving performance
generally, under “CICS design options” on page 708, “IMS design options” on
page 708, “TSO design options” on page 709, and “DB2 QMF design options” on
page 710
Set these values to provide good response time without wasting resources, such as
virtual and real storage. The value you specify depends upon your machine size,
your work load, and other factors. When specifying values for these fields,
consider the following:
v Fewer threads than needed underutilize the processor and cause queuing for
threads.
v More threads than needed do not improve the response time. They require more
real storage for the additional threads and might cause more paging and, hence,
performance degradation.
| If virtual storage or real storage is the limiting factor, set MAX USERS and MAX
REMOTE ACTIVE according to the available storage. For more information on
storage, refer to Part 2 of DB2 Installation Guide.
Thread limits for TSO and call attachment: For the TSO and call attachment
facilities, you limit the number of threads indirectly by choosing values for the
MAX TSO CONNECT and MAX BATCH CONNECT fields of installation panel
DSNTIPE.
The most relevant factors from a system performance point of view are:
Thread reuse: Thread creation is a significant cost for small and medium
transactions. When execution of a transaction is terminated, the thread can
sometimes be reused by another transaction using the same plan. For more
information on thread reuse, see “Providing for thread reuse” on page 699.
The most important performance factors for resource allocation are the same as the
factors for thread creation.
When the package is allocated, DB2 checks authorization using the package
authorization cache or the SYSPACKAUTH catalog table. DB2 checks to see that
the plan owner has execute authority on the package. On the first execution, the
information is not in the cache; therefore, the catalog is used. Thereafter, the cache
is used. For more information about package authorization caching, see “Caching
authorization IDs for best performance” on page 152.
For dynamic bind, authorization checking also occurs at statement execution time.
A summary record, produced at the end of the statement (IFCID 0058), contains
information about each scan that is performed. Included in the record is the
following information:
v The number of updated rows
v The number of processed rows
v The number of deleted rows
v The number of examined rows
v The number of pages that are requested through a getpage operation
v The number of rows that are evaluated during the first stage (stage 1) of
processing
The significant events that show up in a performance trace of a commit and thread
termination operation are as follows:
1. Commit phase 1
In commit phase 1 (IFCID 0084), DB2 writes an end of phase 1 record to the log
(IFCIDs 0032 and 0033). There are two I/Os, one to each active log data set
(IFCIDs 0038 and 0039).
2. Commit phase 2
In commit phase 2 (IFCID 0070), DB2 writes a beginning of phase 2 record to
the log. Again, the trace shows two I/Os. Page and row locks (except those
protecting the current position of cursors declared with the WITH HOLD
option), held to a commit point, are released. An unlock (IFCID 0021) with a
requested token of zeros frees any lock for the specified duration. A summary
lock record (IFCID 0020) is produced, which gives the maximum number of
page locks held and the number of lock escalations. DB2 writes an end of phase
2 record to the log.
If RELEASE(COMMIT) is used, the following events also occur:
v Table space locks are released.
v All the storage used by the thread is freed, including storage for control
blocks, CTs and PTs, and working areas.
v The use counts of the DBDs are decreased by one. If space is needed in the
EDM DBD cache, a DBD can be freed when its use count reaches zero.
v Those table spaces and index spaces with no claimers are made candidates
for deferred close. See “Understanding the CLOSE YES and CLOSE NO
options” on page 658 for more information on deferred close.
For more information about DB2 QMF connections, see DB2 Query Management
Facility: Installing and Managing DB2 QMF for TSO/CICS.
Later in this chapter, the following sections cover thread reuse for specific
situations:
v “Reusing threads for remote connections” on page 705 provides information on
the conditions for thread reuse for database access threads.
v “CICS design options” on page 708 tells how to write CICS transactions to reuse
threads.
v “IMS design options” on page 708 tells how to write IMS transactions to reuse
threads.
This technique is accurate in IMS but not in CICS, where threads are reused
frequently by the same user. For CICS, also consider looking at the number of
commits and aborts per thread. For CICS:
v NEW USER (A) is thread reuse with a different authorization ID or transaction
code.
| v RESIGN-ON (D) is thread reuse with the same authorization ID.
| Database access threads differ from allied threads in the following ways:
| v Database access threads have two modes of processing: ACTIVE MODE and
| INACTIVE MODE. These modes are controlled by setting the DDF THREADS
| field on the installation panel DSNTIPR.
| – ACTIVE MODE: When the value of DDF THREADS is ACTIVE, a database
| access thread is always active from initial creation to termination.
| – INACTIVE MODE: When the value of DDF THREADS is INACTIVE, a
| database access thread can be active or pooled. When a database access
| thread in INACTIVE MODE is active, it is processing requests from client
| connections within units of work. When a database access thread is pooled, it
| is waiting for the next request from a client to start a new unit of work.
| v Database access threads run in enclave SRB mode.
| v Only in INACTIVE MODE is a database access thread terminated, after it has
| processed 200 units of work or after the database access thread has been idle
| in the pool for the amount of time specified in the POOL THREAD TIMEOUT
| field on the installation panel DSNTIP5. This termination does not prevent
| another database access thread from being created to meet processing demand,
| as long as the value of the MAX REMOTE ACTIVE field on panel DSNTIPE has
| not been reached.
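To observe how many database access threads are currently active or pooled, one
option is the DISPLAY DDF command with the DETAIL keyword (the exact fields
reported vary by release):
-DISPLAY DDF DETAIL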
| In the MAX REMOTE CONNECTED field of panel DSNTIPE, you can specify up
| to 150 000 as the maximum number of remote connections that can concurrently
| exist within DB2. This upper limit is only obtained if you specify the
| Figure 75. Relationship between active threads and maximum number of connections.
| Using threads in INACTIVE MODE for DRDA-only connections
| When the value of DDF THREADS is INACTIVE, DRDA connections with a client
| cause a pooling behavior in database access threads. A different database access
| behavior is used when private-protocol connections are involved. For information
| about private-protocol connections and database access threads, see “Using threads
| with private-protocol connections” on page 704.
| A database access thread that is not currently processing a unit of work is called a
| pooled thread, and it is disconnected. DB2 always tries to pool database access
| threads, but in some cases cannot do so. The conditions listed in Table 120
| determine if a thread can be pooled.
| Table 120. Requirements for pooled threads
| If there is...                                            Thread can be pooled?
| A DRDA hop to another location                            Yes
| A package that is bound with RELEASE(COMMIT)              Yes
| A package that is bound with RELEASE(DEALLOCATE)          Yes
| A held cursor, a held LOB locator, or a package bound
| with KEEPDYNAMIC(YES)                                     No
| A declared temporary table that is active (the table was
| not explicitly dropped through the DROP TABLE statement
| or the ON COMMIT DROP TABLE clause on the DECLARE
| GLOBAL TEMPORARY TABLE statement)                         No
|
| When the conditions listed in Table 120 are true, the thread can be pooled when a
| COMMIT is issued. After a ROLLBACK, a thread can be pooled even if it had
| open cursors defined WITH HOLD or a held LOB locator, because ROLLBACK
| closes all cursors and LOB locators.
| Recommendation: Use the default timeout value with the option ACTIVE for the
| DDF THREADS field on DSNTIPR. If you specify a timeout interval with
| ACTIVE, an application must start its next unit of work within the timeout
| period or risk being canceled.
| TCP/IP keep_alive interval for the DB2 subsystem: For TCP/IP connections, it is a
| good idea to specify the IDLE THREAD TIMEOUT value in conjunction with a
| TCP/IP keep_alive interval of 5 minutes or less to make sure that resources are not
| locked for a long time when a network outage occurs. It is recommended that you
| override the TCP/IP stack keep_alive interval on a single DB2 subsystem by
| specifying a numeric value in the field TCP/IP KEEPALIVE on installation panel
| DSNTIPS.
| The MAX INACTIVE DBATS setting determines whether DB2 will reduce the
| memory footprint of a private-protocol connection that involves a database access
| thread. When a private-protocol connection ends a unit of work, DB2 first
| compares the number of current inactive database access threads to the value that
| is specified for your installation for MAX INACTIVE DBATS. Based on these
| values, DB2 either makes the thread inactive or allows it to remain active:
| v If the current number of inactive database access threads is below the value in
| MAX INACTIVE DBATS, the thread becomes inactive. It cannot be used by
| another connection.
| v If the current number of inactive database access threads meets or exceeds the
| value in MAX INACTIVE DBATS, the thread remains active. However, too many
| active threads (that is, more than MAX REMOTE ACTIVE) can cause the thread
| and its connection to be terminated.
| To limit the number of inactive database access threads that can be created, specify
| a value in the MAX INACTIVE DBATS field of installation panel DSNTIPR. The
| default is 0, which means that any database access thread that involves a
| private-protocol-connection remains active.
If your server is not DB2 UDB for z/OS, or some other server that can reuse
threads, then reusing threads for your requesting CICS, IMS, or RRS applications is
not a benefit for distributed access. Thread reuse occurs when sign-on occurs with
a new authorization ID. If that request is bound for a server that does not support
thread reuse, that change in the sign-on ID causes the connection between the
requester and server to be released so that it can be rebuilt again for the new ID.
The z/OS performance objective of the DDF address space does not govern the
performance objective of the user thread. As described in “z/OS performance
options for DB2” on page 679, you should assign the DDF address space to a z/OS
performance objective that is similar to the DB2 database services address space
(ssnmDBM1). The z/OS performance objective of the DDF address space
determines how quickly DB2 is able to perform operations associated with
managing the distributed DB2 work load, such as adding new users or removing
users that have terminated their connections. This performance objective should be
a service class with a single velocity goal. This performance objective is assigned
by modifying the WLM Classification Rules for started tasks (STC).
Figure 76 on page 707 shows how you can associate DDF threads with service
classes.
-------Qualifier------------- -------Class--------
Action Type Name Start Service Report
DEFAULTS: PRDBATCH ________
____ 1 SI DB2P ___ PRDBATCH ________
____ 2 CN ONLINE ___ PRDONLIN ________
____ 2 PRC PAYPROC ___ PRDONLIN ________
____ 2 UI SYSADM ___ PRDONLIN ________
____ 2 PK QMFOS2 ___ PRDQUERY ________
____ 1 SI DB2T ___ TESTUSER ________
____ 2 PRC PAYPROCT ___ TESTPAYR ________
****************************** BOTTOM OF DATA *****************************
Figure 76. Classifying DDF threads using z/OS workload management. You assign
performance goals to service classes using the services classes menu of WLM.
To design performance strategies for these threads, take into account the events
that cause a DDF thread to reset its z/OS performance period. The z/OS
performance period is reset by terminating the z/OS enclave for the thread and
creating a new z/OS enclave for the thread, as described in “Monitoring
distributed processing with RMF” on page 981.
Because threads that are always active do not terminate the enclave and thus do
not reset the performance period to period 1, a long-running thread always ends
up in the last performance period. Any new business units of work that use that
thread will suffer the performance consequences. This makes performance periods
unattractive for long-running threads. For always active threads, therefore, use
velocity goals and use a single-period service class.
Threads are assigned a service class by the classification rules in the active WLM
service policy. Each service class period has a performance objective (goal), and
WLM raises or lowers that period’s access to system resources as needed to meet
the specified goal. For example, the goal might be “application APPL8 should run
in less than 3 seconds of elapsed time 90% of the time”.
No matter what your service class goals are, a request to start an address space
might time out, depending on the timeout value that is specified in the TIMEOUT
VALUE field of installation panel DSNTIPX. If the timeout value is too small, you
might need to increase it to account for a busy system.
Because DB2 must be stopped to set new values, consider setting a higher MAX
BATCH CONNECT for batch periods. The statistics record (IFCID 0001) provides
information on the create thread queue. The OMEGAMON statistics report (in
Figure 77 on page 710) shows that information under the SUBSYSTEM SERVICES
section.
For more information on these aspects of DB2 QMF and how they affect
performance, see DB2 Query Management Facility: Installing and Managing DB2 QMF
for TSO/CICS.
This chapter tells you how to improve the performance of your queries. It begins
with “General tips and questions.”
If you still have performance problems after you have tried the suggestions in
these sections, you can use other, more risky techniques. See “Special techniques to
influence access path selection” on page 754 for information.
| For example, assume that a host variable and an SQL column are defined as
| follows:
| C language declaration          SQL definition
| char string_hv[15]              STRING_COL CHAR(12)
|
| A predicate such as WHERE STRING_COL > :string_hv is not a matching predicate
| for an index scan because the length of string_hv is greater than the length of
| STRING_COL. One way to avoid an inefficient predicate using character host
| variables is to declare the host variable with a length that is less than or equal to
| the column length:
| char string_hv[12]
| Because this is a C language example, the host variable length could be 1 byte
| greater than the column length:
| char string_hv[13]
If you are unsure, run EXPLAIN on the query with both a correlated and a
noncorrelated subquery. By examining the EXPLAIN output and understanding
your data distribution and SQL statements, you should be able to determine which
form is more efficient.
This general principle can apply to all types of predicates. However, because
subquery predicates can potentially be thousands of times more processor- and
I/O-intensive than all other predicates, the order of subquery predicates is
particularly important.
Refer to “DB2 predicate manipulation” on page 735 to see in what order DB2 will
evaluate predicates and when you can control the evaluation order.
If your query involves the functions MAX or MIN, refer to “One-fetch access
(ACCESSTYPE=I1)” on page 917 to see whether your query could take advantage
of that method.
See “Using host variables efficiently” on page 739 for more information.
DB2 might not determine the best access path when your queries include
correlated columns. If you think you have a problem with column correlation, see
“Column correlation” on page 732 for ideas on what to do about it.
For example, suppose that a query contains a predicate such as
WHERE SALARY*(1 + :hv1) > 50000, in which the column appears inside an
arithmetic expression. If you rewrite the predicate in the following way, DB2 can
evaluate it more efficiently:
WHERE SALARY > 50000/(1 + :hv1)
In the second form, the column is by itself on one side of the operator, and all the
other values are on the other side of the operator. The expression on the right is
called a noncolumn expression. DB2 can evaluate many predicates with noncolumn
expressions at an earlier stage of processing called stage 1, so the queries take less
time to run.
| For information about materialized query tables, see Chapter 31, “Using
| materialized query tables,” on page 845.
Example: The following query has three predicates: an equal predicate on C1, a
BETWEEN predicate on C2, and a LIKE predicate on C3.
SELECT * FROM T1
WHERE C1 = 10 AND
C2 BETWEEN 10 AND 20 AND
C3 NOT LIKE ’A%’
Effect on access paths: This section explains the effect of predicates on access
paths. Because SQL allows you to express the same query in different ways,
knowing how predicates affect path selection helps you write queries that access
data efficiently.
Properties of predicates
Predicates in a HAVING clause are not used when selecting access paths; hence, in
this section the term ’predicate’ means a predicate after WHERE or ON.
There are special considerations for “Predicates in the ON clause” on page 718.
Predicate types
The type of a predicate depends on its operator or syntax. The type determines
what type of processing and filtering occurs when the predicate is evaluated.
Table 122 shows the different predicate types.
Table 122. Definitions and examples of predicate types

Type      Definition                                            Example
Subquery  Any predicate that includes another SELECT            C1 IN (SELECT C10 FROM TABLE1)
          statement.
Equal     Any predicate that is not a subquery predicate and    C1=100
          has an equal operator and no NOT operator. Also
          included are predicates of the form C1 IS NULL and
          C1 IS NOT DISTINCT FROM.
Range     Any predicate that is not a subquery predicate and    C1>100
          has an operator in the following list: >, >=, <, <=,
          LIKE, or BETWEEN.
IN-list   A predicate of the form column IN (list of values).   C1 IN (5,10,15)
NOT       Any predicate that is not a subquery predicate and    COL1 <> 5 or COL1 NOT BETWEEN 10 AND 20
          contains a NOT operator. Also included are
          predicates of the form C1 IS DISTINCT FROM.
Example: Influence of type on access paths: The following two examples show how
the predicate type can influence DB2’s choice of an access path. In each one,
assume that a unique index I1 (C1) exists on table T1 (C1, C2), and that all values
of C1 are positive integers.
DB2 chooses the index access in this case because the index is highly selective on
column C1.
Examples: If the employee table has an index on the column LASTNAME, the
following predicate can be a matching predicate:
SELECT * FROM DSN8810.EMP WHERE LASTNAME = ’SMITH’;
| By contrast, a predicate that compares a character column with a longer constant
| is not indexable, because the length of the column is shorter than the length of
| the constant.
| Example: The following predicate is not stage 1:
| DECCOL>34.5, where DECCOL is defined as DECIMAL(18,2)
| The predicate is not stage 1 because the precision of the decimal column is
| greater than 15.
| If T2 is the first table in the join sequence, the predicate is stage 1, but if T1 is
| the first table in the join sequence, the predicate is stage 2.
| You can determine the join sequence by executing EXPLAIN on the query and
| examining the resulting plan table. See Chapter 33, “Using EXPLAIN to improve
| SQL performance,” on page 891 for details.
All indexable predicates are stage 1. The predicate C1 LIKE ’%BC’ is stage 1, but is
not indexable.
Effect on access paths: In single-index processing, only Boolean term predicates are
chosen for matching predicates. Hence, only indexable Boolean term predicates are
candidates for matching index scans. To match index columns by predicates that
are not Boolean terms, DB2 considers multiple-index access.
In join operations, Boolean term predicates can reject rows at an earlier stage than
can non-Boolean term predicates.
For left and right outer joins, and for inner joins, join predicates in the ON clause
are treated the same as other stage 1 and stage 2 predicates. A stage 2 predicate in
the ON clause is treated as a stage 2 predicate of the inner table.
In an outer join, predicates that are evaluated after the join are stage 2 predicates.
Predicates in a table expression can be evaluated before the join and can therefore
be stage 1 predicates.
Example: In the following statement, the predicate “EDLEVEL > 100” is evaluated
before the full join and is a stage 1 predicate:
SELECT * FROM (SELECT * FROM DSN8810.EMP
WHERE EDLEVEL > 100) AS X FULL JOIN DSN8810.DEPT
ON X.WORKDEPT = DSN8810.DEPT.DEPTNO;
For more information about join methods, see “Interpreting access to two or more
tables (join)” on page 918.
The second set of rules describes the order of predicate evaluation within each of
the stages.
After both sets of rules are applied, predicates are evaluated in the order in which
they appear in the query. Because you specify that order, you have some control
over the order of evaluation.
By using correlation names, the query treats one table as if it were two
separate tables. Therefore, indexes on columns C1 and C2 are considered for
access.
4. If the subquery has already been evaluated for a given correlation value, then
the subquery might not have to be reevaluated.
5. Not indexable or stage 1 if a field procedure exists on that column.
6. The column on the left side of the join sequence must be in a different table
from any columns on the right side of the join sequence.
7. The tables that contain the columns in expression1 or expression2 must already
have been accessed.
8. The processing for WHERE NOT COL = value is like that for WHERE COL <>
value, and so on.
9. If noncol expr, noncol expr1, or noncol expr2 is a noncolumn expression of one of
these forms, then the predicate is not indexable:
v noncol expr + 0
v noncol expr - 0
v noncol expr * 1
v noncol expr / 1
v noncol expr CONCAT empty string
| 10. COL, COL1, and COL2 can be the same column or different columns. The
| columns are in the same table.
| 11. Any of the following sets of conditions make the predicate stage 2:
| v The left side of the join sequence is DECIMAL(p,s), where p>15, and the
| right side of the join sequence is REAL or FLOAT.
Example: Suppose that DB2 can determine that column C1 of table T contains only
five distinct values: A, D, Q, W and X. In the absence of other information, DB2
estimates that one-fifth of the rows have the value D in column C1. Then the
predicate C1=’D’ has the filter factor 0.2 for table T.
How DB2 uses filter factors: Filter factors affect the choice of access paths by
estimating the number of rows qualified by a set of predicates.
Recommendation: Control the first two of those variables when you write a
predicate. Your understanding of how DB2 uses filter factors should help you write
more efficient predicates.
Values of the third variable, statistics on the column, are kept in the DB2 catalog.
You can update many of those values, either by running the utility RUNSTATS or
by executing UPDATE for a catalog table. For information about using RUNSTATS,
see “Gathering monitor statistics and update statistics” on page 875. For
information on updating the catalog manually, see “Updating catalog statistics” on
page 764.
If you intend to update the catalog with statistics of your own choice, you should
understand how DB2 uses:
Example: The default filter factor for the predicate C1 = ’D’ is 1/25 (0.04). If D is
actually not close to 0.04, the default probably does not lead to an optimal access
path.
Table 124. DB2 default filter factors by predicate type
Predicate Type Filter Factor
Col = literal 1/25
Col <> literal 1 – (1/25)
Col IS NULL 1/25
| Col IS NOT DISTINCT FROM 1/25
| Col IS DISTINCT FROM 1 – (1/25)
Col IN (literal list) (number of literals)/25
Col Op literal 1/3
Col LIKE literal 1/10
Col BETWEEN literal1 and literal2 1/10
Note:
Op is one of these operators: <, <=, >, >=.
Literal is any constant value that is known at bind time.
Example: If D is one of only five values in column C1, using RUNSTATS puts the
value 5 in column COLCARDF of SYSCOLUMNS. If there are no additional
statistics available, the filter factor for the predicate C1 = ’D’ is 1/5 (0.2).
Table 125. DB2 uniform filter factors by predicate type
Predicate type Filter factor
Col = literal 1/COLCARDF
Col <> literal 1 – (1/COLCARDF)
Col IS NULL 1/COLCARDF
| Col IS NOT DISTINCT FROM 1/COLCARDF
| Col IS DISTINCT FROM 1 – (1/COLCARDF)
Col IN (literal list) number of literals /COLCARDF
Col Op1 literal interpolation formula
Col Op2 literal interpolation formula
Filter factors for other predicate types: The examples in Table 124 on page 727 and
Table 125 on page 727 represent only the most common types of predicates. If P1
is a predicate and F is its filter factor, then the filter factor of the predicate NOT
P1 is (1 - F). However, filter factor calculation depends on many things, so a
specific filter factor cannot be given for all predicate types.
Interpolation formulas
Definition: For a predicate that uses a range of values, DB2 calculates the filter
factor by an interpolation formula. The formula is based on an estimate of the ratio
of the number of values in the range to the number of values in the entire column
of the table.
The formulas: The formulas that follow are rough estimates, subject to further
modification by DB2. They apply to a predicate of the form col op literal. The
value of (Total Entries) in each formula is estimated from the values in columns
HIGH2KEY and LOW2KEY in catalog table SYSIBM.SYSCOLUMNS for column col:
Total Entries = (HIGH2KEY value - LOW2KEY value).
v For the operators < and <=, where the literal is not a host variable:
(Literal value - LOW2KEY value) / (Total Entries)
v For the operators > and >=, where the literal is not a host variable:
(HIGH2KEY value - Literal value) / (Total Entries)
v For LIKE or BETWEEN:
(High literal value - Low literal value) / (Total Entries)
For the predicate C1 BETWEEN 800 AND 1100, DB2 calculates the filter factor F as:
F = (1100 - 800)/1200 = 1/4 = 0.25
Defaults for interpolation: DB2 might not interpolate in some cases; instead, it can
use a default filter factor. Defaults for interpolation are:
v Relevant only for ranges, including LIKE and BETWEEN predicates
v Used only when interpolation is not adequate
v Based on the value of COLCARDF
When they are used: Table 127 lists the types of predicates on which these statistics
are used.
Table 127. Predicates for which distribution statistics are used

Type of statistic   Single column or        Predicates
                    concatenated columns
Frequency           Single                  COL=literal
                                            COL IS NULL
                                            COL IN (literal-list)
                                            COL op literal
                                            COL BETWEEN literal AND literal
                                            COL=host-variable
                                            COL1=COL2
                                            T1.COL=T2.COL
                                            COL IS NOT DISTINCT FROM
Frequency           Concatenated            COL=literal
                                            COL IS NOT DISTINCT FROM
| You can run RUNSTATS without the FREQVAL option, with the FREQVAL option
| in the correl-spec, with the FREQVAL option in the colgroup-spec, or in both, with
| the following effects:
| v If you run RUNSTATS without the FREQVAL option, RUNSTATS inserts rows
| for the 10 most frequent values for the first column of the specified index.
| v If you run RUNSTATS with the FREQVAL option in the correl-spec, RUNSTATS
| inserts rows for concatenated columns of an index. The NUMCOLS option
| specifies the number of concatenated index columns. The COUNT option
| specifies the number of frequent values. You can collect most-frequent values,
| least-frequent values, or both.
| v If you run RUNSTATS with the FREQVAL option in the colgroup-spec,
| RUNSTATS inserts rows for the columns in the column group that you specify.
| The COUNT option specifies the number of frequent values. You can collect
| most-frequent values, least-frequent values, or both.
| v If you run RUNSTATS with the FREQVAL option in both the correl-spec and
| the colgroup-spec, RUNSTATS inserts rows both for columns of the specified
| index and for columns in the column group that you specify. (Sketches of these
| invocations follow this list.)
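| For illustration, here are hedged sketches of the index form and the column
| group form (the database, table space, table, index, and column names are
| hypothetical):
|
| RUNSTATS INDEX (USRT001.IX1 KEYCARD
|   FREQVAL NUMCOLS 2 COUNT 10 BOTH)
|
| RUNSTATS TABLESPACE DB1.TS1
|   TABLE(USRT001.T1)
|   COLGROUP(C1, C3) FREQVAL COUNT 10 MOST
|
| The first statement collects frequency statistics for the first two concatenated
| columns of index IX1; the second collects them for the column group (C1, C3).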
See Part 2 of DB2 Utility Guide and Reference for more information about
RUNSTATS. DB2 uses the frequencies in column FREQUENCYF for predicates that
use the values in column COLVALUE and assumes that the remaining data are
uniformly distributed.
Suppose that the predicate is C1 IN (’3’,’5’) and that SYSCOLDIST contains these
values for column C1:
COLVALUE FREQUENCYF
’3’ .0153
’5’ .0859
’8’ .0627
| Suppose that columns C1 and C2 are correlated. Suppose also that the predicate is
C1=’3’ AND C2=’5’ and that SYSCOLDIST contains these values for columns C1
and C2:
COLVALUE FREQUENCYF
’1’ ’1’ .1176
’2’ ’2’ .0588
’3’ ’3’ .0588
’3’ ’5’ .1176
’4’ ’4’ .0588
’5’ ’3’ .1764
’5’ ’5’ .3529
’6’ ’6’ .0588
| Table T1 consists of columns C1, C2, C3, and C4. Index I1 is defined on table T1
| and contains columns C1, C2, and C3.
| Suppose that the simple predicates in the compound predicate have the following
| characteristics:
| C1='A' Matching predicate
| C3='B' Screening predicate
| C4='C' Stage 1, nonindexable predicate
| To determine the cost of accessing table T1 through index I1, DB2 performs these
| steps:
| 1. Estimates the matching index cost. DB2 determines the index matching filter
| factor by using single-column cardinality and single-column frequency statistics
| because only one column can be a matching column.
| 2. Estimates the total index filtering. This includes matching and screening
| filtering. If statistics exist on column group (C1,C3), DB2 uses those statistics.
| Otherwise DB2 uses the available single-column statistics for each of these
| columns.
| DB2 will also use FULLKEYCARDF as a bound. Therefore, it can be critical to
| have column group statistics on column group (C1, C3) to get an accurate
| estimate.
| 3. Estimates the table-level filtering. If statistics are available on column group
| (C1,C3,C4), DB2 uses them. Otherwise, DB2 uses statistics that exist on subsets
| of those columns.
Column correlation
Two columns of data, A and B of a single table, are correlated if the values in
column A do not vary independently of the values in column B.
Example: Table 128 is an excerpt from a large single table. Columns CITY and
STATE are highly correlated, and columns DEPTNO and SEX are entirely
independent.
Table 128. Data from the CREWINFO table
CITY STATE DEPTNO SEX EMPNO ZIPCODE
Fresno CA A345 F 27375 93650
Fresno CA J123 M 12345 93710
Fresno CA J123 F 93875 93650
Fresno CA J123 F 52325 93792
New York NY J123 M 19823 09001
New York NY A345 M 15522 09530
Miami FL B499 M 83825 33116
Miami FL A345 F 35785 34099
Los Angeles CA X987 M 12131 90077
Los Angeles CA A345 M 38251 90091
In this simple example, for every value of column CITY that equals 'FRESNO',
there is the same value in column STATE ('CA').
To gauge correlation, count the distinct values in each column; for a single
column, that count is the value of COLCARDF in the DB2 catalog table
SYSCOLUMNS. Multiply the two counts together to get a preliminary result:
| CITYCOUNT x STATECOUNT = ANSWER1
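For example, you might run counting queries like the following sketch against the
CREWINFO table from Table 128 (ANSWER2 is a name introduced here for the
count of distinct combinations):
SELECT COUNT(DISTINCT CITY) AS CITYCOUNT,
       COUNT(DISTINCT STATE) AS STATECOUNT
FROM CREWINFO;

SELECT COUNT(*) AS ANSWER2
FROM (SELECT DISTINCT CITY, STATE
      FROM CREWINFO) AS V1;
If ANSWER2 is significantly smaller than ANSWER1, the columns are correlated.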
When the columns in a predicate correlate but the correlation is not reflected in
catalog statistics, the actual filtering effect can be significantly different from the
DB2 filter factor. Table 129 shows how the actual filtering effect and the DB2 filter
factor can differ, and how that difference can affect index choice and performance.
Table 129. Effects of column correlation on matching columns

                                    INDEX 1                         INDEX 2
Matching predicates                 Predicate1:                     Predicate2:
                                    CITY=FRESNO AND STATE=CA        DEPTNO=A345 AND SEX=F
Matching columns                    2                               2
DB2 estimate for                    column=CITY, COLCARDF=4         column=DEPTNO, COLCARDF=4
matching columns                    Filter Factor=1/4               Filter Factor=1/4
(Filter Factor)                     column=STATE, COLCARDF=3        column=SEX, COLCARDF=2
                                    Filter Factor=1/3               Filter Factor=1/2
Compound filter factor              1/4 × 1/3 = 0.083               1/4 × 1/2 = 0.125
for matching columns
Qualified leaf pages                0.083 × 10 = 0.83               0.125 × 10 = 1.25
based on DB2 estimations            INDEX CHOSEN (0.83 < 1.25)
Actual filter factor based          4/10                            2/10
on data distribution
Actual number of qualified leaf     4/10 × 10 = 4                   2/10 × 10 = 2
pages based on compound predicate                                   BETTER INDEX CHOICE (2 < 4)
DB2 chooses an index that returns the fewest rows, partly determined by the
smallest filter factor of the matching columns. Assume that filter factor is the only
influence on the access path. The combined filtering of columns CITY and STATE
seems very good, whereas the matching columns for the second index do not seem
to filter as much. Based on those calculations, DB2 chooses Index 1 as an access
path for Query 1.
The problem is that the filtering of columns CITY and STATE should not look
good. Column STATE does almost no filtering. Since columns DEPTNO and SEX
do a better job of filtering out rows, DB2 should favor Index 2 over Index 1.
In the case of Index 3, because the columns CITY and STATE of Predicate 1 are
correlated, the index access is not improved as much as estimated by the screening
predicates and therefore Index 4 might be a better choice. (Note that index
screening also occurs for indexes with matching columns greater than zero.)
Multiple table joins: In Query 2, Table 130 is added to the original query (see
“Query 1” on page 733) to show the impact of column correlation on join queries.
Table 130. Data from the DEPTINFO table
CITY STATE MANAGER DEPT DEPTNAME
Fresno CA Smith J123 ADMIN
Los Angeles CA Jones A345 LEGAL
Query 2
SELECT ... FROM CREWINFO T1,DEPTINFO T2
WHERE T1.CITY = ’FRESNO’ AND T1.STATE=’CA’ (PREDICATE 1)
AND T1.DEPTNO = T2.DEPT AND T2.DEPTNAME = ’LEGAL’;
The order in which tables are accessed in a join statement affects performance. The
estimated combined filtering of Predicate1 is lower than its actual filtering, so
table CREWINFO might look better as the first table accessed than it should.
Also, due to the smaller estimated size for table CREWINFO, a nested loop join
might be chosen for the join method. But, if many rows are selected from table
CREWINFO because Predicate1 does not filter as many rows as estimated, then
another join method or join sequence might be better.
The last two techniques are discussed in “Special techniques to influence access
path selection” on page 754.
The RUNSTATS utility collects the statistics DB2 needs to make proper choices
about queries. With RUNSTATS, you can collect statistics on the concatenated key
where:
v The first three index keys are used (MATCHCOLS = 3).
v An index exists on C1, C2, C3, C4, C5.
v Some or all of the columns in the index are correlated in some way.
See “Using RUNSTATS to keep access path statistics current” on page 619 for
information on using RUNSTATS to influence access path selection. See “Updating
catalog statistics” on page 764 for information on updating catalog statistics
manually.
A set of simple, Boolean term, equal predicates on the same column that are
connected by OR predicates can be converted into an IN-list predicate. For
example: C1=5 or C1=10 or C1=15 converts to C1 IN (5,10,15).
The outer join operation gives you these result table rows:
v The rows with matching values of C1 in tables T1 and T2 (the inner join result)
v The rows from T1 where C1 has no corresponding value in T2
v The rows from T2 where C1 has no corresponding value in T1
Example: The predicate X.C2>12 filters out all null values that result from the
right join:
SELECT * FROM T1 X RIGHT JOIN T2 Y
ON X.C1=Y.C1
WHERE X.C2>12;
Therefore, DB2 can transform the right join into a more efficient inner join without
changing the result:
SELECT * FROM T1 X INNER JOIN T2 Y
ON X.C1=Y.C1
WHERE X.C2>12;
The predicate that follows a join operation must have the following characteristics
before DB2 transforms an outer join into a simpler outer join or into an inner join:
v The predicate is a Boolean term predicate.
v The predicate is false if one table in the join operation supplies a null value for
all of its columns.
These predicates are examples of predicates that can cause DB2 to simplify join
operations:
T1.C1 > 10
T1.C1 IS NOT NULL
T1.C1 > 10 OR T1.C2 > 15
T1.C1 > T2.C1
T1.C1 IN (1,2,4)
T1.C1 LIKE 'ABC%'
T1.C1 BETWEEN 10 AND 100
12 BETWEEN T1.C1 AND 100
Example: This example shows how DB2 can simplify a join operation because the
query contains an ON clause that eliminates rows with unmatched values:
SELECT * FROM T1 X LEFT JOIN T2 Y
FULL JOIN T3 Z ON Y.C1=Z.C1
ON X.C1=Y.C1;
Because the last ON clause eliminates any rows from the result table for which
column values that come from T1 or T2 are null, DB2 can replace the full join with
a more efficient left join to achieve the same result:
SELECT * FROM T1 X LEFT JOIN T2 Y
LEFT JOIN T3 Z ON Y.C1=Z.C1
ON X.C1=Y.C1;
In one case, DB2 transforms a full outer join into a left join when you cannot write
code to do it. This is the case where a view specifies a full outer join, but a
subsequent query on that view requires only a left outer join.
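For example, such a view might be defined as in the following sketch, in which
column T1C2 exposes column C2 of table T1 (the column list is an assumption):
CREATE VIEW V1 (T1C1, T1C2, T2C1, T2C2) AS
  SELECT X.C1, X.C2, Y.C1, Y.C2
  FROM T1 X FULL JOIN T2 Y
  ON X.C1 = Y.C1;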
This view contains rows for which values of C2 that come from T1 are null.
However, if you execute the following query, you eliminate the rows with null
values for C2 that come from T1:
SELECT * FROM V1
WHERE T1C2 > 10;
Therefore, for this query, a left join between T1 and T2 would have been adequate.
DB2 can execute this query as if the view V1 was generated with a left outer join
so that the query runs more efficiently.
Rules for generating predicates: For single-table or inner join queries, DB2
generates predicates for transitive closure if:
v The query has an equal type predicate: COL1=COL2. This could be:
– A local predicate
– A join predicate
v The query also has a Boolean term predicate on one of the columns in the first
predicate with one of the following formats:
– COL1 op value
op is =, <>, >, >=, <, or <=.
value is a constant, host variable, or special register.
– COL1 (NOT) BETWEEN value1 AND value2
– COL1=COL3
For outer join queries, DB2 generates predicates for transitive closure if the query
has an ON clause of the form COL1=COL2 and a before join predicate that has one
of the following formats:
v COL1 op value
op is =, <>, >, >=, <, or <=
v COL1 (NOT) BETWEEN value1 AND value2
DB2 generates a transitive closure predicate for an outer join query only if the
generated predicate does not reference the table with unmatched rows. That is, the
generated predicate cannot reference the left table for a left outer join or the right
table for a right outer join.
| For a multiple-CCSID query, DB2 does not generate a transitive closure predicate if
| the predicate that would be generated has these characteristics:
| v The generated predicate is a range predicate (op is >, >=, <, or <=).
| v Evaluation of the query with the generated predicate results in different CCSID
| conversion from evaluation of the query without the predicate. See Chapter 4 of
| DB2 SQL Reference for information on CCSID conversion.
Example of transitive closure for an inner join: Suppose that you have written this
query, which meets the conditions for transitive closure:
SELECT * FROM T1, T2
WHERE T1.C1=T2.C1 AND
T1.C1>10;
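Because T1.C1=T2.C1 and T1.C1>10 must both be true for a row to qualify, DB2
can generate the additional predicate T2.C1>10 and evaluate the query as if you
had written:
SELECT * FROM T1, T2
  WHERE T1.C1=T2.C1 AND
        T1.C1>10 AND
        T2.C1>10;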
| Example of transitive closure for an outer join: Suppose that you have written this
| outer join query:
| SELECT * FROM
| (SELECT T1.C1 FROM T1 WHERE T1.C1>10) X
| LEFT JOIN
| (SELECT T2.C1 FROM T2) Y
| ON X.C1 = Y.C1;
| The before join predicate, T1.C1>10, meets the conditions for transitive closure, so
| DB2 generates a query that has the same result as this more-efficient query:
| SELECT * FROM
| (SELECT T1.C1 FROM T1 WHERE T1.C1>10) X
| LEFT JOIN
| (SELECT T2.C1 FROM T2 WHERE T2.C1>10) Y
| ON X.C1 = Y.C1;
Adding extra predicates: DB2 performs predicate transitive closure only on equal
and range predicates. However, you can help DB2 to choose a better access path
by adding transitive closure predicates for other types of operators, such as IN or
LIKE. For example, consider the following SELECT statement:
SELECT * FROM T1,T2
WHERE T1.C1=T2.C1
AND T1.C1 LIKE ’A%’;
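Because DB2 does not generate LIKE predicates by transitive closure, you can add
the redundant predicate on T2.C1 yourself:
SELECT * FROM T1,T2
  WHERE T1.C1=T2.C1
  AND T1.C1 LIKE ’A%’
  AND T2.C1 LIKE ’A%’;
The added predicate does not change the result, because the equal join predicate
guarantees that T1.C1 and T2.C1 match, but it gives DB2 the option of filtering T2
directly.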
DB2 often chooses an access path that performs well for a query with several host
variables. However, in a new release or after maintenance has been applied, DB2
might choose a new access path that does not perform as well as the old access
path. In many cases, the change in access paths is due to the default filter factors,
which might lead DB2 to optimize the query in a different way.
The two ways to change the access path for a query that contains host variables
are:
v Bind the package or plan that contains the query with the option
REOPT(ALWAYS) or the option REOPT(ONCE).
v Rewrite the query.
| To use the REOPT(ALWAYS) bind option most efficiently, first determine which
| SQL statements in your applications perform poorly with the REOPT(NONE) bind
| option and the REOPT(ONCE) bind option. Separate the code containing those
| statements into units that you bind into packages with the REOPT(ALWAYS)
| option. Bind the rest of the code into packages using the REOPT(NONE) bind
| option or the REOPT(ONCE) bind option, as appropriate. Then bind the plan with
| the REOPT(NONE) bind option. Statements in the packages bound with
| REOPT(ALWAYS) are candidates for repeated reoptimization at run time.
| Example: To determine which queries in plans and packages that are bound with
| the REOPT(ALWAYS) bind option will be reoptimized at run time, execute the
| following SELECT statements:
| SELECT PLNAME,
| CASE WHEN STMTNOI <> 0
| THEN STMTNOI
| ELSE STMTNO
| END AS STMTNUM,
| SEQNO, TEXT
| FROM SYSIBM.SYSSTMT
| WHERE STATUS IN (’B’,’F’,’G’,’J’)
| ORDER BY PLNAME, STMTNUM, SEQNO;
| SELECT COLLID, NAME, VERSION,
| CASE WHEN STMTNOI <> 0
| THEN STMTNOI
| ELSE STMTNO
| END AS STMTNUM,
| SEQNO, STMT
| FROM SYSIBM.SYSPACKSTMT
| WHERE STATUS IN (’B’,’F’,’G’,’J’)
| ORDER BY COLLID, NAME, VERSION, STMTNUM, SEQNO;
| If you specify the bind option VALIDATE(RUN), and a statement in the plan or
| package is not bound successfully, that statement is incrementally bound at run
| time. If you also specify the bind option REOPT(ALWAYS), DB2 reoptimizes the
| access path during the incremental bind.
| Example: To determine which plans and packages have statements that will be
| incrementally bound, execute the following SELECT statements:
| SELECT DISTINCT NAME
| FROM SYSIBM.SYSSTMT
| WHERE STATUS = ’F’ OR STATUS = ’H’;
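| and the corresponding query for packages:
| SELECT DISTINCT COLLID, NAME, VERSION
| FROM SYSIBM.SYSPACKSTMT
| WHERE STATUS = ’F’ OR STATUS = ’H’;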
| To use the REOPT(ONCE) bind option most efficiently, first determine which
| dynamic SQL statements in your applications perform poorly with the
| REOPT(NONE) bind option and the REOPT(ALWAYS) bind option. Separate the
| code containing those statements into units that you bind into packages with the
| REOPT(ONCE) option. Bind the rest of the code into packages using the
| REOPT(NONE) bind option or the REOPT(ALWAYS) bind option, as appropriate.
| Then bind the plan with the REOPT(NONE) bind option. A dynamic statement in
| a package that is bound with REOPT(ONCE) is a candidate for reoptimization the
| first time that the statement is run.
| Example: To determine which queries in plans and packages that are bound with
| the REOPT(ONCE) bind option will be reoptimized at run time, execute the
| following SELECT statements:
| SELECT PLNAME,
| CASE WHEN STMTNOI <> 0
| THEN STMTNOI
| ELSE STMTNO
| END AS STMTNUM,
| SEQNO, TEXT
| FROM SYSIBM.SYSSTMT
| WHERE STATUS IN (’J’)
| ORDER BY PLNAME, STMTNUM, SEQNO;
| SELECT COLLID, NAME, VERSION,
| CASE WHEN STMTNOI <> 0
| THEN STMTNOI
| ELSE STMTNO
| END AS STMTNUM,
| SEQNO, STMT
| FROM SYSIBM.SYSPACKSTMT
| WHERE STATUS IN (’J’)
| ORDER BY COLLID, NAME, VERSION, STMTNUM, SEQNO;
| Example: To determine which plans and packages have statements that will be
| incrementally bound, execute the following SELECT statements:
| SELECT DISTINCT NAME
| FROM SYSIBM.SYSSTMT
| WHERE STATUS = ’F’ OR STATUS = ’H’;
| SELECT DISTINCT COLLID, NAME, VERSION
| FROM SYSIBM.SYSPACKSTMT
| WHERE STATUS = ’F’ OR STATUS = ’H’;
| Before and after you make any permanent changes, take performance
| measurements. When you migrate to a new release, evaluate the performance
| again. Be prepared to back out any changes that have degraded performance.
The examples that follow identify potential performance problems and offer
suggestions for tuning the queries. However, before you rewrite any query, you
should consider whether the REOPT(ALWAYS) or REOPT(ONCE) bind options can
solve your access path problems. See “Changing the access path at run time” on
page 739 for more information about REOPT(ALWAYS) and REOPT(ONCE).
An equal predicate has a default filter factor of 1/COLCARDF. The actual filter
factor might be quite different.
Query:
SELECT * FROM DSN8810.EMP
WHERE SEX = :HV1;
Assumptions: Because the column SEX has only two different values, 'M' and 'F',
the value COLCARDF for SEX is 2. If the numbers of male and female employees
are not equal, the actual filter factor is larger or smaller than the default of 1/2,
depending on whether :HV1 is set to 'M' or 'F'.
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND C2 BETWEEN :HV3 AND :HV4;
Recommendation: If DB2 does not choose T1X1, rewrite the query as follows, so
that DB2 does not choose index T1X2 on C2:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
AND C2 BETWEEN :HV3 AND :HV4;
Assumptions: You know that the application provides both narrow and wide
ranges on C1 and C2. Hence, default filter factors do not allow DB2 to choose the
best access path in all cases.
Recommendation: If DB2 does not choose the best access path, try either of the
following changes to your application:
following changes to your application:
v Use a dynamic SQL statement and embed the ranges of C1 and C2 in the
statement. With access to the actual range values, DB2 can estimate the actual
filter factors for the query. Preparing the statement each time it is executed
requires an extra step, but it can be worthwhile if the query accesses a large
amount of data.
v Include some simple logic to check the ranges of C1 and C2, and then execute
one of these static SQL statements, based on the ranges of C1 and C2:
SELECT * FROM T1 WHERE C1 BETWEEN :HV1 AND :HV2
AND (C2 BETWEEN :HV3 AND :HV4 OR 0=1);
Example 4: ORDER BY
Table T1 has two indexes: T1X1 on column C1 and T1X2 on column C2.
Query:
SELECT * FROM T1
WHERE C1 BETWEEN :HV1 AND :HV2
ORDER BY C2;
If the actual number of rows that satisfy the range predicate is significantly
different from the estimate, DB2 might not choose the best access path.
Tables A, B, and C each have indexes on columns C1, C2, C3, and C4.
The result of making the join predicate between A and B a nonindexable predicate
(which cannot be used in single index access) disfavors the use of the index on
column C1. This, in turn, might lead DB2 to access table A or B first. Or, it might
lead DB2 to change the access type of table A or B, thereby influencing the join
sequence of the other tables.
Decision needed: You can often write two or more SQL statements that achieve
identical results, particularly if you use subqueries. The statements have different
access paths, however, and probably perform differently.
Topic overview: The topics that follow describe different methods to achieve the
results intended by a subquery and tell what DB2 does for each method. The
information should help you estimate what method performs best for your query.
Finally, for a comparison of the three methods as applied to a single task, see:
v “Subquery tuning” on page 750
Correlated subqueries
Definition: A correlated subquery refers to at least one column of the outer query.
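For illustration, a correlated subquery of the kind that the following steps describe
might look like this sketch, which uses the sample tables:
SELECT * FROM DSN8810.EMP X
  WHERE JOB = ’DESIGNER’
  AND EXISTS (SELECT 1 FROM DSN8810.PROJ
              WHERE DEPTNO = X.WORKDEPT
              AND MAJPROJ = ’MA2100’);
The subquery refers to X.WORKDEPT, a column of the outer query.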
What DB2 does: A correlated subquery is evaluated once for each qualified row of
the outer query. In executing the example, DB2:
1. Reads a row from table EMP where JOB=’DESIGNER’.
2. Searches for the value of WORKDEPT from that row, in a table stored in
memory.
The in-memory table saves executions of the subquery. If the subquery has
already been executed with the value of WORKDEPT, the result of the
subquery is in the table and DB2 does not execute it again for the current row.
Instead, DB2 can skip to step 5.
3. Executes the subquery, if the value of WORKDEPT is not in memory. That
requires searching the PROJ table to check whether there is any project, where
MAJPROJ is ’MA2100’, for which the current WORKDEPT is responsible.
4. Stores the value of WORKDEPT and the result of the subquery in memory.
5. Returns the values of the current row of EMP to the application.
DB2 repeats this whole process for each qualified row of the EMP table.
Notes on the in-memory table: The in-memory table is applicable if the operator of
the predicate that contains the subquery is one of the following operators:
Noncorrelated subqueries
Definition: A noncorrelated subquery makes no reference to outer queries.
Example:
SELECT * FROM DSN8810.EMP
WHERE JOB = ’DESIGNER’
AND WORKDEPT IN (SELECT DEPTNO
FROM DSN8810.PROJ
WHERE MAJPROJ = ’MA2100’);
What DB2 does: A noncorrelated subquery is executed once when the cursor is
opened for the query. What DB2 does to process it depends on whether it returns a
single value or more than one value. The query in the preceding example can
return more than one value.
Single-value subqueries
When a subquery is contained in a predicate with a simple operator, the subquery
must return a single value. The simple operator can be one of the following:
<, <=, >, >=, =, <>, NOT <, NOT <=, NOT >, NOT >=
What DB2 does: When the cursor is opened, the subquery executes. If it returns
more than one row, DB2 issues an error. The predicate that contains the subquery
is treated like a simple predicate with a constant specified, for example,
WORKDEPT <= ’value’.
| Stage 1 and stage 2 processing: The rules for determining whether a predicate with
| a noncorrelated subquery that returns a single value is stage 1 or stage 2 are
| generally the same as for the same predicate with a single variable.
Multiple-value subqueries
A subquery can return more than one value if the operator is one of the following:
op ANY, op ALL , op SOME, IN, EXISTS
where op is any of the operators >, >=, <, <=, NOT <, NOT <=, NOT >, NOT >=.
What DB2 does: If possible, DB2 reduces a subquery that returns more than one
row to one that returns only a single row. That occurs when there is a range
comparison along with ANY, ALL, or SOME. The following query is an example:
SELECT * FROM DSN8810.EMP
WHERE JOB = ’DESIGNER’
AND WORKDEPT <= ANY (SELECT DEPTNO
FROM DSN8810.PROJ
WHERE MAJPROJ = ’MA2100’);
DB2 calculates the maximum value for DEPTNO from table DSN8810.PROJ and
removes the ANY keyword from the query. After this transformation, the subquery
is treated like a single-value subquery.
That transformation can be made with a maximum value if the range operator is:
v > or >= with the quantifier ALL
v < or <= with the quantifier ANY or SOME
The transformation can be made with a minimum value if the range operator is:
v < or <= with the quantifier ALL
v > or >= with the quantifier ANY or SOME
When the subquery result is a character data type and the left side of the predicate
is a datetime data type, then the result is placed in a work file without sorting. For
some noncorrelated subqueries that use IN, NOT IN, = ANY, <> ANY, = ALL, or
<> ALL comparison operators, DB2 can more accurately pinpoint an entry point
into the work file, thus further reducing the amount of scanning that is done.
Results from EXPLAIN: For information about the result in a plan table for a
subquery that is sorted, see “When are aggregate functions evaluated?
(COLUMN_FN_EVAL)” on page 910.
For a SELECT statement, DB2 does the transformation if the following conditions
are true:
v The transformation does not introduce redundancy.
v The subquery appears in a WHERE clause.
v The subquery does not contain GROUP BY, HAVING, or aggregate functions.
v The subquery has only one table in the FROM clause.
| v For a correlated subquery, the comparison operator of the predicate containing
| the subquery is IN, = ANY, or = SOME.
| v For a noncorrelated subquery, the comparison operator of the predicate
| containing the subquery is IN, EXISTS, = ANY, or = SOME.
| v For a noncorrelated subquery, the subquery select list has only one column,
| guaranteed by a unique index to have unique values.
v For a noncorrelated subquery, the left side of the predicate is a single column
with the same data type and length as the subquery’s column. (For a correlated
subquery, the left side can be any expression.)
For an UPDATE or DELETE statement, or a SELECT statement that does not meet
the previous conditions for transformation, DB2 does the transformation of a
correlated subquery into a join if the following conditions are true:
v The transformation does not introduce redundancy.
v The subquery is correlated to its immediate outer query.
v The FROM clause of the subquery contains only one table, and the outer query
(for SELECT), UPDATE, or DELETE references only one table.
v If the outer predicate is a quantified predicate with an operator of =ANY or an
IN predicate, the following conditions are true:
– The left side of the outer predicate is a single column.
– The right side of the outer predicate is a subquery that references a single
column.
– The two columns have the same data type and length.
v The subquery does not contain the GROUP BY or DISTINCT clauses.
v The subquery does not contain aggregate functions.
For a statement with multiple subqueries, DB2 does the transformation only on the
last subquery in the statement that qualifies for transformation.
Example: The following subquery can be transformed into a join because it meets
the first set of conditions for transformation:
SELECT * FROM EMP
WHERE DEPTNO IN
(SELECT DEPTNO FROM DEPT
WHERE LOCATION IN (’SAN JOSE’, ’SAN FRANCISCO’)
AND DIVISION = ’MARKETING’);
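The corresponding join form of the same statement looks roughly like this sketch:
SELECT EMP.* FROM EMP, DEPT
  WHERE EMP.DEPTNO = DEPT.DEPTNO
  AND DEPT.LOCATION IN (’SAN JOSE’, ’SAN FRANCISCO’)
  AND DEPT.DIVISION = ’MARKETING’;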
If there is a department in the marketing division which has branches in both San
Jose and San Francisco, the result of the SQL statement is not the same as if a join
were done. The join makes each employee in this department appear twice,
because the department matches once for the location San Jose and again for the
location San Francisco, although it is the same department. Therefore, to
transform a subquery into a join, the uniqueness of the subquery select list must be
guaranteed. For this example, a unique index on any of the following sets of
columns would guarantee uniqueness:
v (DEPTNO)
v (DIVISION, DEPTNO)
v (DEPTNO, DIVISION).
Example: The following subquery can be transformed into a join because it meets
the second set of conditions for transformation:
UPDATE T1 SET T1.C1 = 1
WHERE T1.C1 =ANY
(SELECT T2.C1 FROM T2
WHERE T2.C2 = T1.C2);
Results from EXPLAIN: For information about the result in a plan table for a
subquery that is transformed into a join operation, see “Is a subquery transformed
into a join?” on page 909.
If you need columns from both tables EMP and PROJ in the output, you must use
a join.
In general, query A might be the one that performs best. However, if there is no
index on DEPTNO in table PROJ, then query C might perform best. The
IN-subquery predicate in query C is indexable. Therefore, if an index on
WORKDEPT exists, DB2 might do IN-list access on table EMP. If you decide that a
join cannot be used and there is an available index on DEPTNO in table PROJ,
then query B might perform best.
When looking at a problem subquery, see if the query can be rewritten into
another format or see if there is an index that you can create to help improve the
performance of the subquery.
| The following example demonstrates how you can use a partitioning index to
| enable a limited partition scan on a set of partitions that DB2 needs to examine to
| satisfy a query predicate.
| Suppose that you create table Q1, with partitioning index DATE_IX and DPSI
| STATE_IX:
| CREATE TABLESPACE TS1 NUMPARTS 3;
|
| CREATE TABLE Q1 (DATE DATE,
| CUSTNO CHAR(5),
| STATE CHAR(2),
| PURCH_AMT DECIMAL(9,2))
| IN TS1
| PARTITION BY (DATE)
| (PARTITION 1 ENDING AT (’2002-1-31’),
| PARTITION 2 ENDING AT (’2002-2-28’),
| PARTITION 3 ENDING AT (’2002-3-31’));
|
| CREATE INDEX DATE_IX ON Q1 (DATE) PARTITIONED CLUSTER;
|
| CREATE INDEX STATE_IX ON Q1 (STATE) PARTITIONED;
| Now suppose that you want to execute the following query against table Q1:
| SELECT CUSTNO, PURCH_AMT
| FROM Q1
| WHERE STATE = ’CA’;
| Because the predicate is based only on values of a DPSI key (STATE), DB2 must
| examine all partitions to find the matching rows.
| Now suppose that you modify the query in the following way:
| SELECT CUSTNO, PURCH_AMT
| FROM Q1
| WHERE DATE BETWEEN ’2002-01-01’ AND ’2002-01-31’ AND
| STATE = ’CA’;
| Because the predicate is now based on values of a partitioning index key (DATE)
| and on values of a DPSI key (STATE), DB2 can eliminate the scanning of data in
| partitions 2 and 3; only partition 1 needs to be scanned.
| Now suppose that you use host variables instead of constants in the same query:
| SELECT CUSTNO, PURCH_AMT
| FROM Q1
| WHERE DATE BETWEEN :hv1 AND :hv2 AND
| STATE = :hv3;
| DB2 can use the predicate on the partitioning column to eliminate the scanning of
| unneeded partitions at run time.
| For example, suppose that you create table Q2, with partitioning index DATE_IX
| and DPSI ORDERNO_IX:
| CREATE TABLESPACE TS2 NUMPARTS 3;
|
| CREATE TABLE Q2 (DATE DATE,
| ORDERNO CHAR(8),
| STATE CHAR(2),
| PURCH_AMT DECIMAL(9,2))
| IN TS2
| PARTITION BY (DATE)
| (PARTITION 1 ENDING AT (’2000-12-31’),
| PARTITION 2 ENDING AT (’2001-12-31’),
| PARTITION 3 ENDING AT (’2002-12-31’));
|
| CREATE INDEX DATE_IX ON Q2 (DATE) PARTITIONED CLUSTER;
|
| CREATE INDEX ORDERNO_IX ON Q2 (ORDERNO) PARTITIONED;
| Also suppose that the first 4 bytes of each ORDERNO column value represent the
| four-digit year in which the order was placed. This means that the DATE column
| and the ORDERNO column are correlated.
| To take advantage of limited partition scan, when you write a query that has the
| ORDERNO column in the predicate, also include the DATE column in the
| predicate. The partitioning index on DATE lets DB2 eliminate the scanning of
| partitions that are not needed to satisfy the query. For example:
| SELECT ORDERNO, PURCH_AMT
| FROM Q2
| WHERE ORDERNO BETWEEN ’2002AAAA’ AND ’2002ZZZZ’ AND
| DATE BETWEEN ’2002-01-01’ AND ’2002-12-31’;
Important
This section describes tactics for rewriting queries and modifying catalog
statistics to influence how DB2 selects access paths. The access path selection
"tricks" that are described in the section might cause significant performance
degradation if they are not carefully implemented and monitored.
Save the old catalog statistics or SQL before you consider making any
changes to control the choice of access path. Before and after you make any
changes, take performance measurements. When you migrate to a new
release, evaluate the performance again. Be prepared to back out any changes
that have degraded performance.
This section contains the following information about determining and changing
access paths:
v Obtaining information about access paths
v “Minimizing overhead for retrieving few rows: OPTIMIZE FOR n ROWS” on
page 755
v “Fetching a limited number of rows: FETCH FIRST n ROWS ONLY” on page
755
v “Using the CARDINALITY clause to improve the performance of queries with
user-defined table function references” on page 758
v “Reducing the number of matching columns” on page 759
v “Rearranging the order of tables in a FROM clause” on page 763
v “Updating catalog statistics” on page 764
v “Using a subsystem parameter” on page 765
v “Giving optimization hints to DB2” on page 766
Example: Suppose that you write an application that requires information on only
the 20 employees with the highest salaries. To return only the rows of the
employee table for those 20 employees, you can write a query like this:
SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY
FROM EMP
ORDER BY SALARY DESC
FETCH FIRST 20 ROWS ONLY;
Interaction between OPTIMIZE FOR n ROWS and FETCH FIRST n ROWS ONLY:
In general, if you specify FETCH FIRST n ROWS ONLY but not OPTIMIZE FOR n
ROWS in a SELECT statement, DB2 optimizes the query as if you had specified
OPTIMIZE FOR n ROWS.
| When both the FETCH FIRST n ROWS ONLY clause and the OPTIMIZE FOR n
| ROWS clause are specified, the value for the OPTIMIZE FOR n ROWS clause is
| used for access path selection.
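| Example: Consider a statement like the following sketch, which combines both
| clauses:
| SELECT LASTNAME, FIRSTNAME, EMPNO, SALARY
|   FROM EMP
|   ORDER BY SALARY DESC
|   FETCH FIRST 60 ROWS ONLY
|   OPTIMIZE FOR 20 ROWS;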
| The OPTIMIZE FOR value of 20 rows is used for access path selection.
This section discusses the use of OPTIMIZE FOR n ROWS to affect the
performance of interactive SQL applications. Unless otherwise noted, this
information pertains to local applications. For more information on using
OPTIMIZE FOR n ROWS in distributed applications, see Part 4 of DB2 Application
Programming and SQL Guide.
What OPTIMIZE FOR n ROWS does: The OPTIMIZE FOR n ROWS clause lets an
application declare its intent to do either of these things:
v Retrieve only a subset of the result set
v Give priority to the retrieval of the first few rows
DB2 uses the OPTIMIZE FOR n ROWS clause to choose access paths that minimize
the response time for retrieving the first few rows. For distributed queries, the
value of n determines the number of rows that DB2 sends to the client on each
DRDA network transmission. See Part 4 of DB2 Application Programming and SQL
Guide for more information on using OPTIMIZE FOR n ROWS in the distributed
environment.
Use OPTIMIZE FOR 1 ROW to avoid sorts: You can influence the access path
most by using OPTIMIZE FOR 1 ROW. OPTIMIZE FOR 1 ROW tells DB2 to select
an access path that returns the first qualifying row quickly. This means that
whenever possible, DB2 avoids any access path that involves a sort. If you specify
a value for n that is anything but 1, DB2 chooses an access path based on cost, and
you won’t necessarily avoid sorts.
How to specify OPTIMIZE FOR n ROWS for a CLI application: For a Call Level
Interface (CLI) application, you can specify that DB2 uses OPTIMIZE FOR n ROWS
for all queries. To do that, specify the keyword OPTIMIZEFORNROWS in the
initialization file. For more information, see Chapter 3 of DB2 ODBC Guide and
Reference.
How many rows you can retrieve with OPTIMIZE FOR n ROWS: The OPTIMIZE
FOR n ROWS clause does not prevent you from retrieving all the qualifying rows.
However, if you use OPTIMIZE FOR n ROWS, the total elapsed time to retrieve all
the qualifying rows might be significantly greater than if DB2 had optimized for
the entire result set.
If you add the OPTIMIZE FOR n ROWS clause to the statement, DB2 will probably
use the SALARY index directly because you have indicated that you expect to
retrieve the salaries of only the 20 most highly paid employees.
Example: The following statement uses that strategy to avoid a costly sort
operation:
SELECT LASTNAME,FIRSTNAME,EMPNO,SALARY
FROM EMP
ORDER BY SALARY DESC
OPTIMIZE FOR 20 ROWS;
When you specify OPTIMIZE FOR n ROWS for a remote query, a small value of n
can help limit the number of rows that flow across the network on any given
transmission.
You can improve the performance for receiving a large result set through a remote
query by specifying a large value of n in OPTIMIZE FOR n ROWS. When you
specify a large value, DB2 attempts to send the n rows in multiple transmissions.
For better performance when retrieving a large result set, in addition to specifying
OPTIMIZE FOR n ROWS with a large value of n in your query, do not execute
other SQL statements until the entire result set for the query is processed.
For local or remote queries, to influence the access path most, specify OPTIMIZE
FOR 1 ROW. This value does not have a detrimental effect on distributed queries.
| To minimize contention among applications that access tables with this design,
| specify the VOLATILE keyword when you create or alter the tables. A table that is
| defined with the VOLATILE keyword is known as a volatile table. When DB2
| executes queries that include volatile tables, DB2 uses index access whenever
| possible. As well as minimizing contention, using index access preserves the access
| sequence that the primary key provides.
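| For example, a minimal sketch (the table and column names are hypothetical):
| CREATE TABLE ACCOUNT_QUEUE
|   (ACCT_ID INTEGER NOT NULL,
|    STATUS  CHAR(1))
|   VOLATILE;
|
| ALTER TABLE ACCOUNT_QUEUE VOLATILE;
| The ALTER TABLE form adds the VOLATILE attribute to an existing table.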
| You can specify a cardinality value for a user-defined table function by using the
| CARDINALITY clause of the SQL CREATE FUNCTION or ALTER FUNCTION
| statement. However, this value applies to all invocations of the function, whereas a
| user-defined table function might return different numbers of rows, depending on
| the query in which it is referenced.
| To give DB2 a better estimate of the cardinality of a user-defined table function for
| a particular query, you can use the CARDINALITY or CARDINALITY
| MULTIPLIER clause in that query. DB2 uses those clauses at bind time when it
| calculates the access cost of the user-defined table function. Using this clause is
| recommended only for programs that run on DB2 UDB for z/OS because the
| clause is not supported on earlier versions of DB2.
| Add the CARDINALITY 30 clause to tell DB2 that, for this query, TUDF1 should
| return 30 rows:
| SELECT *
| FROM TABLE(TUDF1(3) CARDINALITY 30) AS X;
| Add the CARDINALITY MULTIPLIER 30 clause to tell DB2 that, for this query,
| TUDF2, which was created with CARDINALITY 5, should return 5*30, or 150, rows:
| SELECT *
| FROM TABLE(TUDF2(10) CARDINALITY MULTIPLIER 30) AS X;
Q1:
SELECT * FROM PART_HISTORY -- SELECT ALL PARTS
WHERE PART_TYPE = ’BB’ P1 -- THAT ARE ’BB’ TYPES
AND W_FROM = 3 P2 -- THAT WERE MADE IN CENTER 3
AND W_NOW = 3 P3 -- AND ARE STILL IN CENTER 3
+------------------------------------------------------------------------------+
| Filter factor of these predicates. |
| P1 = 1/1000= .001 |
| P2 = 1/50 = .02 |
| P3 = 1/50 = .02 |
|------------------------------------------------------------------------------|
| ESTIMATED VALUES | WHAT REALLY HAPPENS |
| filter data | filter data |
| index matchcols factor rows | index matchcols factor rows |
| ix2 2 .02*.02 40 | ix2 2 .02*.50 1000 |
| ix1 1 .001 100 | ix1 1 .001 100 |
+------------------------------------------------------------------------------+
DB2 picks IX2 to access the data, but IX1 would be roughly 10 times quicker. The
problem is that 50% of all parts from center number 3 are still in Center 3; they
have not moved. Assume that there are no statistics on the correlated columns in
catalog table SYSCOLDIST. Therefore, DB2 assumes that the parts from center
number 3 are evenly distributed among the 50 centers.
You can get the desired access path by changing the query. To discourage the use
of IX2 for this particular query, you can change the third predicate to be
nonindexable.
SELECT * FROM PART_HISTORY
WHERE PART_TYPE = ’BB’
AND W_FROM = 3
AND (W_NOW = 3 + 0) <-- PREDICATE IS MADE NONINDEXABLE
You can make a predicate nonindexable in many ways. The recommended way is
to add 0 to a predicate that evaluates to a numeric value or to concatenate an
empty string to a predicate that evaluates to a character value.
Indexable                  Nonindexable
T1.C3=5                    T1.C3=5+0
T1.C1=’ABC’                T1.C1=’ABC’ CONCAT ’’
These techniques do not affect the result of the query and cause only a small
amount of overhead.
The preferred technique for improving the access path when a table has correlated
columns is to generate catalog statistics on the correlated columns. You can do that
either by running RUNSTATS or by updating catalog table SYSCOLDIST manually.
To access the data in a star schema design, you often write SELECT statements that
include join operations between the fact table and the dimension tables, but no join
operations between dimension tables. These types of queries are known as star-join
queries.
For a star-join query, DB2 uses a special join type called a star join if the following
conditions are true:
v The tables meet the conditions that are specified in “Star join (JOIN_TYPE=’S’)”
on page 926.
v The STARJOIN system parameter is set to ENABLE, and the number of tables in
the query block is greater than or equal to the minimum number that is
specified in the SJTABLES system parameter.
See “Star join (JOIN_TYPE=’S’)” on page 926 for detailed discussions of these
system parameters.
This section gives suggestions for choosing indexes that might improve star-join
query performance.
Follow these steps to derive a fact table index for a star-join query that joins n
columns of fact table F to n dimension tables D1 through Dn:
Example of determining column order for a fact table index: Suppose that a star
schema has three dimension tables with the following cardinalities:
cardD1=2000
cardD2=500
cardD3=100
Now suppose that the cardinalities of single columns and pairs of columns in the
fact table are:
cardC1=2000
cardC2=433
cardC3=100
cardC12=625000
cardC13=196000
cardC23=994
Step 1: Calculate the density of all pairs of columns in the fact table:
density(C1,C2) = 625000/(2000*500) = 0.625
density(C1,C3) = 196000/(2000*100) = 0.98
density(C2,C3) = 994/(500*100) = 0.01988
Step 2: Find the pair of columns with the lowest density. That pair is (C2,C3).
Determine which column of the fact table is not in that pair. That column is C1.
Step 3: Make that remaining column, C1, the last (third) column of the index key.
Step 4: Repeat steps 1 through 3 to determine the second and first columns of the
index key:
density(C2) = 433/500 = 0.866
density(C3) = 100/100 = 1.0
The column with the lowest density is C2. Therefore, C3 is the second column of
the index. The remaining column, C2, is the first column of the index. That is, the
best order for the multi-column index is C2, C3, C1.
| If you update catalog statistics for a table space or index manually, and you are
| using dynamic statement caching, you need to invalidate statements in the cache
| that involve those table spaces or indexes. To invalidate statements in the dynamic
| statement cache without updating catalog statistics or generating reports, you can
| run the RUNSTATS utility with the REPORT NO and UPDATE NONE options on
| the table space or the index that the query is dependent on.
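| For example, a sketch (the table space name is hypothetical):
| RUNSTATS TABLESPACE DB1.TS1 REPORT NO UPDATE NONE
| This invalidates the affected statements in the dynamic statement cache without
| changing any catalog statistics.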
This query has a problem with data correlation. DB2 does not know that 50% of
the parts that were made in Center 3 are still in Center 3. The problem was
circumvented by making a predicate nonindexable. But suppose that hundreds of
users are writing queries similar to that query. Having all users change their
queries would be impossible. In this type of situation, the best solution is to
change the catalog statistics.
For the query in Figure 78 on page 760, you can update the catalog statistics in one
of two ways:
v Run the RUNSTATS utility, and request statistics on the correlated columns
W_FROM and W_NOW. This is the preferred method. See “Gathering monitor
statistics and update statistics” on page 875 and Part 2 of DB2 Utility Guide and
Reference for more information.
v Update the catalog statistics manually.
Updating the catalog to adjust for correlated columns: One catalog table that you
| can update is SYSIBM.SYSCOLDIST, which gives information about a column or
| set of columns in a table. Assume that because columns W_NOW and W_FROM
are correlated, only 100 distinct values exist for the combination of the two
columns, rather than 2500 (50 for W_FROM * 50 for W_NOW). Insert a row like
this to indicate the new cardinality:
INSERT INTO SYSIBM.SYSCOLDIST
(FREQUENCY, FREQUENCYF, IBMREQD,
TBOWNER, TBNAME, NAME, COLVALUE,
TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
VALUES(0, -1, ’N’,
’USRT001’,’PART_HISTORY’,’W_FROM’,’ ’,
’C’,100,X’00040003’,2);
You tell DB2 about the frequency of a certain combination of column values by
updating SYSIBM.SYSCOLDIST. For example, you can indicate that 1% of the rows
in PART_HISTORY contain the values 3 for W_FROM and 3 for W_NOW by
inserting this row into SYSCOLDIST:
INSERT INTO SYSIBM.SYSCOLDIST
(FREQUENCY, FREQUENCYF, STATSTIME, IBMREQD,
TBOWNER, TBNAME, NAME, COLVALUE,
TYPE, CARDF, COLGROUPCOLNO, NUMCOLUMNS)
VALUES(0, .0100, ’1996-12-01-12.00.00.000000’,’N’,
’USRT001’,’PART_HISTORY’,’W_FROM’,X’00800000030080000003’,
’F’,-1,X’00040003’,2);
Updating the catalog for joins with table functions: Updating catalog statistics
might cause extreme performance problems if the statistics are not updated
correctly. Monitor performance, and be prepared to reset the statistics to their
original values if performance problems arise.
The best solution to the problem is to run RUNSTATS again after the table is
populated. However, if you cannot do that, you can use subsystem parameter
NPGTHRSH to cause DB2 to favor matching index access over a table space scan
and over nonmatching index access.
The value of NPGTHRSH is an integer that indicates the tables for which DB2
favors matching index access. Values of NPGTHRSH and their meanings are:
−1 DB2 favors matching index access for all tables.
0 DB2 selects the access path based on cost, and no tables qualify for
special handling. This is the default.
n>=1 If data access statistics have been collected for all tables, DB2
favors matching index access for tables for which the total number
of pages on which rows of the table appear (NPAGES) is less than
n.
# Tables with default statistics for NPAGES (NPAGES = -1) are presumed to have 501
# pages. For such tables, DB2 will favor matching index access only when
# NPGTHRSH is set above 501.
| When you enable the INLISTP parameter, you enable two primary means of
| optimizing some queries that contain IN-list predicates:
| v The IN-list predicate is pushed down from the parent query block into the
| materialized table expression.
| v A correlated IN-list predicate in a subquery that is generated by transitive
| closure is moved up to the parent query block.
For more information about reasons to use the QUERYNO clause, see “Reasons
to use the QUERYNO clause” on page 770.
2. Make the PLAN_TABLE rows for that query (QUERYNO=100) into a hint by
updating the OPTHINT column with the name you want to call the hint. In
this case, the name is OLDPATH:
UPDATE PLAN_TABLE
  SET OPTHINT = 'OLDPATH'
  WHERE QUERYNO = 100
    AND APPLNAME = ' '
    AND PROGNAME = 'DSNTEP2'
    AND VERSION = ''
    AND COLLID = 'DSNTEP2';
3. Tell DB2 to use the hint, and indicate in the PLAN_TABLE that DB2 used the
hint.
v For dynamic SQL statements in the program, follow these steps:
a. Execute the SET CURRENT OPTIMIZATION HINT statement in the
program to tell DB2 to use OLDPATH. For example:
SET CURRENT OPTIMIZATION HINT = 'OLDPATH';
The PLAN_TABLE in Table 131 shows the OLDPATH hint, indicated by a value in
the OPTHINT column, and it also shows that DB2 used that hint, indicated by
OLDPATH in the HINT_USED column.
Table 131. PLAN_TABLE that shows that the OLDPATH optimization hint is used
QUERYNO   METHOD   TNAME        OPTHINT   HINT_USED
100       0        EMP          OLDPATH
100       4        EMPPROJACT   OLDPATH
100       3                     OLDPATH
100       0        EMP                    OLDPATH
100       4        EMPPROJACT             OLDPATH
100       3                               OLDPATH
The PLAN_TABLE in Table 132 shows the NOHYB hint, indicated by a value in
the OPTHINT column, and it also shows that DB2 used that hint, indicated by
NOHYB in the HINT_USED column.
Table 132. PLAN_TABLE that shows that the NOHYB optimization hint is used
QUERYNO   METHOD   TNAME        OPTHINT   HINT_USED
200       0        EMP          NOHYB
200       2        EMPPROJACT   NOHYB
200       3                     NOHYB
200       0        EMP                    NOHYB
200       2        EMPPROJACT             NOHYB
200       3                               NOHYB
DB2 validates the information in only the PLAN_TABLE columns in Table 133.
Table 133. PLAN_TABLE columns that DB2 validates
Column               Correct values or other explanation
METHOD               Must be 0, 1, 2, 3, or 4. Any other value invalidates the
                     hints. See “Interpreting access to two or more tables (join)”
                     on page 918 for more information about join methods.
CREATOR and TNAME    Must be specified and must name a table, materialized view,
                     or materialized nested table expression. Blank if METHOD is 3.
                     If a table is named that does not exist or is not involved in
                     the query, then the hints are invalid.
After the basic recommendations, the chapter covers some of the major techniques
that DB2 uses to control concurrency:
v Transaction locks mainly control access by SQL statements. Those locks are the
ones over which you have the most control.
– “Aspects of transaction locks” on page 781 describes the various types of
transaction locks that DB2 uses and how they interact.
– “Options for tuning locks” on page 795 describes what you can change to
control locking. Your choices include:
- “IRLM startup procedure options” on page 796
- “Installation options for wait times” on page 796
- “Other options that affect locking” on page 801
- “Bind options” on page 806
- “Isolation overriding with SQL statements” on page 821
- “The LOCK TABLE statement” on page 822
Under those headings, lock (with no qualifier) refers to transaction lock.
v LOB locks are described separately from transaction locks because the purpose
of LOB locks is different than that of regular transaction locks. See “LOB locks”
on page 823.
v Latches are conceptually similar to locks in that they control serialization. They
can improve concurrency because they are usually held for shorter duration than
locks and they cannot “deadlatch”. However, latches can wait, and this wait
time is reported in accounting trace class 3. Because latches are not under your
control, they are not described in any detail.
v Claims and drains provide another mechanism to control serialization. SQL
applications and some utilities make claims for objects when they first access
them. Operations that drain can take control of an object by quiescing the
existing claimers and preventing new claims. After a drainer completes its
operations, claimers can resume theirs. DB2 utilities, commands, and some SQL
statements can act as drainers.
“Claims and drains for concurrency control” on page 827 describes claims and
drains in more detail and explains how to plan utility jobs and other activities to
maximize efficiency.
The chapter also describes how you can monitor DB2’s use of locks and how you
can analyze a sample problem in “Monitoring of DB2 locking” on page 832.
Finally, “Deadlock detection scenarios” on page 840 shows how to detect deadlock
situations.
DB2 extends its concurrency controls to multiple subsystems for data sharing. For
information about that, see DB2 Data Sharing: Planning and Administration.
To prevent those situations from occurring unless they are specifically allowed,
DB2 uses locks to control concurrency.
What do locks do? A lock associates a DB2 resource with an application process in
a way that affects how other processes can access the same resource. The process
associated with the resource is said to “hold” or “own” the lock. DB2 uses locks to
ensure that no process accesses data that has been changed, but not yet committed,
by another process.
What do you do about locks? To preserve data integrity, your application process
acquires locks implicitly, that is, under DB2 control. It is not necessary for a process
to request a lock explicitly to conceal uncommitted data. Therefore, sometimes you
need not do anything about DB2 locks. Nevertheless, processes acquire, or avoid
acquiring, locks based on certain general parameters. You can make better use of
your resources and improve concurrency by understanding the effects of those
parameters.
Suspension
Definition: An application process is suspended when it requests a lock that is
already held by another application process and cannot be shared. The suspended
process temporarily stops running.
Order of precedence for lock requests: Incoming lock requests are queued. Requests
for lock promotion, and requests for a lock by an application process that already
holds a lock on the same object, precede requests for locks by new applications.
Within those groups, the request order is “first in, first out”.
Example: Using an application for inventory control, two users attempt to reduce
the quantity on hand of the same item at the same time. The two lock requests are
queued. The second request in the queue is suspended and waits until the first
request releases its lock.
Timeout
Definition: An application process is said to time out when it is terminated because
it has been suspended for longer than a preset interval.
Effects: DB2 terminates the process, issues two messages to the console, and
returns SQLCODE -911 or -913 to the process (SQLSTATEs '40001' or '57033').
Reason code 00C9008E is returned in the SQLERRD(3) field of the SQLCA.
| Alternatively, you can use the GET DIAGNOSTICS statement to check the reason
| code. If statistics trace class 3 is active, DB2 writes a trace record with IFCID 0196.
COMMIT and ROLLBACK operations do not time out. The command STOP
DATABASE, however, may time out and send messages to the console, but it will
retry up to 15 times.
For more information about setting the timeout interval, see “Installation options
for wait times” on page 796.
Deadlock
Definition: A deadlock occurs when two or more application processes each hold
locks on resources that the others need and without which they cannot proceed.
[Figure: A deadlock between jobs EMPLJCHG and PROJNCHG. Each job holds an
exclusive page lock that the other job requests, so both jobs are suspended. The
numbered notes that follow describe the sequence of events.]
Notes:
1. Jobs EMPLJCHG and PROJNCHG are two transactions. Job EMPLJCHG accesses table
M, and acquires an exclusive lock for page B, which contains record 000300.
2. Job PROJNCHG accesses table N, and acquires an exclusive lock for page A, which
contains record 000010.
3. Job EMPLJCHG requests a lock for page A of table N while still holding the lock on
page B of table M. The job is suspended, because job PROJNCHG is holding an
exclusive lock on page A.
4. Job PROJNCHG requests a lock for page B of table M while still holding the lock on
page A of table N. The job is suspended, because job EMPLJCHG is holding an exclusive
lock on page B. The situation is a deadlock.
Effects: After a preset time interval (the value of DEADLOCK TIME), DB2 can roll
back the current unit of work for one of the processes or request a process to
terminate. That frees the locks and allows the remaining processes to continue. If
statistics trace class 3 is active, DB2 writes a trace record with IFCID 0172. Reason
| code 00C90088 is returned in the SQLERRD(3) field of the SQLCA. Alternatively,
| you can use the GET DIAGNOSTICS statement to check the reason code. (The
codes that describe DB2’s exact response depend on the operating environment; for
details, see Part 5 of DB2 Application Programming and SQL Guide.)
Plan for batch inserts: If your application does sequential batch insertions,
excessive contention on the space map pages for the table space can occur. This
problem is especially apparent in data sharing, where contention on the space map
means the added overhead of page P-lock negotiation. For these types of
applications, consider using the MEMBER CLUSTER option of CREATE
TABLESPACE. This option causes DB2 to disregard the clustering index (or implicit
clustering index) when assigning space for the SQL INSERT statement. For more
information about using this option in data sharing, see Chapter 6 of DB2 Data
Sharing: Planning and Administration. For the syntax, see Chapter 5 of DB2 SQL
Reference.
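A minimal sketch of such a definition follows; the database, table space, and
storage group names are illustrative:
  CREATE TABLESPACE BATCHTS IN BATCHDB
    USING STOGROUP SYSDEFLT
    MEMBER CLUSTER
    LOCKSIZE ROW;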
Use LOCKSIZE ANY until you have reason not to: LOCKSIZE ANY is the default
for CREATE TABLESPACE. It allows DB2 to choose the lock size, and DB2 usually
chooses LOCKSIZE PAGE and LOCKMAX SYSTEM for non-LOB table spaces. For
LOB table spaces, it chooses LOCKSIZE LOB and LOCKMAX SYSTEM. You should
use LOCKSIZE TABLESPACE or LOCKSIZE TABLE only for read-only table spaces
or tables, or when concurrent access to the object is not needed. Before you choose
LOCKSIZE ROW, you should estimate whether there will be an increase in
overhead for locking and weigh that against the increase in concurrency.
Examine small tables: For small tables with high concurrency requirements,
estimate the number of pages in the data and in the index. If the index entries are
short or they have many duplicates, then the entire index can be one root page and
a few leaf pages. In this case, spread out your data to improve concurrency, or
consider it a reason to use row locks.
Partition the data: Large tables can be partitioned to take advantage of parallelism
for online queries, batch jobs, and utilities. When batch jobs are run in parallel and
each job goes after different partitions, lock contention is reduced. In addition, in a
data sharing environment, data sharing overhead is reduced when applications
that are running on different members go after different partitions.
| However, the use of data-partitioned secondary indexes does not always improve
| the performance of queries. For example, for a query with a predicate that
| references only the columns of a data-partitioned secondary index, DB2 must probe
| each partition of the index for values that satisfy the predicate if index access is
| chosen as the access path. Therefore, take into account data access patterns and
| maintenance practices when deciding to use a data-partitioned secondary index.
| Replace a nonpartitioned index with a partitioned index only if there are
| perceivable benefits such as improved data or index availability, easier data or
| index maintenance, or improved performance.
Fewer rows of data per page: By using the MAXROWS clause of CREATE or
ALTER TABLESPACE, you can specify the maximum number of rows that can be
on a page. For example, if you use MAXROWS 1, each row occupies a whole page,
and you confine a page lock to a single row. Consider this option if you have a
reason to avoid using row locking, such as in a data sharing environment where
row locking overhead can be greater.
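For example (the table space name is illustrative):
  ALTER TABLESPACE USRDB01.USRTS01 MAXROWS 1;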
| Consider volatile tables to ensure index access: If multiple applications access the
| same table, consider defining the table as VOLATILE. DB2 uses index access
| whenever possible for volatile tables, even if index access does not appear to be
| the most efficient access method because of volatile statistics. Because each
| application generally accesses the rows in the table in the same order, lock
| contention can be reduced.
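A minimal sketch, with an illustrative table and columns; the VOLATILE clause is
the relevant part:
  CREATE TABLE USRT001.WORK_QUEUE
    (ITEM_NO INTEGER NOT NULL,
     STATUS  CHAR(1) NOT NULL)
    VOLATILE;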
| Taking commit points frequently in a long running unit of recovery (UR) has the
| following benefits at the possible cost of more CPU usage and log write I/Os:
v Reduces lock contention, especially in a data sharing environment
v Improves the effectiveness of lock avoidance, especially in a data sharing
environment
v Reduces the elapsed time for DB2 system restart following a system failure
v Reduces the elapsed time for a unit of recovery to roll back following an
application failure or an explicit rollback request by the application
v Provides more opportunity for utilities, such as online REORG, to break in
Close cursors: If you define a cursor using the WITH HOLD option, the locks it
needs can be held past a commit point. Use the CLOSE CURSOR statement as
soon as possible in your program to cause those locks to be released and the
resources they hold to be freed at the first commit point that follows the CLOSE
CURSOR statement. Whether page or row locks are held for WITH HOLD cursors
is controlled by the RELEASE LOCKS parameter on installation panel DSNTIP4.
Closing cursors is particularly important in a distributed environment.
Free locators: If you have executed the HOLD LOCATOR statement, the LOB
locator holds locks on LOBs past commit points. Use the FREE LOCATOR
statement to release these locks.
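For example, an application that holds a LOB locator in a host variable can free it
as follows (the host-variable name is illustrative):
  EXEC SQL FREE LOCATOR :PHOTOLOC;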
Bind plans with ACQUIRE(USE): ACQUIRE(USE), which indicates that DB2 will
acquire table and table space locks when the objects are first used and not when
the plan is allocated, is the best choice for concurrency. Packages are always bound
with ACQUIRE(USE), by default. ACQUIRE(ALLOCATE) can provide better
protection against timeouts. Consider ACQUIRE(ALLOCATE) for applications that
need gross locks instead of intent locks or that run with other applications that
may request gross locks instead of intent locks. Acquiring the locks at plan
allocation also prevents any one transaction in the application from incurring the
cost of acquiring the table and table space locks. If you need
ACQUIRE(ALLOCATE), you might want to bind all DBRMs directly to the plan.
For information about intent and gross locks, see “The mode of a lock” on page
785.
For more information about the ISOLATION option, see “The ISOLATION option”
on page 810.
For updatable dynamic scrollable cursors and ISOLATION(CS), DB2 holds row or
page locks on the base table (DB2 does not use a temporary global table). The most
recently fetched row or page from the base table remains locked to maintain data
integrity for a positioned update or delete.
| The use of sequences can avoid the lock contention problems that can result when
| applications implement their own sequences, such as in a one-row table that
| contains a sequence number that each transaction must increment. With DB2
| sequences, many users can access and increment the sequence concurrently
| without waiting. DB2 does not wait for a transaction that has incremented a
| sequence to commit before allowing another transaction to increment the sequence
| again.
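For example, a sequence might be defined and used as follows; the sequence and
table names are illustrative:
  CREATE SEQUENCE ORDER_SEQ
    AS INTEGER
    START WITH 1
    INCREMENT BY 1
    CACHE 20;

  INSERT INTO USRT001.ORDERS (ORDER_ID)
    VALUES (NEXT VALUE FOR ORDER_SEQ);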
For information on how to make an agent part of a global transaction for RRSAF
applications, see Section 7 of DB2 Application Programming and SQL Guide.
Knowing the aspects helps you understand why a process suspends or times out
or why two processes deadlock. To change the situation, you also need to know:
v “DB2 choices of lock types” on page 790
[Figure: The hierarchy of lock sizes. In a simple table space, a table space lock
stands above row or page locks. In a segmented table space, a table space lock
stands above table locks, which stand above row or page locks. In a LOB table
space, a LOB table space lock stands above LOB locks.]
As the figure also shows, in a segmented table space, a table lock applies only to
segments assigned to a single table. Thus, User 1 can lock all pages assigned to the
segments of T1 while User 2 locks all pages assigned to segments of T2. Similarly,
User 1 can lock a page of T1 without locking any data in T2.
Figure 81. Page locking for simple and segmented table spaces. [In the simple table
space, User 1's page locks are covered by a single lock on table space TS1. In the
segmented table spaces, User 1 and User 2 hold separate locks on tables T1 and T2.]
Effects
For maximum concurrency, locks on a small amount of data held for a short
duration are better than locks on a large amount of data held for a long duration.
Duration of partition, table, and table space locks: Partition, table, and table space
locks can be acquired when a plan is first allocated, or you can delay acquiring
them until the resource they lock is first used. They can be released at the next
commit point or be held until the program terminates.
On the other hand, LOB table space locks are always acquired when needed and
released at a commit or held until the program terminates. See “LOB locks” on
page 823 for information about locking LOBs and LOB table spaces.
Duration of page and row locks: If a page or row is locked, DB2 acquires the lock
only when it is needed. When the lock is released depends on many factors, but it
is rarely held beyond the next commit point.
For information about controlling the duration of locks, see “Bind options” on
page 806, which describes the ACQUIRE, RELEASE, ISOLATION, and
CURRENTDATA bind options.
The possible modes for page and row locks and the modes for partition, table, and
table space locks are listed in “Modes of page and row locks” and “Modes of table,
partition, and table space locks” on page 786. See “LOB locks” on page 823 for
more information about modes for LOB locks and locks on LOB table spaces.
When a page or row is locked, the table, partition, or table space containing it is
also locked. In that case, the table, partition, or table space lock has one of the
intent modes: IS, IX, or SIX. The modes S, U, and X of table, partition, and table
space locks are sometimes called gross modes. In the context of reading, SIX is a
gross mode lock because you don’t get page or row locks; in this sense, it is like an
S lock.
Example: An SQL statement locates John Smith in a table of customer data and
changes his address. The statement locks the entire table space in mode IX and the
specific row that it changes in mode X.
Definition: Locks of some modes do not shut out all other users. Assume that
application process A holds a lock on a table space that process B also wants to
access. DB2 requests, on behalf of B, a lock of some particular mode. If the mode
of A’s lock permits B’s request, the two locks (or modes) are said to be compatible.
Effects of incompatibility: If the two locks are not compatible, B cannot proceed. It
must wait until A releases its lock. (And, in fact, it must wait until all existing
incompatible locks are released.)
Compatible lock modes: Compatibility for page and row locks is easy to define.
Table 134 shows whether page locks of any two modes, or row locks of any two
modes, are compatible (Yes) or not (No). No question of compatibility of a page
lock with a row lock can arise, because a table space cannot use both page and
row locks.
Table 134. Compatibility of page lock and row lock modes
Lock Mode S U X
S Yes Yes No
U Yes No No
X No No No
Compatibility for table space locks is slightly more complex. Table 135 shows
whether or not table space locks of any two modes are compatible.
Table 135. Compatibility of table and table space (or partition) lock modes
Lock Mode   IS    IX    S     U     SIX   X
IS          Yes   Yes   Yes   Yes   Yes   No
IX          Yes   Yes   No    No    No    No
S           Yes   No    Yes   Yes   No    No
U           Yes   No    Yes   No    No    No
SIX         Yes   No    No    No    No    No
X           No    No    No    No    No    No
The underlying data page or row locks are acquired to serialize the reading and
updating of index entries to ensure the data is logically consistent, meaning that
the data is committed and not subject to rollback or abort. The data locks can be
held for a long duration such as until commit. However, the page latches are only
held for a short duration while the transaction is accessing the page. Because the
index pages are not locked, hot spot insert scenarios (which involve several
transactions trying to insert different entries into the same index page at the same
time) do not cause contention problems in the index.
A query that uses index-only access might lock the data page or row, and that lock
can contend with other processes that lock the data. However, using lock
avoidance techniques can reduce the contention. See “Lock avoidance” on page 818
for more information about lock avoidance.
Contention within table space SYSDBASE: SQL statements that update the
catalog table space SYSDBASE contend with one another when those statements
affect the same table space. Those statements are:
CREATE, ALTER, and DROP TABLESPACE, TABLE, and INDEX
CREATE and DROP VIEW, SYNONYM, and ALIAS
COMMENT ON and LABEL ON
GRANT and REVOKE of table privileges
Reading the table: The following SQL statements and sample steps provide a
way to understand the table that shows the modes of locks.
EXEC SQL DELETE FROM DSN8810.EMP WHERE CURRENT OF C1;
LOCKSIZE          ISOLATION      Access method(1)      Table space(9)   Table(2)           Data page or row(3)

Processing statement: SELECT with read-only or ambiguous cursor, or with no cursor. UR isolation is allowed and
requires none of these locks.
TABLESPACE        CS, RS, or RR  Any                   S                n/a                n/a
TABLE(2)          CS, RS, or RR  Any                   IS               S                  n/a
PAGE, ROW,        CS             Index, any use        IS(4,10)         IS(4)              S(5)
or ANY                           Table space scan      IS(4,10)         IS(4)              S(5)
PAGE, ROW,        RS             Index, any use        IS(4,10)         IS(4)              S(5), U(11), or X(11)
or ANY                           Table space scan      IS(4,10)         IS(4)              S(5), U(11), or X(11)
PAGE, ROW,        RR             Index/data probe      IS(4)            IS(4)              S(5), U(11), or X(11)
or ANY                           Index scan(6)         IS(4) or S       S, IS(4), or n/a   S(5), U(11), X(11), or n/a
                                 Table space scan(6)   IS(2) or S       S or n/a           n/a

Processing statement: INSERT ... VALUES(...) or INSERT ... fullselect(7)
TABLESPACE        CS, RS, or RR  Any                   X                n/a                n/a
TABLE(2)          CS, RS, or RR  Any                   IX               X                  n/a
PAGE, ROW,        CS, RS, or RR  Any                   IX               IX                 X
or ANY(12)

Processing statement: UPDATE or DELETE, without cursor. Data page and row locks apply only to selected data.
TABLESPACE        CS, RS, or RR  Any                   X                n/a                n/a
TABLE(2)          CS, RS, or RR  Any                   IX               X                  n/a
PAGE, ROW,        CS             Index selection       IX               IX                 For delete: X;
or ANY                                                                                     for update: U→X
                                 Index/data selection  IX               IX                 U→X
                                 Table space scan      IX               IX                 U→X
PAGE, ROW,        RS             Index selection       IX               IX                 For update: S or U(8)→X;
or ANY                                                                                     for delete: [S→X] or X
                                 Index/data selection  IX               IX                 S or U(8)→X
                                 Table space scan      IX               IX                 S or U(8)→X
PAGE, ROW,        RR             Index selection       IX               IX                 For update: [S or U(8)→X] or X;
or ANY                                                                                     for delete: [S→X] or X
                                 Index/data selection  IX               IX                 S or U(8)→X
                                 Table space scan      IX(2) or X       X or n/a           n/a

Processing statement: SELECT with FOR UPDATE OF. Data page and row locks apply only to selected data.
TABLESPACE        CS, RS, or RR  Any                   U                n/a                n/a
TABLE(2)          CS, RS, or RR  Any                   IS or IX         U                  n/a
PAGE, ROW,        CS             Index, any use        IX               IX                 U
or ANY                           Table space scan      IX               IX                 U

(Parenthesized numbers refer to the table's footnotes.)
Lock promotion
Definition: Lock promotion is the action of exchanging one lock on a resource for a
more restrictive lock on the same resource, held by the same application process.
Effects: When promoting the lock, DB2 first waits until any incompatible locks held
by other processes are released. When locks are promoted, they are promoted in
the direction of increasing control over resources: from IS to IX, S, or X; from IX to
SIX or X; from S to X; from U to X; and from SIX to X.
Lock escalation
Definition: Lock escalation is the act of releasing a large number of page, row or
LOB locks, held by an application process on a single table or table space, to
acquire a table or table space lock, or a set of partition locks, of mode S or X
instead. When it occurs, DB2 issues message DSNI031I, which identifies the table
space for which lock escalation occurred, and some information to help you
identify what plan or package was running when the escalation occurred.
Lock counts are always kept on a table or table space level. For an application
process that is accessing LOBs, the LOB lock count on the LOB table space is
maintained separately from the base table space, and lock escalation occurs
separately from the base table space.
| When escalation occurs for a partitioned table space, only partitions that are
| currently locked are escalated. Unlocked partitions remain unlocked. After lock
escalation occurs, any unlocked partitions that are subsequently accessed are
locked with a gross lock.
Example: Assume that a segmented table space is defined with LOCKSIZE ANY
and LOCKMAX 2000. DB2 can use page locks for a process that accesses a table in
the table space and can escalate those locks. If the process attempts to lock more
than 2000 pages in the table at one time, DB2 promotes its intent locks on the table
to mode S or X and then releases its page locks.
If the process is using Sysplex query parallelism and a table space that it accesses
has a LOCKMAX value of 2000, lock escalation occurs for a member only if more
than 2000 locks are acquired for that member.
See “Controlling LOB lock escalation” on page 826 for information about lock
escalation for LOBs.
Recommendations: The DB2 statistics and performance traces can tell you how
often lock escalation has occurred and whether it has caused timeouts or
deadlocks. As a rough estimate, if one quarter of your lock escalations cause
timeouts or deadlocks, then escalation is not effective for you. You might alter the
table space to increase LOCKMAX and thus decrease the number of escalations.
Example: Assume that a table space is used by transactions that require high
concurrency and that a batch job updates almost every page in the table space. For
high concurrency, you should probably create the table space with LOCKSIZE
PAGE and make the batch job commit every few seconds.
LOCKSIZE ANY is a possible choice, if you take other steps to avoid lock
escalation. If you use LOCKSIZE ANY, specify a LOCKMAX value large enough so
that locks held by transactions are not normally escalated. Also, LOCKS PER USER
must be large enough so that transactions do not reach that limit.
The maximum amount of storage available for IRLM locks is limited to 90% of the
total space given to the IRLM private address space during the startup procedure.
The other 10% is reserved for IRLM system services, z/OS system services, and
“must complete” processes to prevent the IRLM address space from abending,
which would bring down your DB2 system. When the storage limit is reached,
lock requests are rejected with an out-of-storage reason code.
The following fields of the installation panels are relevant to transaction locks:
DEADLOCK TIME on installation panel DSNTIPJ
Default: 5 seconds.
RESOURCE TIMEOUT on installation panel DSNTIPI
Default: 60 seconds.
The timeout period: From the value of RESOURCE TIMEOUT and DEADLOCK
TIME, DB2 calculates a timeout period. Assume that DEADLOCK TIME is 5 and
RESOURCE TIMEOUT is 18.
1. Divide RESOURCE TIMEOUT by DEADLOCK TIME (18/5 = 3.6). IRLM limits
the result of this division to 255.
2. Round the result to the next largest integer (Round up 3.6 to 4).
3. Multiply the DEADLOCK TIME by that integer (4 * 5 = 20).
The result, the timeout period (20 seconds), is always at least as large as the value
of RESOURCE TIMEOUT (18 seconds), except when the RESOURCE TIMEOUT
divided by DEADLOCK TIME exceeds 255.
The timeout multiplier: Requests from different types of processes wait for
different multiples of the timeout period. In a data sharing environment, you can
add another multiplier to those processes to wait for retained locks.
Changing the multiplier for IMS BMP and DL/I batch: You can modify the
multipliers for IMS BMP and DL/I batch by modifying the following subsystem
parameters on installation panel DSNTIPI:
IMS BMP TIMEOUT The timeout multiplier for IMS BMP connections. A
value from 1 to 254 is acceptable. The default is 4.
DL/I BATCH TIMEOUT The timeout multiplier for IMS DL/I batch
connections. A value from 1 to 254 is acceptable.
The default is 6.
Additional multiplier for retained locks: For data sharing, you can specify an
additional timeout multiplier to be applied to the connection’s normal timeout
multiplier. This multiplier is used when the connection is waiting for a retained
lock, which is a lock held by a failed member of a data sharing group. A zero
means don’t wait for retained locks. See DB2 Data Sharing: Planning and
Administration for more information about retained locks.
The scanning schedule: Figure 82 on page 799 illustrates the following example of
scanning to detect a timeout:
v DEADLOCK TIME has the default value of 5 seconds.
v RESOURCE TIMEOUT was chosen to be 18 seconds. Therefore, the timeout
period is 20 seconds.
v A bind operation starts 4 seconds before the next scan. The operation multiplier
for a bind operation is 3.
[Figure 82: A time line in seconds (0 through 69), showing the deadlock-time scans
that occur every 5 seconds and the grace period before the timeout is detected.]
Effects: An operation can remain inactive for longer than the value of RESOURCE
TIMEOUT.
If you are in a data sharing environment, the deadlock and timeout detection
process is longer than that for non-data-sharing systems. See DB2 Data Sharing:
Planning and Administration for more information about global detection processing
and elongation of the timeout period.
Recommendation: Consider the length of inaction time when choosing your own
values of DEADLOCK TIME and RESOURCE TIMEOUT.
IDLE THREAD TIMEOUT on installation panel DSNTIPR
Default: 0. That value disables the scan to time out idle threads. The threads can
then remain idle indefinitely.
UTILITY TIMEOUT on installation panel DSNTIPI
Default: 6.
Recommendation: With the default value, a utility generally waits longer for a
resource than does an SQL application. To specify a different inactive period, you
must consider how DB2 times out a process that is waiting for a drain, as
described in “Wait time for drains.”
If the process drains more than one claim class, it must wait for those events to
occur for each claim class it drains.
Wait times for drain lock and claim release: Both wait times are based on the
timeout period that is calculated in “RESOURCE TIMEOUT on installation panel
DSNTIPI” on page 797. For the REORG utility with the SHRLEVEL REFERENCE
or SHRLEVEL CHANGE option, you can use utility parameters to specify the wait
time for a drain lock and to indicate if additional attempts should be made to
acquire the drain lock. For more information, see DB2 Utility Guide and Reference.
Drainer Each wait time is:
Utility (timeout period) * (value of UTILITY TIMEOUT)
Other process timeout period
Maximum wait time: Because the maximum wait time for a drain lock is the same
as the maximum wait time for releasing claims, you can calculate the total
maximum wait time as follows:
For utilities:
2 * (timeout period) * (UTILITY TIMEOUT) * (number of claim classes)
For other processes:
2 * (timeout period) * (operation multiplier) * (number of claim classes)
Example: Suppose that LOAD must drain 3 claim classes, that the timeout period
is 20 seconds, and that the value of UTILITY TIMEOUT is 6. Use the following
calculation to determine how long the LOAD utility might be suspended before
being timed out:
Maximum wait time = 2 * 20 * 6 * 3 = 720 seconds
Wait times less than maximum: The maximum drain wait time is the longest
possible time a drainer can wait for a drain, not the length of time it always waits.
LOCKS PER USER on installation panel DSNTIPJ
When a request for a page, row, or LOB lock exceeds the specified limit, it receives
SQLCODE -904: “resource unavailable” (SQLSTATE '57011'). The requested lock
cannot be acquired until some of the existing locks are released.
Default: 10 000
Recommendation: The default should be adequate for 90 percent of the work load
when using page locks. If you use row locks on very large tables, you might want
a higher value. If you use LOBs, you might need a higher value.
Review application processes that require higher values to see if they can use table
space locks rather than page, row, or LOB locks. The accounting trace shows the
maximum number of page, row, or LOB locks a process held while running.
Effect: Specifies the size for locks held on a table or table space by any application
process that accesses it. In addition to using the ALTER TABLESPACE statement to
change the lock size for user data, you can also change the lock size of any DB2
catalog table space that is neither a LOB table space nor a table space that contains
links. The options are:
LOCKSIZE TABLESPACE
A process acquires no table, page, row, or LOB locks within the table space.
That improves performance by reducing the number of locks maintained, but
greatly inhibits concurrency.
LOCKSIZE TABLE
A process acquires table locks on tables in a segmented table space. If the table
space contains more than one table, this option can provide acceptable
concurrency with little extra cost in processor resources.
LOCKSIZE PAGE
A process acquires page locks, plus table, partition, or table space locks of
modes that permit page locks (IS, IX, or SIX). The effect is not absolute: a
process can still acquire a table, partition, or table space lock of mode S or X,
without page locks, if that is needed. In that case, the bind process issues a
message warning that the lock size has been promoted as described under
“Lock promotion” on page 793.
LOCKSIZE ROW
A process acquires row locks, plus table, partition, or table space locks of
modes that permit row locks (IS, IX, or SIX). The effect is not absolute: a
process can still acquire a table or table space lock of mode S or X, without
row locks, if that is needed. In that case, the bind process issues a message
warning that the lock size has been promoted as described under “Lock
promotion” on page 793.
LOCKSIZE ANY
DB2 chooses the size of the lock, usually LOCKSIZE PAGE.
LOCKSIZE LOB
If a LOB must be accessed, a process acquires LOB locks and the necessary
LOB table space locks (IS or IX). This option is valid only for LOB table spaces.
See “LOB locks” on page 823 for more information about LOB locking.
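For example, to change the lock size of a user table space explicitly (the table
space name is illustrative):
  ALTER TABLESPACE USRDB01.USRTS01 LOCKSIZE PAGE;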
DB2 attempts to acquire an S lock on table spaces that are started with read-only
access. If the LOCKSIZE is PAGE or ROW, and DB2 cannot get the S lock, it
| requests an IS lock. If a partition is started with read-only access, DB2 attempts to
| get an S lock on the partition that is started RO. For a complete description of how
the LOCKSIZE clause affects lock attributes, see “DB2 choices of lock types” on
page 790.
Recommendation: If you do not use the default, base your choice upon the results
of monitoring applications that use the table space. When considering changing the
lock size for a DB2 catalog table space, be aware that, in addition to user queries,
DB2 internal processes such as bind and authorization checking and utility
processing can access the DB2 catalog.
Row locks or page locks? The question of whether to use row or page locks
depends on your data and your applications. If you are experiencing contention on
data pages of a table space now defined with LOCKSIZE PAGE, consider
LOCKSIZE ROW. But consider also the trade-offs.
The resource required to acquire, maintain, and release a row lock is about the
same as that required for a page lock. If your data has 10 rows per page, a table
space scan or an index scan can require nearly 10 times as much resource for row
locks as for page locks. But locking only a row at a time, rather than a page, might
reduce the chance of contention with some other process by 90%, especially if
access is random. (Row locking is not recommended for sequential processing.)
| Lock avoidance is very important when row locking is used. Therefore, use
| ISOLATION(CS) CURRENTDATA(NO) or ISOLATION(UR) whenever possible. In
many cases, DB2 can avoid acquiring a lock when reading data that is known to be
committed. Thus, if only 2 of 10 rows on a page contain uncommitted data, DB2
must lock the entire page when using page locks, but might ask for locks on only
the 2 rows when using row locks. Then, the resource required for row locks would
be only twice as much, not 10 times as much, as that required for page locks.
On the other hand, if two applications update the same rows of a page, and not in
the same sequence, then row locking might even increase contention. With page
locks, the second application to access the page must wait for the first to finish and
might time out. With row locks, the two applications can access the same page
simultaneously, and might deadlock while trying to access the same set of rows.
Effect: You can specify these values not only for tables of user data but also, by
using ALTER TABLESPACE, for tables in the DB2 catalog.
LOCKMAX n
Specifies the maximum number of page or row locks that a single application
process can hold on the table space before those locks are escalated as
described in “Lock escalation” on page 793. For LOB table spaces, this value
specifies the number of LOB locks that the application process can hold before
escalating. For an application that uses Sysplex query parallelism, a lock count
is maintained on each member.
LOCKMAX SYSTEM
Specifies that n is effectively equal to the system default set by the field
LOCKS PER TABLE(SPACE) of installation panel DSNTIPJ.
LOCKMAX 0
Disables lock escalation entirely.
Default: The default depends on the value of LOCKSIZE, as shown in Table 141.
Table 141. How the default for LOCKMAX is determined
LOCKSIZE Default for LOCKMAX
ANY SYSTEM
other 0
Recommendations: If you do not use the default, base your choice upon the
results of monitoring applications that use the table space.
Aim to set the value of LOCKMAX high enough that, when lock escalation occurs,
one application already holds so many locks that it significantly interferes with
others. For example, if an application holds half a million locks on a table with a
million rows, it probably already locks out most other applications. Yet lock
escalation can prevent it from potentially acquiring another half million locks.
If you alter a table space from LOCKSIZE PAGE or LOCKSIZE ANY to LOCKSIZE
ROW, consider increasing LOCKMAX to allow for the increased number of locks
that applications might require.
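For example, the two clauses can be changed together; the table space name and
the LOCKMAX value are illustrative:
  ALTER TABLESPACE USRDB01.USRTS01
    LOCKSIZE ROW LOCKMAX 20000;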
LOCKS PER TABLE(SPACE) on installation panel DSNTIPJ
Default: 1000
Recommendation: Use the default or, if you are migrating from a previous release
of DB2, continue to use the existing value. The value should be less than the value
for LOCKS PER USER, unless the value of LOCKS PER USER is 0. When you
create or alter a table space, especially when you alter one to use row locks, use
the LOCKMAX clause explicitly for that table space.
RELEASE LOCKS on installation panel DSNTIP4
Default: YES
Recommendation: The default, YES, causes DB2, at commit time, to release the
data page or row lock for the row on which the cursor is positioned. This lock is
unnecessary for maintaining cursor position. To improve concurrency, specify YES.
Specify NO only for those cases in which existing applications rely on that
particular data lock.
Default: NO
When YES is specified, DB2 uses an X lock on rows or pages that qualify during
stage 1 processing. With ISOLATION(CS), the lock is released if the row or page is
not updated or deleted because it is rejected by stage 2 processing. With
ISOLATION(RR) or ISOLATION(RS), DB2 acquires an X lock on all rows that fall
within the range of the selection expression. Thus, a lock upgrade request is not
needed for qualifying rows though the lock duration is changed from manual to
commit. The lock duration change is not as costly as a lock upgrade.
A value of YES specifies that predicate evaluation can occur on uncommitted data
of other transactions. With YES, data might be excluded from the answer set. Data
that does not satisfy the predicate during evaluation but then, because of undo
processing (ROLLBACK or statement failure), reverts to a state that does satisfy the
predicate is missing from the answer set. A value of YES enables DB2 to take fewer
locks during query processing. The number of locks avoided depends on:
v The query’s access path
v The number of evaluated rows that do not satisfy the predicate
v The number of those rows that are on overflow pages
Default: NO
| Default: NO
| Example: Suppose that you frequently modify data by deleting the data and
| inserting the new image of the data. In cases like that one, which avoid UPDATE
| statements, use the default.
Bind options
The information under this heading, up to “Isolation overriding with SQL
statements” on page 821, is General-use Programming Interface and Associated
Guidance Information, as defined in “Notices” on page 1237.
The RELEASE option and dynamic statement caching: Generally, the RELEASE
option has no effect on dynamic SQL statements, with one exception. When you
use the bind options RELEASE(DEALLOCATE) and KEEPDYNAMIC(YES), and
your subsystem is installed with YES for field CACHE DYNAMIC SQL on
installation panel DSNTIP4, DB2 retains prepared SELECT, INSERT, UPDATE, and
DELETE statements in memory past commit points. For this reason, DB2 can honor
the RELEASE(DEALLOCATE) option for these dynamic statements. The locks are
held until deallocation, or until the commit after the prepared statement is freed
from memory, in the following situations:
v The application issues a PREPARE statement with the same statement identifier.
v The statement is removed from memory because it has not been used.
v An object that the statement is dependent on is dropped or altered, or a
privilege needed by the statement is revoked.
v RUNSTATS is run against an object that the statement is dependent on.
For partitioned table spaces, lock demotion occurs for each partition for which
there is a lock.
Defaults: The defaults differ for different types of bind operations, as shown in
Table 144.
Table 144. Default ACQUIRE and RELEASE values for different bind options
Operation Default values
BIND PLAN ACQUIRE(USE) and RELEASE(COMMIT).
BIND PACKAGE There is no option for ACQUIRE; ACQUIRE(USE) is
always used. At the local server the default for RELEASE
is the value used by the plan that includes the package in
its package list. At a remote server the default is
COMMIT.
REBIND PLAN or PACKAGE The existing values for the plan or package that is being
rebound.
The RELEASE option and DDL operations for remote requesters: When you
perform DDL operations on behalf of remote requesters and
RELEASE(DEALLOCATE) is in effect, be aware of the following condition: when a
package that is bound with RELEASE(DEALLOCATE) accesses data at a server, its
table and table space locks are not released until deallocation, which can prevent
DDL operations that are issued by remote requesters from completing.
To allow those operations to complete, you can use the command STOP DDF
MODE(SUSPEND). The command suspends server threads and terminates their
locks so that DDL operations from remote requesters can complete. When these
operations complete, you can use the command START DDF to resume the
suspended server threads. However, even after the command STOP DDF
MODE(SUSPEND) completes successfully, database resources might be held if DB2
is performing any activity other than inbound DB2 processing. You might have to
use the command CANCEL THREAD to terminate other processing and thereby
free the database resources.
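For example, the sequence of commands might be:
  -STOP DDF MODE(SUSPEND)
(run the DDL operations)
  -START DDF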
Restriction: The combination ACQUIRE(ALLOCATE) and RELEASE(DEALLOCATE)
is not allowed for BIND PACKAGE. Use this combination if processing efficiency is
more important than concurrency. It is a
good choice for batch jobs that would release table and table space locks only to
reacquire them almost immediately. It might even improve concurrency, by
allowing batch jobs to finish sooner. Generally, do not use this combination if your
application contains many SQL statements that are often not executed.
7. The exceptions are mass delete operations and utility jobs that drain all claim classes.
For more detailed examples, see DB2 Application Programming and SQL Guide.
Regardless of the isolation level, uncommitted claims on DB2 objects can inhibit
the execution of DB2 utilities or commands. For more information about how
claims can affect concurrency see “Claims and drains for concurrency control” on
page 827.
ISOLATION (CS)
Allows maximum concurrency with data integrity. However, after the
process leaves a row or page, another process can change the data. With
CURRENTDATA(NO), the process doesn’t have to leave a row or page to
allow another process to change the data. If the first process returns to
read the same row or page, the data is not necessarily the same. Consider
these consequences of that possibility:
v For table spaces created with LOCKSIZE ROW, PAGE, or ANY, a change
can occur even while executing a single SQL statement, if the statement
reads the same row more than once. In the following example:
SELECT * FROM T1
WHERE COL1 = (SELECT MAX(COL1) FROM T1);
For packages and plans that contain updatable static scrollable cursors,
ISOLATION(CS) lets DB2 use optimistic concurrency control. DB2 can use
optimistic concurrency control to shorten the amount of time that locks are
held in the following situations:
v Between consecutive fetch operations
v Between fetch operations and subsequent positioned update or delete
operations
Figure 83. Positioned updates and deletes without optimistic concurrency control
Figure 84. Positioned updates and deletes with optimistic concurrency control
ISOLATION (UR)
Allows the application to read while acquiring few locks, at the risk of
reading uncommitted data. UR isolation applies only to read-only
operations: SELECT, SELECT INTO, or FETCH from a read-only result
table.
Reading uncommitted data introduces an element of uncertainty.
Example: An application tracks the movement of work from station to
station along an assembly line. As items move from one station to another,
the application subtracts from the count of items at the first station and
adds to the count of items at the second. Assume you want to query the
count of items at all the stations, while the application is running
concurrently.
What can happen if your query reads data that the application has
changed but has not committed?
v If the application subtracts an amount from one record before adding it to
another, the query could miss the amount entirely.
v If the application adds first and then subtracts, the query could add the
amount twice.
If those situations can occur and are unacceptable, do not use UR isolation.
When can you use uncommitted read (UR)? You can probably use UR
isolation in cases like the following ones:
v When errors cannot occur.
Example: A reference table, like a table of descriptions of parts by part
number. It is rarely updated, and reading an uncommitted update is
probably no more damaging than reading the table 5 seconds earlier. Go
ahead and read it with ISOLATION(UR).
Example: The employee table of Spiffy Computer, our hypothetical user.
For security reasons, updates can be made to the table only by members
of a single department. And that department is also the only one that
can query the entire table. It is easy to restrict queries to times when no
updates are being made and then run with UR isolation.
Figure 85. How an application using RS isolation acquires locks when no lock avoidance
techniques are used. Locks L2 and L4 are held until the application commits. The other locks
aren’t held.
Applications using read stability can leave rows or pages locked for long
periods, especially in a distributed environment.
Figure 86. How an application using RR isolation acquires locks. All locks are held until the
application commits.
Applications that use repeatable read can leave rows or pages locked for
longer periods, especially in a distributed environment, and they can claim
more logical partitions than similar applications using cursor stability.
Because so many locks can be taken, lock escalation might take place.
Frequent commits release the locks and can help avoid lock escalation.
With repeatable read, lock promotion occurs for table space scan to prevent
the insertion of rows that might qualify for the predicate. (If access is via
index, DB2 locks the key range. If access is via table space scans, DB2 locks
the table, partition, or table space.)
Plans and packages that use UR isolation: Auditors and others might need to
determine what plans or packages are bound with UR isolation. For queries that
select that information from the catalog, see “Ensuring that concurrent users access
consistent data” on page 295.
Local access: Locally, CURRENTDATA(YES) means that the data upon which the
cursor is positioned cannot change while the cursor is positioned on it. If the
cursor is positioned on data in a local base table or index, then the data returned
with the cursor is current with the contents of that table or index. If the cursor is
positioned on data in a work file, the data returned with the cursor is current only
with the contents of the work file; it is not necessarily current with the contents of
the underlying table or index.
Figure 87. How an application using CS isolation with CURRENTDATA(YES) acquires locks.
This figure shows access to the base table. The L2 and L4 locks are released after DB2
moves to the next row or page. When the application commits, the last lock is released.
As with work files, if a cursor uses query parallelism, data is not necessarily
current with the contents of the table or index, regardless of whether a work file is
used. Therefore, for work file access or for parallelism on read-only queries, the
CURRENTDATA option has no effect.
To take the best advantage of this method of avoiding locks, make sure all
applications that are accessing data concurrently issue COMMITs frequently.
Figure 88 shows how DB2 can avoid taking locks and Table 151 summarizes the
factors that influence lock avoidance.
Figure 88. Best case of avoiding locks using CS isolation with CURRENTDATA(NO). This
figure shows access to the base table. If DB2 must take a lock, then locks are released when
DB2 moves to the next row or page, or when the application commits (the same as
CURRENTDATA(YES)).
Table 151. Lock avoidance factors. “Returned data” means data that satisfies the predicate.
“Rejected data” is that which does not satisfy the predicate.
                                          Avoid locks   Avoid locks
                                          on returned   on rejected
Isolation   CURRENTDATA   Cursor type    data?         data?
UR          N/A           Read-only      N/A           N/A
| Note:
1. For RS, locks are avoided on rejected data only when multi-row fetch is used
and when stage 1 predicates fail.
For example, the plan value for CURRENTDATA has no effect on the packages
executing under that plan. If you do not specify a CURRENTDATA option
explicitly when you bind a package, the default is CURRENTDATA(YES).
The rules are slightly different for the bind options RELEASE and ISOLATION.
The values of those two options are set when the lock on the resource is acquired
and usually stay in effect until the lock is released. But a conflict can occur if a
new application process requests the same resource with a different isolation level.
Table 152 shows how conflicts between isolation levels are resolved. The first
column is the existing isolation level, and the remaining columns show what
happens when another isolation level is requested by a new application process.
Table 152. Resolving isolation conflicts
Existing   UR    CS    RS    RR
UR         n/a   CS    RS    RR
CS         CS    n/a   RS    RR
RS         RS    RS    n/a   RR
RR         RR    RR    RR    n/a
For locks and claims that are needed for cursor position, the following exceptions
exist for special cases:
Page and row locks: If you specify NO on the RELEASE LOCKS field on
installation panel DSNTIP4, described in “Option to release locks for cursors
defined WITH HOLD” on page 805, a page or row lock, if the lock is not
successfully avoided through lock avoidance, is held past the commit point. This
page or row lock is not necessary for cursor position, but the NO option is
provided for compatibility with applications that might rely on this lock. However,
an X or U lock is demoted to an S lock at that time. (Because changes have been
committed, exclusive control is no longer needed.) After the commit point, the lock
is released at the next commit point, provided that no cursor is still positioned on
that page or row.
Table, table space, and DBD locks: All necessary locks are held past the commit
point. After that, they are released according to the RELEASE option under which
they were acquired: for COMMIT, at the next commit point after the cursor is
closed; for DEALLOCATE, when the application is deallocated.
Claims: All claims, for any claim class, are held past the commit point. They are
released at the next commit point after all held cursors have moved off the object
or have been closed.
Function of the WITH clause: You can override the isolation level with which a
plan or package is bound by the WITH clause on certain SQL statements.
For example, a SELECT statement with a WITH UR clause finds the maximum,
minimum, and average bonus in the sample employee table. The statement is
executed with uncommitted read isolation, regardless of the value of ISOLATION
with which the plan or package containing the statement is bound.
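A statement of the following form produces that result; the host-variable names
are illustrative:
  EXEC SQL
    SELECT MAX(BONUS), MIN(BONUS), AVG(BONUS)
      INTO :MAXBONUS, :MINBONUS, :AVGBONUS
      FROM DSN8810.EMP
      WITH UR;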
| USE AND KEEP ... LOCKS options of the WITH clause: If you use the WITH RR
| or WITH RS clause, you can use the USE AND KEEP EXCLUSIVE LOCKS, USE
| AND KEEP UPDATE LOCKS, USE AND KEEP SHARE LOCKS options in SELECT
| and SELECT INTO statements.
| Example: To use these options, specify them as shown in the following example:
| SELECT ...
|   WITH RS USE AND KEEP UPDATE LOCKS;
| By using one of these options, you tell DB2 to acquire and hold a specific mode of
| lock on all the qualified pages or rows. Table 153 shows which mode of lock is
| held on rows or pages when you specify the SELECT using the WITH RS or WITH
| RR isolation clause.
| Table 153. Which mode of lock is held on rows or pages when you specify the SELECT
| using the WITH RS or WITH RR isolation clause
| Option Value Lock Mode
| USE AND KEEP EXCLUSIVE LOCKS X
| USE AND KEEP UPDATE LOCKS U
| USE AND KEEP SHARE LOCKS S
| With repeatable read (RR) isolation, DB2 acquires locks on all pages or rows that
| fall within the range of the selection expression.
| All locks are held until the application commits. Although this option can reduce
| concurrency, it can prevent some types of deadlocks and can better serialize access
| to data.
For information about using LOCK TABLE on an auxiliary table, see “The LOCK
TABLE statement for LOBs” on page 827.
Executing the statement requests a lock immediately, unless a suitable lock exists
already. The bind option RELEASE determines when locks acquired by LOCK
TABLE or LOCK TABLE with the PART option are released.
You can use LOCK TABLE on any table, including auxiliary tables of LOB table
spaces. See “The LOCK TABLE statement for LOBs” on page 827 for information
about locking auxiliary tables.
For example, suppose that you intend to execute an SQL statement to change job
code 21A to code 23 in a table of employee data. The table is defined with:
v The name PERSADM1.EMPLOYEE_DATA
v LOCKSIZE ROW
v LOCKMAX 0, which disables lock escalation
Because the change affects about 15% of the employees, the statement can require
many row locks of mode X. To avoid the overhead for locks, first execute:
LOCK TABLE PERSADM1.EMPLOYEE_DATA IN EXCLUSIVE MODE;
Example: To lock only one partition, you can execute a statement such as:
LOCK TABLE PERSADM1.EMPLOYEE_DATA PART 1 IN EXCLUSIVE MODE;
When the statement is executed, DB2 locks partition 1 with an X lock. The lock has
no effect on locks that already exist on other partitions in the table space.
LOB locks
The locking activity for LOBs is described separately from transaction locks
because the purpose of LOB locks is different than that of regular transaction locks.
A lock that is taken on a LOB value in a LOB table space is called a LOB lock.
DB2 also obtains locks on the LOB table space and the LOB values stored in that
LOB table space, but those locks have the following primary purposes:
v To determine whether space from a deleted LOB can be reused by an inserted or
updated LOB
Storage for a deleted LOB is not reused until no more readers (including held
locators) are on the LOB and the delete operation has been committed.
v To prevent deallocating space for a LOB that is currently being read
In summary, the main purpose of LOB locks is to manage the space used by LOBs
and to ensure that LOB readers do not read partially updated LOBs.
Applications need to free held locators so that the space can be reused.
Table 155 shows the relationship between the action that is occurring on the LOB
value and the associated LOB table space and LOB locks that are acquired.
Table 155. Locks that are acquired for operations on LOBs. This table does not account for
gross locks that can be taken because of LOCKSIZE TABLESPACE, the LOCK TABLE
statement, or lock escalation.

                            LOB table
Action on LOB value         space lock  LOB lock        Comment
Read (including UR)         IS          S               Prevents storage from being
                                                        reused while the LOB is being
                                                        read or while locators are
                                                        referencing the LOB
Insert                      IX          X               Prevents other processes from
                                                        seeing a partial LOB
Delete                      IS          S               To hold space in case the delete
                                                        is rolled back. (The X is on the
                                                        base table row or page.) Storage
                                                        is not reusable until the delete
                                                        is committed and no other
                                                        readers of the LOB exist.
Update                      IS->IX      Two LOB locks:  Operation is a delete followed
                                        an S-lock for   by an insert.
                                        the delete and
                                        an X-lock for
                                        the insert.
Update the LOB to null      IS          S               No insert, just a delete.
or zero-length
Update a null or            IX          X               No delete, just an insert.
zero-length LOB to a value
If a cursor is defined WITH HOLD, LOB locks are held through commit
operations.
Because LOB locks are held until commit and because locks are put on each LOB
column in both a source table and a target table, a statement such as an INSERT
with a fullselect that involves LOB columns can accumulate many more locks than
a similar statement that does not involve LOB columns. To prevent system
problems caused by too many locks, you can take any of the following actions
(sample statements follow the list):
v Ensure that you have lock escalation enabled for the LOB table spaces that are
involved in the INSERT. In other words, make sure that LOCKMAX is non-zero
for those LOB table spaces.
v Alter the LOB table space to change the LOCKSIZE to TABLESPACE before
executing the INSERT with fullselect.
v Increase the LOCKMAX value on the table spaces involved and ensure that the
user lock limit is sufficient.
v Use LOCK TABLE statements to lock the LOB table spaces. (Locking the
auxiliary table that is contained in the LOB table space locks the LOB table
space.)
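For example, the following statements sketch two of these approaches; the LOB
table space name DSN8D81L.PHOTOLTS and the auxiliary table name
DSN8810.AUX_EMP_PHOTO are hypothetical names that are used only for
illustration:
ALTER TABLESPACE DSN8D81L.PHOTOLTS LOCKMAX 1000;
LOCK TABLE DSN8810.AUX_EMP_PHOTO IN EXCLUSIVE MODE;
The first statement permits lock escalation on the LOB table space; the second
locks the LOB table space up front by locking the auxiliary table that it contains.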
Controlling the number of LOB locks that are acquired for a user
LOB locks are counted toward the total number of locks allowed per user. Control
this number by the value you specify on the LOCKS PER USER field of installation
panel DSNTIPJ. The number of LOB locks that are acquired during a unit of work,
and information about LOB lock escalation, are reported in IFCID 0020.
Claims
Definition: A claim is a notification to DB2 that an object is being accessed.
Three classes of claims: Table 156 shows the three classes of claims and the actions
that they allow.
Table 156. Three classes of claims and the actions that they allow
Claim class Actions allowed
Write Reading, updating, inserting, and deleting
Repeatable read Reading only, with repeatable read (RR) isolation
Cursor stability read Reading only, with read stability (RS), cursor stability
(CS), or uncommitted read (UR) isolation
| Detecting long-running read claims: DB2 issues a warning message and generates a
| trace record for each time period that a task holds an uncommitted read claim. You
| can set the length of the period in minutes by using the LRDRTHLD subsystem
| parameter.
Drains
Definition: A drain is the action of taking over access to an object by preventing
new claims and waiting for existing claims to be released.
Example: A utility can drain a partition when applications are accessing it.
Claim classes drained: A drainer does not always need complete control. It could
drain the following combinations of claim classes:
v Only the write claim class
v Only the repeatable read claim class
v All claim classes
Example: The CHECK INDEX utility needs to drain only writers from an index
space and its associated table space. RECOVER, however, must drain all claim
classes from its table space. The REORG utility can drain either writers (with
DRAIN WRITERS) or all claim classes (with DRAIN ALL).
Drain locks
Definition: A drain lock prevents conflicting processes from trying to drain the
same object at the same time. A drain lock also prevents new claimers from
accessing an object while a drainer has control of it.
Types of drain locks: Three types of drain locks on an object correspond to the
three claim classes:
v Write
v Repeatable read
v Cursor stability read
In general, after an initial claim has been made on an object by a user, no other
user in the system needs a drain lock. When the drain lock is granted, no drains
on the object are in process for the claim class needed, and the claimer can
proceed.
Exception: The claimer of an object requests a drain lock in two exceptional cases:
v A drain on the object is in process for the claim class needed. In this case, the
claimer waits for the drain lock.
v The claim is the first claim on an object before its data set has been physically
opened. Here, acquiring the drain lock ensures that no exception states prohibit
allocating the data set.
When the claimer gets the drain lock, it makes its claim and releases the lock
before beginning its processing.
The UTSERIAL lock: Access to the SYSUTILX table space in the directory is
controlled by a unique lock called UTSERIAL. A utility must acquire the
UTSERIAL lock to read or write in SYSUTILX, whether SYSUTILX is the target of
the utility or is used only incidentally.
Compatibility of utilities
Definition: Two utilities are considered compatible if they do not need access to the
same object at the same time in incompatible modes.
Before a utility starts, it is checked against all other utilities running on the same
target object. The utility starts only if all the others are compatible.
For details on which utilities are compatible, refer to each utility’s description in
DB2 Utility Guide and Reference.
Figure 89 on page 831 illustrates how SQL applications and DB2 utilities can
operate concurrently on separate partitions of the same table space.
Time Event
t1 An SQL application obtains a transaction lock on every partition in the table space.
The duration of the locks extends until the table space is deallocated.
t2 The SQL application makes a write claim on data partition 1 and index partition 1.
t3 The LOAD jobs begin draining all claim classes on data partitions 1 and 2 and
index partitions 1 and 2. LOAD on partition 2 operates concurrently with the SQL
application on partition 1. LOAD on partition 1 waits.
t4 The SQL application commits, releasing its write claims on partition 1. LOAD on
partition 1 can begin.
t6 LOAD on partition 2 completes.
t7 LOAD on partition 1 completes, releasing its drain locks. The SQL application (if it
has not timed out) makes another write claim on data partition 1.
t10 The SQL application deallocates the table space and releases its transaction locks.
Figure 89. SQL and utility concurrency. Two LOAD jobs execute concurrently on two
partitions of a table space
When multiple REORG utility jobs that specify the SHRLEVEL CHANGE or
SHRLEVEL REFERENCE option and the FASTSWITCH option run against separate
partitions of the same partitioned table space, some of the utilities might fail with
reason code 00E40012. This code, which indicates that the database descriptor
(DBD) is unavailable, is caused by multiple utilities arriving at the SWITCH phase
simultaneously. The SWITCH phase times out if it cannot acquire the DBD within
the timeout period that is specified by the UTILITY TIMEOUT field on installation
panel DSNTIPI. Increase the value of the installation parameter to alleviate the
problem.
Utility processing can be more efficient with partitioned indexes because, with the
correspondence of index partitions to data partitions, they promote partition-level
independence. For example, the REORG utility with the PART option can run
faster and with more concurrency when the indexes are partitioned. REORG
rebuilds the parts for each partitioned index during the BUILD phase, which can
increase parallel processing and reduce the lock contention of nonpartitioned
indexes. In addition, for REORG PART with the SHRLEVEL CHANGE or
REFERENCE option, the BUILD2 phase, during which nonpartitioned indexes are
corrected, can be eliminated. If all the indexes on the table are partitioned, the
utility skips the BUILD2 phase.
| Similarly, for the LOAD PART and REBUILD INDEX PART utilities, the parts for
| each partitioned index can be built in parallel during the BUILD phase, which
| reduces lock contention and improves concurrency. The LOAD PART utility also
| processes partitioned indexes with append logic, instead of the insert logic that it
| uses to process nonpartitioned indexes, which also improves performance.
| The accounting reports show the locking activity under the heading of LOCKING
| (as shown in Figure 91 on page 835). The other key indication of locking problems
| is the class 3 suspension LOCK/LATCH(DB2+IRLM) (C in Figure 92 on page
| 837). If locking and latching are increasing the elapsed time of your transactions
| or batch work, investigate further.
Use EXPLAIN to monitor the locks that are required by a particular SQL
statement, or by all the SQL in a particular plan or package, as described in
“Using EXPLAIN to tell which locks DB2 chooses” on page 833.
Procedure:
1. Use the EXPLAIN statement, or the EXPLAIN option of the BIND and REBIND
subcommands, to determine which modes of table and table space locks DB2
initially assigns for an SQL statement. Follow the instructions under “Obtaining
PLAN_TABLE information from EXPLAIN” on page 892. (EXPLAIN does not
return information about the locks acquired on LOB table spaces.)
2. EXPLAIN stores its results in a table called PLAN_TABLE. To review the
results, query PLAN_TABLE (a sample query follows this procedure). After you
run EXPLAIN, each row of PLAN_TABLE describes the processing for a single
table, either one named explicitly in the SQL statement that is being explained
or an intermediate table that DB2 has to create. The column TSLOCKMODE of
PLAN_TABLE shows an initial lock mode for that table. The lock mode applies
to the table or the table space, depending on the value of LOCKSIZE and
whether the table space is segmented or nonsegmented.
3. In Table 157, find what table or table space lock is used and whether page or
row locks are used also, for the particular combination of lock mode and
LOCKSIZE you are interested in.
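For example, after step 2 you might review the initial lock modes with a query
such as the following one; the QUERYNO value of 100 is a hypothetical value that
was assigned when the statement was explained:
SELECT TNAME, TSLOCKMODE
FROM PLAN_TABLE
WHERE QUERYNO = 100
ORDER BY QBLOCKNO, PLANNO;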
For statements executed remotely: EXPLAIN gathers information only about data
access in the DBMS where the statement is run or the bind operation is carried out.
To analyze the locks obtained at a remote DB2 location, you must run EXPLAIN at
that location. For more information on running EXPLAIN, and a fuller description
of PLAN_TABLE, see Chapter 33, “Using EXPLAIN to improve SQL performance,”
on page 891.
Table 157. Which locks DB2 chooses. N/A = Not applicable; Yes = Page or row locks are
acquired; No = No page or row locks are acquired.

                                         Lock mode from EXPLAIN
Table space structure                    IS    S     IX    U     X
For nonsegmented table spaces:
  Table space lock acquired is:          IS    S     IX    U     X
  Page or row locks acquired?            Yes   No    Yes   No    No
For segmented table spaces with
LOCKSIZE ANY, ROW, or PAGE:
  Table space lock acquired is:          IS    IS    IX    N/A   IX
  Table lock acquired is:                IS    S     IX    N/A   X
  Page or row locks acquired?            Yes   No    Yes   N/A   No
| Note: For partitioned table spaces, the lock mode applies only to those partitions that are
| locked. Lock modes for LOB table spaces are not reported with EXPLAIN.
Figure 90 shows a portion of the Statistics Trace, which tells how many
suspensions (A), timeouts (B), deadlocks (C), and lock escalations (D) occur
| in the trace record. You can also use the statistics trace to monitor each occurrence
| of lock escalation.
Figure 91 on page 835 shows a portion of the Accounting Trace, which gives the
same information for a particular application (suspensions A, timeouts B,
deadlocks C, lock escalations D). It also shows the maximum number of
concurrent page locks held and acquired during the trace (E and F). Review
applications with a large number to see if this value can be lowered. This number
is the basis for the proper setting of LOCKS PER USER and, indirectly, LOCKS
PER TABLE(SPACE).
LOCKING TOTAL
------------------- --------
TIMEOUTS B 0
DEADLOCKS C 0
ESCAL.(SHAR) D 0
ESCAL.(EXCL) 0
MAX PG/ROW LCK HELD E 2
LOCK REQUEST F 8
UNLOCK REQST 2
QUERY REQST 0
CHANGE REQST 5
OTHER REQST 0
LOCK SUSPENS. 1
IRLM LATCH SUSPENS. 0
OTHER SUSPENS. 0
TOTAL SUSPENS. A 1
DRAIN/CLAIM TOTAL
------------ --------
DRAIN REQST 0
DRAIN FAILED 0
CLAIM REQST 4
CLAIM FAILED 0
| To determine the effect of lock suspensions on your applications, examine the class
| 3 LOCK/LATCH time in the Accounting Report (C in Figure 92 on page 837).
Scenario description
An application, which has recently been moved into production, is experiencing
timeouts. Other applications have not been significantly affected in this example.
In some cases, the initial and detailed stages of tracing and analysis presented in
this chapter can be consolidated into one. In other cases, the detailed analysis
might not be required at all.
To analyze the problem, generally start with the accounting report - long. (If you
have enough information from program and system messages, you can skip this
first step.)
Accounting report
Figure 92 on page 837 shows a portion of the accounting report - long.
GLOBAL CONTENTION L-LOCKS AVERAGE TIME AV.EVENT GLOBAL CONTENTION P-LOCKS AVERAGE TIME AV.EVENT
------------------------------------- ------------ -------- ------------------------------------- ------------ --------
L-LOCKS 0.000000 0.00 P-LOCKS 0.000000 0.00
PARENT (DB,TS,TAB,PART) 0.000000 0.00 PAGESET/PARTITION 0.000000 0.00
CHILD (PAGE,ROW) 0.000000 0.00 PAGE 0.000000 0.00
OTHER 0.000000 0.00 OTHER 0.000000 0.00
SQL DML AVERAGE TOTAL SQL DCL TOTAL SQL DDL CREATE DROP ALTER LOCKING AVERAGE TOTAL
-------- -------- -------- -------------- -------- ---------- ------ ------ ------ ---------------------- -------- --------
SELECT 20.00 3860 LOCK TABLE 0 TABLE 0 0 0 TIMEOUTS 0.00 0
INSERT 0.00 0 GRANT 0 CRT TTABLE 0 N/A N/A DEADLOCKS 0.00 0
UPDATE 30.00 5790 REVOKE 0 DCL TTABLE 0 N/A N/A ESCAL.(SHARED) 0.00 0
DELETE 10.00 1930 SET CURR.SQLID 0 AUX TABLE 0 N/A N/A ESCAL.(EXCLUS) 0.00 0
SET HOST VAR. 0 INDEX 0 0 0 MAX PG/ROW LOCKS HELD 43.34 47
DESCRIBE 0.00 0 SET CUR.DEGREE 0 TABLESPACE 0 0 0 LOCK REQUEST 63.82 12318
DESC.TBL 0.00 0 SET RULES 0 DATABASE 0 0 0 UNLOCK REQUEST 14.48 2794
PREPARE 0.00 0 SET CURR.PATH 0 STOGROUP 0 0 0 QUERY REQUEST 0.00 0
OPEN 10.00 1930 SET CURR.PREC. 0 SYNONYM 0 0 N/A CHANGE REQUEST 33.35 6436
FETCH 10.00 1930 CONNECT TYPE 1 0 VIEW 0 0 N/A OTHER REQUEST 0.00 0
CLOSE 10.00 1930 CONNECT TYPE 2 0 ALIAS 0 0 N/A LOCK SUSPENSIONS 0.00 0
SET CONNECTION 0 PACKAGE N/A 0 N/A IRLM LATCH SUSPENSIONS 0.03 5
RELEASE 0 PROCEDURE 0 0 0 OTHER SUSPENSIONS 0.00 0
DML-ALL 90.00 17370 CALL 0 FUNCTION 0 0 0 TOTAL SUSPENSIONS 0.03 5
ASSOC LOCATORS 0 TRIGGER 0 0 N/A
ALLOC CURSOR 0 DIST TYPE 0 0 N/A
HOLD LOCATOR 0 SEQUENCE 0 0 0
FREE LOCATOR 0
DCL-ALL 0 TOTAL 0 0 0
RENAME TBL 0
COMMENT ON 0
LABEL ON 0
The accounting report - long shows the average elapsed times and the average
number of suspensions per plan execution. In Figure 92:
v The class 1 average elapsed time A (AET) is 0.072929 seconds. The class 2
times show that 0.029443 seconds B of that are spent in DB2; the rest is spent
in the application.
v Only a small part of the class 2 AET is spent in lock or latch suspensions
(LOCK/LATCH C is 0.000011 seconds).
v The HIGHLIGHTS section D of the report (upper right) shows
#OCCURRENCES as 193; that is the number of accounting (IFCID 3) records.
The report also shows the reason for the suspensions, as described in Table 158.
Table 158. Reasons for suspensions
Reason Includes
LOCAL Contention for a local resource
LATCH Contention for latches within IRLM (with brief suspension)
GLOB. Contention for a global resource
IRLMQ An IRLM queued request
S.NFY Intersystem message sending
OTHER Page latch or drain suspensions, suspensions because of
incompatible retained locks in data sharing, or a value for service
use
The preceding table shows only the first reason for a suspension. When the
original reason is resolved, the request can remain suspended for a second reason.
The report shows that the suspension causing the delay involves access to partition
1 of table space PARADABA.TAB1TS by plan PARALLEL. Two LOCAL
suspensions time out after an average of 5 minutes, 3.278 seconds (303.278
seconds).
Lockout report
Figure 94 on page 839 shows the OMEGAMON lockout report. This report shows
that plan PARALLEL contends with the plan DSNESPRR. It also shows that
contention is occurring on partition 1 of table space PARADABA.TAB1TS.
Lockout trace
Figure 95 shows the OMEGAMON lockout trace.
For each contender, this report shows the database object, lock state (mode), and
duration for each contention for a transaction lock.
.
.
.
PRIMAUTH CORRNAME CONNTYPE
ORIGAUTH CORRNMBR INSTANCE EVENT TIMESTAMP --- L O C K R E S O U R C E ---
PLANNAME CONNECT RELATED TIMESTAMP EVENT TYPE NAME EVENT SPECIFIC DATA
------------------------------ ----------------- -------- --------- ----------------------- ----------------------------------------
FPB FPBPARAL TSO 15:25:27.23692350 TIMEOUT PARTITION DB =PARADABA REQUEST =LOCK UNCONDITIONAL
FPB ’BLANK’ AB09C533F92E N/P OB =TAB1TS STATE =S ZPARM INTERVAL= 300
PARALLEL BATCH PART= 1 DURATION=COMMIT INTERV.COUNTER= 1
HASH =X’000020E0’
------------ HOLDERS/WAITERS -----------
HOLDER
LUW=’BLANK’.IPSAQ421.AB09C51F32CB
MEMBER =N/P CONNECT =TSO
PLANNAME=DSNESPRR CORRID=EOA
DURATION=COMMIT PRIMAUTH=KARELLE
STATE =X
KARL KARL TSO 15:30:32.97267562 TIMEOUT PARTITION DB =PARADABA REQUEST =LOCK UNCONDITIONAL
KARL ’BLANK’ AB09C65528E6 N/P OB =TAB1TS STATE =IS ZPARM INTERVAL= 300
PARALLEL TSO PART= 1 DURATION=COMMIT INTERV.COUNTER= 1
HASH =X’000020E0’
------------ HOLDERS/WAITERS -----------
HOLDER
LUW=’BLANK’.IPSAQ421.AB09C51F32CB
MEMBER =N/P CONNECT =TSO
PLANNAME=DSNESPRR CORRID=EOA
DURATION=COMMIT PRIMAUTH=DAVE
STATE =X
ENDUSER =DAVEUSER
WSNAME =DAVEWS
TRANS =DAVES TRANSACTION
LOCKING TRACE COMPLETE
Corrective decisions
The preceding discussion outlines a general approach for situations in which lock
suspensions are unacceptably long or timeouts occur. In such cases, you can use
the DB2 performance trace for locking and the OMEGAMON reports to isolate the
resource that causes the suspensions. The lockout report identifies the resources
involved; the lockout trace tells which contending process (agent) caused the
timeout.
In Figure 93 on page 838, the number of suspensions is low (only 2) and both
have ended in a timeout. Rather than use the DB2 performance trace for locking,
use the OMEGAMON lockout report and lockout trace to identify the resource
and the contending process.
For specific information about OMEGAMON reports and their usage, see
OMEGAMON Report Reference and Using IBM Tivoli OMEGAMON XE on z/OS.
These examples assume that statistics class 3 and performance class 1 are activated.
Performance class 1 is activated to get IFCID 105 records, which contain the
translated names for the database ID and the page set OBID.
The scenarios that follow use three of the DB2 sample tables, DEPT, PROJ, and
ACT. They are all defined with LOCKSIZE ANY. Type 2 indexes are used to access
all three tables. As a result, contention for locks is only on data pages.
First, transaction LOC2A acquires a lock on one resource while transaction LOC2B
acquires a lock on another. Next, the two transactions each request locks on the
resource held by the other.
DB2 selects one of the transactions and rolls it back, releasing its locks. That allows
the other transaction to proceed to completion and release its locks also.
The report shows that the only transactions involved came from plans LOC2A and
LOC2B. Both transactions came in from BATCH.
.
.
.
PRIMAUTH CORRNAME CONNTYPE
ORIGAUTH CORRNMBR INSTANCE EVENT TIMESTAMP --- L O C K R E S O U R C E ---
PLANNAME CONNECT RELATED TIMESTAMP EVENT TYPE NAME EVENT SPECIFIC DATA
------------------------------ ----------------- -------- --------- ----------------------- ----------------------------------------
SYSADM RUNLOC2A TSO 20:32:30.68850025 DEADLOCK COUNTER = 2 WAITERS = 2
SYSADM ’BLANK’ AADD32FD8A8C N/P TSTAMP =04/02/95 20:32:30.68
LOC2A BATCH DATAPAGE DB =DSN8D42A HASH =X’01060304’
A OB =DEPT ---------------- BLOCKER IS HOLDER -----
PAGE=X’000002’ LUW=’BLANK’.EGTVLU2.AADD32FD8A8C
MEMBER =DB1A CONNECT =BATCH
PLANNAME=LOC2A CORRID=RUNLOC2A
DURATION=MANUAL PRIMAUTH=SYSADM
STATE =U
---------------- WAITER ----------------
LUW=’BLANK’.EGTVLU2.AA65FEDC1022
MEMBER =DB1A CONNECT =BATCH
PLANNAME=LOC2B CORRID=RUNLOC2B
DURATION=MANUAL PRIMAUTH=KATHY
REQUEST =LOCK WORTH = 18
STATE =U
The lock held by transaction 1 (LOC2A) is a data page lock on the DEPT table and
is held in U state. (The value of MANUAL for duration means that, if the plan was
bound with isolation level CS and the page was not updated, then DB2 is free to
release the lock before the next commit point.)
Transaction 2 (LOC2B) was requesting a lock on the same resource, also of mode U
and hence incompatible.
The specifications of the lock held by transaction 2 (LOC2B) are the same.
Transaction 1 was requesting an incompatible lock on the same resource. Hence,
the deadlock.
First, the three transactions each acquire a lock on a different resource. LOC3A
then requests a lock on the resource held by LOC3B, LOC3B requests a lock on the
resource held by LOC3C, and LOC3C requests a lock on the resource held by
LOC3A.
DB2 rolls back LOC3C and releases its locks. That allows LOC3B to complete and
release the lock on PROJ so that LOC3A can complete. LOC3C can then retry.
Figure 97 on page 843 shows the OMEGAMON Locking Trace - Deadlock report
produced for this situation.
| Materialized query tables are tables that contain information that is derived and
| summarized from other tables. Materialized query tables pre-calculate and store
| the results of queries with expensive join and aggregation operations. By providing
| this summary information, materialized query tables can simplify query processing
| and greatly improve the performance of dynamic SQL queries. Materialized query
| tables are particularly effective in data warehousing applications.
| Automatic query rewrite is the process DB2 uses to access data in a materialized
| query table. If you enable automatic query rewrite, DB2 determines if it can resolve
| a dynamic query or part of the query by using a materialized query table. If so,
| DB2 can rewrite the query to use the materialized query table instead of the
| underlying base tables. Keep in mind that a materialized query table can yield
| query results that are not current if the base tables change after the materialized
| query table is updated.
| Despite the increasing amount of data, decision-support queries still require a
| response time on the order of minutes or seconds. In some cases, the only
| solution is to pre-compute the whole of each query or parts of it. You can store
| these pre-computed results in a
| materialized query table. You can then use the materialized query table to answer
| these complicated queries when the system receives them. Using a process called
| automatic query rewrite, DB2 recognizes when it can transparently rewrite a
| submitted query to use the stored results in a materialized query table. By
| querying the materialized query table instead of computing the results from the
| underlying base tables, DB2 can process some complicated queries much more
| efficiently. If the estimated cost of the rewritten query is less than the estimated
| cost of the original query, DB2 uses the rewritten query.
| Suppose that you have a very large table named TRANS that contains one row for
| each transaction that a certain company processes. You want to tally the total
| amount of transactions by some time period. Although the table contains many
| columns, you are most interested in these four columns:
| v YEAR, MONTH, DAY, which contain the date of a transaction
| v AMOUNT, which contains the amount of the transaction
| To total the amount of all transactions between 1995 and 2000, by year, you would
| use the following query:
| SELECT YEAR, SUM(AMOUNT)
| FROM TRANS
| WHERE YEAR >= ’1995’ AND YEAR <= ’2000’
| GROUP BY YEAR
| ORDER BY YEAR;
| This query might be very expensive to run, particularly if the TRANS table is a
| very large table with millions of rows and many columns.
| Now suppose that you define a materialized query table named STRANS by using
| the following CREATE TABLE statement:
| CREATE TABLE STRANS AS
| (SELECT YEAR AS SYEAR,
| MONTH AS SMONTH,
| DAY AS SDAY,
| SUM(AMOUNT) AS SSUM
| FROM TRANS
| GROUP BY YEAR, MONTH, DAY)
| DATA INITIALLY DEFERRED REFRESH DEFERRED;
| After you populate STRANS with a REFRESH TABLE statement, the table contains
| one row for each day of each month and year in the TRANS table.
| Using the automatic query rewrite process, DB2 can rewrite the original query into
| a new query. The new query uses the materialized query table STRANS instead of
| the original base table TRANS:
| SELECT SYEAR, SUM(SSUM)
| FROM STRANS
| WHERE SYEAR >= ’1995’ AND SYEAR <= ’2000’
| GROUP BY SYEAR
| ORDER BY SYEAR;
| If you maintain data currency in the materialized query table STRANS, the
| rewritten query provides the same results as the original query. The rewritten
| query offers better response time and requires less CPU time.
| The fullselect, together with the DATA INITIALLY DEFERRED clause and the
| REFRESH DEFERRED clause, defines the table as a materialized query table.
| You can explicitly specify the column names of the materialized query table or
| allow DB2 to derive the column names from the fullselect. The column definitions
| of a materialized query table are the same as those for a declared global temporary
| table that is defined with the same fullselect.
| You must include the DATA INITIALLY DEFERRED and REFRESH DEFERRED
| clauses when you define a materialized query table.
| The MAINTAINED BY SYSTEM clause, which is the default, specifies that the
| materialized query table is a system-maintained materialized query table. You
| cannot update a system-maintained materialized query table by using the LOAD
| utility or the INSERT, UPDATE, or DELETE statements. You can update a
| system-maintained materialized query table only by using the REFRESH TABLE
| statement.
| The ENABLE QUERY OPTIMIZATION clause, which is the default, specifies that
| DB2 can consider the materialized query table in automatic query rewrite.
| Alternatively, you can specify DISABLE QUERY OPTIMIZATION to indicate that
| DB2 cannot consider the materialized query table in automatic query rewrite.
| When you enable query optimization, DB2 is more restrictive about what you can
| select in the fullselect for a materialized query table.
| The isolation level of the materialized query table is the isolation level at which
| the CREATE TABLE statement is executed.
| After you create a materialized query table, it looks and behaves like other tables
| in the database system, with a few exceptions. DB2 allows materialized query
| tables in database operations wherever it allows other tables, with a few
| restrictions. The restrictions are listed in the description of the CREATE TABLE
| statement in DB2 SQL Reference. As with any other table, you can create indexes on
| the materialized query table; however, the indexes that you create must not be
| unique. Instead, DB2 uses the materialized query table’s definition to determine if
| it can treat the index as a unique index for query optimization.
| For information about using the CREATE TABLE statement to create a materialized
| query table, see DB2 SQL Reference.
| Suppose that an existing summary table named TRANSCOUNT was populated
| with data that was derived from aggregating values in the TRANS table, by
| using the result of the following SELECT statement:
| SELECT ACCTID, LOCID, YEAR, COUNT(*)
| FROM TRANS
| GROUP BY ACCTID, LOCID, YEAR ;
| You could use the following ALTER TABLE statement to register TRANSCOUNT
| as a materialized query table. The statement specifies the ADD MATERIALIZED
| QUERY clause:
| ALTER TABLE TRANSCOUNT ADD MATERIALIZED QUERY
| (SELECT ACCTID, LOCID, YEAR, COUNT(*) as cnt
| FROM TRANS
| GROUP BY ACCTID, LOCID, YEAR )
| DATA INITIALLY DEFERRED
| REFRESH DEFERRED
| MAINTAINED BY USER;
| The fullselect must specify the same number of columns as the table you register
| as a materialized query table. The columns must have the same definitions and
| have the same column names in the same ordinal positions.
| The DATA INITIALLY DEFERRED clause indicates that the table data is to remain
| the same when the ALTER statement completes. The MAINTAINED BY USER
| clause indicates that the table is user-maintained. You can continue to update the
| data in the table by using the LOAD utility or the INSERT, UPDATE, or DELETE
| statements. You can also use the REFRESH TABLE statement to update the data in
| the table. The table becomes immediately eligible for use in automatic query
| rewrite.
| To ensure the accuracy of data that is used in automatic query rewrite, ensure that
| the summary table is current before registering it as a materialized query table.
| Alternatively, you can follow these steps:
| v Register the summary table as a materialized query table with automatic query
| rewrite disabled.
| v Update the newly registered materialized query table to refresh the data.
| v Use the ALTER TABLE statement on the materialized query table to enable
| automatic query rewrite.
| The isolation level of the materialized query table is the isolation level at which the
| ALTER TABLE statement is executed.
| For more information about using the ALTER TABLE statement to register a base
| table as a materialized query table, see DB2 SQL Reference.
| In addition to using the ALTER TABLE statement, you can change a materialized
| query table by dropping the table and recreating the materialized query table with
| a different definition.
|
| Populating and maintaining materialized query tables
| After you define a materialized query table, you need to maintain the accuracy of
| the data in the table. This maintenance includes populating the table for the first
| time and periodically refreshing the data in the table. You need to refresh the data
| because any changes that are made to the base tables are not automatically
| reflected in the materialized query table.
| The only way to change the data in a system-maintained materialized query table
| is through the REFRESH TABLE statement. The INSERT, UPDATE, and DELETE
| statements, and the LOAD utility cannot refer to a system-maintained materialized
| query table as a target table. Therefore, a system-maintained materialized query
| table is read-only. Any view or cursor that is defined on a system-maintained
| materialized query table is read-only. However, for a user-maintained materialized
| query table, you can alter the data with the INSERT, UPDATE, and DELETE
| statements, and the LOAD utility. This section includes the following topics:
| v “Using the REFRESH TABLE statement” on page 851
| v “Using INSERT, UPDATE, DELETE, and LOAD for materialized query tables”
| on page 851
| You can use the REFRESH TABLE statement to refresh the data in any
| materialized query table at any time. The REFRESH TABLE statement performs the
| following actions:
| v Deletes all the rows in the materialized query table
| v Executes the fullselect in the materialized query table definition to recalculate
| the data from the tables that are specified in the fullselect with the isolation level
| for the materialized query table
| v Inserts the calculated result into the materialized query table
| v Updates the DB2 catalog with a refresh timestamp and the cardinality of the
| materialized query table
| Although the REFRESH TABLE statement involves both deleting and inserting
| data, DB2 completes these operations in a single commit scope. Therefore, if a
| failure occurs during execution of the REFRESH TABLE statement, DB2 rolls back
| all changes that the statement made.
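| For example, to repopulate the STRANS materialized query table that was
| defined earlier in this chapter, you would issue the following statement:
| REFRESH TABLE STRANS;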
| Example: Assume that you need to add a large amount of data to a fact table.
| Then, you need to refresh your materialized query table to reflect the new data in
| the fact table. To do this, perform these steps:
| v Collect and stage the new data in a separate table.
| v Evaluate the new data and apply it to the materialized query table as necessary.
| v Merge the new data into the fact table.
| For an example of such code, see member DSNTEJ3M in DSN810.SDSNSAMP,
| which is shipped with DB2.
| This section describes the rules that are specific to tables with multilevel security
| enabled. For more information about multilevel security, see “Multilevel security”
| on page 191.
| Automatic query rewrite tries to search for materialized query tables that result in
| an access path with the lowest cost after the rewrite. DB2 compares the estimated
| costs of the rewritten query and of the original query and chooses the query with
| the lower estimated cost.
| Many factors determine how well DB2 can exploit automatic query rewrite. This
| section describes the following factors:
| v “Making materialized query tables eligible”
| v “Query requirements and the rewrite process” on page 855
| v “Recommendations for materialized query table and base table design” on page
| 861
| The refresh age of a user-maintained materialized query table might not truly
| represent the freshness of the data in the table. In addition to the REFRESH TABLE
| statement, user-maintained query tables can be updated with the INSERT,
| UPDATE, and DELETE statements and the LOAD utility. Therefore, you can use
| the CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION special register
| to determine which type of materialized query tables, system-maintained or
| user-maintained, are considered in automatic query rewrite. The special register
| has four possible values that indicate which materialized query tables DB2
| considers for automatic query rewrite:
| SYSTEM DB2 considers only system-maintained materialized query tables.
| USER DB2 considers only user-maintained materialized query tables.
| ALL DB2 considers both types of materialized query tables.
| NONE DB2 considers no materialized query tables.
| Table 159 summarizes how to use the CURRENT REFRESH AGE and CURRENT
| MAINTAINED TABLE TYPES FOR OPTIMIZATION special registers together. The
| table shows which materialized query tables DB2 considers in automatic query
| rewrite.
| Table 159. The relationship between the CURRENT REFRESH AGE and CURRENT MAINTAINED TABLE
| TYPES FOR OPTIMIZATION special registers
|
| Value of            Value of CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION
| CURRENT
| REFRESH AGE  SYSTEM                 USER                   ALL                     NONE
| ANY          All system-maintained  All user-maintained    All materialized query  None
|              materialized query     materialized query     tables (both
|              tables                 tables                 system-maintained and
|                                                            user-maintained)
| 0            None                   None                   None                    None
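| For example, to allow DB2 to consider all materialized query tables, regardless of
| type, in automatic query rewrite of dynamic queries, you might issue the
| following statements:
| SET CURRENT REFRESH AGE = ANY;
| SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION = ALL;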
| If none of these items exist in the query block, DB2 considers automatic query
| rewrite. DB2 analyzes the query block in the user query and the fullselect in the
| materialized query table definition to determine if it can rewrite the query. The
| materialized query table must contain the data from the source tables (both
| columns and rows) that DB2 needs to satisfy the query. For DB2 to choose a
| rewritten query, the rewritten query must provide the same results as the user
| query. (DB2 assumes data currency in the materialized query table.) Furthermore,
| the rewritten query must offer better performance than the original user query.
| If all of the preceding analyses succeed, DB2 rewrites the user query. DB2 replaces
| all or some of the references to base tables with references to the materialized
| query table. If DB2 finds several materialized query tables that it can use to rewrite
| the query, it might use multiple tables simultaneously. If DB2 cannot use the tables
| simultaneously, it uses heuristic rules to choose which one to use. After DB2 writes
| the new query, DB2 determines the cost and the access path of that query. DB2
| uses the rewritten query if the estimated cost of the rewritten query is less than the
| estimated cost of the original query. The rewritten query might give only
| approximate results if the data in the materialized query table is not up to date.
| Figure 98. Multi-fact star schema. In this simplified credit card application, the fact tables TRANSITEM and TRANS
| form the hub of the star schema. The schema also contains four dimensions: product, location, account, and time.
| The data warehouse records transactions that are made with credit cards. Each
| transaction consists of a set of items that are purchased together. At the center of
| the data warehouse are two large fact tables. TRANS records the set of credit card
| purchase transactions. TRANSITEM records the information about the items that
| are purchased. Together, these two fact tables are the hub of the star schema. The
| star schema is a multi-fact star schema because it contains these two fact tables.
| The fact tables are continuously updated for each new credit card transaction.
| In addition to the two fact tables, the schema contains four dimensions that
| describe transactions: product, location, account, and time.
| v The product dimension consists of two normalized tables, PGROUP and PLINE,
| that represent the product group and product line.
| v The location dimension consists of a single, denormalized table, LOC, that
| contains city, state, and country.
| v The account dimension consists of two normalized tables, ACCT and CUST, that
| represent the account and the customer.
| v The time dimension consists of the TRANS table that contains day, month, and
| year.
| Analysts of such a credit card application are often interested in the aggregation of
| the sales data. Their queries typically perform joins of one or more dimension
| tables with fact tables. The fact tables contain significantly more rows than the
| dimension tables, and complicated queries that involve large fact tables can be
| very costly. In many cases, you can use materialized query tables to summarize
| and store information from the fact tables. Using materialized query tables can
| help you avoid costly aggregations and joins against large fact tables.
| Assume that the following CREATE TABLE statement created a materialized query
| table named TRANSCNT:
| CREATE TABLE TRANSCNT AS
| (SELECT ACCTID, LOCID, YEAR, COUNT(*) AS CNT
| FROM TRANS
| GROUP BY ACCTID, LOCID, YEAR )
| DATA INITIALLY DEFERRED
| REFRESH DEFERRED;
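| The analyst's query, referred to below as UserQ1, might look like the following
| statement, which counts the number of transactions by account, state, and year
| for the USA:
| UserQ1
| ------
| SELECT T.ACCTID, L.STATE, T.YEAR, COUNT(*) AS CNT
| FROM TRANS T, LOC L
| WHERE T.LOCID = L.ID AND
| L.COUNTRY = ’USA’
| GROUP BY T.ACCTID, L.STATE, T.YEAR;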
| If you enable automatic query rewrite, DB2 can rewrite UserQ1 as NewQ1. NewQ1
| accesses the TRANSCNT materialized query table instead of the TRANS fact table.
| NewQ1
| -----
| SELECT A.ACCTID, L.STATE, A.YEAR, SUM(A.CNT) AS CNT
| FROM TRANSCNT A, LOC L
| WHERE A.LOCID = L.ID AND
| L.COUNTRY = ’USA’
| GROUP BY A.ACCTID, L.STATE, A.YEAR;
| DB2 can use query rewrite in this case for the following reasons:
| v The TRANS table is common to both UserQ1 and TRANSCNT.
| v DB2 can derive the columns of the query result from TRANSCNT.
| v The GROUP BY in the query requests data that is grouped at a higher level
| than the level in the definition of TRANSCNT.
| Because customers typically make several hundred transactions per year, most of
| them in the same city, TRANSCNT is about one hundred times smaller than
| TRANS. Therefore, rewriting UserQ1 into a query that uses TRANSCNT instead
| of TRANS improves response time significantly.
| Example 2: Assume that an analyst wants to find the number of televisions, with a
| price over 100 and a discount greater than 0.1, that were purchased by each credit
| card account. The analyst submits the following query:
| UserQ2
| ------
| SELECT T.ID, TI.QUANTITY * TI.PRICE * (1 - TI.DISCOUNT) AS AMT
| FROM TRANSITEM TI, TRANS T, PGROUP PG
| WHERE TI.TRANSID = T.ID AND
| TI.PGID = PG.ID AND
| TI.PRICE > 100 AND
| TI.DISCOUNT > 0.1 AND
| PG.NAME = ’TV’;
| If you define the following materialized query table TRANSIAB, DB2 can rewrite
| UserQ2 as NewQ2:
| TRANSIAB
| --------
| CREATE TABLE TRANSIAB AS
| (SELECT TI.TRANSID, TI.PRICE, TI.DISCOUNT, TI.PGID,
| L.COUNTRY, TI.PRICE * TI.QUANTITY as VALUE
| FROM TRANSITEM TI, TRANS T, LOC L
| WHERE TI.TRANSID = T.ID AND
| T.LOCID = L.ID AND
| TI.PRICE > 1 AND
| TI.DISCOUNT > 0.1)
| DATA INITIALLY DEFERRED
| REFRESH DEFERRED;
|
|
| NewQ2
| -----
| SELECT A.TRANSID, A.VALUE * (1 - A.DISCOUNT) as AMT
| FROM TRANSIAB A, PGROUP PG
| WHERE A.PGID = PG.ID AND
| A.PRICE > 100 AND
| PG.NAME = ’TV’;
| DB2 can rewrite UserQ2 as a new query that uses materialized query table
| TRANSIAB because of the following reasons:
| v Although the predicate T.LOCID = L.ID appears only in the materialized query
| table, it does not result in rows that DB2 might discard. The referential
| constraint between the TRANS.LOCID and LOC.ID columns makes the join
| between TRANS and LOC in the materialized query table definition lossless. The
| join is lossless only if the foreign key in the constraint is NOT NULL.
| v The predicates TI.TRANSID = T.ID and TI.DISCOUNT > 0.1 appear in both the
| user query and the TRANSIAB fullselect.
| v The fullselect predicate TI.PRICE > 1 in TRANSIAB subsumes the user query
| predicate TI.PRICE > 100 in UserQ2. Because the fullselect predicate is more
| inclusive than the user query predicate, DB2 can compute the user query
| predicate from TRANSIAB.
| v The user query predicate PG.NAME = ’TV’ refers to a table that is not in the
| TRANSIAB fullselect. However, DB2 can compute the predicate from the
| PGROUP table. A predicate like PG.NAME=’TV’ does not disqualify other
| predicates in a query from qualifying for automatic query rewrite. In this case
| PGROUP is a relatively small dimension table, so a predicate that refers to the
| table is not overly costly.
| v DB2 can derive the query result from the materialized query table definition,
| even when the derivation is not readily apparent:
| – DB2 derives T.ID in the query from TI.TRANSID in the TRANSIAB fullselect.
| Although these two columns originate from different tables, they are
| equivalent because of the predicate TI.TRANSID = T.ID. DB2 recognizes such
| column equivalency through join predicates. Thus, DB2 derives T.ID from
| TI.TRANSID, and the query qualifies for automatic query rewrite.
| – DB2 derives AMT in the query UserQ2 from DISCOUNT and VALUE in the
| TRANSIAB fullselect.
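| Example 3: Suppose that an analyst wants the average value of the transaction
| items for each year. A query of the following form, referred to below as UserQ3,
| expresses that request:
| UserQ3
| ------
| SELECT T.YEAR, AVG(TI.QUANTITY * TI.PRICE) AS AVGVAL
| FROM TRANSITEM TI, TRANS T
| WHERE TI.TRANSID = T.ID
| GROUP BY T.YEAR;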
| If you define the following materialized query table TRANSAVG, DB2 can rewrite
| UserQ3 as NewQ3:
| TRANSAVG
| --------
| CREATE TABLE TRANSAVG AS
| (SELECT T.YEAR, T.MONTH, SUM(QUANTITY * PRICE) AS TOTVAL, COUNT(*) AS CNT
| FROM TRANSITEM TI, TRANS T
| WHERE TI.TRANSID = T.ID
| GROUP BY T.YEAR, T.MONTH )
| DATA INITIALLY DEFERRED
| REFRESH DEFERRED;
|
|
| NewQ3
| -----
| SELECT YEAR, CASE WHEN SUM(CNT) = 0 THEN NULL
| ELSE SUM(TOTVAL)/SUM(CNT)
| END AS AVGVAL
| FROM TRANSAVG
| GROUP BY YEAR;
| DB2 can rewrite UserQ3 as a new query that uses materialized query table
| TRANSAVG because of the following reasons:
| v DB2 considers YEAR in the user query and YEAR in the materialized query
| table fullselect to match exactly.
| v DB2 can derive the AVG function in the user query from the SUM function and
| the COUNT function in the materialized query table fullselect.
| v The GROUP BY in the query NewQ3 requests data at a higher level than the
| level in the definition of TRANSAVG.
| v DB2 can compute the yearly average in the user query by using the monthly
| sums and counts of transaction items in TRANSAVG. DB2 derives the yearly
| averages from the CNT and TOTVAL columns of the materialized query table by
| using a case expression.
| If DB2 rewrites the query to use a materialized query table, a portion of the plan
| table output might look like Table 160 on page 861.
| The value M in TABLE_TYPE indicates that DB2 used a materialized query table.
| TNAME shows that DB2 used the materialized query table named TRANSAVG.
| You can also obtain this information from a performance trace (IFCID 0022).
| See member DSNTEJ3M in data set DSN810.SDSNSAMP for all of the code,
| including the following items:
| v SQL statements to create and populate the star schema
| v SQL statements to create and populate the materialized query tables
| v Queries that DB2 rewrites to use the materialized query table
Access path selection uses buffer pool statistics for several calculations. Access path
selection also considers the central processor model. These two factors can change
your queries’ access paths from one system to another, even if all the catalog
statistics are identical. You should keep this in mind when migrating from a test
system to a production system, or when modeling a new application.
Mixed central processor models in a data sharing group can also affect access path
selection. For more information on data sharing, see DB2 Data Sharing: Planning
and Administration.
If you run RUNSTATS for separate partitions of a table space, DB2 uses the results
to update the aggregate statistics for the entire table space. For recommendations
about running RUNSTATS on separate partitions, see “Gathering monitor statistics
and update statistics” on page 875. (You should either run RUNSTATS once on the
entire object before collecting statistics on separate partitions or use the appropriate
option to ensure that the statistics are aggregated appropriately, especially if some
partitions are not loaded with data.)
You can establish default statistical values for the cardinality and number of pages
of a created temporary table if you can estimate the normal values for that table.
You can manually update the values in the CARDF and NPAGES columns of the
SYSTABLES row for the created temporary table. These values become the default
values that are used if more accurate values are not available or cannot be used.
More accurate values are available only for dynamic SQL statements that are
prepared after the instantiation of the created temporary table, but within the
same unit of work.
These more accurate values are not used if the result of the dynamic bind is
destined for the Dynamic Statement Cache.
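A minimal sketch of such a manual update, assuming a created temporary table
TEMPDB.SESSION_DATA with an expected 10 000 rows on 200 pages (the names
and values are hypothetical):
UPDATE SYSIBM.SYSTABLES
SET CARDF = 10000,
    NPAGES = 200
WHERE CREATOR = 'TEMPDB'
  AND NAME = 'SESSION_DATA';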
History statistics
Several catalog tables provide historical statistics for other catalog tables. These
catalog history tables include:
v SYSIBM.SYSCOLDIST_HIST
v SYSIBM.SYSCOLUMNS_HIST
v SYSIBM.SYSINDEXES_HIST
v SYSIBM.SYSINDEXPART_HIST
When DB2 adds or changes rows in a catalog table, DB2 might also write
information about the rows to the corresponding catalog history table. Although
the catalog history tables are not identical to their counterpart tables, they do
contain the same columns for access path information and space utilization
information. The history statistics provide a way to study trends, to determine
when utilities, such as REORG, should be run for maintenance, and to aid in space
management.
Table 163 lists the catalog data that are collected for historical statistics. For
information on how to gather these statistics, see “Gathering monitor statistics and
update statistics” on page 875.
Table 163. Catalog data collected for historical statistics
                              Provides      Provides
                              access path   space
Column name                   statistics    statistics   Description
SYSIBM.SYSCOLDIST_HIST
  CARDF                       Yes           No           Number of distinct values gathered
  COLGROUPCOLNO               Yes           No           Identifies the columns involved in
                                                         multi-column statistics
  COLVALUE                    Yes           No           Frequently occurring value in the key
                                                         distribution
  FREQUENCYF                  Yes           No           A number that, multiplied by 100, gives
                                                         the percentage of rows that contain the
                                                         value of COLVALUE
  NUMCOLUMNS                  Yes           No           Number of columns involved in
                                                         multi-column statistics
  TYPE                        Yes           No           Type of statistics gathered, either
                                                         cardinality (C) or frequent value (F)
SYSIBM.SYSCOLUMNS_HIST
  COLCARDF                    Yes           No           Estimated number of distinct values in
                                                         the column
  HIGH2KEY                    Yes           No           Second highest value of the column, or
                                                         blank
  LOW2KEY                     Yes           No           Second lowest value of the column, or
                                                         blank
SYSIBM.SYSINDEXES_HIST
  CLUSTERING                  Yes           No           Whether the index was created with
                                                         CLUSTER
  CLUSTERRATIOF               Yes           No           A number that, multiplied by 100, gives
                                                         the percentage of rows in clustering
                                                         order
  FIRSTKEYCARDF               Yes           No           Number of distinct values in the first
                                                         key column
  FULLKEYCARDF                Yes           No           Number of distinct values in the full
                                                         key
  NLEAF                       Yes           No           Number of active leaf pages
  NLEVELS                     Yes           No           Number of levels in the index tree
You can choose which DB2 catalog tables you want RUNSTATS to update: those
used to optimize the performance of SQL statements or those used by database
administrators to assess the status of a particular table space or index. You can
monitor these catalog statistics in conjunction with EXPLAIN to make sure that
your queries access data efficiently.
To obtain information from the catalog tables, use a SELECT statement, or specify
REPORT YES when you invoke RUNSTATS. When used routinely, RUNSTATS
provides data about table spaces and indexes over a period of time. For example,
when you create or drop tables or indexes or insert many rows, run RUNSTATS to
update the catalog. Then rebind your applications so that DB2 can choose the most
efficient access paths.
Collecting statistics by partition: You can collect statistics for a single data
partition or index partition. This information allows you to avoid the cost of
running utilities against unchanged partitions. When you run utilities by partition,
DB2 uses the results to update the aggregate statistics for the entire table space or
index. If statistics do not exist for each separate partition, DB2 can calculate the
aggregate statistics only if the utilities are executed with the FORCEROLLUP YES
keyword (or FORCEROLLUP keyword is omitted and the value of the STATISTICS
ROLLUP field on installation panel DSNTIPO is YES). If you do not use the
keyword or installation panel field setting to force the roll up of the aggregate
statistics, you must run utilities once on the entire object before running utilities on
separate partitions.
Collecting history statistics: When you collect statistics with RUNSTATS or gather
them inline with the LOAD, REBUILD, or REORG utilities, you can use the
HISTORY option to collect history statistics. With the HISTORY option, the utility
stores the statistics that were updated in the catalog tables in history records in the
corresponding catalog history tables. (For information on the catalog data that is
collected for history statistics, see Table 163 on page 873.)
Running RUNSTATS after UPDATE: If you change values in the catalog and later
run RUNSTATS to update those values, your changes are lost.
Recommendation: Keep track of the changes you make and of the plans or
packages that have an access path change due to changed statistics.
The exception and the remedy for COLCARD and COLCARDF are also true for
the FIRSTKEYCARDF column in SYSIBM.SYSINDEXES and the FIRSTKEYCARDF
column in SYSIBM.SYSINDEXSTATS.
| If you update the COLCARDF value for a column, also update HIGH2KEY and
| LOW2KEY for the column. HIGH2KEY and LOW2KEY are defined as
| VARCHAR(2000). To update HIGH2KEY and LOW2KEY (a sample UPDATE
| statement follows this list):
v Specify a character or hexadecimal value on the UPDATE statement.
Entering a character value is quite straightforward: SET LOW2KEY = ’ALAS’, for
example. But to enter a numeric, date, or time value you must use the
hexadecimal value of the DB2 internal format. See “Dates, times, and
timestamps for edit and validation routines” on page 1073 and “DB2 codes for
numeric data in edit and validation routines” on page 1075. Be sure to allow for
a null indicator in keys that allow nulls. See also “Null values for edit and
validation routines” on page 1072.
| v Ensure that the padding characteristics for the columns are the same as the
| padding characteristics for COLCARDF.
| For example, if the statistics values that are collected for COLCARDF are
| padded, then the statistics collected for HIGH2KEY or LOW2KEY must also be
| padded. You can check the STATS_FORMAT column in SYSCOLUMNS and
| SYSCOLSTATS to determine whether the statistics gathered for these columns
| are padded or not.
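For example, the following statement updates the statistics for a character column;
the identifiers and values are hypothetical:
UPDATE SYSIBM.SYSCOLUMNS
SET COLCARDF = 130,
    HIGH2KEY = 'YOUNG',
    LOW2KEY = 'ABBOT'
WHERE TBCREATOR = 'DSN8810'
  AND TBNAME = 'EMP'
  AND NAME = 'LASTNAME';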
You can insert, update, or delete distribution information for any column in these
catalog tables. However, to enter a numeric, date, or time value you must use the
hexadecimal value of the DB2 internal format. See “Dates, times, and timestamps
for edit and validation routines” on page 1073 and “DB2 codes for numeric data in
edit and validation routines” on page 1075. Be sure to allow for a null indicator in
keys that allow nulls. See also “Null values for edit and validation routines” on
page 1072.
To access information about your data and how it is organized, use the following
queries:
SELECT CREATOR, NAME, CARDF, NPAGES, PCTPAGES
FROM SYSIBM.SYSTABLES
WHERE DBNAME = 'xxx'
AND TYPE = 'T';
SELECT NAME, UNIQUERULE, CLUSTERRATIOF, FIRSTKEYCARDF, FULLKEYCARDF,
NLEAF, NLEVELS, PGSIZE
FROM SYSIBM.SYSINDEXES
WHERE DBNAME = 'xxx';
SELECT NAME, DBNAME, NACTIVE, CLOSERULE, LOCKRULE
FROM SYSIBM.SYSTABLESPACE
WHERE DBNAME = 'xxx';
SELECT NAME, TBNAME, COLCARDF, HIGH2KEY, LOW2KEY, HEX(HIGH2KEY),
HEX(LOW2KEY)
FROM SYSIBM.SYSCOLUMNS
WHERE TBCREATOR = 'xxx' AND COLCARDF <> -1;
SELECT NAME, FREQUENCYF, COLVALUE, HEX(COLVALUE), CARDF,
COLGROUPCOLNO, HEX(COLGROUPCOLNO), NUMCOLUMNS, TYPE
FROM SYSIBM.SYSCOLDIST
WHERE TBNAME = 'ttt'
ORDER BY NUMCOLUMNS, NAME, COLGROUPCOLNO, TYPE, FREQUENCYF DESC;
SELECT NAME, TSNAME, CARD, NPAGES
FROM SYSIBM.SYSTABSTATS
WHERE DBNAME='xxx';
If the statistics in the DB2 catalog no longer correspond to the true organization of
your data, you should reorganize the necessary tables, run RUNSTATS, and rebind
the plans or packages that contain any affected queries. See “When to reorganize
indexes and table spaces” on page 884 and the description of REORG in Part 2 of
DB2 Utility Guide and Reference for information on how to determine which table
spaces and indexes qualify for reorganization. This includes the DB2 catalog table
spaces as well as user table spaces. Then DB2 has accurate information to choose
appropriate access paths for your queries. Use the EXPLAIN statement to verify
the chosen access paths for your queries.
The statistics show distribution of data within the allocated space, from which you
can judge clustering and the need to reorganize.
Space utilization statistics can also help you make sure that access paths that use
the index or table space are as efficient as possible. By reducing gaps between leaf
pages in an index, or by ensuring that data pages are close together, you can
reduce sequential I/Os.
Here are some things to remember about the effect of CLUSTERRATIOF on access
paths:
v CLUSTERRATIOF is an important input to the cost estimates that are used to
determine whether an index is used for an access path, and, if so, which index
to use.
v If the access is INDEXONLY, then this value does not apply.
v The higher the CLUSTERRATIOF value, the lower the cost of referencing data
pages during an index scan.
v For an index that has a CLUSTERRATIOF less than 80%, sequential prefetch is
not used to access the data pages.
| v A slight reduction in CLUSTERRATIOF for a table with a large number of rows
| can represent a much more significant number of unclustered rows than for a
| table with a small number of rows.
| Example: A CLUSTERRATIOF of 99% for a table with 100 000 000 rows
| represents 1 000 000 unclustered rows, whereas a CLUSTERRATIOF of 95% for
| a table with 100 000 rows represents only 5000 unclustered rows.
Figure 99 on page 882 shows an index scan on an index with a high cluster ratio.
Compare that with Figure 100 on page 883, which shows an index scan on an
index with a low cluster ratio.
Figure 99. A clustered index scan. This figure assumes that the index is 100% clustered.
Figure 100. A nonclustered index scan. In some cases, DB2 can access the data pages in
order even when a nonclustered index is used.
FIRSTKEYCARDF: The number of distinct values of the first index key column.
When an indexable equal predicate is specified on the first index key column,
1/FIRSTKEYCARDF is the filter factor for the predicate and the index. The higher
the number, the lower the cost.
FULLKEYCARDF: The number of distinct values for the entire index key. When
indexable equal predicates are specified on all the index key columns,
1/FULLKEYCARDF is the filter factor for the predicates and the index. The higher
the number, the lower the cost.
When the number of matching columns is greater than 1 but less than the number
of index key columns, the filter factor for the index lies between
1/FIRSTKEYCARDF and 1/FULLKEYCARDF.
NLEAF: The number of active leaf pages in the index. NLEAF is a portion of the
cost of scanning the index. The smaller the number, the lower the cost. The cost is
also lower when the filtering of the index is high, which results from
FIRSTKEYCARDF, FULLKEYCARDF, and other indexable predicates.
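As a worked example, assume a hypothetical three-column index on T(C1, C2, C3) for
which RUNSTATS reports FIRSTKEYCARDF = 100 and FULLKEYCARDF = 50000:
SELECT * FROM T WHERE C1 = 8;                          -- filter factor 1/100 = 0.01
SELECT * FROM T WHERE C1 = 8 AND C2 = 3 AND C3 = 10;   -- filter factor 1/50000 = 0.00002
A query that matches only C1 and C2 filters somewhere between those two values.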
Reorganizing indexes
To understand index organization, you must understand the LEAFNEAR and
LEAFFAR columns of SYSIBM.SYSINDEXPART. This section describes how to
interpret those values and then describes some rules of thumb for determining
when to reorganize the index.
Figure 101. Logical and physical views of an index in which LEAFNEAR=1 and LEAFFAR=2
The logical view at the top of the figure shows that for an index scan four leaf
pages need to be scanned to access the data for FORESTER through JACKSON.
The physical view at the bottom of the figure shows how the pages are physically
accessed. The first page is at physical leaf page 78, and the other leaf pages are at
physical locations 79, 13, and 16. A jump forward or backward of more than one
page represents non-optimal physical ordering. LEAFNEAR represents the number
of jumps within the prefetch quantity, and LEAFFAR represents the number of
jumps outside the prefetch quantity. In this example, assuming that the prefetch
quantity is 32, there are two jumps outside the prefetch quantity: a jump from
page 78 to page 13, and one from page 16 to page 79. Thus, LEAFFAR is 2.
Because of the jump within the prefetch quantity from page 13 to page 16,
LEAFNEAR is 1.
LEAFNEAR has a smaller impact than LEAFFAR because the LEAFNEAR pages,
which are located within the prefetch quantity, are typically read by prefetch
without incurring extra I/Os.
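For example, the following query retrieves those columns; it uses the same 'xxx'
placeholder as the earlier catalog queries, here as the index creator:
SELECT IXNAME, PARTITION, LEAFNEAR, LEAFFAR
FROM SYSIBM.SYSINDEXPART
WHERE IXCREATOR = 'xxx';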
Additionally, you can use real-time statistics to identify DB2 objects that should be
reorganized, have their statistics updated, or be image copied.
To model your production system, run RUNSTATS on your production tables to get
current statistics for access path selection. Then retrieve the statistics and use
them to build SQL statements to update the catalog of the test system.
Example: You can use queries similar to the following queries to build those
statements. To successfully model your production system, the table definitions
must be the same on both the test and production systems. For example, they must
have the same creator, name, indexes, number of partitions, and so on.
Delete statistics from SYSTABSTATS on the test subsystem for the specified tables
by using the following statement:
DELETE FROM (TEST_SUBSYSTEM).SYSTABSTATS
WHERE OWNER IN (creator_list)
AND NAME IN (table_list);
Delete statistics from SYSCOLDIST on the test subsystem for the specified tables
by using the following statement:
DELETE FROM (TEST_SUBSYSTEM).SYSCOLDIST
WHERE TBOWNER IN (creator_list)
AND TBNAME IN (table_list);
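One way to build the catalog update statements is a generator query that you run
against the production catalog. The following statement is a sketch only; it assumes
that the CARDF and NPAGES statistics in SYSIBM.SYSTABLES are the ones that you want
to carry over:
SELECT 'UPDATE (TEST_SUBSYSTEM).SYSTABLES SET CARDF=' || CHAR(CARDF)
    || ', NPAGES=' || CHAR(NPAGES)
    || ' WHERE CREATOR=''' || CREATOR || ''' AND NAME=''' || NAME || ''';'
FROM SYSIBM.SYSTABLES
WHERE CREATOR IN (creator_list)
  AND NAME IN (table_list);
Run the generated UPDATE statements on the test subsystem.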
Access path differences from test to production: When you bind applications on the
test system with production statistics, the access paths should be similar to, but
can still differ from, the access paths that you see when the same query is bound
on your production system. The access paths can differ from test to production for
the following reasons:
v The processor models are different.
v The number of processors is different. (Differences in the number of processors
can affect the degree of parallelism that is obtained.)
v The buffer pool sizes are different.
v The RID pool sizes are different.
v Data in SYSIBM.SYSCOLDIST is mismatched. (This mismatch occurs only if
some of the previously mentioned steps are not followed exactly.)
v The service levels are different.
v The values of optimization subsystem parameters, such as STARJOIN,
NPGTHRSH, and PARAMDEG (MAX DEGREE on installation panel DSNTIP4)
are different.
v The use of techniques such as optimization hints and volatile tables are different.
Tools to help: If your production system is accessible from your test system, you
can use DB2 PM EXPLAIN on your test system to request EXPLAIN information
from your production system. This request can reduce the need to simulate a
production system by updating the catalog.
You can also use the DB2 Visual Explain feature to display the current
PLAN_TABLE output or the graphed access paths for statements within any
particular subsystem from your workstation environment. For example, if you have
your test system on one subsystem and your production system on another
subsystem, you can visually compare the PLAN_TABLE outputs or access paths
simultaneously with some window or view manipulation. You can then access the
catalog statistics for certain referenced objects of an access path from either of the
displayed PLAN_TABLEs or access path graphs. For information on using Visual
Explain, see DB2 Visual Explain online help.
Other tools: The following tools can help you tune SQL queries:
v DB2 Visual Explain
Visual Explain is a graphical workstation feature of DB2 that provides:
– An easy-to-understand display of a selected access path
– Suggestions for changing an SQL statement
– An ability to invoke EXPLAIN for dynamic SQL statements
– An ability to provide DB2 catalog statistics for referenced objects of an access
path
– A subsystem parameter browser with keyword 'Find' capabilities
For information about using DB2 Visual Explain, which is a separately packaged
CD-ROM provided with your DB2 UDB for z/OS Version 8 license, see DB2
Visual Explain online help.
v OMEGAMON Performance Expert
OMEGAMON is a performance monitoring tool that formats performance data.
OMEGAMON combines information from EXPLAIN and from the DB2 catalog.
It displays access paths, indexes, tables, table spaces, plans, packages, DBRMs,
host variable definitions, ordering, table access and join sequences, and lock
types. Output is presented in a dialog rather than as a table, making the
information easy to read and understand. DB2 Performance Monitor (DB2 PM)
performs some of the functions of OMEGAMON Performance Expert. For more
information about OMEGAMON and DB2 PM, see “OMEGAMON” on page
1162.
v DB2 Estimator
DB2 Estimator for Windows is an easy-to-use, stand-alone tool for estimating the
performance of DB2 UDB for z/OS applications. You can use it to predict the
performance and cost of running the applications, transactions, and SQL statements.
For each access to a single table, EXPLAIN tells you if an index access or table
space scan is used. If indexes are used, EXPLAIN tells you how many indexes and
index columns are used and what I/O methods are used to read the pages. For
joins of tables, EXPLAIN tells you which join method and type are used, the order
in which DB2 joins the tables, and when and why it sorts any rows.
The primary use of EXPLAIN is to observe the access paths for the SELECT parts
of your statements. For UPDATE and DELETE WHERE CURRENT OF, and for
INSERT, you receive somewhat less information in your plan table. And some
accesses EXPLAIN does not describe: for example, the access to LOB values, which
are stored separately from the base table, and access to parent or dependent tables
needed to enforce referential constraints.
The access paths shown for the example queries in this chapter are intended only
to illustrate those examples. If you execute the queries in this chapter on your
system, the access paths chosen can be different.
| Figure 102 on page 894 shows the most current format of a plan table, which
| consists of 58 columns. Table 164 on page 896 shows the content of each column.
QUERYNO A number that identifies the statement that is being explained. When the
values of QUERYNO are based on the statement number in the source program, values
greater than 32767 are reported as 0. However, in a very long program, the value is
not guaranteed to be unique. If QUERYNO is not unique, the value of TIMESTAMP is
unique.
| QBLOCKNO A number that identifies each query block within a query. The values of
| the numbers are not in any particular order, nor are they necessarily consecutive.
APPLNAME The name of the application plan for the row. Applies only to embedded EXPLAIN
statements executed from a plan or to statements explained when binding a plan.
Blank if not applicable.
PROGNAME The name of the program or package containing the statement being explained.
Applies only to embedded EXPLAIN statements and to statements explained as the
result of binding a plan or package. Blank if not applicable.
PLANNO The number of the step in which the query indicated in QBLOCKNO was processed.
This column indicates the order in which the steps were executed.
METHOD A number (0, 1, 2, 3, or 4) that indicates the join method used for the step:
0 First table accessed, continuation of previous table accessed, or not used.
1 Nested loop join. For each row of the present composite table, matching rows
of a new table are found and joined.
2 Merge scan join. The present composite table and the new table are scanned
in the order of the join columns, and matching rows are joined.
3 Sorts needed by ORDER BY, GROUP BY, SELECT DISTINCT, UNION, a
quantified predicate, or an IN predicate. This step does not access a new
table.
4 Hybrid join. The current composite table is scanned in the order of the
join-column rows of the new table. The new table is accessed using list
prefetch.
CREATOR The creator of the new table accessed in this step, blank if METHOD is 3.
| TNAME The name of a table, materialized query table, created or declared temporary table,
| materialized view, or materialized table expression. The value is blank if METHOD is
3. The column can also contain the name of a table in the form DSNWFQB(qblockno).
DSNWFQB(qblockno) is used to represent the intermediate result of a UNION ALL or
an outer join that is materialized. If a view is merged, the name of the view does not
appear.
TABNO Values are for IBM use only.
TSLOCKMODE An indication of the mode of lock that is acquired on the table space.
The data in this column is right justified. For example, IX appears as a blank
followed by I followed by X. If the column contains a blank, then no lock is acquired.
TIMESTAMP Usually, the time at which the row is processed, to the last .01 second. If necessary,
DB2 adds .01 second to the value to ensure that rows for two successive queries have
different values.
| TABLE_ENCODE The encoding scheme of the table. If the table contains multiple
| CCSID sets, the value of the column is M.
| TABLE_SCCSID The SBCS CCSID value of the table. If column TABLE_ENCODE is M, the value is 0.
| TABLE_MCCSID The mixed CCSID value of the table. If column TABLE_ENCODE is M, the value is 0.
| TABLE_DCCSID The DBCS CCSID value of the table. If column TABLE_ENCODE is M, the value is 0.
| ROUTINE_ID Values are for IBM use only.
| CTEREF If the referenced table is a common table expression, the value is the top-level query
| block number.
| STMTTOKEN User-specified statement token.
For tips on maintaining a growing plan table, see “Maintaining a plan table” on
page 902.
| If the plan owner or the package owner has an alias on a PLAN_TABLE that was
| created by another owner, other_owner.PLAN_TABLE is populated instead of
| package_owner.PLAN_TABLE or plan_owner.PLAN_TABLE.
EXPLAIN for remote binds: A remote requester that accesses DB2 can specify
EXPLAIN(YES) when binding a package at the DB2 server. The information
appears in a plan table at the server, not at the requester. If the requester does not
support the propagation of the option EXPLAIN(YES), rebind the package at the
requester with that option to obtain access path information. You cannot get
information about access paths for SQL statements that use private protocol.
Use parameter markers for host variables: If you have host variables in a
predicate for an original query in a static application and if you are using DB2
QMF or SPUFI to execute EXPLAIN for the query, in most cases, use parameter
markers where you use host variables in the original query. If you use a literal
value instead, you might see different access paths for your static and dynamic
queries. For instance, compare the following queries:
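For illustration, assume a hypothetical table T, column C1, and host variable :HV:
Original static SQL:
SELECT * FROM T WHERE C1 = :HV;
DB2 QMF query using a parameter marker:
SELECT * FROM T WHERE C1 = ?;
DB2 QMF query using a literal:
SELECT * FROM T WHERE C1 = 'ABC';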
When to use a literal: If you know that the static plan or package was bound
with REOPT(ALWAYS) and you have some idea of what is returned in the host
variable, it can be more accurate to include the literal in the DB2 QMF EXPLAIN.
REOPT(ALWAYS) means that DB2 will replace the value of the host variable with
the true value at run time and then determine the access path. For more
information about REOPT(ALWAYS) see “Changing the access path at run time”
on page 739.
Expect these differences: Even when using parameter markers, you could see
different access paths for static and dynamic queries. DB2 assumes that the value
that replaces a parameter marker has the same length and precision as the column
it is compared to. That assumption determines whether the predicate is stage 1
indexable or stage 2, which is always nonindexable.
| If the column definition and the host variable definition are both strings, the
| predicate becomes stage 1 but not indexable when any of the following conditions
| are true:
| v The column definition is CHAR or VARCHAR, and the host variable definition
| is GRAPHIC or VARGRAPHIC.
| v The column definition is GRAPHIC or VARGRAPHIC, the host variable
| definition is CHAR or VARCHAR, and the length of the column definition is
| less than the length of the host variable definition.
| v Both the column definition and the host variable definition are CHAR or
| VARCHAR, the length of the column definition is less than the length of the
| host variable definition, and the comparison operator is any operator other than
| "=".
| v Both the column definition and the host variable definition are GRAPHIC or
| VARGRAPHIC, the length of the column definition is less than the length of the
| host variable definition, and the comparison operator is any operator other than
| "=".
| The predicate becomes stage 2 when any of the following conditions are true:
| v The column definition is DECIMAL(p,s), where p>15, and the host variable
| definition is REAL or FLOAT.
| v The column definition is CHAR, VARCHAR, GRAPHIC, or VARGRAPHIC, and
| the host variable definition is DATE, TIME, or TIMESTAMP.
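For example, assume a hypothetical column C1 defined as CHAR(5), a column C2 defined
as DECIMAL(16,2), and host variables of the types shown in the comments:
WHERE C1 = :HV10   -- :HV10 is VARCHAR(10): stage 1 and indexable ("=" is allowed)
WHERE C1 > :HV10   -- :HV10 is VARCHAR(10): stage 1 but not indexable
                   --   (the column is shorter and the operator is not "=")
WHERE C2 = :HVF    -- :HVF is REAL and the precision of C2 exceeds 15: stage 2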
All rows with the same non-zero value for QBLOCKNO and the same value for
QUERYNO relate to a step within the query. QBLOCKNOs are not necessarily
executed in the order shown in PLAN_TABLE. But within a QBLOCKNO, the
PLANNO column gives the substeps in the order they execute.
For each substep, the TNAME column identifies the table accessed. Sorts can be
shown as part of a table access or as a separate step.
What if QUERYNO=0? For entries that contain QUERYNO=0, use the timestamp,
which is guaranteed to be unique, to distinguish individual statements.
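A common way to review the rows for a statement, sketched here against your own
qualifier's PLAN_TABLE, is to order the output by the identifying columns and the
timestamp:
SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME, ACCESSTYPE,
       MATCHCOLS, ACCESSNAME, INDEXONLY, PREFETCH, MIXOPSEQ
FROM PLAN_TABLE
ORDER BY TIMESTAMP, QUERYNO, QBLOCKNO, PLANNO, MIXOPSEQ;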
Both of the following examples have these indexes: IX1 on T(C1) and IX2 on T(C2).
If MATCHCOLS is greater than 0, the access method is called a matching index scan:
the query uses predicates that match the index columns.
In general, the matching predicates on the leading index columns are equal or IN
predicates. The predicate that matches the final index column can be an equal, IN,
NOT NULL, or range predicate (<, <=, >, >=, LIKE, or BETWEEN).
The index XEMP5 is the chosen access path for this query, with MATCHCOLS = 3.
Two equal predicates are on the first two columns, and a range predicate is on the
third column. Although the index has four columns, only three of them can be
considered matching columns.
Index-only access to data is not possible for any step that uses list prefetch, which
is described under “What kind of prefetching is expected? (PREFETCH = L, S, D,
| or blank)” on page 908. Index-only access is not possible for padded indexes when
| varying-length data is returned or a VARCHAR column has a LIKE predicate,
| unless the VARCHAR FROM INDEX field of installation panel DSNTIP4 is set to YES.
If access is by more than one index, INDEXONLY is Y for a step with access type
MX, because the data pages are not actually accessed until all the steps for
intersection (MI) or union (MU) take place.
When an SQL application uses index-only access for a ROWID column, the
application claims the table space or table space partition. As a result, contention
may occur between the SQL application and a utility that drains the table space or
partition. Index-only access to a table for a ROWID column is not possible if the
associated table space or partition is in an incompatible restrictive state. For
example, an SQL application can make a read claim on the table space only if the
restrictive state allows readers.
Direct row access is very fast, because DB2 does not need to use the index or a
table space scan to find the row. Direct row access can be used on any table that
has a ROWID column.
To use direct row access, you first select the values of a row into host variables.
The value that is selected from the ROWID column contains the location of that
row. Later, when you perform queries that access that row, you include the row ID
value in the search condition. If DB2 determines that it can use direct row access, it
uses the row ID value to navigate directly to the row.
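A minimal embedded-SQL sketch of this sequence follows, reusing the hypothetical
EMP table and EMP_ROWID column that appear later in this section:
EXEC SQL SELECT EMP_ROWID
           INTO :hv_emp_rowid
           FROM EMP
           WHERE EMPNO = :hv_empno;
EXEC SQL UPDATE EMP
           SET SALARY = :hv_salary + 1200
           WHERE EMP_ROWID = :hv_emp_rowid;   -- candidate for direct row access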
Example: Assume that the host variable in the following statement contains a row
ID from SOURCE:
SELECT * FROM TARGET
WHERE ID = :hv_rowid
Because the row ID location is not the same as in the source table, DB2 will
probably not find that row. Search on another column to retrieve the row you
want.
Reverting to ACCESSTYPE
Although DB2 might plan to use direct row access, circumstances can cause DB2 to
not use direct row access at run time. DB2 remembers the location of the row as of
the time it is accessed. However, that row can change locations (such as after a
REORG) between the first and second time it is accessed, which means that DB2
cannot use direct row access to find the row on the second access attempt. Instead
of using direct row access, DB2 uses the access path that is shown in the
ACCESSTYPE column of PLAN_TABLE.
If the predicate you are using to do direct row access is not indexable and if DB2 is
unable to use direct row access, then DB2 uses a table space scan to find the row.
This can have a profound impact on the performance of applications that rely on
direct row access. Write your applications to handle the possibility that direct row
access might not be used. Some options are to:
v Ensure that your application does not try to remember ROWID columns across
reorganizations of the table space.
When your application commits, it releases its claim on the table space; it is
possible that a REORG can run and move the row, which disables direct row
access. Plan your commit processing accordingly; use the returned row ID value
before committing, or re-select the row ID value after a commit is issued.
If you are storing ROWID columns from another table, update those values after
the table with the ROWID column is reorganized.
v Create an index on the ROWID column, so that DB2 can use the index if direct
row access is disabled.
v Supplement the ROWID column predicate with another predicate that enables
DB2 to use an existing index on the table. For example, after reading a row, an
application might perform the following update:
EXEC SQL UPDATE EMP
SET SALARY = :hv_salary + 1200
WHERE EMP_ROWID = :hv_emp_rowid
AND EMPNO = :hv_empno;
If an index exists on EMPNO, DB2 can use index access if direct access fails. The
additional predicate ensures DB2 does not revert to a table space scan.
RID list processing: Direct row access and RID list processing are mutually
exclusive. If a query qualifies for both direct row access and RID list processing,
direct row access is used. If direct row access fails, DB2 does not revert to RID list
processing; instead it reverts to the backup access type.
A limited partition scan can be combined with other access methods. For example,
consider the following query:
SELECT .. FROM T
WHERE (C1 BETWEEN '2002' AND '3280'
OR C1 BETWEEN '6000' AND '8000')
AND C2 = '6';
Assume that table T has a partitioned index on column C1 and that values of C1
between 2002 and 3280 all appear in partitions 3 and 4 and the values between
6000 and 8000 appear in partitions 8 and 9. Assume also that T has another index
on column C2. DB2 could choose any of these access methods:
v A matching index scan on column C1. The scan reads index values and data
only from partitions 3, 4, 8, and 9. (PAGE_RANGE=N)
v A matching index scan on column C2. (DB2 might choose that if few rows have
C2=6.) The matching index scan reads all RIDs for C2=6 from the index on C2
and corresponding data pages from partitions 3, 4, 8, and 9. (PAGE_RANGE=Y)
v A table space scan on T. DB2 avoids reading data pages from any partitions
except 3, 4, 8 and 9. (PAGE_RANGE=Y)
METHOD 3 sorts: These sorts are used for ORDER BY, GROUP BY, SELECT DISTINCT,
UNION, or a quantified predicate. A quantified predicate is 'col = ANY (fullselect)'
or 'col = SOME (fullselect)'. The sorts are indicated in a separate row. A single
row of the plan table can indicate two sorts of a composite table, but only one sort
is actually done.
Generally, values of R and S are considered better for performance than a blank.
Use variance and standard deviation with care: The VARIANCE and STDDEV
functions are always evaluated late (that is, COLUMN_FN_EVAL is blank). This
causes other functions in the same query block to be evaluated late as well. For
example, in the following query, the sum function is evaluated later than it would
be if the variance function was not present:
SELECT SUM(C1), VARIANCE(C1) FROM T1;
| Table 167 on page 911 shows the corresponding plan table for the WHEN clause.
Assume that table T has no index on C1. The following is an example that uses a
table space scan:
SELECT * FROM T WHERE C1 = VALUE;
In this case, at least every row in T must be examined to determine whether the
value of C1 matches the given value.
Recommendation for SEGSIZE value: Table 168 on page 912 summarizes the
recommendations for SEGSIZE, depending on how large the table is.
If you do not want to use sequential prefetch for a particular query, consider
adding to it the clause OPTIMIZE FOR 1 ROW.
| Backward index scan: In some cases, DB2 can use a backward index scan on a
| descending index to avoid a sort on ascending data. Similarly, an ascending index
| can be used to avoid a sort on descending data. For DB2 to use a backward index
| scan, the following conditions must be true:
| Example: Suppose that an index exists on the ACCT_STAT table. The index is
| defined by the following columns: ACCT_NUM, STATUS_DATE, STATUS_TIME.
| All of the columns in the index are in ascending order. Now, consider the
| following SELECT statements:
| SELECT STATUS_DATE, STATUS
| FROM ACCT_STAT
| WHERE ACCT_NUM = :HV
| ORDER BY STATUS_DATE DESC, STATUS_TIME DESC;
| SELECT STATUS_DATE, STATUS
| FROM ACCT_STAT
| WHERE ACCT_NUM = :HV
| ORDER BY STATUS_DATE ASC, STATUS_TIME ASC;
| By using a backward index scan, DB2 can use the same index for both statements.
Not all sorts are inefficient. For example, if the index that provides ordering is
not an efficient one and many rows qualify, using another access path to retrieve
and then sort the data could be more efficient than using the inefficient index
that provides the ordering.
Indexes that are created to avoid sorts can sometimes be non-selective. If these
indexes require data access and if the cluster ratio is poor, these indexes are
unlikely to be chosen. Accessing many rows by using a poorly clustered index is
often less efficient than accessing rows by using a table space scan and sort. Both
table space scan and sort benefit from sequential access.
Costs of indexes
Before you begin creating indexes, consider carefully their costs:
| v Indexes require storage space. Padded indexes require more space than
| nonpadded indexes for long index keys. For short index keys, nonpadded
| indexes can take more space.
| v Each index requires an index space and a data set, or as many data sets as the
| number of data partitions if the index is partitioned, and operating system
| restrictions exist on the number of open data sets.
v Indexes must be changed to reflect every insert or delete operation on the base
table. If an update operation updates a column that is in the index, then the
index must also be changed. The time required by these operations increases
accordingly.
v Indexes can be built automatically when loading data, but this takes time. They
must be recovered or rebuilt if the underlying table space is recovered, which
might also be time-consuming.
In the general case, the rules for determining the number of matching columns are
simple, although there are a few exceptions.
v Look at the index columns from leading to trailing. For each index column,
search for an indexable boolean term predicate on that column. (See “Properties
of predicates” on page 715 for a definition of boolean term.) If such a predicate
is found, then it can be used as a matching predicate.
Column MATCHCOLS in a plan table shows how many of the index columns
are matched by predicates.
v If no matching predicate is found for a column, the search for matching
predicates stops.
v If a matching predicate is a range predicate, then there can be no more matching
columns. For example, in the matching index scan example that follows, the
range predicate C2>1 prevents the search for additional matching columns.
v For star joins, a missing key predicate does not cause termination of matching
columns that are to be used on the fact table index.
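Matching index scan example: assume a hypothetical index on T(C1, C2, C3):
SELECT * FROM T
WHERE C1 = 1 AND C2 > 1 AND C3 = 1;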
Two matching columns occur in this example. The first one comes from the
predicate C1=1, and the second one comes from C2>1. The range predicate on C2
prevents C3 from becoming a matching column.
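Index screening example: again assume a hypothetical index on T(C1, C2, C3, C4):
SELECT * FROM T
WHERE C1 = 5 AND C3 > 0 AND C4 = 2 AND C5 = 8;
In this query, C1 = 5 is a matching predicate, and C3 > 0 and C4 = 2 are index
screening predicates.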
The predicates can be applied on the index, but they are not matching predicates.
C5=8 is not an index screening predicate, and it must be evaluated when data is
retrieved. The value of MATCHCOLS in the plan table is 1.
You can regard the IN-list index scan as a series of matching index scans with the
values in the IN predicate being used for each matching index scan. The following
example has an index on (C1,C2,C3,C4) and might use an IN-list index scan:
SELECT * FROM T
WHERE C1=1 AND C2 IN (1,2,3)
AND C3>0 AND C4<100;
The plan table shows MATCHCOLS = 3 and ACCESSTYPE = N. The IN-list scan is
performed as the following three matching index scans:
(C1=1,C2=1,C3>0), (C1=1,C2=2,C3>0), (C1=1,C2=3,C3>0)
Parallelism is supported for queries that involve IN-list index access. These queries
used to run sequentially in previous releases of DB2, although parallelism could
have been used when the IN-list access was for the inner table of a parallel group.
Now, in environments in which parallelism is enabled, you can see a reduction in
elapsed time for queries that involve IN-list index access for the outer table of a
parallel group.
RID lists are constructed for each of the indexes involved. The unions or
intersections of the RID lists produce a final list of qualified RIDs that is used to
retrieve the result rows, using list prefetch. You can consider multiple index access
as an extension to list prefetch with more complex RID retrieval operations in its
first phase. The complex operators are union and intersection.
The plan table contains a sequence of rows describing the access. For this query,
ACCESSTYPE uses the following values:
Value Meaning
M Start of multiple index access processing
MX Indexes are to be scanned for later union or intersection
MI An intersection (AND) is performed
MU A union (OR) is performed
The following steps relate to the previous query and the values shown for the plan
table in Table 169:
1. Index EMPX1, with matching predicate AGE = 34, provides a set of candidates
for the result of the query. The value of MIXOPSEQ is 1.
2. Index EMPX1, with matching predicate AGE = 40, also provides a set of
candidates for the result of the query. The value of MIXOPSEQ is 2.
3. Index EMPX2, with matching predicate JOB=’MANAGER’, also provides a set
of candidates for the result of the query. The value of MIXOPSEQ is 3.
4. The first intersection (AND) is done, and the value of MIXOPSEQ is 4. This MI
removes the two previous candidate lists (produced by MIXOPSEQs 2 and 3)
by intersecting them to form an intermediate candidate list, IR1, which is not
shown in PLAN_TABLE.
5. The last step, where the value MIXOPSEQ is 5, is a union (OR) of the two
remaining candidate lists, which are IR1 and the candidate list produced by
MIXOPSEQ 1. This final union gives the result for the query.
Table 169. Plan table output for a query that uses multiple indexes. Depending on the filter
factors of the predicates, the access steps can appear in a different order.

PLANNO  TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  PREFETCH  MIXOPSEQ
1       EMP    M           0                      L         0
1       EMP    MX          1          EMPX1                 1
1       EMP    MX          1          EMPX1                 2
In this example, the steps in the multiple index access follow the physical sequence
of the predicates in the query. This is not always the case. The multiple index steps
are arranged in an order that uses RID pool storage most efficiently and for the
least amount of time.
With an index on T(C1,C2), the following queries can use index-only access:
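For example, both of the following illustrative queries reference only the indexed
columns C1 and C2:
SELECT C1, C2 FROM T
WHERE C1 > 0;
SELECT COUNT(*) FROM T
WHERE C1 = 1;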
Sometimes DB2 can determine that an index that is not fully matching is actually
an equal unique index case. Assume the following case:
Unique Index1: (C1, C2)
Unique Index2: (C2, C1, C3)
SELECT C3 FROM T
WHERE C1 = 1 AND C2 = 5;
Index1 is a fully matching equal unique index. However, Index2 is also an equal
unique index even though it is not fully matching. Index2 is the better choice
because, in addition to being equal and unique, it also provides index-only access.
To use a matching index scan on an index whose key columns are being updated,
the following conditions must be met:
v Each updated key column must have a corresponding predicate of the form
"index_key_column = constant" or "index_key_column IS NULL".
v If a view is involved, WITH CHECK OPTION must not be specified.
| For updates that do not involve dynamic scrollable cursors, DB2 can use list
| prefetch, multiple index access, or IN-list access. With list prefetch or multiple
index access, any index or indexes can be used in an UPDATE operation. Of
course, to be chosen, those access paths must provide efficient access to the data.
| A positioned update that uses a dynamic scrollable cursor cannot use an access
| path with list prefetch, or multiple index access. This means that indexes that do
| not meet the preceding criteria cannot be used to locate the rows to be updated.
This section begins with “Definitions and examples of join operations” on page 919
and continues with descriptions of the methods of joining that can be indicated in
a plan table:
v “Nested loop join (METHOD=1)” on page 921
v “Merge scan join (METHOD=2)” on page 923
v “Hybrid join (METHOD=4)” on page 925
v “Star join (JOIN_TYPE=’S’)” on page 926
Definitions: A composite table represents the result of accessing one or more tables
in a query. If a query contains a single table, only one composite table exists. If one
or more joins are involved, an outer composite table consists of the intermediate
result rows from the previous join step. This intermediate result may, or may not,
be materialized into a work file.
The new table (or inner table) in a join operation is the table that is newly accessed
in the step.
A join operation can involve more than two tables. In these cases, the operation is
carried out in a series of steps. For non-star joins, each step joins only two tables.
Two-step join operation: the composite table TJ is joined with the new table TK by a
nested loop join (method 1); the result is sorted into a work file and joined with
the new table TL by a merge scan join (method 2) to produce the final result.
Table 170 and Table 171 on page 920 show a subset of columns in a plan table for
this join operation.
Table 170. Subset of columns for a two-step join operation

METHOD  TNAME  ACCESSTYPE  MATCHCOLS  ACCESSNAME  INDEXONLY  TSLOCKMODE
0       TJ     I           1          TJX1        N          IS
1       TK     I           1          TKX1        N          IS
2       TL     I           0          TLX1        Y          S
3                          0                      N
Definitions: A join operation typically matches a row of one table with a row of
another on the basis of a join condition. For example, the condition might specify
that the value in column A of one table equals the value of column X in the other
table (WHERE T1.A = T2.X).
Two kinds of joins differ in what they do with rows in one table that do not match
on the join condition with any row in the other table:
v An inner join discards rows of either table that do not match any row of the
other table.
v An outer join keeps unmatched rows of one or the other table, or of both. A row
in the composite table that results from an unmatched row is filled out with null
values. As Table 172 shows, outer joins are distinguished by which unmatched
rows they keep.
Table 172. Join types and kept unmatched rows
Outer join type Included unmatched rows
Left outer join The composite (outer) table
Right outer join The new (inner) table
Full outer join Both tables
Example: Suppose that you issue the following statement to explain an outer join:
EXPLAIN PLAN SET QUERYNO = 10 FOR
SELECT PROJECT, COALESCE(PROJECTS.PROD#, PRODNUM) AS PRODNUM,
PRODUCT, PART, UNITS
FROM PROJECTS LEFT JOIN
(SELECT PART,
COALESCE(PARTS.PROD#, PRODUCTS.PROD#) AS PRODNUM,
PRODUCTS.PRODUCT
FROM PARTS FULL OUTER JOIN PRODUCTS
ON PARTS.PROD# = PRODUCTS.PROD#) AS TEMP
ON PROJECTS.PROD# = PRODNUM
Table 173 shows a subset of the plan table for the outer join.
Table 173. Plan table output for an example with outer joins
QUERYNO QBLOCKNO PLANNO TNAME JOIN_TYPE
10 1 1 PROJECTS
10 1 2 TEMP L
10 2 1 PRODUCTS
10 2 2 PARTS F
Column JOIN_TYPE identifies the type of outer join with one of these values:
v F for FULL OUTER JOIN
v L for LEFT OUTER JOIN or RIGHT OUTER JOIN
v Blank for INNER JOIN or no join
Materialization with outer join: Sometimes DB2 has to materialize a result table
when an outer join is used in conjunction with other joins, views, or nested table
expressions. You can tell when this happens by looking at the TABLE_TYPE and
TNAME columns of the plan table. When materialization occurs, TABLE_TYPE
contains a W, and TNAME shows the name of the materialized table as
DSNWFQB(xx), where xx is the number of the query block (QBLOCKNO) that
produced the work file.
SELECT A, B, X, Y
FROM (SELECT * FROM OUTERT WHERE A=10)
LEFT JOIN INNERT ON B=X;
Method of joining
DB2 scans the composite (outer) table. For each row in that table that qualifies (by
satisfying the predicates on that table), DB2 searches for matching rows of the new
(inner) table. It concatenates any it finds with the current row of the composite
table. If no rows match the current row, then:
v For an inner join, DB2 discards the current row.
v For an outer join, DB2 concatenates a row of null values.
Stage 1 and stage 2 predicates eliminate unqualified rows during the join. (For an
explanation of those types of predicate, see “Stage 1 and stage 2 predicates” on
page 717.) DB2 can scan either table using any of the available access methods,
including table space scan.
Performance considerations
The nested loop join repetitively scans the inner table. That is, DB2 scans the outer
table once, and scans the inner table as many times as the number of qualifying rows
in the outer table.
Example: left outer join: Figure 105 on page 921 illustrates a nested loop for a left
outer join. The outer join preserves the unmatched row in OUTERT with values
A=10 and B=6. The same join method for an inner join differs only in discarding
that row.
Example: one-row table priority: For a case like the following example, with a
unique index on T1.C2, DB2 detects that T1 has only one row that satisfies the
search condition. DB2 makes T1 the first table in a nested loop join.
SELECT * FROM T1, T2
WHERE T1.C1 = T2.C1 AND
T1.C2 = 5;
Example: Cartesian join with small tables first: A Cartesian join is a form of
nested loop join in which there are no join predicates between the two tables. DB2
usually avoids a Cartesian join, but sometimes it is the most efficient method, as in
the following example. The query uses three tables: T1 has 2 rows, T2 has 3 rows,
and T3 has 10 million rows.
SELECT * FROM T1, T2, T3
WHERE T1.C1 = T3.C1 AND
T2.C2 = T3.C2 AND
T3.C3 = 5;
Join predicates are between T1 and T3 and between T2 and T3. There is no join
predicate between T1 and T2.
Assume that 5 million rows of T3 have the value C3=5. Processing time is large if
T3 is the outer table of the join and tables T1 and T2 are accessed for each of 5
million rows.
However, if all rows from T1 and T2 are joined, without a join predicate, the 5
million rows are accessed only six times, once for each row in the Cartesian join of
T1 and T2. It is difficult to say which access path is the most efficient. DB2
evaluates the different options and could decide to access the tables in the
sequence T1, T2, T3.
Sorting the composite table: Your plan table could show a nested loop join that
includes a sort on the composite table. DB2 might sort the composite table (the
outer table in Figure 105) if the following conditions exist:
v The join columns in the composite table and the new table are not in the same
sequence.
Nested loop join with a sorted composite table has the following performance
advantages:
v Uses sequential detection efficiently to prefetch data pages of the new table,
reducing the number of synchronous I/O operations and the elapsed time.
v Avoids repetitive full probes of the inner table index by using the index
look-aside.
Method of joining
Figure 106 illustrates a merge scan join.
SELECT A, B, X, Y
FROM OUTER, INNER
WHERE A=10 AND B=X;
DB2 scans both tables in the order of the join columns. If no efficient indexes on
the join columns provide the order, DB2 might sort the outer table, the inner table,
or both. The inner table is put into a work file; the outer table is put into a work
file only if it must be sorted. When a row of the outer table matches a row of the
inner table, DB2 returns the combined rows.
DB2 then reads another row of the inner table that might match the same row of
the outer table and continues reading rows of the inner table as long as there is a
match. When there is no longer a match, DB2 reads another row of the outer table.
Performance considerations
A full outer join by this method uses all predicates in the ON clause to match the
two tables and reads every row at the time of the join. Inner and left outer joins
use only stage 1 predicates in the ON clause to match the tables. If your tables
match on more than one column, it is generally more efficient to put all the
predicates for the matches in the ON clause, rather than to leave some of them in
the WHERE clause.
For an inner join, DB2 can derive extra predicates for the inner table at bind time
and apply them to the sorted outer table to be used at run time. The predicates can
reduce the size of the work file needed for the inner table.
If DB2 has used an efficient index on the join columns to retrieve the rows of the
inner table, those rows are already in sequence. DB2 puts the data directly into the
work file without sorting the inner table, which reduces the elapsed time.
SELECT A, B, X, Y
FROM OUTER, INNER
WHERE A=10 AND X=B;
Figure 107. Hybrid join. Index 1 on OUTER and index 2 on INNER are scanned; the join
on X=B builds an intermediate table (phase 1) of OUTER data and INNER RIDs; the RID
list is sorted into page number order (phase 2); and list prefetch retrieves the
INNER rows to form the composite table.
Method of joining
The method requires obtaining RIDs in the order needed to use list prefetch. The
steps are shown in Figure 107. In that example, both the outer table (OUTER) and
the inner table (INNER) have indexes on the join columns.
Performance considerations
Hybrid join uses list prefetch more efficiently than nested loop join, especially if
there are indexes on the join predicate with low cluster ratios. It also processes
duplicates more efficiently because the inner table is scanned only once for each set
of duplicate values in the join column of the outer table.
If the index on the inner table is highly clustered, there is no need to sort the
intermediate table (SORTN_JOIN=N). The intermediate table is placed in a table in
memory rather than in a work file.
Figure 108. Star schema with a fact table and dimension tables
| Unlike the steps in the other join methods (nested loop join, merge scan join, and
| hybrid join) in which only two tables are joined in each step, a step in the star join
| method can involve three or more tables. Dimension tables are joined to the fact
| table via a multi-column index that is defined on the fact table. Therefore, having a
| well-defined, multi-column index on the fact table is critical for efficient star join
| processing.
In this scenario, the sales table contains three columns with IDs from the
dimension tables for time, product, and location instead of three columns for time,
three columns for products, and two columns for location. Thus, the size of the fact
table is greatly reduced.
You can create even more complex star schemas by normalizing a dimension table
into several tables. The normalized dimension table is called a snowflake. Only one
of the tables in the snowflake joins directly with the fact table.
| You can set the subsystem parameter STARJOIN by using the STAR JOIN
| QUERIES field on the DSNTIP8 installation panel.
Examples: query with three dimension tables: Suppose that you have a store in
San Jose and want information about sales of audio equipment from that store in
2000. For this example, you want to join the following tables:
v A fact table for SALES (S)
v A dimension table for TIME (T) with columns for an ID, month, quarter, and
year
v A dimension table for geographic LOCATION (L) with columns for an ID, city,
region, and country
v A dimension table for PRODUCT (P) with columns for an ID, product item,
class, and inventory
All snowflakes are processed before the central part of the star join, as individual
query blocks, and are materialized into work files. There is a work file for each
snowflake. The EXPLAIN output identifies these work files by naming them
DSN_DIM_TBLX(nn), where nn indicates the corresponding QBLOCKNO for the
snowflake.
This next example shows the plan for a star join that contains two snowflakes.
Suppose that two new tables MANUFACTURER (M) and COUNTRY (C) are
added to the tables in the previous example to break dimension tables PRODUCT
(P) and LOCATION (L) into snowflakes:
v The PRODUCT table has a new column MID that represents the manufacturer.
v Table MANUFACTURER (M) has columns for MID and name to contain
manufacturer information.
v The LOCATION table has a new column CID that represents the country.
v Table COUNTRY (C) has columns for CID and name to contain country
information.
You could write the following query to join all the tables:
SELECT *
FROM SALES S, TIME T, PRODUCT P, MANUFACTURER M,
LOCATION L, COUNTRY C
WHERE S.TIME = T.ID AND
S.PRODUCT = P.ID AND
P.MID = M.MID AND
S.LOCATION = L.ID AND
L.CID = C.CID AND
T.YEAR = 2000 AND
M.NAME = 'some_company';
The joins in the snowflakes are processed first, and each snowflake is materialized
into a work file. Therefore, when the main star join block (QBLOCKNO=1) is
processed, it contains four tables: SALES (the fact table), TIME (a base dimension
table), and the two snowflake work files.
In this example, in the main star join block, the star join method is used for the
first three tables (as indicated by S in the JOIN TYPE column of the plan table) and
the remaining work file is joined by the nested loop join with sparse index access
on the work file (as indicated by T in the ACCESSTYPE column for
DSN_DIM_TBLX(3)).
| To determine the size of the virtual memory pool, perform the following steps:
| 1. Determine the value of A. Estimate the number of star join queries that run
| concurrently.
| 2. Determine the value of B. Estimate the average number of work files that a star
| join query uses. In typical cases, with highly normalized star schemas, the
| average number is about three to six work files.
| 3. Determine the value of C. Estimate the number of work-file rows, the
| maximum length of the key, and the total of the maximum length of the
| relevant columns. Multiply these three values together to find the size of the
| data caching space for the work file, or the value of C.
| 4. Multiply (A) * (B) * (C) to determine the size of the pool in MB.
| The default virtual memory pool size is 20 MB. To set the pool size, use the
| SJMXPOOL parameter on the DSNTIP8 installation panel.
| To determine the size of the dedicated virtual memory pool, perform the following
| steps:
| 1. Determine the value of A. Estimate the number of star join queries that run
| concurrently.
| In this example, based on the type of operation, up to 12 star join queries are
| expected to run concurrently. Therefore, A = 12.
| 2. Determine the value of B. Estimate the average number of work files that a star
| join query uses.
| In this example, the star join query uses two work files, PROD and
| DSN_DIM_TBLX(02). Therefore B = 2.
| 3. Determine the value of C. Estimate the number of work-file rows, the
| maximum length of the key, and the total of the maximum length of the
| relevant columns. Multiply these three values together to find the size of the
| data caching space for the work file, or the value of C.
| Both PROD and DSN_DIM_TBLX(02) are used to determine the value of C.
| Recommendation: Average the values for a representative sample of work files,
| and round the value up to determine an estimate for a value of C.
| v The number of work-file rows depends on the number of rows that match
| the predicate. For PROD, 87 rows are stored in the work file because 87 rows
| match the IN-list predicate. No selective predicate is used for
| DSN_DIM_TBLX(02), so the entire result of the join is stored in the work file.
| The work file for DSN_DIM_TBLX(02) holds 2800 rows.
| v The maximum length of the key depends on the data type definition of the
| table’s key column. For PID, the key column for PROD, the maximum length
If DB2 does not choose prefetch at bind time, it can sometimes use it at execution
time nevertheless. The method is described in “Sequential detection at execution
time” on page 935.
For certain utilities (LOAD, REORG, RECOVER), the prefetch quantity can be
twice as much.
When sequential prefetch is used: Sequential prefetch is generally used for a table
space scan.
For an index scan that accesses eight or more consecutive data pages, DB2 requests
sequential prefetch at bind time. The index must have a cluster ratio of 80% or
higher. Both data pages and index pages are prefetched.
List prefetch can be used in conjunction with either single or multiple index access.
List prefetch does not preserve the data ordering given by the index. Because the
RIDs are sorted in page number order before accessing the data, the data is not
retrieved in order by any column. If the data must be ordered for an ORDER BY
clause or any other reason, it requires an additional sort.
In a hybrid join, if the index is highly clustered, the page numbers might not be
sorted before accessing the data.
List prefetch can be used with most matching predicates for an index scan. IN-list
predicates are the exception; they cannot be the matching predicates when list
prefetch is used.
During execution, DB2 ends list prefetching if more than 25% of the rows in the
table (with a minimum of 4075) must be accessed. Record IFCID 0125 in the
performance trace, mapped by macro DSNDQW01, indicates whether list prefetch
ended.
When list prefetch ends, the query continues processing by a method that depends
on the current access path.
v For access through a single index or through the union of RID lists from two
indexes, processing continues by a table space scan.
v For index access before forming an intersection of RID lists, processing continues
with the next step of multiple index access. If no step remains and no RID list
has been accumulated, processing continues by a table space scan.
When DB2 forms an intersection of RID lists, if any list has 32 or fewer RIDs,
intersection stops and the list of 32 or fewer RIDs is used to access the data.
If a table is accessed repeatedly using the same statement (for example, DELETE in
a do-while loop), the data or index leaf pages of the table can be accessed
sequentially. This is common in a batch processing environment. Sequential
detection can then be used if access is through:
v SELECT or FETCH statements
v UPDATE and DELETE statements
v INSERT statements when existing data pages are accessed sequentially
DB2 can use sequential detection if it did not choose sequential prefetch at bind
time because of an inaccurate estimate of the number of pages to be accessed.
Sequential detection is not used for an SQL statement that is subject to referential
constraints.
The most recent eight pages are tracked. A page is considered page-sequential if it
is within P/2 advancing pages of the current page, where P is the prefetch
quantity. P is usually 32.
When data access is first declared sequential, which is called initial data access
sequential, three page ranges are calculated as follows:
v Let A be the page being requested. RUN1 is defined as the page range of length
P/2 pages starting at A.
v Let B be page A + P/2. RUN2 is defined as the page range of length P/2 pages
starting at B.
v Let C be page B + P/2. RUN3 is defined as the page range of length P pages
starting at C.
For example, assume that page A is 10. Figure 109 on page 937 illustrates the page
ranges that DB2 calculates.
Figure 109. With P=32 and page A=10, RUN1 spans 16 pages starting at page 10, RUN2
spans 16 pages starting at page 26, and RUN3 spans 32 pages starting at page 42.
For initial data access sequential, prefetch is requested starting at page A for P
pages (RUN1 and RUN2). The prefetch quantity is always P pages.
For subsequent page requests where the page is 1) page sequential and 2) data
access sequential is still in effect, prefetch is requested as follows:
v If the desired page is in RUN1, no prefetch is triggered because it was already
triggered when data access sequential was first declared.
v If the desired page is in RUN2, prefetch for RUN3 is triggered. RUN2 becomes the
new RUN1, RUN3 becomes the new RUN2, and the new RUN3 becomes the page range
starting at C+P for a length of P pages.
If a data access pattern develops such that data access sequential is no longer in
effect and, thereafter, a new pattern develops that is sequential, then initial data
access sequential is declared again and handled accordingly.
Because, at bind time, the number of pages to be accessed can only be estimated,
sequential detection acts as a safety net and is employed when the data is being
accessed sequentially.
In extreme situations, when certain buffer pool thresholds are reached, sequential
prefetch can be disabled. See “Buffer pool thresholds” on page 635 for a
description of these thresholds.
Sorts of data
After you run EXPLAIN, DB2 sorts are indicated in PLAN_TABLE. The sorts can
be either sorts of the composite table or the new table. If a single row of
PLAN_TABLE has a 'Y' in more than one of the sort composite columns, then one
sort accomplishes two things. (DB2 will not perform two sorts when two 'Y's are
in the same row.) For instance, if both SORTC_ORDERBY and SORTC_UNIQ are
'Y' in one row of PLAN_TABLE, then a single sort puts the rows in order and
removes any duplicate rows as well.
The only reason DB2 sorts the new table is for join processing, which is indicated
by SORTN_JOIN.
Sorts of RIDs
To perform list prefetch, DB2 sorts RIDs into ascending page number order. This
sort is very fast and is done totally in memory. A RID sort is usually not indicated
in the PLAN_TABLE, but a RID sort normally is performed whenever list prefetch
is used. The only exception to this rule is when a hybrid join is performed and a
single, highly clustered index is used on the inner table. In this case SORTN_JOIN
is 'N', indicating that the RID list for the inner table was not sorted.
Without parallelism:
v If no sorts are required, then OPEN CURSOR does not access any data. It is at
the first fetch that data is returned.
v If a sort is required, then the OPEN CURSOR causes the materialized result table
to be produced. Control returns to the application after the result table is
materialized. If a cursor that requires a sort is closed and reopened, the sort is
performed again.
v If there is a RID sort, but no data sort, then it is not until the first row is fetched
that the RID list is built from the index and the first data record is returned.
Subsequent fetches access the RID pool to access the next data record.
Merge
The merge process is more efficient than materialization, as described in
“Performance of merge versus materialization” on page 944. In the merge process,
the statement that references the view or table expression is combined with the
fullselect that defined the view or table expression. This combination creates a
logically equivalent statement. This equivalent statement is executed against the
database.
Example: Consider the following statements, one of which defines a view, the
other of which references the view:
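A minimal sketch of the merge, using a hypothetical view V9 on table T9:
View-defining statement:
CREATE VIEW V9 (VC1, VC2) AS
SELECT C1, C2 FROM T9 WHERE C3 > 0;
View referencing statement:
SELECT VC1 FROM V9 WHERE VC2 = 5;
Merged statement:
SELECT C1 FROM T9 WHERE C3 > 0 AND C2 = 5;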
Example: The following statements show another example of when a view and
table expression can be merged:
SELECT * FROM V1 X
LEFT JOIN
(SELECT * FROM T2) Y ON X.C1=Y.C1
LEFT JOIN T3 Z ON X.C1=Z.C1;
Merged statement:
SELECT * FROM V1 X
LEFT JOIN
T2 ON X.C1 = T2.C1
LEFT JOIN T3 Z ON X.C1 = Z.C1;
Table 185 indicates some cases in which materialization occurs. DB2 can also use
materialization in statements that contain multiple outer joins, outer joins that
combine with inner joins, or merges that cause a join of greater than 15 tables.
Table 185. Cases when DB2 performs view or table expression materialization. The "X" indicates a case of
materialization. Notes follow the table.

SELECT FROM view or            View definition or table expression uses...(2)
table expression uses...(1)    GROUP BY  DISTINCT  Aggregate function  Aggregate function DISTINCT  UNION  UNION ALL(4)
Joins (3)                      X         X         X                   X                             X
GROUP BY                       X         X         X                   X                             X
DISTINCT                                           X                   X                             X
Aggregate function             X         X         X                   X                             X      X
When DB2 chooses materialization, TNAME contains the name of the view or table
expression, and TABLE_TYPE contains a W. A value of Q in TABLE_TYPE for the
name of a view or nested table expression indicates that the materialization was
virtual and not actual. (Materialization can be virtual when the view or nested
table expression definition contains a UNION ALL that is not distributed.) When
DB2 chooses merge, EXPLAIN data for the merged statement appears in
PLAN_TABLE; only the names of the base tables on which the view or table
expression is defined appear.
Example: Consider the following statements, which define a view and reference
the view:
View defining statement:
Table 186 shows a subset of columns in a plan table for the query.
Table 186. Plan table output for an example with view materialization

QBLOCKNO  PLANNO  QBLOCK_TYPE  TNAME  TABLE_TYPE  METHOD
1         1       SELECT       DEPT   T           0
2         1       NOCOSUB      V1DIS  W           0
2         2       NOCOSUB             ?           3
3         1       NOCOSUB      EMP    T           0
3         2       NOCOSUB             ?           3
Notice how TNAME contains the name of the view and TABLE_TYPE contains W
to indicate that DB2 chooses materialization for the reference to the view because
of the use of SELECT DISTINCT in the view definition.
Example: Consider the following statements, which define a view and reference
the view:
If the view were defined without DISTINCT, DB2 would choose merge instead of
materialization. In the sample output, the name of the view does not appear in the
plan table, but the table name on which the view is based does appear.
For an example of when a view definition contains a UNION ALL and DB2 can
distribute joins and aggregations and avoid materialization, see “Using EXPLAIN
to determine UNION activity and query rewrite.” When DB2 avoids
materialization in such cases, TABLE_TYPE contains a Q to indicate that DB2 uses
an intermediate result that is not materialized, and TNAME shows the name of
this intermediate result as DSNWFQB(xx), where xx is the number of the query
block that produced the result.
The QBLOCK_TYPE column in the plan table indicates union activity. For a
UNION ALL, the column contains 'UNIONA'. For UNION, the column contains
'UNION'. When QBLOCK_TYPE='UNION', the METHOD column on the same
row is set to 3 and the SORTC_UNIQ column is set to 'Y' to indicate that a sort is
necessary to remove duplicates. As with other views and table expressions, the
plan table also shows when DB2 uses materialization instead of merge.
Example: Consider the following statements, which define a view on three tables
that contain weekly data and reference the view to find, for each customer in
California, the average charges during the first and third Friday of January 2000.
DB2 rewrites the referencing statement so that only the first and third subselects
of the view definition are accessed.
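The following statements are a sketch consistent with the plan table output in
Table 188; the table names CUST, WEEK1, WEEK2, and WEEK3, the column
names, and the date ranges are assumptions:

View defining statement:
   CREATE VIEW V1 (CUSTNO, CHARGES, CHARGE_DATE) AS
     SELECT CUSTNO, CHARGES, CHARGE_DATE FROM WEEK1
       WHERE CHARGE_DATE BETWEEN '2000-01-01' AND '2000-01-07'
     UNION ALL
     SELECT CUSTNO, CHARGES, CHARGE_DATE FROM WEEK2
       WHERE CHARGE_DATE BETWEEN '2000-01-08' AND '2000-01-14'
     UNION ALL
     SELECT CUSTNO, CHARGES, CHARGE_DATE FROM WEEK3
       WHERE CHARGE_DATE BETWEEN '2000-01-15' AND '2000-01-21';

View referencing statement:
   SELECT V1.CUSTNO, AVG(V1.CHARGES)
   FROM CUST, V1
   WHERE CUST.CUSTNO = V1.CUSTNO
     AND CUST.STATE = 'CA'
     AND V1.CHARGE_DATE IN ('2000-01-07', '2000-01-21')
   GROUP BY V1.CUSTNO;

Because neither date falls in the WEEK2 date range, DB2 can eliminate the second
subselect when it rewrites the query.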
Table 188 shows a subset of columns in a plan table for the query.
Table 188. Plan table output for an example with a view with UNION ALLs
                                                        QBLOCK_  PARENT_
QBLOCKNO  PLANNO  TNAME        TABLE_TYPE  METHOD       TYPE     QBLOCKNO
1         1       DSNWFQB(02)  Q           0                     0
1         2                    ?           3                     0
2         1                    ?           0            UNIONA   1
3         1       CUST         T           0                     2
3         2       WEEK1        T           1                     2
4         1       CUST         T           0                     2
4         2       WEEK3        T           2                     2
Notice how DB2 eliminates the second subselect of the view definition from the
rewritten query and how the plan table indicates this removal by showing a
UNION ALL for only the first and third subselect in the view definition. The Q in
the TABLE_TYPE column indicates that DB2 does not materialize the view.
| Your statement table can use an older format in which the STMT_ENCODE
| column does not exist, PROGNAME has a data type of CHAR(8), and COLLID has
| a data type of CHAR(18). However, use the most current format because it gives
| you the most information. You can alter a statement table in the older format to a
| statement table in the current format.
Just as with the plan table, DB2 adds rows to the statement table; it does not
automatically delete rows. INSERT triggers are not activated unless you insert
rows yourself by using an SQL INSERT statement.
To clear the table of obsolete rows, use DELETE, just as you would for deleting
rows from any table. You can also use DROP TABLE to drop a statement table
completely.
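For example, the following DELETE statement removes statement table rows that
are more than 30 days old (the table name assumes the default statement table
name, and the retention period is arbitrary):

   DELETE FROM DSN_STATEMNT_TABLE
   WHERE EXPLAIN_TIME < CURRENT TIMESTAMP - 30 DAYS;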
Similarly, if system administrators use these estimates as input into the resource
limit specification table for governing (either predictive or reactive), they probably
would want to give much greater latitude for statements in cost category B than
for those in cost category A.
What goes into cost category B? DB2 puts a statement’s estimate into cost category
B when any of the following conditions exist:
v The statement has UDFs.
v Triggers are defined for the target table:
– The statement is INSERT, and insert triggers are defined on the target table.
What goes into cost category A? DB2 puts everything that doesn’t fall into
category B into category A.
Query I/O parallelism manages concurrent I/O requests for a single query,
fetching pages into the buffer pool in parallel. This processing can significantly
improve the performance of I/O-bound queries. I/O parallelism is used only when
one of the other parallelism modes cannot be used.
Query CP parallelism enables true multitasking within a query. A large query can
be broken into multiple smaller queries. These smaller queries run simultaneously
on multiple processors accessing data in parallel. This reduces the elapsed time for
a query.
Parallel operations usually involve at least one table in a partitioned table space.
Scans of large partitioned table spaces have the greatest performance
improvements where both I/O and central processor (CP) operations can be
carried out in parallel.
Parallelism for partitioned and nonpartitioned table spaces: Both partitioned and
nonpartitioned table spaces can take advantage of query parallelism. Parallelism is
now enabled for access through non-clustering indexes: table access can run in
parallel when the application is bound with DEGREE(ANY) and the table is
accessed through a non-clustering index.
Figure 111 shows sequential processing. With sequential processing, DB2 takes the
3 partitions in order, completing partition 1 before starting to process partition 2,
and completing 2 before starting 3. Sequential prefetch allows overlap of CP
processing with I/O operations, but I/O operations do not overlap with each
other. In the example in Figure 111, a prefetch request takes longer than the time to
process it. The processor is frequently waiting for I/O.
[Figure 111. CP and I/O processing techniques. Sequential processing: CP
processing handles the records of partitions P1 through P3 in order while the
prefetch I/O for each partition runs in turn, so the processor frequently waits for
I/O.]
Figure 112 shows parallel I/O operations. With parallel I/O, DB2 prefetches data
from the 3 partitions at one time. The processor processes the first request from
each partition, then the second request from each partition, and so on. The
processor is not waiting for I/O, but there is still only one processing task.
[Figure 112. CP and I/O processing techniques. Parallel I/O processing: prefetch
I/O for partitions P1, P2, and P3 runs concurrently while a single CP task
processes the records from each partition in round-robin order.]
Figure 113 on page 953 shows parallel CP processing. With CP parallelism, DB2
can use multiple parallel tasks to process the query. Three tasks working
concurrently can greatly reduce the overall elapsed time for data-intensive and
processor-intensive queries. The same principle applies for Sysplex query
parallelism, except that the work can cross the boundaries of a single CPC.
Figure 113. CP and I/O processing techniques. Query processing using CP parallelism. The
tasks can be contained within a single CPC or can be spread out among the members of a
data sharing group.
Queries that are most likely to take advantage of parallel operations: Queries that
can take advantage of parallel processing are:
v Those in which DB2 spends most of the time fetching pages—an I/O-intensive
query
A typical I/O-intensive query is something like the following query, assuming
that a table space scan is used on many pages:
SELECT COUNT(*) FROM ACCOUNTS
WHERE BALANCE > 0 AND
DAYS_OVERDUE > 30;
v Those in which DB2 spends a lot of processor time and also, perhaps, I/O time,
to process rows. Those include:
– Queries with intensive data scans and high selectivity. Those queries involve large
volumes of data to be scanned but relatively few rows that meet the search
criteria.
– Queries containing aggregate functions. Column functions (such as MIN, MAX,
SUM, AVG, and COUNT) usually involve large amounts of data to be
scanned but return only a single aggregate result.
– Queries accessing long data rows. Those queries access tables with long data
rows, and the ratio of rows per page is very low (one row per page, for
example).
– Queries requiring large amounts of central processor time. Those queries might be
read-only queries that are complex, data-intensive, or that involve a sort.
A typical processor-intensive query is something like:
SELECT MAX(QTY_ON_HAND) AS MAX_ON_HAND,
AVG(PRICE) AS AVG_PRICE,
AVG(DISCOUNTED_PRICE) AS DISC_PRICE,
SUM(TAX) AS SUM_TAX,
SUM(QTY_SOLD) AS SUM_QTY_SOLD,
SUM(QTY_ON_HAND - QTY_BROKEN) AS QTY_GOOD,
AVG(DISCOUNT) AS AVG_DISCOUNT,
ORDERSTATUS,
COUNT(*) AS COUNT_ORDERS
FROM ORDER_TABLE
GROUP BY ORDERSTATUS;
Terminology: When the term task is used with information about parallel
processing, the context should be considered. For parallel query CP processing or
Sysplex query parallelism, a task is an actual z/OS execution unit used to process
a query. For parallel I/O processing, a task simply refers to the processing of one
of the concurrent I/O streams.
A parallel group is the term used to name a particular set of parallel operations
(parallel tasks or parallel I/O operations). A query can have more than one parallel
group, but each parallel group within the query is identified by its own unique ID
number.
The degree of parallelism is the number of parallel tasks or I/O operations that
| DB2 determines can be used for the operations on the parallel group. The
| maximum number of parallel operations that DB2 can generate is 254. However,
| for most queries and DB2 environments, DB2 chooses a lower number. You might
| need to limit the maximum number further because more parallel operations
| consume processor, real storage, and I/O resources. If resource consumption is
| high in your parallelism environment, use the MAX DEGREE field on installation
| panel DSNTIP4 to explicitly limit the maximum number of parallel operations that
| DB2 generates, as explained in “Enabling parallel processing” on page 957.
In a parallel group, an originating task is the TCB (SRB for distributed requests)
that coordinates the work of all the parallel tasks. Parallel tasks are executable
units composed of special SRBs, which are called preemptable SRBs.
With preemptable SRBs, the z/OS dispatcher can interrupt a task at any time to
run other work at the same or higher dispatching priority. For non-distributed
parallel work, parallel tasks run under a type of preemptable SRB called a client
SRB, which lets the parallel task inherit the importance of the originating address
space. For distributed requests, the parallel tasks run under a preemptable SRB
called an enclave SRB. Enclave SRBs are described more fully in “Using z/OS
workload management to set performance objectives” on page 705.
In general, the number of partitions falls in a range between the number of CPs
and the maximum number of I/O paths to the data. When determining the
number of partitions for a table space that is accessed by a mixed set of
processor-intensive and I/O-intensive queries, always choose the largest number of
partitions in the range you determine.
v For processor-intensive queries, specify, at a minimum, a number that is equal
to the number of CPs in the system that you want to use for parallelism,
whether you have a single CPC or multiple CPCs in a data sharing group. If the
query is processor-intensive, it can use all CPs available in the system. If you
plan to use Sysplex query parallelism, then choose a number that is close to the
total number of CPs (including partial allocation of CPs) that you plan to
allocate for decision support processing across the data sharing group. Do not
include processing resources that are dedicated to other, higher priority, work.
For more information about Sysplex query parallelism, see Chapter 6 of DB2
Data Sharing: Planning and Administration.
v For I/O-intensive queries, calculate the ratio of elapsed time to processor time.
Multiply this ratio by the number of processors allocated for decision support
processing. Round up this number to determine how many partitions you can
use to the best advantage, assuming that these partitions can be on separate
devices and have adequate paths to the data. This calculation also assumes that
you have adequate processing power to handle the increase in partitions. (This
might not be much of an issue with an extremely I/O-intensive query.)
Example configurations for an I/O-intensive query: If the I/O cost of your queries
is about twice as much as the processing cost, the optimal number of partitions
when run on a 10-way processor is 20 (2 * number of processors). Figure 114 shows
an I/O configuration that minimizes the elapsed time and allows the CPC to run
at 100% busy. It assumes the suggested guideline of four devices per control unit
and four channels per control unit.8
[Figure 114 shows a 10-way CPC connected through 20 ESCON channels and an
ESCON director to storage control units and disk, with four device data paths per
control unit.]
Figure 114. I/O configuration that maximizes performance for an I/O-intensive query
8. A lower-cost configuration could use as few as two to three channels per control unit shared among all controllers using an
ESCON director. However, using four paths minimizes contention and provides the best performance. Paths might also need to
be taken offline for service.
DB2 tries to create equal work ranges by dividing the total cost of running the
work by the logical partition cost. This division often has some left over work. In
this case, DB2 creates an additional task to handle the extra work, rather than
making all the work ranges larger, which would reduce the degree of parallelism.
| To rebalance partitions that have become skewed, reorganize the table space,
| specifying the REBALANCE keyword on the REORG utility statement.
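For example, the following utility statement rebalances the rows in the sample
partitioned table space (the partition range is illustrative):

   REORG TABLESPACE DSN8D81A.DSN8S81E
     PART(1:4) REBALANCE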
DB2 also considers only parallel I/O operations if you declare a cursor WITH
HOLD and bind with isolation RR or RS. For more restrictions on parallelism, see
Table 191.
For complex queries, run the query in parallel within a member of a data sharing
group. With Sysplex query parallelism, use the power of the data sharing group to
process individual complex queries on many members of the data sharing group.
For more information about how you can use the power of the data sharing group
to run complex queries, see Chapter 6 of DB2 Data Sharing: Planning and
Administration.
Limiting the degree of parallelism: If you want to limit the maximum number of
parallel tasks that DB2 generates, you can use the MAX DEGREE field on
installation panel DSNTIP4. Changing MAX DEGREE, however, is not the way to
turn parallelism off. You use the DEGREE bind parameter or CURRENT DEGREE
special register to turn parallelism off.
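For example, the following statements turn parallelism off; the plan name is
illustrative. Set the CURRENT DEGREE special register for dynamic SQL, or
rebind with DEGREE(1) for static SQL:

   SET CURRENT DEGREE = '1';

   REBIND PLAN(MYPLAN) DEGREE(1)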
DB2 avoids certain hybrid joins when parallelism is enabled: To ensure that you
can take advantage of parallelism, DB2 does not pick one type of hybrid join
(SORTN_JOIN=Y) when the plan or package is bound with CURRENT
DEGREE=ANY or when the CURRENT DEGREE special register is set to 'ANY'.
| In a multi-table join, DB2 might also execute the sort for a composite that
| involves more than one table in a parallel task. DB2 uses a cost basis model to
| determine whether to use parallel sort in all cases. When DB2 decides to use
| parallel sort, SORTC_PGROUP_ID and SORTN_PGROUP_ID indicate the
| parallel group identifier. Consider a query that joins three tables, T1, T2, and T3,
| and uses a merge scan join between T1 and T2, and then between the composite
| and T3. If DB2 decides, based on the cost model, that all sorts in this query are
| to be performed in parallel, part of PLAN_TABLE appears as shown in
| Table 195:
| Table 195. Part of PLAN_TABLE for a multi-table, merge scan join
|
|                 ACCESS_  ACCESS_    JOIN_   JOIN_      SORTC_     SORTN_
| TNAME  METHOD  DEGREE   PGROUP_ID  DEGREE  PGROUP_ID  PGROUP_ID  PGROUP_ID
| T1     0       3        1          (null)  (null)     (null)     (null)
| T2     2       6        2          6       3          1          2
| T3     2       6        4          6       5          3          4
Bind time: At bind time, DB2 collects partition statistics from the catalog, estimates
the processor cycles for the costs of processing the partitions, and determines the
optimal number of parallel tasks to achieve minimum elapsed time.
When a planned degree exceeds the number of online CPs, the query might not be
completely processor-bound. Instead it might be approaching the number of
partitions because it is I/O-bound. In general, the more I/O-bound a query is, the
closer the degree of parallelism is to the number of partitions.
In general, the more processor-bound a query is, the closer the degree of
parallelism is to the number of online CPs, and it can even exceed the number of
CPs by one.
To help DB2 determine the optimal degree of parallelism, use the utility
RUNSTATS to keep your statistics current.
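For example, the following utility statement collects statistics for the sample
partitioned table space and its indexes:

   RUNSTATS TABLESPACE DSN8D81A.DSN8S81E
     TABLE(ALL) INDEX(ALL)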
Execution time: For each parallel group, parallelism (either CP or I/O) can execute
at a reduced degree or degrade to sequential operations for the following reasons:
v Amount of virtual buffer pool space available
v Host variable values
v Availability of the hardware sort assist facility
v Ambiguous cursors
v A change in the number or configuration of online processors
v The join technique that DB2 uses (I/O parallelism not supported when DB2 uses
the star join technique)
The PARALLEL REQUEST field in this example shows that DB2 was negotiating
buffer pool resource for 282 parallel groups. Of those 282 groups, only 5 were
degraded because of a lack of buffer pool resource. A large number in the
DEGRADED PARALLEL field could indicate that there are not enough buffers that
can be used for parallel processing.
See Chapter 2 of DB2 Command Reference for information about the syntax of the
command DISPLAY THREAD.
Accounting trace
By default, DB2 rolls task accounting into an accounting record for the originating
task. OMEGAMON also summarizes all accounting records generated for a parallel
query and presents them as one logical accounting record. OMEGAMON presents
the times for the originating tasks separately from the accumulated times for all
the parallel tasks.
As shown in Figure 115 on page 963, CPU TIME-AGENT is the time for the
originating task, while CPU TIME-PAR.TASKS (A) is the accumulated processing
time for the parallel tasks.
As the report shows, the values for CPU TIME and I/O WAIT TIME are larger
than the elapsed time. The processor and suspension times can exceed the elapsed
time because these two times are accumulated from multiple parallel tasks that
run concurrently; times summed across concurrent tasks can be greater than the
elapsed time of the query as a whole.
If you have baseline accounting data for the same thread run without parallelism,
the elapsed times and processor times should not be significantly larger when that
query is run in parallel. If they are significantly larger, or if response time is poor, you
need to examine the accounting data for the individual tasks. Use the
OMEGAMON Record Trace for the IFCID 0003 records of the thread you want to
examine. Use the performance trace if you need more information to determine the
cause of the response time problem.
Performance trace
The performance trace can give you information about tasks within a group. To
determine the actual number of parallel tasks used, refer to field QW0221AD in
IFCID 0221, as mapped by macro DSNDQW03. The 0221 record also gives you
information about the key ranges used to partition the data.
IFCID 0222 contains the elapsed time information for each parallel task and each
parallel group in each SQL query. OMEGAMON presents this information in its
SQL Activity trace.
If there are many parallel groups that do not run at the planned degree (see B in
Figure 115 on page 963), check the following factors:
v Buffer pool availability
Depending on buffer pool availability, DB2 could reduce the degree of
parallelism (see C in Figure 115 on page 963) or revert to a sequential plan
before executing the parallel group (F in the figure).
To determine which buffer pool is short on storage, see section QW0221C in
IFCID 0221. You can use the ALTER BUFFERPOOL command to increase the
buffer pool space available for parallel operations by modifying the following
parameters:
– VPSIZE, the size of the virtual buffer pool
– VPSEQT, the sequential steal threshold
– VPPSEQT, the parallel sequential threshold
– VPXPSEQT, the assisting parallel sequential threshold, used only for Sysplex
query parallelism.
If the buffer pool is busy with parallel operations, the sequential prefetch
quantity might also be reduced.
The parallel sequential threshold also has an impact on work file processing for
parallel queries. DB2 assumes that you have all your work files of the same size
(4KB or 32KB) in the same buffer pool and makes run time decisions based on a
single buffer pool. A lack of buffer pool resources for the work files can lead to a
reduced degree of parallelism or cause the query to run sequentially.
If increasing the parallel thresholds does not help solve the problem of reduced
degree, you can increase the total buffer pool size (VPSIZE). Use information
from the statistics trace to determine the amount of buffer space you need. Use
the following formula:
(QBSTJIS / QBSTPQF) × 32 = buffer increase value
QBSTJIS is the total number of requested prefetch I/O streams that were denied
because of a storage shortage in the buffer pool. (There is one I/O stream per
parallel task.) QBSTPQF is the total number of times that DB2 could not allocate
enough buffer pages to allow a parallel group to run to the planned degree.
As an example, assume QBSTJIS is 100,000 and QBSTPQF is 2500:
(100000 / 2500) × 32 = 1280
Use ALTER BUFFERPOOL to increase the current VPSIZE by 1280 buffers to
alleviate the degree degradation problem. Use the DISPLAY BUFFERPOOL
command to see the current VPSIZE. (Example commands follow this list.)
v Physical contention
As much as possible, put data partitions on separate physical devices to
minimize contention. Try not to use more partitions than there are internal paths
in the controller.
v Run time host variables
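As an example of the buffer pool commands mentioned above, the following
commands display the current VPSIZE of a buffer pool and then increase it by the
1280 buffers calculated earlier. The buffer pool name BP7 and the assumed current
VPSIZE of 10000 buffers are both illustrative:

   -DISPLAY BUFFERPOOL(BP7)
   -ALTER BUFFERPOOL(BP7) VPSIZE(11280)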
Recommendation: Use DRDA for new applications, and migrate existing private
protocol applications to DRDA. No enhancements are planned for private protocol.
| Characteristics of DRDA
| With DRDA, the application can remotely bind packages and can execute packages
| of static or dynamic SQL that have previously been bound at that location. DRDA
| has the following characteristics and benefits:
| v With DRDA access, an application can access data at any server that supports
| DRDA, not just a DB2 server on a z/OS operating system.
| v DRDA supports all SQL features, including user-defined functions, LOBs, and
| stored procedures.
| v DRDA can avoid multiple binds and minimize the number of binds that are
| required.
| v DRDA supports multiple-row FETCH.
| DRDA is the preferred method for remote access with DB2.
Some aspects of overhead processing, for instance, network processing, are not
under DB2 control. (Suggestions for tuning your network are in Part 3 of DB2
Installation Guide.)
BIND options
If appropriate for your applications, consider the following bind options to
improve performance:
v Use the DEFER(PREPARE) bind option, which can reduce the number of
messages that must be sent back and forth across the network. For more
information on using the DEFER(PREPARE) option, see Part 4 of DB2 Application
Programming and SQL Guide.
v Bind application plans and packages with ISOLATION(CS) to reduce contention
and message overhead.
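For example, a bind subcommand that combines these options might look like the
following sketch (the plan name and package list are illustrative):

   BIND PLAN(MYPLAN) -
        PKLIST(*.MYCOLL.*) -
        ISOLATION(CS) -
        DEFER(PREPARE)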
The requester can use both forms of blocking at the same time and with different
servers.
If an application is doing read-only processing and can use continuous block fetch,
the sequence goes like this:
1. The requester sends a message to open a cursor and begins fetching the block
of rows at the server.
2. The server sends back a block of rows and the requester begins processing the
first row.
3. The server continues to send blocks of rows to the requester, without further
prompting. The requester processes the second and later rows as usual, but
fetches them from a buffer on the requester’s system.
For private protocol, continuous block fetch uses one conversation for each open
cursor. Having a dedicated conversation for each cursor allows the server to
continue sending until all the rows are returned.
For DRDA, only one conversation is used, and it must be made available to the
other SQL statements that are in the application. Thus, the server usually sends
back a subset of all the rows. The number of rows that the server sends depends
on the following factors:
v The size of each row
v The number of extra blocks that are requested by the requesting system
compared to the number of extra blocks that the server will return
For a DB2 UDB for z/OS requester, the EXTRA BLOCKS REQ field on
installation panel DSNTIP5 determines the maximum number of extra blocks
requested. For a DB2 UDB for z/OS server, the EXTRA BLOCKS SRV field on
installation panel DSNTIP5 determines the maximum number of extra blocks
allowed.
| Example: Suppose that the requester asks for 100 extra query blocks and that the
| server allows only 50. The server returns no more than 50 extra query blocks.
| The server might choose to return fewer than 50 extra query blocks for any
| number of reasons that DRDA allows.
v Whether continuous block fetch is enabled, and the number of extra rows that
the server can return if it regulates that number.
To enable continuous block fetch for DRDA and to regulate the number of extra
rows sent by a DB2 UDB for z/OS server, you must use the OPTIMIZE FOR n
ROWS clause on your SELECT statement. See “Optimizing for very large results
sets for DRDA” on page 974 for more information.
If you want to use continuous block fetch for DRDA, have the application fetch all
the rows of the cursor before doing any other SQL. Fetching all the rows first
avoids tying up the single conversation, which the other SQL statements in the
application must share.
Limited block fetch: Limited block fetch guarantees the transfer of a minimum
amount of data in response to each request from the requesting system. With
limited block fetch, a single conversation is used to transfer messages and data
between the requester and server for multiple cursors. Processing at the requester
and server is synchronous. The requester sends a request to the server, which
causes the server to send a response back to the requester. The server must then
wait for another request to tell it what should be done next.
Block fetch with scrollable cursors for DRDA: When a DB2 UDB for z/OS
requester uses a scrollable cursor to retrieve data from a DB2 UDB for z/OS server,
the following conditions are true:
v The requester never requests more than 64 rows in a query block, even if more
rows fit in the query block. In addition, the requester never requests extra query
blocks. This is true even if the setting of field EXTRA BLOCKS REQ in the
DISTRIBUTED DATA FACILITY PANEL 2 installation panel on the requester
allows extra query blocks to be requested.
v The requester discards rows of the result table if the application does not use
those rows.
Example: If the application fetches row n and then fetches row n+2, the
requester discards row n+1.
The application gets better performance for a blocked scrollable cursor if it
mostly scrolls forward, fetches most of the rows in a query block, and avoids
frequent switching between FETCH ABSOLUTE statements with negative and
positive values.
v If the scrollable cursor does not use block fetch, the server returns one row for
each FETCH statement.
LOB data and its effect on block fetch for DRDA: For a non-scrollable blocked
cursor, the server sends all the non-LOB data columns for a block of rows in one
message, including LOB locator values. As each row is fetched by the application,
the requester obtains the non-LOB data columns directly from the query block. If
the row contains non-null and non-zero length LOB values, those values are
retrieved from the server at that time. This behavior limits the impact to the
network by pacing the amount of data that is returned at any one time. If all LOB
data columns are retrieved into LOB locator host variables or if the row does not
contain any non-null or non-zero length LOB columns, then the whole row can be
retrieved directly from the query block.
For a scrollable blocked cursor, the LOB data columns are returned at the same
time as the non-LOB data columns. When the application fetches a row that is in
the block, a separate message is not required to get the LOB columns.
To use either limited or continuous block fetch, DB2 must determine that the
cursor is not used for updating or deleting. The easiest way to indicate that the
cursor does not modify data is to add the FOR FETCH ONLY or FOR READ
ONLY clause to the query in the DECLARE CURSOR statement as in the following
example:
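For illustration, a cursor that is declared on the sample employee table (the cursor
name is arbitrary):

   DECLARE THISEMP CURSOR FOR
     SELECT EMPNO, LASTNAME, WORKDEPT
     FROM DSN8810.EMP
     FOR FETCH ONLY;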
If you do not use FOR FETCH ONLY or FOR READ ONLY, DB2 still uses block
fetch for the query if the following conditions are true:
v The cursor is a non-scrollable cursor, and the result table of the cursor is
read-only. This applies to static and dynamic cursors except for read-only views.
(See Chapter 5 of DB2 SQL Reference for information about declaring a cursor as
read-only.)
v The cursor is a scrollable cursor that is declared as INSENSITIVE, and the result
table of the cursor is read-only.
v The cursor is a scrollable cursor that is declared as SENSITIVE, the result table
of the cursor is read-only, and the value of bind option CURRENTDATA is NO.
v The result table of the cursor is not read-only, but the cursor is ambiguous, and
the value of bind option CURRENTDATA is NO. A cursor is ambiguous when:
– It is not defined with the clauses FOR FETCH ONLY, FOR READ ONLY, or
FOR UPDATE OF.
– It is not defined on a read-only result table.
– It is not the target of a WHERE CURRENT clause on an SQL UPDATE or
DELETE statement.
– It is in a plan or package that contains the SQL statements PREPARE or
EXECUTE IMMEDIATE.
DB2 triggers block fetch for static SQL only when it can detect that no updates or
deletes are in the application. For dynamic statements, because DB2 cannot detect
what follows in the program, the decision to use block fetch is based on the
declaration of the cursor.
DB2 does not use continuous block fetch if the following conditions are true:
v The cursor is referred to in the statement DELETE WHERE CURRENT OF
elsewhere in the program.
v The cursor statement appears to be updatable at the requesting system.
(DB2 does not check whether the cursor references a view at the server that
cannot be updated.)
The following three tables summarize the conditions under which a DB2 server
uses block fetch.
Table 198 shows the conditions for a scrollable cursor that is not used to retrieve a
stored procedure result set.
Table 198. Effect of CURRENTDATA and isolation level on block fetch for a scrollable cursor
that is not used for a stored procedure result set

Isolation level  Cursor sensitivity  CURRENTDATA  Cursor type  Block fetch
CS, RR, or RS    INSENSITIVE         Yes          Read-only    Yes
                                     No           Read-only    Yes
                 SENSITIVE           Yes          Read-only    No
                                     Yes          Updatable    No
                                     Yes          Ambiguous    No
                                     No           Read-only    Yes
                                     No           Updatable    No
                                     No           Ambiguous    Yes
UR               INSENSITIVE         Yes          Read-only    Yes
                                     No           Read-only    Yes
                 SENSITIVE           Yes          Read-only    Yes
                                     No           Read-only    Yes
Table 199 shows the conditions for a scrollable cursor that is used to retrieve a
stored procedure result set.
Table 199. Effect of CURRENTDATA and isolation level on block fetch for a scrollable cursor
that is used for a stored procedure result set

Isolation level  Cursor sensitivity  CURRENTDATA  Cursor type  Block fetch
CS, RR, or RS    INSENSITIVE         Yes          Read-only    Yes
                                     No           Read-only    Yes
                 SENSITIVE           Yes          Read-only    No
                                     No           Read-only    Yes
Recommendation: Because the application SQL uses only one conversation, do not
try to do other SQL work until the entire answer set is processed. If the application
issues another SQL statement before the previous statement’s answer set has been
received, DDF must buffer them in its address space. You can buffer up to 10 MB
in this way.
Because specifying a large number of network blocks can saturate the network,
limit the number of blocks according to what your network can handle. You can
limit the number of blocks used for these large download operations. When the
client supports extra query blocks, DB2 chooses the smallest of the following
values when determining the number of query blocks to send:
v The number of blocks into which the number of rows (n) on the OPTIMIZE
clause will fit. For example, assume you specify 10000 rows for n, and the size of
each row that is returned is approximately 100 bytes. If the block size used is 32
KB (32768 bytes), the calculation is as follows:
(10000 * 100) / 32768 = 31 blocks
v The DB2 server value for the EXTRA BLOCKS SRV field on installation panel
DSNTIP5. The maximum value that you can specify is 100.
v The client’s extra query block limit, which is obtained from the DRDA
MAXBLKEXT parameter received from the client. When DB2 UDB for z/OS acts
as a DRDA client, you set this parameter at installation time with the EXTRA
| BLOCKS REQ field on installation panel DSNTIP5. The maximum value that
| you can specify is 100. DB2 Connect sets the MAXBLKEXT parameter to −1
| (unlimited).
If the client does not support extra query blocks, the DB2 server on z/OS
automatically reduces the value of n to match the number of rows that fit within a
DRDA query block.
For examples of performance problems that can occur from not using OPTIMIZE
FOR n ROWS when downloading large amounts of data, see Part 4 of DB2
Application Programming and SQL Guide.
Using OPTIMIZE FOR n ROWS: When you specify OPTIMIZE FOR n ROWS and
n is less than the number of rows that fit in the DRDA query block (default size on
z/OS is 32 KB), the DB2 server prefetches and returns only as many rows as fit
into the query block. For example, if the client application is interested in seeing
only one screen of data, specify OPTIMIZE FOR n ROWS, choosing a small
number for n, such as 3 or 4. The OPTIMIZE FOR n ROWS clause has no effect on
scrollable cursors.
| Using FETCH FIRST n ROWS ONLY: The FETCH FIRST n ROWS ONLY clause
| does not affect network blocking. If FETCH FIRST n ROWS ONLY is specified and
| OPTIMIZE FOR n ROWS is not specified, DB2 uses the FETCH FIRST value to
| optimize the access path. However, DRDA does not consider this value when it
| determines network blocking.
| When both the FETCH FIRST n ROWS ONLY clause and the OPTIMIZE FOR n
| ROWS clause are specified, the value for the OPTIMIZE FOR n ROWS clause is
| used for access path selection.
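For example, suppose that a statement specifies both clauses, as in the following
sketch (the FETCH FIRST value of 5 is illustrative):

   SELECT * FROM DSN8810.EMP
     FETCH FIRST 5 ROWS ONLY
     OPTIMIZE FOR 20 ROWS;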
| The OPTIMIZE FOR value of 20 rows is used for network blocking and access path
| selection.
When you use FETCH FIRST n ROWS ONLY, DB2 might use a fast implicit close.
Fast implicit close means that during a distributed query, the DB2 server
automatically closes the cursor when it prefetches the nth row if FETCH FIRST n
ROWS ONLY is specified or when there are no more rows to return. Fast implicit
close can improve performance because it can save an additional network
transmission between the client and the server.
DB2 uses fast implicit close when the following conditions are true:
v The query uses limited block fetch.
v The query retrieves no LOBs.
v The cursor is not a scrollable cursor.
v Either of the following conditions is true:
When you use FETCH FIRST n ROWS ONLY and DB2 does a fast implicit close,
the DB2 server closes the cursor after it prefetches n rows, or when there are no
more rows.
Serving system
For access using DB2 private protocol, the serving system is the DB2 system on
which the SQL is dynamically executed. For access using DRDA, the serving
system is the system on which your remotely bound package executes.
When DB2 is the server, it is a good idea to activate accounting trace class 7. This
provides accounting information at the package level, which can be very useful in
determining performance problems.
If your applications update data at other sites, turn on the statistics class 4 trace
and always keep it active. This statistics trace covers error situations surrounding
indoubt threads; it provides a history of events that might impact data availability
and data consistency.
DB2 accounting records are created separately at the requester and each server.
Events are recorded in the accounting record at the location where they occur.
When a thread becomes active, the accounting fields are reset. Later, when the
thread becomes inactive or is terminated, the accounting record is created.
Figure 116 on page 978 shows the relationship of the accounting class 1 and 2 times
and the requester and server accounting records. Figure 117 on page 979 and
Figure 118 on page 980 show the server and requester distributed data facility
blocks from the OMEGAMON accounting long trace.
Figure 116. Elapsed times in a DDF environment as reported by OMEGAMON. These times are valid for access that
uses either DRDA or private protocol (except as noted).
This figure is a simplified picture of the processes that go on in the serving system.
It does not show block fetch statements and is only applicable to a single row
retrieval.
The Class 2 processing time (the TCB time) at the requester does not include
processing time at the server. To determine the total Class 2 processing time, add
the Class 2 time at the requester to the Class 2 time at the server.
Likewise, add the getpage counts, prefetch counts, locking counts, and I/O counts
of the requester to the equivalent counts at the server. For private protocol, SQL
activity is counted at both the requester and server. For DRDA, SQL activity is
counted only at the server.
Figure 117. DDF block of a requester thread from an OMEGAMON accounting long trace
Figure 118. DDF block of a server thread from an OMEGAMON accounting long trace
The accounting distributed fields for each serving or requesting location are
collected from the viewpoint of this thread communicating with the other location
identified. For example, SQL sent from the requester is SQL received at the server.
Do not add together the distributed fields from the requester and the server.
Several fields in the distributed section merit specific attention. The number of
conversations is reported in several fields:
v The number of conversation allocations is reported as CONVERSATIONS
INITIATED (A).
v The number of conversation requests queued during allocation is reported as
CONVERSATIONS QUEUED (B).
v The number of successful conversation allocations is reported as
SUCCESSFULLY ALLOC.CONV (C).
v The number of times a switch was made from continuous block fetch to limited
block fetch is reported as CONT->LIM.BL.FTCH (D). This is only applicable to
access that uses DB2 private protocol.
You can use the difference between initiated allocations and successful allocations
to identify a session resource constraint problem. If the number of conversations
queued is high, or if the number of times a switch was made from continuous to
limited block fetch is high, you might want to tune VTAM to increase the number
of conversations. VTAM and network parameter definitions are important factors
in the performance of DB2 distributed processing. For more information, see VTAM
for MVS/ESA Network Implementation Guide.
Bytes sent, bytes received, messages sent, and messages received are recorded at
both the requester and the server. They provide information on the volume of data
transmitted. However, because of the way distributed SQL is processed for private
protocol, more bytes may be reported as sent than are reported as received.
To determine the percentage of the rows transmitted by block fetch, compare the
total number of rows sent to the number of rows sent in a block fetch buffer,
which is reported as MSG.IN BUFFER (E). The number of rows sent is reported
at the server, and the number of rows received is reported at the requester. Block
fetch can significantly affect the number of rows sent across the network.
The number of SQL statements bound for remote access is the number of
statements dynamically bound at the server for private protocol. This field is
maintained at the requester and is reported as STMT BOUND AT SER (F).
Because of the manner in which distributed SQL is processed, the number of rows
reported as sent might differ slightly from the number of rows reported as
received. However, a significantly lower number of rows received can indicate that
the application did not fetch the entire answer set. This is especially true for access
that uses DB2 private protocol.
Duration of an enclave
“Using threads in INACTIVE MODE for DRDA-only connections” on page 702
describes the difference between threads that are always active and those that can
be pooled. If the thread is always active, the duration of the thread is the duration
of the enclave. If the thread can be pooled, the following conditions determine the
duration of an enclave:
v If the associated package is bound with KEEPDYNAMIC(NO), or there are
open held cursors, or there are active declared temporary tables, the duration of
the enclave is the period during which the thread is active.
v If the associated package is bound with KEEPDYNAMIC(YES), and no held
cursors or active declared temporary tables exist, and only
KEEPDYNAMIC(YES) keeps the thread from being pooled, the duration of the
enclave is the period from the beginning to the end of the transaction.
While a thread is pooled, such as during think time, it is not using an enclave.
Therefore, the SMF 72 record does not report inactive periods.
An ACTIVE MODE thread is treated as a single enclave from the time it is created
until the time it is terminated. This means that the entire life of the database access
thread is reported in the SMF 72 record, regardless of whether SQL work is
actually being processed. Figure 119 on page 982 contrasts the two types of threads.
Figure 119. Contrasting ACTIVE MODE threads and POOLED MODE threads
Queue time: Note that the information that is reported back to RMF includes queue
time. This particular queue time includes waiting for a new or existing thread to
become available.
| Each enclave contributes its data to one type 72 record for the service class and to
| zero or one (0 or 1) type 72 records for the report class. You can use WLM
| classification rules to separate different enclaves into different service or report
| classes. Separating the enclaves in this way enables you to understand the DDF
| work better.
Stored procedures that were created in a release of DB2 prior to Version 8 can run
in a DB2-established stored procedures address space. For information about a
DB2-established address space and how it compares to WLM-established address
spaces, see “Comparing the types of stored procedure address spaces” on page
989.
Each task control block that runs in a WLM-established stored procedures address
space uses approximately 200 KB below the 16-MB line. DB2 needs this storage for
stored procedures and user-defined functions because you can create both main
programs and subprograms, and DB2 must create an environment for each.
Dynamically extending load libraries: Use partitioned data sets extended (PDSEs)
for load libraries that contain stored procedures. Using PDSEs can eliminate the
need to stop and start the stored procedures address space because of growth of
the load libraries. If a load library grows from additions or replacements, the
library might have to be extended.
If you use PDSEs for the load libraries, the new extent information is dynamically
updated, and you do not need to stop and start the address space. If PDSs are
used, load failures can occur because the new extent information is not available.
Figure 120. WLM panel to create an application environment. You can also use the variable
&IWMSSNM for the DB2SSN parameter (DB2SSN=&IWMSSNM). This variable represents
the name of the subsystem for which you are starting this address space. This variable is
useful for using the same JCL procedure for multiple DB2 subsystems.
The total cost of a table function consists of the following three components:
v The initialization cost that results from the first call processing
v The cost that is associated with acquiring a single row
v The final call cost that performs the clean up processing
These costs, though, are not known to DB2 when I/O costs are added to the CPU
cost.
To assist DB2 in determining the cost of user-defined table functions, you can use
four fields in SYSIBM.SYSROUTINES. Use the following fields to provide cost
information:
v IOS_PER_INVOC for the estimated number of I/Os per row
v INSTS_PER_INVOC for the estimated number of instructions
v INITIAL_IOS for the estimated number of I/Os performed the first and last time
the function is invoked
v INITIAL_INSTS for the estimated number of instructions for the first and last
time the function is invoked
These values, along with the CARDINALITY value of the table being accessed, are
used by DB2 to determine the cost. The results of the calculations can influence
such things as the join sequence for a multi-table join and the cost estimates
generated for and used in predictive governing.
Determine values for the four fields by examining the source code for the table
function. Estimate the I/Os by examining the code executed during the FIRST call
and FINAL call. Look for the code executed during the OPEN, FETCH, and
CLOSE calls. The costs for the OPEN and CLOSE calls can be amortized over the
expected number of rows returned. Estimate the I/O cost by providing the number
of I/Os that will be issued. Include the I/Os for any file access.
Calculate the instruction cost by counting the number of high level instructions
executed in the user-defined table function and multiplying it by a factor of 20. For
assembler programs, the instruction cost is the number of assembler instructions.
If SQL statements are issued within the user-defined table function, use DB2
Estimator to determine the number of instructions and I/Os for the statements.
Examining the JES job statistics for a batch program doing equivalent functions can
also be helpful. For all fields, a precise number of instructions is not required.
Because DB2 already accounts for the costs of invoking table functions, these costs
should not be included in the estimates.
Example: The following statement shows how these fields can be updated. The
authority to update is the same authority as that required to update any catalog
statistics column.
UPDATE SYSIBM.SYSROUTINES SET
   IOS_PER_INVOC = 0.0,
   INSTS_PER_INVOC = 4.5E3,
   INITIAL_IOS = 2.0,
   INITIAL_INSTS = 1.0E4,
   CARDINALITY = 5E3
WHERE
   SCHEMA = 'SYSADM' AND
   SPECIFICNAME = 'FUNCTION1' AND
   ROUTINETYPE = 'F';
The accounting report on the server has several fields that specifically relate to
stored procedures processing, as shown in Figure 121 on page 987.
Descriptions of fields:
v The part of the total CPU time that was spent satisfying stored procedures
requests is indicated in A.
| v The amount of time spent waiting for a stored procedure to be scheduled and
| the time that is needed to return control to DB2 after the stored procedure has
| completed is indicated in B.
v The number of calls to stored procedures is indicated in C.
v The number of times a stored procedure timed out waiting to be scheduled is
shown in D.
What to do for excessive timeouts or wait time: If you have excessive wait time
(B) or timeouts (D) for user-defined functions or stored procedures, the possible
causes include:
| v The goal of the service class that is assigned to the WLM stored procedure’s
| address space, as it was initially started, is not high enough. The address space
| uses this goal to honor requests to start processing stored procedures.
v The priority of the service class that is running the stored procedure is not high
enough.
v The application environment is not available. Use the z/OS command
DISPLAY WLM,APPLENV=applenv to check its status. If the application
environment is quiesced, WLM does not start any address spaces for that
environment; CALL statements are queued or rejected.
Accounting for nested activities
The accounting class 1 and class 2 CPU and elapsed times for triggers, stored
procedures, and user-defined functions are accumulated in separate fields and
exclude any time accumulated in other nested activity. These CPU and elapsed
times are accumulated for each category during the execution of each agent until
agent deallocation. Package accounting can be used to break out accounting data
for execution of individual stored procedures, user-defined functions, or triggers.
Figure 122 shows an agent that executes multiple types of DB2 nested activities.
Table 201 shows the formula used to determine time for nested activities.
Table 201. Sample for time used for execution of nested activities

| Count for                                      Formula                      Class
| Application elapsed                            T21-T0                       1
| Application task control block (TU)            T21-T0                       1
| Application in DB2 elapsed                     T2-T1 + T4-T3 + T20-T19      2
| Application in DB2 task control block (TU)     T2-T1 + T4-T3 + T20-T19      2
| Trigger in DB2 elapsed                         T6-T4 + T19-T18              2
| Trigger in DB2 task control block (TU)         T6-T4 + T19-T18              2
| Wait for STP time                              T7-T6 + T18-T17              3
| Stored procedure elapsed                       T11-T6 + T18-T16             1
| Stored procedure task control block (TU)       T11-T6 + T18-T16             1
| Stored procedure SQL elapsed                   T9-T8 + T11-T10 + T17-T16    2
| Stored procedure SQL task control block (TU)   T9-T8 + T11-T10 + T17-T16    2
| The total class 2 time is the total of the "in DB2" times for the application, trigger,
| stored procedure, and user-defined function. The class 1 "wait" times for the stored
| procedures and user-defined functions need to be added to the total class 3 times.
Table 202. Comparing WLM-established and DB2-established stored procedures (continued)

DB2-established: No ability to customize the environment.
WLM-established: Each address space is associated with a WLM application
environment that you specify. An application environment is an attribute that you
associate on the CREATE statement for the function or procedure. The
environment determines which JCL procedure is used to run a particular stored
procedure.
More information: “Assigning procedures and functions to WLM application
environments” on page 984

# DB2-established: Must run as a MAIN program.
# WLM-established: Can run as a MAIN or SUB program. SUB programs can run
# significantly faster, but the subprogram must do more initialization and cleanup
# processing itself rather than relying on Language Environment to handle that.
# More information: DB2 Application Programming and SQL Guide

DB2-established: You can access non-relational data, but that data is not included
in your SQL unit of work. It is a separate unit of work.
WLM-established: You can access non-relational data. If the non-relational data is
managed by RRS, the updates to that data are part of your SQL unit of work.
More information: DB2 Application Programming and SQL Guide

DB2-established: Stored procedures access protected z/OS resources with the
authority of the stored procedures address space.
WLM-established: Procedures or functions can access protected z/OS resources
with one of three authorities, as specified on the SECURITY option of the CREATE
FUNCTION or CREATE PROCEDURE statement:
v The authority of the WLM-established address space (SECURITY=DB2)
v The authority of the invoker of the stored procedure or user-defined function
(SECURITY=USER)
v The authority of the definer of the stored procedure or user-defined function
(SECURITY=DEFINER)
More information: DB2 Administration Guide
Part 6. Appendixes
Most of the examples in this book refer to the tables described in this appendix. As
a group, the tables include information that describes employees, departments,
projects, and activities, and make up a sample application that exemplifies most of
the features of DB2. The sample storage group, databases, table spaces, tables, and
views are created when you run the installation sample jobs DSNTEJ1 and
DSNTEJ7. DB2 sample objects that include LOBs are created in job DSNTEJ7. All
other sample objects are created in job DSNTEJ1. The CREATE INDEX statements
for the sample tables are not shown here; they, too, are created by the DSNTEJ1
and DSNTEJ7 sample jobs.
The activity table is a parent table of the project activity table, through a foreign
key on column ACTNO.
The table, shown in Table 207 on page 997, resides in table space
DSN8D81A.DSN8S81D and is created with the following statement:
CREATE TABLE DSN8810.DEPT
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(36) NOT NULL,
MGRNO CHAR(6) ,
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16) ,
PRIMARY KEY (DEPTNO) )
IN DSN8D81A.DSN8S81D
CCSID EBCDIC;
Because the table is self-referencing, and also is part of a cycle of dependencies, its
foreign keys must be added later with these statements:
ALTER TABLE DSN8810.DEPT
FOREIGN KEY RDD (ADMRDEPT) REFERENCES DSN8810.DEPT
ON DELETE CASCADE;
The LOCATION column contains nulls until sample job DSNTEJ6 updates this
column with the location name.
The table shown in Table 210 on page 999 and Table 211 on page 1000 resides in
the partitioned table space DSN8D81A.DSN8S81E. Because it has a foreign key
referencing DEPT, that table and the index on its primary key must be created first.
Then EMP is created with the following statement:
CREATE TABLE DSN8810.EMP
(EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) CONSTRAINT NUMBER CHECK
(PHONENO >= '0000' AND
PHONENO <= '9999') ,
HIREDATE DATE ,
JOB CHAR(8) ,
EDLEVEL SMALLINT ,
SEX CHAR(1) ,
BIRTHDATE DATE ,
SALARY DECIMAL(9,2) ,
BONUS DECIMAL(9,2) ,
COMM DECIMAL(9,2) ,
PRIMARY KEY (EMPNO) ,
FOREIGN KEY RED (WORKDEPT) REFERENCES DSN8810.DEPT
ON DELETE SET NULL )
EDITPROC DSN8EAE1
IN DSN8D81A.DSN8S81E
CCSID EBCDIC;
Table 208 shows the content of the columns. The table has a check constraint,
NUMBER, which checks that the phone number is in the numeric range 0000 to
9999.
Table 208. Columns of the employee table
Column Column Name Description
1 EMPNO Employee number (the primary key)
2 FIRSTNME First name of employee
3 MIDINIT Middle initial of employee
4 LASTNAME Last name of employee
5 WORKDEPT ID of department in which the employee works
6 PHONENO Employee telephone number
7 HIREDATE Date of hire
8 JOB Job held by the employee
9 EDLEVEL Number of years of formal education
Table 210 and Table 211 on page 1000 show the content of the employee table:
Table 210. Left half of DSN8810.EMP: employee table. Note that a blank in the MIDINIT column is an actual value of
" " rather than null.
EMPNO FIRSTNME MIDINIT LASTNAME WORKDEPT PHONENO HIREDATE
DB2 requires an auxiliary table for each LOB column in a table. These statements
define the auxiliary tables for the three LOB columns in
DSN8810.EMP_PHOTO_RESUME:
CREATE AUX TABLE DSN8810.AUX_BMP_PHOTO
IN DSN8D81L.DSN8S81M
STORES DSN8810.EMP_PHOTO_RESUME
COLUMN BMP_PHOTO;
Table 213 shows the indexes for the employee photo and resume table:
Table 213. Indexes of the employee photo and resume table
Name On Column Type of Index
DSN8810.XEMP_PHOTO_RESUME EMPNO Primary, ascending
Table 214 shows the indexes for the auxiliary tables for the employee photo and
resume table:
Table 214. Indexes of the auxiliary tables for the employee photo and resume table
Name On Table Type of Index
DSN8810.XAUX_BMP_PHOTO DSN8810.AUX_BMP_PHOTO Unique
DSN8810.XAUX_PSEG_PHOTO DSN8810.AUX_PSEG_PHOTO Unique
DSN8810.XAUX_EMP_RESUME DSN8810.AUX_EMP_RESUME Unique
The table is a parent table of the project table, through a foreign key on column
RESPEMP.
The table resides in database DSN8D81A. Because it has foreign keys referencing
DEPT and EMP, those tables and the indexes on their primary keys must be
created first. Then PROJ is created with the following statement:
CREATE TABLE DSN8810.PROJ
(PROJNO CHAR(6) PRIMARY KEY NOT NULL,
PROJNAME VARCHAR(24) NOT NULL WITH DEFAULT
'PROJECT NAME UNDEFINED',
DEPTNO CHAR(3) NOT NULL REFERENCES
DSN8810.DEPT ON DELETE RESTRICT,
RESPEMP CHAR(6) NOT NULL REFERENCES
DSN8810.EMP ON DELETE RESTRICT,
PRSTAFF DECIMAL(5, 2) ,
PRSTDATE DATE ,
PRENDATE DATE ,
MAJPROJ CHAR(6))
IN DSN8D81A.DSN8S81P
CCSID EBCDIC;
Because the table is self-referencing, the foreign key for that constraint must be
added later with the following statement:
ALTER TABLE DSN8810.PROJ
FOREIGN KEY RPP (MAJPROJ) REFERENCES DSN8810.PROJ
ON DELETE CASCADE;
The table is a parent table of the employee to project activity table, through a
foreign key on columns PROJNO, ACTNO, and EMSTDATE. It is a dependent of:
v The activity table, through its foreign key on column ACTNO
v The project table, through its foreign key on column PROJNO
The table resides in database DSN8D81A. Because it has foreign keys referencing
EMP and PROJACT, those tables and the indexes on their primary keys must be
created first. Then EMPPROJACT is created with the following statement:
CREATE TABLE DSN8810.EMPPROJACT
(EMPNO CHAR(6) NOT NULL,
PROJNO CHAR(6) NOT NULL,
ACTNO SMALLINT NOT NULL,
EMPTIME DECIMAL(5,2) ,
EMSTDATE DATE ,
EMENDATE DATE ,
FOREIGN KEY REPAPA (PROJNO, ACTNO, EMSTDATE)
REFERENCES DSN8810.PROJACT
ON DELETE RESTRICT,
FOREIGN KEY REPAE (EMPNO) REFERENCES DSN8810.EMP
ON DELETE RESTRICT)
IN DSN8D81A.DSN8S81P
CCSID EBCDIC;
Table 220 shows the indexes for the employee to project activity table:
Table 220. Indexes of the employee to project activity table
Name On Columns Type of Index
DSN8810.XEMPPROJACT1 PROJNO, ACTNO, Unique, ascending
EMSTDATE, EMPNO
DSN8810.XEMPPROJACT2 EMPNO Ascending
[Figure: Relationships among the sample tables DEPT, EMP, EMP_PHOTO_RESUME,
ACT, PROJ, PROJACT, and EMPPROJACT, showing the CASCADE, SET NULL,
and RESTRICT delete rules that connect them.]
The following SQL statements are used to create the sample views:
[Figure 143. Relationship of the sample databases and table spaces: table space
DSN8SvrD for the department table, DSN8SvrE for the employee table, separate
LOB table spaces for the employee photo and resume table, DSN8SvrP for
common programming tables, and table spaces for other application tables.]
In addition to the storage group and databases shown in Figure 143, the storage
group DSN8G81U and database DSN8D81U are created when you run DSNTEJ2A.
Databases
The default database, created when DB2 is installed, is not used to store the
| sample application data. DSN8D81P is the database that is used for tables that are
| related to programs. The remainder of the databases are used for tables that are
| related to applications. They are defined by the following statements:
CREATE DATABASE DSN8D81A
STOGROUP DSN8G810
BUFFERPOOL BP0
CCSID EBCDIC;
Table spaces
The following table spaces are explicitly defined by statements like the one
shown below. The table spaces that are not explicitly defined are created
implicitly in the DSN8D81A database, using the default space attributes.
CREATE TABLESPACE DSN8S81D
IN DSN8D81A
USING STOGROUP DSN8G810
PRIQTY 20
SECQTY 20
ERASE NO
LOCKSIZE PAGE LOCKMAX SYSTEM
BUFFERPOOL BP0
CLOSE NO
CCSID EBCDIC;
DB2 provides installation-wide exit points to routines that you provide. These exit
points are described in the following sections:
v “Connection routines and sign-on routines”
v “Access control authorization exit routine” on page 1025
v “Edit routines” on page 1040
v “Validation routines” on page 1043
v “Date and time routines” on page 1046
v “Conversion procedures” on page 1049
v “Field procedures” on page 1052
v “Log capture routines” on page 1064
v “Routines for dynamic plan selection in CICS” on page 1066
v “Routine for CICS transaction invocation stored procedure” on page 1069
v “General considerations for writing exit routines” on page 1069
v “Row formats for edit and validation routines” on page 1072
v “RACF access control module” on page 1076
If your installation has a connection exit routine and you plan to use CONNECT
with the USER/USING clause, you should examine your exit routine and take the
following into consideration. DB2 does not update the following to reflect the user
ID and password that are specified in the USER/USING clause of the CONNECT
statement:
v The security-related control blocks that are normally associated with the thread
v The address space that your exit can access
For a general overview of the roles of exit routines in assigning authorization IDs,
see Chapter 11, “Controlling access to a DB2 subsystem,” on page 231. That chapter
explains how to implement the security features that you want by assigning
identifiers through RACF, or some similar program, and by using the sample
connection routines and sign-on routines that are provided by IBM.
This section describes the interfaces for those routines and the functions that they
provide. If you want to use secondary authorization IDs, you must replace the
default routines with the sample routines, or with routines of your own.
“General considerations for writing exit routines” on page 1069 applies to these
routines. One exception to the description of execution environments is that the
routines execute in non-cross-memory mode.
You can use an ALIAS statement of the linkage editor to provide the entry-point
name.
You can combine both routines into one CSECT and load module if you wish, but
the module must include both entry points, DSN3@ATH and DSN3@SGN. Use
standard assembler and linkage editor control statements to define the entry
points. DB2 loads the module twice at startup, by issuing the z/OS LOAD macro
first for entry point DSN3@ATH and then for entry point DSN3@SGN. However,
because the routines are reentrant, only one copy of each remains in virtual
storage.
Change required for some CICS users: You must change the sample sign-on exit
routine (DSN3SSGN) before assembling and using it, if the following conditions
are true:
v You attach to DB2 with an AUTH parameter in the RCT other than
AUTH=GROUP.
v You have the RACF list-of-groups option active.
v You have transactions whose initial primary authorization ID is not defined to
RACF.
To change the sample sign-on exit routine (DSN3SSGN), perform the following
steps:
1. Locate the following statement in DSN3SSGN as a reference point:
SSGN035 DS 0H BLANK BACKSCAN LOOP REENTRY
2. Locate the following statement, which comes after the reference point:
B SSGN037 ENTIRE NAME IS BLANK, LEAVE
3. Replace the statement with the following statement:
B SSGN090 NO GROUP NAME... BYPASS RACF CHECK
By changing the statement, you avoid an abend with SQLCODE -922. The routine
with the new statement provides no secondary IDs unless you use AUTH=GROUP.
For instructions on controlling the IDs that are associated with connection requests,
see “Processing connections” on page 232. For instructions on controlling the IDs
associated with sign-on requests, see “Processing sign-ons” on page 236.
Register 1 points to the EXPL parameter list, which contains the address and
length (8192 bytes) of the work area, the connection name, the connection type,
the location name, the LU name, and the network name, and points to the
authorization ID list and the session variable structure. The authorization ID
list contains the primary ID, a reserved field, an ACEE address of zero, and
space for the secondary ID list (maximum number of entries * 8 bytes). The
session variable structure contains the maximum and actual numbers of entries
in the session variable array and a pointer to that array.
Figure 144. How a connection or sign-on parameter list points to other information
Important: If your identifier does not meet the 8-character criteria, the request
abends. Therefore, you should add blanks to the end of short identifiers to ensure
that they meet the criteria.
If the values that are returned are not blank, DB2 interprets them in the following
ways:
v The primary ID becomes the primary authorization ID.
v The list of secondary IDs, down to the first blank entry or to a maximum of 245
entries, becomes the list of secondary authorization IDs. The space allocated for
the secondary ID list is only large enough to contain the maximum number of
authorization IDs. This number is in field AIDLSCNT.
Attention: If you allow more than 245 secondary authorization IDs, abends
and storage overlays can occur.
v The SQL ID is checked to see if it is the same as the primary or one of the
secondary IDs. If it is not, the connection or sign-on process abends. Otherwise,
the validated ID becomes the current SQL ID.
If the returned value of the primary ID is blank, DB2 takes the following steps:
v In connection processing, the default ID that is defined when DB2 is installed
(UNKNOWN AUTHID on panel DSNTIPP) is substituted as the primary
authorization ID and the current SQL ID. The list of secondary IDs is set to
blanks.
v Sign-on processing abends. No default value exists for the primary ID.
Your routine must also set a return code in word 5 of the exit parameter list to
allow or deny access (field EXPLARC). By those means you can deny the
connection altogether. The code must have one of the values that are shown in
Table 225.
Table 225. Required return code in EXPLARC
Value Meaning
0 Access allowed; continue processing.
12 Access denied; terminate.
Both the sample connection routine (DSN3SATH) and the sample sign-on routine
have similar sections for setup, constants, and storage areas. Both routines set
values of the primary ID, the SQL ID, and the secondary IDs in three numbered
sections.
In the sample connection routine (DSN3SATH): The three sections of the sample
connection routine perform the following functions:
Section 1
Section 1 provides the same function as in the default connection routine. It
determines whether the first character of the input primary ID has a value that
is greater than blank (hex 40), and performs the following operations:
v If the first character is greater than hex 40, the value is not changed.
v If the first character is not greater than hex 40, the value is set according to
the following rules:
– If the request is from a TSO foreground address space, the primary ID is
set to the logon ID.
– If the request is not from a TSO foreground address space, the primary ID
is set to the job user ID from the JES job control table.
– If no primary ID is located, Section 2 is bypassed.
Section 2
At the beginning of Section 2, you can restore one commented-out instruction,
which then truncates the primary authorization ID to 7 characters. (The
instruction is identified by comments in the code.)
Section 2 next tests RACF options and makes the following changes in the list
of secondary IDs, which is initially blank:
v If RACF is not active, the list remains blank.
v If the list of groups option is not active, but an ACEE exists, the connected
group name is copied as the only secondary ID.
v If the list of groups option is active, the list of group names from the
ICHPCGRP block is copied into AIDLSEC in the authorization ID list.
In the sample sign-on routine (DSN3SSGN): The three sections of the sample
sign-on routine perform the following functions:
Section 1
Section 1 does not change the primary ID.
Section 2
Section 2 sets the SQL ID to the value of the primary ID.
Section 3
Section 3 tests RACF options and makes the following changes in the list of
secondary IDs, which is initially blank:
v If RACF is not active, the list remains blank.
v If the list of groups option is active, section 3 attempts to find an existing
ACEE from which to copy the authorization ID list.
– If AIDLACEE contains a valid ACEE, it is used. Otherwise, the routine
looks for a valid ACEE chained from the TCB or from the ASXB or, if no
usable ACEE exists, issues RACROUTE to have RACF build an ACEE
structure for the primary ID. The list of group names is then copied from
the ACEE structure into the secondary authorization list.
– If the exit issued RACROUTE to build an ACEE, another RACROUTE
macro is issued and the structure is deleted.
v If a list of secondary authorization IDs has not been built, and AIDLSAPM is
not zero, the data that is pointed to by AIDLSAPM is copied into AIDLSEC.
The sample sign-on exit routine can issue the RACF RACROUTE macro with the
default option SMC=YES. If another product issues RACROUTE with SMC=NO, a
deadlock might occur.
Your routine can also enhance the performance of later authorization checking.
Authorization for dynamic SQL statements is checked first for the CURRENT
SQLID, then for the primary authorization ID, and then for the secondary
authorization IDs. If you know that a user's privilege most often comes from a
secondary authorization ID, then set the CURRENT SQLID to this secondary ID
within your exit routine.
Diagnostics for connection exit routines and sign-on exit routines: The connection
(identify) recovery routine and the sign-on recovery routine provide diagnostics for
the corresponding exit routines. The diagnostics are produced only when the
abend occurs in the exit routine. The following diagnostics are available:
Dump title
The name of the failing module is “DSN3@ATH” for a connection exit or
“DSN3@SGN” for a sign-on exit.
| The session variable structure: The connection exit routine and the sign-on exit
| routine point to the session variable structure (DSNDSVS). DSNDSVS specifies the
| maximum number of entries in the session array, the actual number of entries in
| the session array, and a pointer to the session variable array. The default value for
| the actual number of session variables is zero.
| Defining session variables: To define session variables, use the session variable
| array (DSNDSVA) to list up to 10 session variables as name and value pairs. The
| session variables that you establish in the connection exit routine and the sign-on
| exit routine are defined in the SESSION schema. The values that the exit routine
| supplies in the session variable array replace the previous values.
| Example: The session variable array that is shown in Table 228 lists six session
| variables.
| Table 228. Sample session variable array
| Name Value
| default_database DATAXM
| default_driver PZN4Y7
| location Kyoto
| member_of GROUP_42
| filename report.txt
| account_number A1-X142783
| The unqualified names are defined as VARCHAR(128), and the values are defined
| as VARCHAR(255). The exit routines must provide these values in Unicode CCSID
| 1208.
| If you change from DB2 authorization to RACF access control, you must
| change to RACF methods for some authorization techniques, and you must
| understand how DB2 and RACF work together. Expect to make the following
| changes when you implement RACF access control:
| v Plan to use RACF facilities (such as groups and patterns) more.
| v Plan to use patterns instead of individual item access profiles and
| permissions.
| v Plan to use RACF groups instead of secondary authorization IDs, which
| are not implemented in RACF. OWNER(secondaryID) generally must be a
| valid group.
# v Find an alternative to BINDAGENT. BINDAGENT is based on secondary
# authorization IDs, which are not implemented in RACF. BINDAGENT
# provides a relatively weak security separation. Customers have found
# alternatives.
| v Understand how SET CURRENT SQLID works with RACF. SET CURRENT
| SQLID can set a qualifier, but does not change authorization (see the
| example after this list).
| v Know that authorizations are not dropped when objects are dropped or
| renamed.
| v Be aware of the relationship between objects and revoked privileges. Plans
| and packages are not invalidated when authorizations are revoked. Views
| are not dropped when authorizations are revoked.
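A minimal sketch of that distinction (the SQLID value PAYROLL and the table
name are hypothetical):
SET CURRENT SQLID = 'PAYROLL';
CREATE TABLE EMP_AUDIT
(EMPNO CHAR(6) NOT NULL);
The unqualified name resolves to PAYROLL.EMP_AUDIT because CURRENT SQLID
supplies the qualifier, but whether the CREATE statement is allowed is decided
by the RACF checks, not by the SET statement.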
DB2 provides an exit point that lets you provide your own access control
authorization exit routine, or lets RACF or an equivalent security system perform
DB2 authorization checking. Your routine specifies whether the authorization
checking should all be done by RACF only, or by both RACF and DB2. (Also, the
# routine can be called and still let all checking be performed by DB2.) For more
# information about how to use the routine that is provided, see DB2 RACF Access
# Control Module Guide.
When DB2 invokes the routine, it passes three possible functions to the routine:
v Initialization (DB2 startup)
v Authorization check
v Termination (DB2 shutdown)
When the exit routine is bypassed: In the following situations, the exit routine is
not called to check authorization:
# v The user has installation SYSADM or installation SYSOPR authority (where
# installation SYSOPR authority is sufficient to authorize the request). This
# authorization check is made strictly within DB2.
v DB2 security has been disabled. (You can disable DB2 security by specifying NO
on the USE PROTECTION field of installation panel DSNTIPP).
v Authorization has been cached from a prior check.
v In a prior invocation of the exit routine, the routine indicated that it should not
be called again.
v GRANT statements.
“General considerations for writing exit routines” on page 1069 applies to this
routine, but with the following exceptions to the description of execution
environments:
v The routine executes in non-cross-memory mode during initialization and
termination (XAPLFUNC of 1 or 3, described in Table 229 on page 1030).
v During authorization checking the routine can execute under a TCB or SRB in
cross-memory or non-cross-memory mode.
The source code for the default routine is in prefix.SDSNSAMP as DSNXSXAC. You
can use it to write your own exit routine. To assemble it, you must use Assembler
H.
# RACF provides a sample exit routine DSNXRXAC, which is shipped with DB2. It
# can be found in prefix.SDSNSAMP. For more information, see DB2 RACF Access
# Control Module Guide.
| Example: If you are not using external security in CICS (that is, SEC=NO is
| specified in the DFHSIT), CICS does not pass an ACEE to the CICS attachment
| facility.
When DB2 does not have an ACEE, it passes zeros in the XAPLACEE field. If this
happens, your routine can return a 4 in the EXPLRC1 field, and let DB2 handle the
authorization check.
# DB2 does not pass the ACEE address for IMS transactions. The ACEE address is
# passed for CICS transactions, if available.
# DB2 does pass the ACEE address when it is available for DB2 commands that are
# issued from a logged on z/OS console. DB2 does not pass the ACEE address for
# DB2 commands that are issued from a console that is not logged on, or for the
# START DB2 command, or commands issued automatically during DB2 startup.
| The only authorization ID that is available to check at run time is the primary
| authorization ID that runs the stored procedure package. If your security plan
| requires that you do not grant the EXECUTE privilege to all authorization IDs, the
| ID that runs the stored procedure package might not have it.
| However, you can ensure that the ID can run the package without granting the
| EXECUTE privilege on the stored procedure package to it. To do this, grant the
| privilege to the ID that binds the plan that calls the stored procedure package, and
| satisfy the following conditions:
| v The stored procedure definition must not specify COLLID.
| v The program that calls the stored procedure must be bound directly to the plan
| that calls it by using the MEMBER bind option and by specifying the name of
| the calling program. The stored procedure package must be referenced in the
| PKLIST as collection-id.* or collection-id.stored-procedure-package-id with the
| VALIDATE(BIND) option.
| v The SET CURRENT PACKAGESET statement must not be used before the stored
| procedure is called in the program.
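Under those conditions, the bind might look like this sketch (the plan,
program, and collection names are hypothetical):
BIND PLAN(PAYPLAN) MEMBER(PAYPROG) PKLIST(PAYCOLL.*) VALIDATE(BIND)
The EXECUTE privilege on the stored procedure package is granted only to the
ID that issues this BIND; IDs that run the plan then do not need that privilege
themselves.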
Dropping views
When a privilege that is required to create a view is revoked, the view is dropped.
Similar to the revocation of plan privileges, such an event is not communicated to
DB2 by the authorization checking routine.
If you want DB2 to drop the view when a privilege is revoked, you must use the
SQL statements GRANT and REVOKE.
The results of authorization checks on the EXECUTE privilege for packages and
routines are cached (assuming that package and routine authorization caching is
enabled on your system). If this privilege is revoked in the exit routine, the cached
information is not updated to reflect the revoke. You must use the GRANT
statement and the REVOKE statement to update the cached information.
If you use an access control authorization exit routine, some user-defined functions
that were not candidates for execution before the original BIND or REBIND of the
invoking plan or package might become candidates for execution during the
automatic rebind of the invoking plan or package. If a user-defined function is
invoked during an automatic rebind, and that user-defined function is invoked
from a trigger body and receives a transition table, the form of the invoked
function that DB2 uses for function selection includes only the columns of the
transition table that existed at the time of the original BIND or REBIND of the
package or plan for the invoking program.
| For the access control authorization exit routine to suppress unwanted error
| messages during the creation of materialized query tables, XAPLFSUP is turned
| on.
Register 1 points to the EXPL parameter list, which contains the address and
length (4096 bytes) of the work area, the return code (EXPLRC1), and the reason
code (EXPLRC2), and points to the XAPL authorization checking list.
Figure 145. How an authorization routine's parameter list points to other information
The work area (4096 bytes) is obtained once during the startup of DB2 and
released only when DB2 is shut down. The work area is shared by all invocations
of the exit routine.
XAPLOWNQ, XAPLREL1, and XAPLREL2 might further qualify the object or provide
additional information that can be used in determining authorization for
certain privileges. These privileges and the contents of XAPLOWNQ, XAPLREL1,
and XAPLREL2 are shown in Table 231.
# Table 231. Related information for certain privileges
#                              Object type
# Privilege                    (XAPLTYPE)  XAPLOWNQ                  XAPLREL1                 XAPLREL2
# 0263 (USAGE)                 E           Address of schema name    Address of distinct      Contains binary zeroes
#                                                                    type owner
# 0064 (EXECUTE)               F           Address of schema name    Address of user-defined  Contains binary zeroes
# 0265 (START)                                                       function owner
# 0266 (STOP)
# 0267 (DISPLAY)
# 0263 (USAGE)                 J           Address of schema name    Address of JAR owner     Contains binary zeroes
# 0064 (EXECUTE)               K           Address of collection ID  Contains binary zeroes   Contains binary zeroes
The data types and field lengths of the information shown in Table 231 on page
1034 are shown in Table 232 on page 1037.
Return codes during initialization: EXPLRC1 must have one of the values that are
shown in Table 233 during initialization.
Table 233. Required values in EXPLRC1 during initialization
Value Meaning
0 Initialization successful.
12 Unable to service request; don’t call exit again.
See “Exception processing” on page 1038 for an explanation of how the EXPLRC1
value affects DB2 processing.
Return codes during authorization check: Make sure that EXPLRC1 has one of the
values that are shown in Table 234 during the authorization check.
Table 234. Required values in EXPLRC1 during authorization check
Value Meaning
0 Access permitted.
4 Unable to determine; perform DB2 authorization checking.
8 Access denied.
12 Unable to service request; don’t call exit again.
See “Exception processing” for an explanation of how the EXPLRC1 value affects
DB2 processing. On authorization failures, the return code is included in the IFCID
0140 trace record.
Reason codes during authorization check: Field EXPLRC2 lets you put in any code
that would be of use in determining why the authorization check in the exit
routine failed. On authorization failures, the reason code is included in the IFCID
0140 trace record.
Exception processing
During initialization or authorization checking, DB2 issues diagnostic message
DSNX210I to the operator’s console if one of the following conditions occurs:
v The authorization exit returns a return code of 12 or an invalid return code.
v The authorization exit abnormally terminates.
Additional actions that DB2 performs depend on the reason code that the exit
returns during initialization. Table 236 on page 1039 summarizes these actions.
| Notes:
| 1. During initialization, DB2 sets a value of -1 to identify the default exit. The user exit
| routine should not set the reason code to -1.
| 2. During initialization, the task is DB2 startup. During authorization checking, the task is
| the application.
# 3. AEXITLI (authorization exit limit) can be updated online. Refer to SET SYSPARM in DB2
# Command Reference.
Edit routines
Edit routines are assigned to a table by the EDITPROC clause of CREATE TABLE.
An edit routine receives the entire row of the base table in internal DB2 format; it
can transform that row when it is stored by an INSERT or UPDATE SQL
statement, or by the LOAD utility. It also receives the transformed row during
retrieval operations and must change it back to its original form. Typical uses are
to compress the storage representation of rows to save space on DASD and to
encrypt the data.
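For example, an edit routine is assigned when the table is created (a sketch
with a hypothetical table; DSN8EAE1 is the sample edit routine described later
in this section):
CREATE TABLE MYDEPT.DOCS
(DOCID CHAR(10) NOT NULL,
DOCTEXT VARCHAR(400) NOT NULL)
EDITPROC DSN8EAE1;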
You cannot use an edit routine on a table that contains a LOB or a ROWID
column.
Your edit routine can encode the entire row of the table, including any index keys.
However, index keys are extracted from the row before the encoding is done;
therefore, index keys are stored in the index in edit-decoded form. Hence, for a table
with an edit routine, index keys in the table are edit-coded; index keys in the index
are not edit-coded.
The sample application contains a sample edit routine, DSN8EAE1. To print it, use
ISPF facilities, IEBPTPCH, or a program of your own. Or, assemble it and use the
assembly listing.
There is also a sample routine that does Huffman data compression, DSN8HUFF in
library prefix.SDSNSAMP. That routine not only exemplifies the use of the exit
parameters, it also has some potential use for data compression. If you intend to
use the routine in any production application, please pay particular attention to the
warnings and restrictions given as comments in the code. You might prefer to let
DB2 compress your data. For instructions, see “Compressing your data” on page
670.
“General considerations for writing exit routines” on page 1069 applies to edit
routines.
You cannot add an edit routine to a table that already exists: you must drop the
table and re-create it. Also, you cannot alter a table with an edit routine to add a
column. Again, you must drop the table and re-create it, and presumably also alter
the edit routine in some way to account for the new column.
The same edit routine is invoked to edit-decode a row whenever DB2 retrieves
one. On retrieval, it is invoked before any date routine, time routine, or field
procedure. If retrieved rows are sorted, the edit routine is invoked before the sort.
An edit routine is not invoked for a DELETE operation without a WHERE clause
that deletes an entire table in a segmented table space.
Use macro DSNDEDIT to get the starting address and row length for edit exits.
Add the row length to the starting address to get the first invalid address beyond
the end of the input buffer; your routine must not process any address as large as
that.
Figure 146 shows how the parameter list points to other row information.
Register 1 points to the EXPL parameter list, which contains the address and
length (256 bytes) of the work area, a reserved field, the return code, and the
reason code, and points to the edit parameter list. The edit parameter list
contains EDITCODE (the function to be performed), the length and address of the
input row, the length of the output row, and the address of a row description,
which gives the number of columns in the row (n), the row type, and the address
of the column list (data type, data attribute, and column name for each column).
Figure 146. How the edit exit parameter list points to row information. The address of the nth
column description is given by: RFMTAFLD + (n−1)×(FFMTE−FFMT); see “Parameter list for
row format descriptions” on page 1074.
If EDITCODE contains 4, the input row is in coded form. Your routine must
decode it.
In that case, EDITOLTH contains the maximum length of the record. As before,
“record” includes fields for the lengths of VARCHAR and VARGRAPHIC
columns, and for null indicators, but not the 6-byte record header.
In either case, put the result in the output area, pointed to by EDITOPTR, and put
the length of your result in EDITOLTH. The length of your result must not be
greater than the length of the output area, as given in EDITOLTH on invocation,
and your routine must not modify storage beyond the end of the output area.
Required return code: Your routine must also leave a return code in EXPLRC1, with
the meanings that are listed in Table 238:
Table 238. Required return code in EXPLRC1
Value Meaning
0 Function performed successfully.
Nonzero Function failed.
If the function fails, the routine might also leave a reason code in EXPLRC2. DB2
returns SQLCODE -652 (SQLSTATE ’23506’) to the application program and puts
the reason code in field SQLERRD(6) of the SQL communication area (SQLCA).
Validation routines
Validation routines are assigned to a table by the VALIDPROC clause of CREATE
TABLE and ALTER TABLE. A validation routine receives an entire row of a base
table as input, and can return an indication of whether or not to allow a following
INSERT, UPDATE, or DELETE operation. Typically, a validation routine is used to
impose limits on the information that can be entered in a table; for example,
allowable salary ranges, perhaps dependent on job category, for the employee
sample table.
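For example, a routine that checks salary values might be assigned to the
sample employee table as follows (the routine name EMPSALVL is hypothetical):
ALTER TABLE DSN8810.EMP
VALIDPROC EMPSALVL;
Specifying VALIDPROC NULL on ALTER TABLE removes the routine from the table.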
Although VALIDPROCs can be specified for a table that contains a LOB column,
the LOB values are not passed to the validation routine. The indicator column
takes the place of the LOB column.
The return code from a validation routine is checked for a 0 value before any
insert, update, or delete is allowed.
Use macro DSNDRVAL to get the starting address and row length for validation
exits. Add the row length to the starting address to get the first invalid address
beyond the end of the input buffer; your routine must not process any address as
large as that.
If the operation is not allowed, the routine might also leave a reason code in
EXPLRC2. DB2 returns SQLCODE -652 (SQLSTATE ’23506’) to the application
program and puts the reason code in field SQLERRD(6) of the SQL communication
area (SQLCA).
Figure 147 on page 1046 shows how the parameter list points to other information.
The parameter list contains the return code, the reason code, reserved fields,
the length and address of the input row to be validated, and the address of a
row description, which gives the number of columns in the row (n), the row
type, and the address of the column list (column length, data type, data
attribute, and column name for each column).
Figure 147. How a validation parameter list points to information. The address of the nth
column description is given by: RFMTAFLD + (n−1)×(FFMTE−FFMT); see “Parameter list for
row format descriptions” on page 1074.
You can have a date routine, a time routine, or both. These routines do not
apply to timestamps. Special rules apply if you execute queries at a remote DBMS
through the distributed data facility. For that case, see DB2 SQL Reference.
Also, replace the IBM-supplied exit routines, using CSECTs DSNXVDTX for a date
routine and DSNXVTMX for a time routine. The routines are loaded when DB2
starts.
To make the local date or time format the default for retrieval, set DATE
FORMAT or TIME FORMAT to LOCAL when installing DB2. That has the effect
that DB2 always takes the exit routine when you retrieve from a DATE or TIME
column. In our example, suppose that you want to retrieve dates in your local
format only occasionally; most of the time you use the USA format. Set DATE
FORMAT to USA.
The install parameters for LOCAL DATE LENGTH, LOCAL TIME LENGTH,
DATE FORMAT, and TIME FORMAT can also be updated after DB2 is installed.
For instructions, see Part 2 of DB2 Installation Guide. If you change a length
parameter, you may have to rebind applications.
On retrieval: A date or time routine can be invoked to change a value from ISO to
the locally-defined format when a date or time value is retrieved by a SELECT or
FETCH statement. If LOCAL is the default, the routine is always invoked unless
overridden by a precompiler option or by the CHAR function, as by specifying
CHAR(HIREDATE, ISO); that specification always retrieves a date in ISO format. If
LOCAL is not the default, the routine is invoked only when specifically called for
by CHAR, as in CHAR(HIREDATE, LOCAL); that always retrieves a date in the
format supplied by your date exit routine.
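For example, with DATE FORMAT set to USA, a query can still request the locally
defined format explicitly (using the sample employee table):
SELECT EMPNO, CHAR(HIREDATE, LOCAL)
FROM DSN8810.EMP;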
On retrieval, the exit is invoked after any edit routine or DB2 sort. A date or time
routine is not invoked for a DELETE operation without a WHERE clause that
deletes an entire table in a segmented table space.
If the function code is 8, the input value is in ISO, in the area pointed to by
DTXPISO. Your routine must change it to your local format, and put the result in
the area pointed to by DTXPLOC.
Your routine must also leave a return code in EXPLRC1, a 4-byte integer and the
third word of the EXPL area. The return code can have the meanings that are
shown in Table 243 on page 1049.
Figure 148 shows how the parameter list points to other information.
Register 1 points to the EXPL parameter list, which contains the address and
length (512 bytes) of the work area and the return code, and points to the
parameter list. The parameter list contains the address of the function code
(the function to be performed), the address of the format length, and the
address of the LOCAL value.
Figure 148. How a date or time parameter list points to other information
Conversion procedures
A conversion procedure is a user-written exit routine that converts characters from
one coded character set to another coded character set. (For a general discussion of
character sets, and definitions of those terms, see Appendix A of DB2 Installation
Guide.) In most cases, any conversion that is needed can be done by routines
provided by IBM. The exit for a user-written routine is available to handle
exceptions.
DB2 does not use the following columns, but checks them for the allowable values
listed. Values you insert can be used by your routine in any way. If you insert no
value in one of these columns, DB2 inserts the default value listed.
ERRORBYTE Any character, or null. The default is null.
SUBBYTE Any character not equal to the value of
ERRORBYTE, or null. The default is null.
TRANSTAB Any character string of length 256 or the empty
string. The default is an empty string.
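Based on that catalog description, assigning a conversion procedure presumably
amounts to inserting a row into SYSIBM.SYSSTRINGS; a sketch (the CCSID pair,
the TRANSTYPE value, and the procedure name MYXLATE are illustrative
assumptions):
INSERT INTO SYSIBM.SYSSTRINGS
(INCCSID, OUTCCSID, TRANSTYPE, TRANSPROC, IBMREQD)
VALUES (850, 500, 'SS', 'MYXLATE', 'N');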
The length of the work area pointed to by the exit parameter list is generally 512
bytes. However, if the string to be converted is ASCII MIXED data (the value of
TRANSTYPE in the row from SYSSTRINGS is PM or PS), then the length of the
work area is 256 bytes, plus the length attribute of the string.
The string value descriptor: The descriptor has the format shown in Table 244 on
page 1051.
The row from SYSSTRINGS: The row copied from the catalog table
SYSIBM.SYSSTRINGS is in the standard DB2 row format described in “Row
formats for edit and validation routines” on page 1072. The fields ERRORBYTE
and SUBBYTE each include a null indicator. The field TRANSTAB is of varying
length and begins with a 2-byte length field.
Your procedure must also set a return code in field EXPLRC1 of the exit parameter
list.
With the two codes that are shown in Table 245, provide the converted string in
FPVDVALE.
Table 245. Codes for the converted string in FPVDVALE
Code Meaning
0 Successful conversion
4 Conversion with substitution
For the remaining codes that are shown in Table 246, DB2 does not use the
converted string.
Table 246. Remaining codes for the FPVDVALE
Code Meaning
8 Length exception
12 Invalid code point
16 Form exception
20 Any other error
24 Invalid CCSID
For an invalid code point (code 12), place the 1- or 2-byte code point in field
EXPLRC2 of the exit parameter list.
Return a form exception (code 16) for EBCDIC MIXED data when the source string
does not conform to the rules for MIXED data.
In the case of a conversion error, DB2 sets the SQLERRMC field of the SQLCA to
HEX(EXPLRC1) CONCAT X'FF' CONCAT HEX(EXPLRC2).
Figure 149 shows how the parameter list points to other information.
Register 1 points to the EXPL parameter list, which contains the address and
length of the work area, a reserved field, the return code, and the invalid
code, and points to the string value list and to the copy of the row from
SYSIBM.SYSSTRINGS. The string value descriptor contains the data type of the
string, the maximum string length, the string length, and the string value.
Figure 149. How a conversion procedure's parameter list points to other information
Field procedures
Field procedures are assigned to a table by the FIELDPROC clause of CREATE
TABLE and ALTER TABLE. A field procedure is a user-written exit routine to
transform values in a single short-string column. When values in the column are
changed, or new values inserted, the field procedure is invoked for each value, and
can transform that value (encode it) in any way. The encoded value is then stored.
When values are retrieved from the column, the field procedure is invoked for
each value, which is encoded, and must decode it back to the original string value.
“General considerations for writing exit routines” on page 1069 applies to field
procedures.
A user-defined data type can be a valid field if the source type of the data type is a
short string column that has a null default value. DB2 casts the value of the
column to the source type before it passes it to the field procedure.
If you plan to use a field procedure, specify it when you create the table. In
operation, the procedure is loaded on demand. You cannot add a field procedure
to an existing column of a table; you can, however, use ALTER TABLE to add to an
existing table a new column that uses a field procedure.
The optional parameter list that follows the procedure name is a list of constants,
enclosed in parentheses, called the literal list. The literal list is converted by DB2
into a data structure called the field procedure parameter value list (FPPVL). That
structure is passed to the field procedure during the field-definition operation. At
that time, the procedure can modify it or return it unchanged. The output form of
the FPPVL is called the modified FPPVL; it is stored in the DB2 catalog as part of
the field description. The modified FPPVL is passed again to the field procedure
whenever that procedure is invoked for field-encoding or field-decoding.
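For example (a sketch; the table, the routine name MYFLDP, and the literal list
are hypothetical):
CREATE TABLE MYDEPT.CUSTOMER
(CUSTNO CHAR(6) NOT NULL,
SORTNAME VARCHAR(20) FIELDPROC MYFLDP (40));
The constant 40 is the literal list; DB2 converts it to the FPPVL and passes it
to MYFLDP during field-definition.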
A field procedure is never invoked to process a null value, nor for a DELETE
operation without a WHERE clause on a table in a segmented table space.
A warning about blanks: When DB2 compares the values of two strings with
different lengths, it temporarily pads the shorter string with blanks (in EBCDIC or
double-byte characters, as needed) up to the length of the longer string. If the
shorter string is the value of a column with a field procedure, the padding is done
to the encoded value, but the pad character is not encoded. Therefore, unless
blanks encode to blanks, comparisons that involve padding can give unexpected
results.
Next, this section describes specific requirements for the three field procedure
operations:
v “Field-definition (function code 8)” on page 1058
v “Field-encoding (function code 0)” on page 1060
v “Field-decoding (function code 4)” on page 1062
The contents of registers at invocation and at exit are different for each of those
operations, and are described with the requirements for the operations.
The field procedure parameter list (FPPL) contains the addresses of the field
procedure information block (FPIB), the column value descriptor (CVD), the
field value descriptor (FVD), and the field procedure parameter value list
(FPPVL) or literal list.
Figure 150. The field procedure parameter list (FPPL) and the areas it points to
The size of the area you need depends on the way you have programmed your
field-encoding and field-decoding operations. Suppose, for example, that the
longest work area you need for either of those operations is 1024 bytes. DB2 passes
to your routine, for the field-definition operation, a value of 512 bytes for the
length of the work area; your field-definition operation must change that to 1024.
Thereafter, whenever your field procedure is invoked for encoding or decoding,
DB2 makes available to it an area of 1024 bytes.
If a work area of 512 bytes is sufficient for your operations, your field-definition
operation need not change the value supplied by DB2. If you need less than 512
bytes, your field definition can return a smaller value.
The column value descriptor (CVD) contains a description of a column value and, if
appropriate, the value itself. During field-encoding, the CVD describes the value to
be encoded. During field-decoding, it describes the decoded value to be supplied
by the field procedure. During field-definition, it describes the column as defined
in the CREATE TABLE or ALTER TABLE statement.
The field value descriptor (FVD) contains a description of a field value and, if
appropriate, the value itself. During field-encoding, the FVD describes the encoded
value to be supplied by the field procedure. During field-decoding, it describes the
value to be decoded. Field-definition must put into the FVD a description of the
encoded value.
On entry
The registers have the information that is listed in Table 250:
Table 250. Contents of the registers on entry
Register Contains
1 Address of the field procedure parameter list (FPPL); see Figure 150
on page 1055 for a schematic diagram.
2 through 12 Unknown values that must be restored on exit.
13 Address of the register save area.
14 Return address.
15 Address of entry point of exit routine.
The contents of all other registers, and of fields not listed in the following tables,
are unpredictable.
On exit
The registers must have the information that is listed in Table 254:
Table 254. Contents of the registers on exit
Register Contains
2 through 12 The values that they contained on entry.
15 The integer zero if the column described in the CVD is valid for the
field procedure; otherwise the value must not be zero.
The following fields must be set as shown; all other fields must remain as on entry.
The FVD must have the information that is listed in Table 256:
Table 256. Contents of the FVD on exit
Field Contains
FPVDTYPE The numeric code for the data type of the field value. Any of the data
types listed in Table 249 on page 1058 is valid.
FPVDVLEN The length of the field value.
Field FPVDVALE must not be set; the length of the FVD is 4 bytes only.
The FPPVL can be redefined to suit the field procedure, and returned as the
modified FPPVL, subject to the following restrictions:
v The field procedure must not increase the length of the FPPVL.
v FPPVLEN must contain the actual length of the modified FPPVL, or 0 if no
parameter list is returned.
The modified FPPVL is recorded in the catalog table SYSIBM.SYSFIELDS, and is
passed again to the field procedure during field-encoding and field-decoding. The
modified FPPVL need not have the format of a field procedure parameter list, and
it need not describe constants by value descriptors.
On entry
The registers have the information that is listed in Table 257:
Table 257. Contents of the registers on entry
Register Contains
1 Address of the field procedure parameter list (FPPL); see Figure 150 on
page 1055 for a schematic diagram.
The contents of all other registers, and of fields not listed, are unpredictable.
The work area is contiguous, uninitialized, and of the length specified by the field
procedure during field-definition.
On exit
The registers have the information that is listed in Table 261:
Table 261. Contents of the registers on exit
Register Contains
2 through 12 The values that they contained on entry.
15 The integer zero if the column described in the CVD is valid for the
field procedure; otherwise the value must not be zero.
The FPIB can have the information that is listed in Table 262:
Table 262. Contents of the FPIB on exit
Field Contains
FPBRTNC An optional 2-byte character return code, defined by the field procedure;
blanks if no return code is given.
FPBRSNC An optional 4-byte character reason code, defined by the field procedure;
blanks if no reason code is given.
FPBTOKP Optionally, the address of a 40-byte error message residing in the work
area or in the field procedure's static area; zeros if no message is given.
On entry
The registers have the information that is listed in Table 263:
Table 263. Contents of the registers on entry
Register Contains
1 Address of the field procedure parameter list (FPPL); see Figure 150 on
page 1055 for a schematic diagram.
2 through 12 Unknown values that must be restored on exit.
13 Address of the register save area.
14 Return address.
15 Address of entry point of exit routine.
The contents of all other registers, and of fields not listed, are unpredictable.
The work area is contiguous, uninitialized, and of the length specified by the field
procedure during field-definition.
On exit
The registers have the information that is listed in Table 267:
Table 267. Contents of the registers on exit
Register Contains
2 through 12 The values they contained on entry.
15 The integer zero if the column described in the FVD is valid for the
field procedure; otherwise the value must not be zero.
The CVD must contain the decoded (column) value in field FPVDVALE. If the
value is a varying-length string, the first halfword must contain its length.
The FPIB can have the information that is listed in Table 268:
Table 268. Contents of the FPIB on exit
Field Contains
FPBRTNC An optional 2-byte character return code, defined by the field procedure;
blanks if no return code is given.
FPBRSNC An optional 4-byte character reason code, defined by the field procedure;
blanks if no reason code is given.
FPBTOKP Optionally, the address of a 40-byte error message residing in the work
area or in the field procedure's static area; zeros if no message is given.
Performance factor: Your log capture routine receives control often. Design it with
care: a poorly designed routine can seriously degrade system performance.
Whenever possible, use the instrumentation facility interface (IFI), rather than a log
capture exit routine, to read data from the log. For instructions, see “Reading log
records with IFI” on page 1088.
“General considerations for writing exit routines” on page 1069 applies, but with
the following exceptions to the description of execution environments:
A log capture routine can execute in either TCB mode or SRB mode, depending
on the function it is performing. When in SRB mode, it must not perform any
I/O operations nor invoke any SVC services or ESTAE routines.
The module is loaded during DB2 initialization and deleted during DB2
termination. You must link the module into either the prefix.SDSNEXIT or the DB2
prefix.SDSNLOAD library. Specify the REPLACE parameter of the link-edit job to
replace a module that is part of the standard DB2 library for this release. The
module should have attributes AMODE(31) and RMODE(ANY).
A log control interval can be passed more than once. Use the time stamp to
determine the last occurrence of the control interval. This last occurrence should
replace all others. The time stamp is found in the control interval.
The function was originally intended to ease two problems that can occur, for a
program running under a CICS transaction, when all SQL calls are bound into a
single large plan. First, changing one DBRM requires all of them to be bound
again. Second, binding a large plan can be very slow, and the entire transaction is
unavailable for processing during the operation. An application that is designed
around small packages avoids both those problems. For guidance on using
packages, see DB2 Application Programming and SQL Guide.
You can specify the same exit routine for all entries in the resource control table
(RCT), or different routines for different entries. You can select plans dynamically
for RCT entries of both TYPE=ENTRY and TYPE=POOL.
The exit routine can name the plan during execution of the transaction at one of
two times:
v When the first SQL statement in the transaction is about to be executed. That
action is called dynamic plan selection.
v When the first SQL statement following a sync point is about to be executed, if
the sync point releases a thread for reuse and if several other conditions are
satisfied. That action is called dynamic plan switching. If you think you need that
function, see particularly “Dynamic plan switching” on page 1068 and then
consider packages again.
The exit routine can change the plan that is allocated by changing the contents of
field CPRMPLAN in its parameter list. If the routine does not change the value of
CPRMPLAN, the plan that is allocated has the DBRM name of the first SQL
statement executed.
The sample routine does not change the parameter list. As a result, the name of the
plan selected is, by default, the DBRM of the first SQL statement. The sample
establishes addressability to the parameter list and then issues EXEC CICS
RETURN.
The exit can also be taken at the first SQL statement following a sync point, for
dynamic plan switching. Whether the exit is taken at that time is determined by
the rules for dynamic plan selection in CICS.
You can use the sample program, DSNC@EXT, as an example for coding your own
exit routine. Your routine:
v Must adhere to normal CICS conventions for command-level programs
v Can be written in any language supported by CICS, such as assembler, COBOL,
or PL/I
v Must establish addressability to the parameter list DFHCOMMAREA, using
standard CICS command-level conventions
v Can update the parameter list if necessary
v Can change the plan that is allocated by changing the contents of field
CPRMPLAN in the parameter list
v Must not contain SQL statements
v Must not issue the command EXEC CICS SYNCPOINT
v Must terminate by using the command EXEC CICS RETURN
The field CPRMUSER can be used for such purposes as addressing a user table or
even a CICS GETMAIN area. There is a unique field called CPRMUSER for each
RCT entry with PLNEXIT=YES.
The sample macros in prefix.SDSNMACS map the parameter list in the languages
that are shown in Table 271.
Table 271. Macros to map the parameter list in different languages
Macro Language
DSNCPRMA Assembler
DSNCPRMC COBOL
DSNCPRMP PL/I
Even though DB2 has functional recovery routines of its own, you can establish
your own functional recovery routine (FRR), specifying MODE=FULLXM and
EUT=YES.
Register 1 contains the address of the EXPL parameter list.
Figure 151. Use of register 1 on invoking an exit routine. (Field procedures and translate
procedures do not use the standard exit-specific parameter list.)
Table 274 shows the EXPL parameter list. Its description is given by macro
DSNDEXPL.
Table 274. Contents of EXPL parameter list
Name      Hex offset  Data type              Description
EXPLWA    0           Address                Address of a work area to be used by
                                             the routine
EXPLWL    4           Signed 4-byte integer  Length of the work area. The value is:
                                             2048 for connection routines and
                                             sign-on routines
                                             512 for date and time routines and
                                             translate procedures (see Note 1)
                                             256 for edit, validation, and log
                                             capture routines
EXPLRSV1  8           Signed 2-byte integer  Reserved
EXPLRC1   A           Signed 2-byte integer  Return code
EXPLRC2   C           Signed 4-byte integer  Reason code
EXPLARC   10          Signed 4-byte integer  Used only by connection routines and
                                             sign-on routines
LOB columns are an exception. LOB values are not stored contiguously. An
indicator column is stored in a base table in place of the LOB value.
Edit procedures cannot be specified for any table that contains a LOB column or a
ROWID column. In addition, LOB values are not available to validation routines;
indicator columns and ROWID columns represent LOB columns as input to a
validation procedure.
The extra byte is included in the column length attribute (parameter FFMTFLEN in
Table 282 on page 1074).
Example: The sample project activity table has five fixed-length columns. The first
two columns do not allow nulls; the last three do. Table 275 shows a row in the
table.
Table 275. A row in fixed-length format
Column 1   Column 2   Column 3   Column 4    Column 5
MA2100     10         00 0.5     00 820101   00 821101
There are no gaps after varying-length columns. Hence, columns that appear after
varying-length columns are at variable offsets in the row. To get to such a column,
you must scan the columns sequentially after the first varying-length column. An
empty string has a length of zero with no data following.
ROWID and indicator columns are treated like varying length columns. Row IDs
are VARCHAR(17). An indicator column is VARCHAR(4); it is stored in a base
table in place of a LOB column, and indicates whether the LOB value for the
column is null or zero length.
Table 277 shows how the row in Table 276 would look in storage if nulls were
allowed in Column 2.
Table 277. A varying-length row in the sample department table. The first value in Column
2 indicates the column length as a hexadecimal value.
Column 1 Column 2 Column 3 Column 4
C01 0013 Information center 000030 A00
An empty string has a length of one, a X'00' null indicator, and no data following.
Table 278 shows the DATE format, which consists of 4 total bytes.
Table 278. DATE format
Year Month Day
2 bytes 1 byte 1 byte
Table 280 shows the TIMESTAMP format, which consists of 10 total bytes.
Table 280. TIMESTAMP format
Year Month Day Hours Minutes Seconds Microseconds
2 bytes 1 byte 1 byte 1 byte 1 byte 1 byte 3 bytes
Table 283 on page 1075 shows a description of data type codes and length
attributes.
| For more information about the RACF access control module, see DB2 RACF Access
| Control Module Guide.
For diagnostic or recovery purposes, it can be useful to read DB2 log records. This
appendix also discusses three approaches to writing programs that read log
records:
v “Reading log records with IFI” on page 1088
This is an online method using the instrumentation facility interface (IFI) when
DB2 is running. You use the READA (read asynchronously) command of IFI to
read log records into a buffer and the READS (read synchronously) command to
pick up specific log control intervals from a buffer.
v “Reading log records with OPEN, GET, and CLOSE” on page 1092
This is a stand-alone method that can be used when DB2 is down. You use the
assembler language macro DSNJSLR to submit OPEN, GET, and CLOSE
functions. This method can be used to capture log records that you cannot pick
up with IFI after DB2 goes down.
v “Reading log records with the log capture exit routine” on page 1100
This is an online method using the log capture exit when DB2 is running. You
write an exit routine to use this exit to capture and transfer log records in real
time.
There are three main types of log records, which are described under these
headings:
“Unit of recovery log records” on page 1078
“Checkpoint log records” on page 1081
“Database page set control records” on page 1082
Exception information that is not included in any of these types is described under
“Other exception information” on page 1082.
Each log record has a header that indicates its type, the DB2 subcomponent that
made the record, and, for unit-of-recovery records, the unit-of-recovery identifier.
The log records can be extracted and printed by the DSN1LOGP program. For
instructions, refer to Part 3 of DB2 Utility Guide and Reference.
The log relative byte address and log record sequence number: The DB2 log can
contain up to 2**48 bytes (2 to the 48th power). Each byte is addressable by its
offset from the beginning of the log. That offset is known as its relative byte
address (RBA).
| In the data sharing environment, each member has its own log. The log record
| sequence number (LRSN) uniquely identifies the log records of a data sharing
| member. The LRSN is a 6-byte hexadecimal value derived from a store clock
timestamp. DB2 uses the LRSN for recovery in the data sharing environment.
Effects of ESA data compression: Log records can contain compressed data if a
table contains compressed data. For example, if the data in a DB2 row are
compressed, all data logged because of changes to that row (resulting from inserts,
updates and deletes) are compressed. If logged, the record prefix is not
compressed, but all of the data in the record are in compressed format. Reading
compressed data requires access to the dictionary that was in use when the data
was compressed.
If the work is rolled back, the undo/redo record is used to remove the change. At
the same time that the change is removed, a new redo/undo record is created that
contains information, called compensation information, that is used if necessary to
reverse the change. For example, if a value of 3 is changed to 5, redo compensation
information changes it back to 3.
If the work must be recovered, DB2 scans the log forward and applies the redo
portions of log records and the redo portions of compensation records, without
keeping track of whether the unit of recovery was committed or rolled back. If the
unit of recovery had been rolled back, DB2 would have written compensation redo
log records to record the original undo action as a redo action. Using this
technique, the data can be completely restored by applying only redo log records
on a single forward pass of the log.
DB2 also logs the creation and deletion of data sets. If the work is rolled back, the
operations are reversed. For example, if a table space is created using
DB2-managed data sets, DB2 creates a data set; if rollback is necessary, the data set
is deleted. If a table space using DB2-managed data sets is dropped, DB2 deletes
the data set when the work is committed, not immediately. If the work is rolled
back, DB2 does nothing.
The log record identifies the RID, the operation (insert, delete, or update), and the
data. Depending on the data size and other variables, DB2 can write a single log
record with both undo and redo information, or it can write separate log records
for undo and redo.
At a checkpoint, DB2 logs its current status and registers the log RBA of the
checkpoint in the bootstrap data set (BSDS). At restart, DB2 uses the information in
the checkpoint records to reconstruct its state when it terminated.
Many log records can be written for a single checkpoint. DB2 can write one to
begin the checkpoint; others can then be written, followed by a record to end the
checkpoint. Table 288 summarizes the information logged.
Table 288. Contents of checkpoint log records
Type of log record          Information logged
Begin_Checkpoint            Marks the start of the summary information. All later
                            records in the checkpoint have type X'0100' (in the LRH).
Unit of Recovery Summary    Identifies an incomplete unit of recovery (by the log RBA
                            of the Begin_UR log record). Includes the date and time of
                            its creation, its connection ID, correlation ID,
                            authorization ID, the plan name it used, and its current
                            state (inflight, indoubt, in-commit, or in-abort).
Page Set Summary            Contains information for allocating and opening objects at
                            restart, and identifies (by the log RBA) the earliest
                            checkpoint interval containing log records about data
                            changes that have not been applied to the DASD version of
                            the data or index. There is one record for each open page
                            set (table space or index space).
Page Set Exception Summary  Identifies the type of exception state. For descriptions
                            of the states, see “Database page set control records” on
                            page 1082. There is one record for each database and page
                            set with an exception state.
Page Set UR Summary Record  Identifies page sets modified by any active UR (inflight,
                            in-abort, or in-commit) at the time of the checkpoint.
End_Checkpoint              Marks the end of the summary information about a
                            checkpoint.
Figure 152 on page 1083 shows a VSAM CI containing four log records or
segments, namely:
v The last segment of a log record of 768 bytes (X'0300'). The length of the
segment is 100 bytes (X'0064').
v A complete log record of 40 bytes (X'0028').
v A complete log record of 1024 bytes (X'0400').
v The first segment of a log record of 4108 bytes (X'100C'). The length of the
segment is 2911 bytes (X'0B5F').
Figure 152. A VSAM CI containing four log records or segments. Annotations in the figure
mark where record 4 and the VSAM record end; give, for data sharing, the LRSN of the last
log record in the CI; give the offset of the last segment in the CI (the beginning of log
record 4); and give the total lengths of the spanned records that end in the CI (log
record 1) and begin in the CI (log record 4).
The term log record refers to a logical record, unless the term physical log record is
used. A part of a logical record that falls within one physical record is called a
segment.
The first segment of a log record must contain the header and some bytes of data.
If the current physical record has too little room for the minimum segment of a
new record, the remainder of the physical record is unused, and a new log record
is written in a new physical record.
The log record can span many VSAM CIs. For example, a minimum of nine CIs is
required to hold the maximum-size log record of 32815 bytes. Only the first
segment of the record contains the entire LRH; later segments include only the first
two fields. When a specific log record is needed for recovery, all of its segments are
retrieved and presented together as if the record were stored contiguously.
Table 289. Contents of the log record header

Hex offset  Length  Information
00          2       Length of this record or segment

The LRH also contains, among other fields, a 6-byte unit of recovery ID, a 6-byte
LINK field, and a 1-byte flags field.

Notes:
1. For record types and subtypes, see “Log record type codes” on page 1086 and “Log
record subtype codes” on page 1086.
2. For a description of units of recovery, see “Unit of recovery log records” on page 1078.
A single record can contain multiple type codes that are combined. For example,
0600 is a combined UNDO/REDO record; F400 is a combination of four
DB2-assigned types plus a REDO.
Log record type 0004 (SYSCOPY utility) has log subtype codes that correspond to
the page set ID values of the table spaces that have their SYSCOPY records in the
log (SYSIBM.SYSUTILX, SYSIBM.SYSCOPY, and DSNDB01.DBD01).
For a description of log record types 0200 (unit of recovery undo) and 0400 (unit of
recovery redo), see the SUBTYPE option of DSN1LOGP in Part 3 of DB2 Utility
Guide and Reference.
Log record type 0800 (quiesce) does not have subtype codes.
Some log record types (1000 to 8000 assigned by DB2) can have proprietary log
record subtype codes assigned.
Log record formats for the record types and subtypes that are listed in “Log record
subtype codes” on page 1086 are detailed in the mapping macro DSNDQJ00.
DSNDQJ00 provides the mapping of specific data change log records, UR control
log records, and page set control log records.
Either the primary or one of the secondary authorization IDs must have
MONITOR2 privilege. For details on how to code an IFI program, see Appendix E,
“Programming for the Instrumentation Facility Interface (IFI),” on page 1117.
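For example, a command like the following sketch (choose the user-defined class
and buffer size that suit your monitor program) starts such a trace:

-START TRACE(P) CLASS(30) IFCID(126) DEST(OPX)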
where:
v P signifies to start a DB2 performance trace. Any of the DB2 trace types can be
used.
v CLASS(30) is a user-defined trace class (31 and 32 are also user-defined classes).
v IFCID(126) activates DB2 log buffer recording.
v DEST(OPX) starts the trace to the next available DB2 online performance (OP)
buffer. The size of this OP buffer can be explicitly controlled by the BUFSIZE
keyword of the START TRACE command. Valid sizes range from 256 KB to 16
MB. The number must be evenly divisible by 4.
When the START TRACE command takes effect, from that point forward until DB2
terminates, DB2 writes 4-KB log buffer VSAM control intervals (CIs) to the
OP buffer as well as to the active log. As part of the IFI COMMAND invocation,
the application specifies an ECB to be posted and a threshold; when the OP buffer
fills to that threshold, the ECB is posted and the application can obtain the
contents of the buffer.
The IFI READA request is issued to obtain OP buffer contents.
To retrieve the log control interval, your program must initialize certain fields in
the qualification area.
If you specify a range of log CIs, but some of those records have not yet been
written to the active log, DB2 returns as many log records as possible. You can find
the number of CIs returned in field QWT02R1N of the self-defining section of the
record. For information about interpreting trace output, see Appendix D,
“Interpreting DB2 trace output,” on page 1101.
To use this IFCID, use the same call as described in “Reading specific log records
(IFCID 0129)” on page 1088. IFCID 0306 must appear in the IFCID area. Because
IFCID 0306 returns complete log records, the spanned-record indicators in byte 2,
if present, have no meaning; multi-segment control interval log records are
combined into a complete log record.
The IFI application program must run in supervisor state to request the ECSA
key 7 return area. The return area storage size must be at least the size of the
largest DB2 log record returned plus the additional area defined in DSNDQW04.
Minimize the number of IFI calls required to get all log data, but do not overuse
ECSA in the IFI program. The other IFI storage areas can remain in user storage
key 8. The IFI application must be in supervisor state and key 0 when making
IFCID 0306 calls.
IFCID 0306 return area mapping: IFCID 0306 has a unique return area format.
The first section is mapped by QW0306OF instead of the write header
DSNDQWIN. See Appendix E, “Programming for the Instrumentation Facility
Interface (IFI),” on page 1117 for details.
To invoke these services, use the assembler language macro DSNJSLR, specifying
one of the preceding functions.
These log services use a request block, which contains a feedback area in which
information for all stand-alone log GET calls is returned. The request block is
created when a stand-alone log OPEN call is made. The request block must be
passed as input to all subsequent stand-alone log calls (GET and CLOSE). The
request block is mapped by the DSNDSLRB macro and the feedback area is
mapped by the DSNDSLRF macro.
See Figure 154 on page 1099 for an example of an application program that
includes these various stand-alone log calls.
When you issue an OPEN request, you can indicate whether you want to get log
records or log record control intervals. Each GET request returns a single logical
record or control interval depending on which you selected with the OPEN
request. If neither is specified, the default, RECORD, is used. DB2 reads the log in
the forward direction of ascending relative byte addresses or log record sequence
numbers (LRSNs).
If a bootstrap data set (BSDS) is allocated before stand-alone services are invoked,
appropriate log data sets are allocated dynamically by z/OS. If the bootstrap data
set is not allocated before stand-alone services are invoked, the JCL for your
user-written application to read a log must specify and allocate the log data sets to
be read.
Table 291 lists and describes the JCL DD statements used by stand-alone services.

Table 291. JCL DD statements for DB2 stand-alone log services

JCL DD statement   Explanation

JOBCAT or STEPCAT
   Specifies the catalog in which the BSDS and the active log data sets
   are cataloged. Required if the BSDS or any active log data set is to
   be accessed, unless the data sets are cataloged in the system master
   catalog.

BSDS
   Specifies the bootstrap data set (BSDS). Optional. Another ddname can
   be used for allocating the BSDS, in which case the ddname must be
   specified as a parameter on the FUNC=OPEN request (see “Stand-alone
   log OPEN request” on page 1095 for more information). Using the
   ddname in this way causes the BSDS to be used. If the ddname is
   omitted on the FUNC=OPEN request, the processing uses DDNAME=BSDS
   when attempting to open the BSDS.

ARCHIVE
   Specifies the archive log data sets to be read. Required if an
   archive data set is to be read and the BSDS is not available (the
   BSDS DD statement is omitted). Should not be present if the BSDS DD
   statement is present. If multiple data sets are to be read, specify
   them as concatenated data sets in ascending log RBA order.

GROUP
   Names the BSDS of one member of a data sharing group, through which
   the logs and BSDSs of all members are located. All members’ logs and
   BSDS data sets must be available. If you use this DD statement, you
   must also use the LRSN and RANGE parameters on the OPEN request. The
   GROUP DD statement overrides any MxxBSDS statements that are used.
   (DB2 searches for the BSDS DD statement first, then the GROUP
   statement, and then the MxxBSDS statements. If you want to use a
   particular member’s BSDS for your own processing, you must call that
   DD statement something other than BSDS.)

MxxBSDS
   Names the BSDS data set of a member whose log must participate in the
   read operation and whose BSDS is to be used to locate its log data
   sets. Use a separate MxxBSDS DD statement for each DB2 member. xx can
   be any two valid characters.
   Use these statements if logs from selected members of the data
   sharing group are required and the BSDSs of those members are
   available. These statements are ignored if you use the GROUP DD
   statement.
   For one MxxBSDS statement, you can use either RBA or LRSN values to
   specify a range. If you use more than one MxxBSDS statement, you must
   use the LRSN to specify the range.

MyyARCHV
   Names the archive log data sets of a member to be used as input. yy
   can be any two valid characters that do not duplicate any xx used in
   an MxxBSDS DD statement.
   You can use this statement if the BSDS data sets are unavailable or
   if you want only some of the log data sets from selected members of
   the group.
The DD statements must specify the log data sets in ascending order of log RBA
(or LRSN) range. If both ARCHIVE and ACTIVEn DD statements are included, the
first archive data set must contain the lowest log RBA or LRSN value. If the JCL
specifies the data sets in a different order, the job terminates with an error return
code on the GET request that tries to access the first record that breaks the sequence.
If the log ranges of the two data sets overlap, this is not considered an error;
instead, the GET function skips over the duplicate data in the second data set and
returns the next record. The distinction between out-of-order and overlap is as
follows:
v Out-of-order condition occurs when the log RBA or LRSN of the first record in
a data set is greater than that of the first record in the following data set.
v Overlap condition occurs when the out-of-order condition is not met but the log
RBA or LRSN of the last record in a data set is greater than that of the first
record in the following data set.
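For example (hypothetical RBA values): if one data set begins with a record at
RBA X'2000' and the next data set begins with a record at RBA X'1000', the data
sets are out of order. If the first data set instead spans RBA X'1000' through
X'3800' and the next data set begins at RBA X'3000', the out-of-order condition is
not met, but the ranges overlap; GET skips the duplicate data in the second data
set and returns the next record.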
Gaps within the log range are permitted. A gap is created when one or more log
data sets containing part of the range to be processed are not available. This can
happen if the data set was not specified in the JCL or is not reflected in the BSDS.
When the gap is encountered, an exception return code value is set, and the next
complete record after the gap is returned.
Normally, the BSDS DD name is supplied in the JCL, rather than a series of
ACTIVE DD names or a concatenated set of data sets for the ARCHIVE ddname.
This is commonly referred to as “running in BSDS mode”.
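For example, a job step like the following sketch (the program and data set names
are hypothetical) runs a user-written log-read application in BSDS mode:

//READLOG  EXEC PGM=MYLOGRDR
//STEPLIB  DD   DSN=DSN810.SDSNLOAD,DISP=SHR
//BSDS     DD   DSN=DSNC810.BSDS01,DISP=SHR
//SYSPRINT DD   SYSOUT=*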
For example, assume that you need to read log records from members S1, S2, S3,
S4, S5, and S6.
S1 and S2 locate their log data sets through their BSDSs.
S3 and S4 need both archive and active logs.
The stand-alone log services invoke executable macros that can execute only in
24-bit addressing mode and reference data below the 16-MB line. User-written
applications should be link-edited as AMODE(24), RMODE(24).
See Part 3 of DB2 Codes for reason codes that are issued with the return codes.
Log control interval retrieval: You can use the PMO option to retrieve log control
intervals from archive log data sets. DSNJSLR also retrieves log control intervals
from the active log if the DB2 system is not active. During OPEN, if DSNJSLR
detects that the control interval range is not within the archive log range available
(for example, the range has been purged from the BSDS), an error condition is
returned.
Specify CI and use GET to retrieve the control interval you have chosen. The rules
remain the same regarding control intervals and the range specified for the OPEN
function. Control intervals must fall within the range specified on the RANGE
parameter.
A log record is available in the area pointed to by the request block until the next
GET request is issued. At that time, the record is no longer available to the
requesting program. If the program requires reference to a log record’s content
after requesting a GET of the next record, the program must move the record into
a storage area that is allocated by the program.
The first GET request, after a FUNC=OPEN request that specified a RANGE
parameter, returns a pointer in the request feedback area. This points to the first
record with a log RBA value greater than or equal to the low log RBA value
specified by the RANGE parameter. If the RANGE parameter was not specified on
the FUNC=OPEN request, then the data to be read is determined by the JCL
specification of the data sets. In this case, a pointer to the first complete log record
in the data set that is specified by the ARCHIVE, or by ACTIVE1 if ARCHIVE is
omitted, is returned. The next GET request returns a pointer to the next record in
ascending log RBA order. Subsequent GET requests continue to move forward in
log RBA sequence until the function encounters the end of RANGE RBA value, the
end of the last data set specified by the JCL, or the end of the log as determined
by the bootstrap data set.
See Part 3 of DB2 Codes for reason codes that are issued with the return codes.
Information about the GET request and its results is returned in the request
feedback area, starting at offset X'00'. If there is an error in the length of some
record, the control interval length is returned at offset X'0C' and the address of the
beginning of the control interval is returned at offset X'08'.
On return from this request, the first part of the request block contains the
feedback information that this function returns. Mapping macro DSNDSLRF
defines the feedback fields which are shown in Table 293. The information returned
is status information, a pointer to the log record, the length of the log record, and
the 6-byte log RBA value of the record.
Table 293. Stand-alone log get feedback area contents

Field name  Hex offset  Length (bytes)  Field contents
SLRFRC      00          2               Log request return code
SLRFINFO    02          2               Information code returned by dynamic
                                        allocation. Refer to the z/OS SPF job
                                        management publication for information
                                        code descriptions.
SLRFERCD    04          2               VSAM or dynamic allocation error code, if
                                        register 15 contains a nonzero value.
SLRFRG15    06          2               VSAM register 15 return code value.
SLRFFRAD    08          4               Address of area containing the log record
                                        or CI
SLRFRCLL    0C          2               Length of the log record or CI
SLRFRBA     0E          6               Log RBA of the log record
SLRFDDNM    14          8               ddname of data set on which activity
                                        occurred
TSTJSLR5 CSECT
         ...
OPENCALL EQU   *
         LA    R2,NAME              GET BSDS DDNAME ADDRESS
         LA    R3,RANGER            GET ADDRESS OF RBA RANGE
         DSNJSLR FUNC=OPEN,DDNAME=(R2),RANGE=(R3)
         LTR   R15,R15              CHECK RETURN CODE FROM OPEN
         BZ    GETCALL              OPEN OK, DO GET CALLS
         ...
*****************************************************************
*   HANDLE ERROR FROM OPEN FUNCTION AT THIS POINT               *
*****************************************************************
         ...
GETCALL  EQU   *
         DSNJSLR FUNC=GET,RBR=(R1)
         C     R0,=X'00D10020'      END OF RBA RANGE ?
         BE    CLOSE                YES, DO CLEANUP
         C     R0,=X'00D10021'      RBA GAP DETECTED ?
         BE    GAPRTN               HANDLE RBA GAP
         LTR   R15,R15              TEST RETURN CODE FROM GET
         BNZ   ERROR
         ...
******************************************************************
*   PROCESS RETURNED LOG RECORD AT THIS POINT.  IF LOG RECORD    *
*   DATA MUST BE KEPT ACROSS CALLS, IT MUST BE MOVED TO A        *
*   USER-PROVIDED AREA.                                          *
******************************************************************
         USING SLRF,1               BASE SLRF DSECT
         L     R8,SLRFFRAD          GET LOG RECORD START ADDR
         LR    R9,R8
         AH    R9,SLRFRCLL          GET LOG RECORD END ADDRESS
         BCTR  R9,R0
         ...
CLOSE    EQU   *
         DSNJSLR FUNC=CLOSE,RBR=(1)
         ...
NAME     DC    C'DDBSDS'
RANGER   DC    X'00000000000000000005FFFF'
         ...
         DSNDSLRB
         DSNDSLRF
         EJECT
R0       EQU   0
R1       EQU   1
R2       EQU   2
         ...
R15      EQU   15
         END
Figure 154. Excerpts from a sample program using stand-alone log services
The log capture exit routine executes in an area of DB2 that is critical for
performance. As such, it is primarily intended as a mechanism to capture log data
for recovery purposes. In addition, the log capture exit routine operates in a very
restrictive z/OS environment, which severely limits its capabilities as a stand-alone
routine.
To capture log records with this exit routine, you must first write an exit routine
(or use the one provided by the preceding program offering) that can be loaded
and called under the various processing conditions and restrictions required of this
exit routine. See “Log capture routines” on page 1064 and refer to the previous
sections of this appendix, “Contents of the log” on page 1077 and “The physical
structure of the log” on page 1082.
When you activate a DB2 trace, it produces trace records based on the parameters
you specified for the START TRACE command. Each record identifies one or more
significant DB2 events. You can use OMEGAMON to format, print, and interpret
DB2 trace output. If you do not have OMEGAMON, or you want to do your own
analysis of the trace output, you can use the information in this appendix and the
trace field descriptions that are shipped with DB2. By examining a DB2 trace
record, you can determine the type of trace that produced the record (statistics,
accounting, audit, performance, monitor, or global) and the event the record
reports.
Note that when the trace output indicates a particular release level, 'xx' varies
according to the actual release of DB2.
The self-defining section follows the writer header section (both GTF and SMF)
and is further described in “Self-defining section” on page 1109. The first
self-defining section always points to a special data section called the product
section. Among other things, the product section contains an instrumentation
facility component identifier (IFCID). Descriptions of the records differ for each
IFCID. For a list of records, by IFCID, for each class of a trace, see the description
of the START TRACE command in DB2 Command Reference.
The product section also contains field QWHSNSDA, which indicates how many
self-defining data sections the record contains. You can use this field to keep from
trying to access data sections that do not exist. In trying to interpret the trace
records, remember that the various keywords you specified when you started the
trace determine whether any data is collected. If no data has been collected, field
QWHSNSDA shows a data length of zero.
The SMF writer header section begins at the first byte of the record. After
establishing addressability, you can examine the header fields. The fields are
described in Table 294.
Table 294. Contents of SMF writer header section

Hex offset  DSNDQWST  DSNDQWAS  DSNDQWSP  Description
0           SM100LEN  SM101LEN  SM102LEN  Total length of SMF record
2           SM100SGD  SM101SGD  SM102SGD  Segment descriptor
4           SM100FLG  SM101FLG  SM102FLG  System indicator
5           SM100RTY  SM101RTY  SM102RTY  SMF record type:
                                          v Statistics=100(dec)
                                          v Accounting=101(dec)
                                          v Monitor=102(dec)
                                          v Audit=102(dec)
                                          v Performance=102(dec)
6           SM100TME  SM101TME  SM102TME  SMF record timestamp, time portion
A           SM100DTE  SM101DTE  SM102DTE  SMF record timestamp, date portion
E           SM100SID  SM101SID  SM102SID  System ID
12          SM100SSI  SM101SSI  SM102SSI  Subsystem ID
16          SM100STF  SM101STF  SM102STF  Reserved
17          SM100RI   SM101RI   SM102RI   Reserved
18          SM100BUF  SM101BUF  SM102BUF  Reserved
1C          SM100END  SM101END  SM102END  End of SMF header
Figure 156. DB2 trace output sent to SMF (printed with DFSERA10 print program of IMS)
The GTF writer header section begins at the first byte of the record. After
establishing addressability, you can examine the fields of the header. The writer
headers for trace records sent to GTF are always mapped by macro DSNDQWGT.
The fields are described in Table 295 on page 1104.
Figure 157. DB2 trace output sent to GTF (spanned records printed with DFSERA10 print
program of IMS)
GTF records are blocked to 256 bytes. Because some of the trace records exceed the
GTF limit of 256 bytes, they have been blocked by DB2. Use the following logic to
process GTF records:
1. Is the GTF event ID of the record equal to the DB2 ID (that is, does QWGTEID
= X'xFB9')?
If it is not equal, get another record.
If it is equal, continue processing.
2. Is the record spanned?
If it is spanned (that is, QWGTDSCC ¬= QWGTDS00), test to determine
whether it is the first, middle, or last segment of the spanned record.
a. If it is the first segment (that is, QWGTDSCC = QWGTDS01), save the entire
record including the sequence number (QWGTWSEQ) and the subsystem ID
(QWGTSSID).
b. If it is a middle segment (that is, QWGTDSCC = QWGTDS03), find the first
segment with a matching sequence number (QWGTWSEQ) and subsystem
ID (QWGTSSID). Then move the data portion immediately after the GTF
header to the end of the previous segment.
Figure 158 on page 1108 shows the same output after it has been processed by a
user-written routine, which follows the logic that was outlined previously.
Figure 158. DB2 trace output sent to GTF (assembled with a user-written routine and printed
with DFSERA10 print program of IMS) (Part 1 of 2)
Figure 158. DB2 trace output sent to GTF (assembled with a user-written routine and printed
with DFSERA10 print program of IMS) (Part 2 of 2)
Self-defining section
The self-defining section following the writer header contains pointers that enable
you to find the product and data sections, which contain the actual trace data.
Pointers occur in a fixed order, and their meanings are determined by the IFCID of
the record. Different sets of pointers can occur, and each set is described by a
separate DSECT. Therefore, to examine the pointers, you must first establish
addressability by using the DSECT that provides the appropriate description of the
self-defining section. To do this, perform the following steps:
1. Compute the address of the self-defining section.
The self-defining section begins at label “SM100END” for statistics records,
“SM101END” for accounting records, and “SM102END” for performance and
audit records. It does not matter which mapping DSECT you use because the
length of the SMF writer header is always the same.
For GTF, use QWGTEND.
2. Determine the IFCID of the record.
Use the first field in the self-defining section; it contains the offset from the
beginning of the record to the product section. The product section contains the
IFCID.
After establishing addressability using the appropriate DSECT, use the pointers in
the self-defining section to locate the record’s data sections.
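As an illustration, the following fragment is a minimal sketch of locating the
product section of a statistics record. The register choices are ours, the DSECT
name QWST is assumed from mapping macro DSNDQWST, and the 4-byte size of the
offset field is an assumption:

         USING QWST,R5              R5 -> SMF RECORD (MAPPED BY DSNDQWST)
         LR    R4,R5                R4 -> BEGINNING OF THE RECORD
         LA    R6,SM100END          R6 -> SELF-DEFINING SECTION
         L     R7,0(,R6)            FIRST FIELD: OFFSET TO PRODUCT SECTION
         AR    R7,R4                R7 -> PRODUCT SECTION STANDARD HEADER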
The relationship between the contents of the self-defining section “pointers” and
the items in a data section for same-length data items is shown in Figure 159.
Figure 159. Relationship between self-defining section and data sections for same-length
data items
| The relationship between the contents of the self-defining section “pointers” and
| the items in a data section for variable-length data items is shown in Figure 160.
|
Product section
The product section for all record types contains the standard header. The other
headers (correlation, CPU, distributed, and data sharing data) might also be
present. Table 296 shows the contents of the product section standard header.
Table 296. Contents of product section standard header

Hex offset  DSNDQWHS field  Description
0           QWHSLEN         Length of standard header
2           QWHSTYP         Header type
3           QWHSRMID        RMID
4           QWHSIID         IFCID
6           QWHSRELN        Release number section
6           QWHSNSDA        Number of self-defining sections
7           QWHSRN          DB2 release identifier
8           QWHSACE         ACE address
C           QWHSSSID        Subsystem ID
10          QWHSSTCK        Timestamp (STORE CLOCK value assigned by DB2)
18          QWHSISEQ        IFCID sequence number
1C          QWHSWSEQ        Destination sequence number
20          QWHSMTN         Active trace number mask
24          QWHSLOCN        Local location name
34          QWHSLWID        Logical unit of work ID
34          QWHSNID         Network ID
3C          QWHSLUNM        LU name
44          QWHSLUUV        Uniqueness value
4A          QWHSLUCC        Commit count
Table 297 shows the contents of the product section correlation header.
Table 297. Contents of product section correlation header

Hex offset  DSNDQWHC field  Description
0           QWHCLEN         Length of correlation header
2           QWHCTYP         Header type
3                           Reserved
4           QWHCAID         Authorization ID
C           QWHCCV          Correlation ID
18          QWHCCN          Connection name
20          QWHCPLAN        Plan name
28          QWHCOPID        Original operator ID
30          QWHCATYP        The type of system that is connecting
34          QWHCTOKN        Trace accounting token field
4A                          Reserved
4C          QWHCEUID        User ID at the workstation for the end user
5C          QWHCEUTX        Transaction name for the end user
7C          QWHCEUWN        Workstation name for the end user
8E          QWHCEND         End of product section correlation header
Figure 161 is a sample accounting trace for a distributed transaction sent to SMF.
| A
| 000000 07000000 1E650059 A8B50104 019FF3F0 F9F0E5F8 F1C10000 00000000 0000061A
| B C D E F G H I
| 000020 00E60001 00000084 01F00001 000003CE 01CC0001 0000059A 00400002 00000376
| J K L M
| 000040 00580001 00000000 00000000 00000274 01020001 00000000 00000000 00000000
| N
| 000060 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000080 00000000 BAA5E4D7 8373558E BAA5E4E2 B6DB4F46 00000000 0349EA39 00000000
| 0000A0 05578AFD 00000000 00000000 00000000 00000000 0000000C 40404040 40404040
| 0000C0 00000000 00000000 00000001 00000002 00000000 006349C0 00000000 005735A0
| 0000E0 00000000 00000000 00000000 00000000 00000000 00000000 00000006 00000000
| 000100 00000000 00000000 00000000 00000000 00000000 005536E0 00000000 00000000
| 000120 00000000 00000000 00000000 00000006 00000000 00000000 00000000 00000000
| O
| 000140 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000160 00000000 00030004 00000000 00000000 00000000 00000000 00000000 00000000
| 000180 00000000 03BDA0A1 00000000 0131DD20 0000000C 00000000 0C90ADB5 00000006
| 0001A0 00000000 00000000 00001816 00000000 00000000 00000000 00000000 00000000
| 0001C0 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 0001E0 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000200 00432BFA 00000000 00010F80 00000009 7C68DEB2 00000000 033B2BAB 00000000
| P
| 000220 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000240 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000260 00000000 00000000 00000000 00000000 00000000 5FC4E2D5 F0F8F0F1 F5E2E3D3
| 000280 C5C3F140 40404040 40404040 40E4E2C9 C2D4E2E8 40E2E8C5 C3F1C4C2 F2C2C1E3
| 0002A0 C3C84040 40C2C1E3 C3C84040 40E6D5C5 E2E3D5D7 40404040 40E2E8E2 C1C4D440
| 0002C0 40D7D3D5 C1D7D7D3 F8E4E2C5 D97EE2E8 E2C1C4D4 00000000 00000000 00000000
| 0002E0 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000300 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| Q
| 000320 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000340 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000360 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000380 00000000 00000000 00000000 00010000 00000000 00000000 00000000 00000000
| 0003A0 00000000 00000000 00000000 00170000 00100000 00000000 00040000 00000000
| 0003C0 00060000 00000000 00000000 00002095 01CCD8E7 E2E30000 00020000 00000000
| 0003E0 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| R
| 000400 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
| 000420 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
|
| Figure 161. DB2 distributed data trace output sent to SMF (printed with IMS DFSERA10 print
| program) (Part 1 of 2). In this example there is one accounting record (IFCID 0003) from the
| server site (SILICON_VALLEY_LAB). DSNDQWA0 maps the self-defining section for IFCID
| 0003.
|
Figure 161. DB2 distributed data trace output sent to SMF (printed with IMS DFSERA10 print
program) (Part 2 of 2). In this example there is one accounting record (IFCID 0003) from the
server site (SILICON_VALLEY_LAB). DSNDQWA0 maps the self-defining section for IFCID
| 0003.
You can use the TSO or ISPF browse function to look at the field descriptions in
the trace record mapping macros online, even when DB2 is down. If you prefer to
look at the descriptions in printed form, you can use ISPF to print a listing of the
data set.
The DB2 instrumentation facility gathers trace data that can be written to one or
more destinations that you specify. The instrumentation facility interface (IFI) is
designed for a program needing online trace information. IFI can be accessed
through any of the DB2 attachment facilities.
IFI uses the standard security mechanisms that DB2 uses: connection authorization,
plan authorization, and so forth. For more information about security, see Part 3,
“Security and auditing,” on page 123. Security checks specifically related to IFI are
included in the descriptions of the functions.
Before using IFI, you should be familiar with the material in “DB2 trace” on page
1155, which includes information on the DB2 trace facility and instrumentation
facility component identifiers (IFCIDs).
Note that where the trace output indicates a particular release level, you will see
'xx' to show that this information varies according to the actual release of DB2 that
you are using.
You can use IFI in a monitor program (a program or function outside of DB2 that
receives information about DB2) to perform the following tasks:
v “Submitting DB2 commands through IFI”
v “Obtaining trace data through IFI” on page 1118
v “Passing data to DB2 through IFI” on page 1118
When a DB2 trace is active, internal events trigger the creation of trace records.
The records, identified by instrumentation facility component identifiers (IFCIDs), can
be written to buffers, and you can read them later with the IFI READA function.
This means you are collecting the data asynchronously; you are not reading the data
at the time it was written.
You can trigger the creation of certain types of trace records by using the IFI
READS function. The records, identified as usual by IFCIDs, do not need a buffer;
they are passed immediately to your monitor program through IFI. This means
you are collecting the data synchronously. The data is collected at the time of the
request for the data.
Using specified trace classes and IFCIDs, a monitor program can control the
amount and type of its data. You can design your monitor program to:
v Activate and deactivate pre-defined trace classes.
v Activate and deactivate a trace record or group of records (identified by IFCIDs).
IFI functions
A monitor program can use the following IFI functions:
COMMAND To submit DB2 commands. For more information, see
“COMMAND: Syntax and usage with IFI” on page 1120.
READS To obtain monitor trace records synchronously. The READS request
returns the records immediately in the monitor program’s return area.
The following example depicts an IFI call in an assembler program. All examples
in this appendix are given for assembler.
CALL DSNWLI,(function,ifca,parm-1,...parm-n),VL
The parameters that are passed on the call indicate the desired function (as
described in “IFI functions” on page 1118), point to communication areas used by
the function, and provide other information that depends on the function specified.
Because the parameter list may vary in length, the high-order bit of the last
parameter must be on to signal that it is the last parameter in the list.
Example: To turn on the bit in assembler, use the VL option to signal a
variable-length parameter list.
The communication areas that are used by IFI are described in “Common
communication areas for IFI calls” on page 1140.
After you insert this call in your monitor program, you must link-edit the program
with the correct language interface. Each of the following language interface
modules has an entry point of DSNWLI for IFI:
v CAF DSNALI
v TSO DSNELI
v CICS DSNCLI
v IMS DFSLI000
v RRSAF DSNRLI
CAF DSNALI, the CAF (call attachment facility) language interface module,
includes a second entry point of DSNWLI2. The monitor program that link-edits
DSNALI with the program can make IFI calls directly to DSNWLI. The monitor
program that loads DSNALI must also load DSNWLI2 and remember its address.
When the monitor program calls DSNWLI, the program must have a dummy entry
point to handle the call to DSNWLI and then call the real DSNWLI2 routine. See
Part 6 of DB2 Application Programming and SQL Guide for additional information
about using CAF.
Monitor trace classes: Monitor trace classes 2 through 8 can be used to collect
information related to DB2 resource usage. Use monitor trace class 5, for example,
to find out how much time is spent processing IFI requests. Monitor trace classes 2,
3, and 5 are identical to accounting trace classes 2, 3, and 5. For more information
about these traces, see “Monitor trace” on page 1158.
You can submit any DB2 command, including START TRACE, STOP TRACE,
DISPLAY TRACE, and MODIFY TRACE. Because the program can also issue other
DB2 commands, you should be careful about which commands you use. For
example, do not use STOP DB2.
A zero indicates not to post the monitor program. In this case, the
monitor program should use its own timer to determine when to
issue a READA request.
WBUFBC  C  Signed 4-byte integer  The number of records placed into the
instrumentation facility buffer must reach this value before the ECB is
posted. If the number is zero, and an ECB exists, posting occurs when the
buffer is full.
         CALL  DSNWLI,('COMMAND ',IFCAAREA,RETAREA,OUTAREA,BUFAREA),VL
         ...
COMMAND  DC    CL8'COMMAND '
************************************************************************
*   Function parameter declaration                                     *
************************************************************************
*   Storage of LENGTH(IFCA) and properly initialized                   *
************************************************************************
IFCAAREA DS    0CL180
         ...
************************************************************************
*   Storage for length and returned info.                              *
************************************************************************
RETAREA  DS    CL608
************************************************************************
*   Storage for length and DB2 Command                                 *
************************************************************************
OUTAREA  DS    0CL42
OUTLEN   DC    X'002A0000'
OUTCMD   DC    CL37'-STA TRAC(MON) DEST(OPX) BUFSIZE(256)'
************************************************************************
*   Storage of LENGTH(WBUF) and properly initialized                   *
************************************************************************
BUFAREA  DS    0CL16
         ...
The program that issues the READS request does not need to start monitor class 1
because no ownership of an OP buffer is involved when you obtain data from the
READS interface. Data is written directly to the application program's return area,
bypassing the OP buffer. This bypass is in direct contrast to the READA interface
where the application that issues READA must first issue a START TRACE
command to obtain ownership of an OP buffer and start the appropriate traces.
READS requests are checked for authorization once for each user (ownership
token) of the thread. (Several users can use the same thread, but an authorization
check is performed each time the user of the thread changes.)
If you use READS to obtain your own data (IFCID 0124, 0147, 0148, or 0150 not
qualified), no authorization check is performed.
To specify more than one buffer pool or group buffer pool, use the
pattern-matching character X'00' in any position in the buffer pool
name. X'00' indicates that any character can appear in that
position, and in all positions that follow.
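For example, a constant like the following hypothetical one (QUALBP is our label,
and the qualification-area name field is assumed to be 8 bytes) qualifies every
buffer pool whose name begins with BP1:

QUALBP   DC    C'BP1',XL5'00'       X'00' AFTER 'BP1' MATCHES BP1, BP10, ...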
Start monitor classes 2, 3, 5, 7, and 8 to collect summary and status information for
later probing. In this case, an instrumentation facility trace is started and
information is summarized by the instrumentation facility, but not returned to the
caller until it is requested by a READS call.
The READS request can reference data that is updated during the retrieval process.
You might need to do reasonability tests on data that is obtained through READS.
Because the READS function does not suspend activity that takes place under
referenced structures, an abend can occur. If an abend occurs, the READS function
is terminated without a dump and the monitor program is notified through the
return code and reason code information in the IFCA. However, the return area
can contain valid trace records, even if an abend occurred; therefore, your monitor
program should check for a non-zero value in the IFCABM (bytes moved) field of
the IFCA.
When you use a READS request with a query parallelism task, remember that each
parallel task is a separate thread. Each parallel thread has a separate READS
output. See Chapter 34, “Parallel operations and query performance,” on page 951
for more information on tracing the parallel tasks. A READS request might return
thread information for parallel tasks on a DB2 data sharing member without the
thread information for the originating task in a Sysplex query parallelism case. See
DB2 Data Sharing: Planning and Administration for more information.
For more information about IFCID field descriptions, see the mapping macros in
prefix.SDSNMACS. See also “DB2 trace” on page 1155 and Appendix D,
“Interpreting DB2 trace output,” on page 1101 for additional information.
An IFI program that monitors the dynamic statement cache should include these
steps:
1. Acquire and initialize storage areas for common IFI communication areas.
2. Issue an IFI COMMAND call to start performance trace class 30 for IFCID
0318. This step enables statistics collection for statements in the dynamic
statement cache. See “Controlling collection of dynamic statement cache
statistics with IFCID 0318” on page 1136 for information on when you should
start a trace for IFCID 0318.
3. Put the IFI program into a wait state. During this time, SQL applications in
the subsystem execute dynamic SQL statements by using the dynamic
statement cache.
4. Resume the IFI program after enough time has elapsed for a reasonable
amount of activity to occur in the dynamic statement cache.
5. Set up the qualification area for a READS call for IFCID 0316 as described in
Table 303 on page 1125.
6. Set up the IFCID area to request data for IFCID 0316.
7. Issue an IFI READS call to retrieve the qualifying cached SQL statements.
8. Examine the contents of the return area.
For a statement with unexpected statistics values:
a. Obtain the statement name and statement ID from the IFCID 0316 data.
An IFI program that monitors deadlocks and timeouts of cached statements should
include these steps:
1. Acquire and initialize storage areas for common IFI communication areas.
2. Issue an IFI COMMAND call to start monitor trace class 1. This step lets you
make READS calls for IFCID 0316 and IFCID 0317.
3. Issue an IFI COMMAND call to start performance trace class 30 for IFCID
0318. This step enables statistics collection for statements in the dynamic
statement cache. See “Controlling collection of dynamic statement cache
statistics with IFCID 0318” for information on when you should start a trace
for IFCID 0318.
4. Start performance trace class 3 for IFCID 0172 to monitor deadlocks, or
performance trace class 3 for IFCID 0196 to monitor timeouts.
5. Put the IFI program into a wait state. During this time, SQL applications in
the subsystem execute dynamic SQL statements by using the dynamic
statement cache.
6. Resume the IFI program when a deadlock or timeout occurs.
7. Issue a READA request to obtain IFCID 0172 or IFCID 0196 trace data.
8. Obtain the cached statement ID of the statement that was involved in the
deadlock or timeout from the IFCID 0172 or IFCID 0196 trace data. Using the
statement ID, set up the qualification area for a READS call for IFCID 0316 or
IFCID 0317, as described in Table 303 on page 1125.
9. Set up the IFCID area to request data for IFCID 0316 or IFCID 0317.
10. Issue an IFI READS call to retrieve the qualifying cached SQL statement.
11. Examine the contents of the return area.
12. Issue an IFI COMMAND call to stop monitor trace class 1.
13. Issue an IFI COMMAND call to stop performance trace class 30 for IFCID
0318 and performance trace class 3 for IFCID 0172 or IFCID 0196.
When you stop or start the trace for IFCID 0318, DB2 resets the IFCID 0316
statistics counters for all statements in the cache to 0.
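As an illustration of steps 9 and 10 above, a fragment like the following sketch
(the area labels are ours, and the qualification area is assumed to be initialized
with the cached statement ID as Table 303 describes) requests IFCID 0317 data:

         CALL  DSNWLI,('READS   ',IFCA,RETAREA,IFCIDA,QUALAREA),VL
         ...
IFCIDA   DC    X'0006'              LENGTH OF THE IFCID AREA
         DC    X'0000'              RESERVED
         DC    X'013D'              IFCID 0317 (DECIMAL 317)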
There are times, however, when you should use a specific OPn destination initially:
v When you plan to start numerous asynchronous traces to the same OPn
destination. To do this, you must specify the OPn destination in your monitor
program. The OPn destination started is returned in the IFCA.
v When the monitor program specifies that a particular monitor class (defined as
available) together with a particular destination (for example OP7) indicates that
certain IFCIDs are started. An operator can use the DISPLAY TRACE command
to determine which monitors are active and what events are being traced.
Buffering data: To have trace data go to the OPn buffer, you must start the trace
from within the monitor program. After the trace is started, DB2 collects and
buffers the information as it occurs. The monitor program can then issue a read
asynchronous (READA) request to move the buffered data to the monitor program.
The buffering technique ensures that the data is not being updated by other users
while the buffer is being read by the READA caller. For more information, see
“Data integrity and IFI” on page 1149.
Possible data loss: You can activate all traces and have the trace data buffered.
However, this plan is definitely not recommended because performance might
suffer and data might be lost.
Data loss occurs when the buffer fills before the monitor program can obtain the
data. DB2 does not wait for the buffer to be emptied, but, instead, informs the
monitor program on the next READA request (in the IFCARLC field of the IFCA)
that the data has been lost. The user must have a high enough dispatching priority
that the application can be posted and then issue the READA request before
significant data is lost.
Your monitor program can request an asynchronous buffer, which records trace
data as trace events occur. The monitor program is then responsible for unloading
the buffer on a timely basis.
The write function must specify an IFCID area. The data that is written is defined
and interpreted by your site.
ifca Contains information regarding the success of the call. See “Instrumentation
facility communications area (IFCA)” on page 1140 for a description of the
IFCA.
output-area
Contains the varying-length of the monitor program's data record to be
written. See “Output area” on page 1145 for a description of the output
area.
ifcid-area
Contains the IFCID of the record to be written. Only the IFCIDs that are
defined to the write function (see Table 305 on page 1140) are allowed.
See “IFCID area” on page 1145 for a description of the IFCID area.
Recommendation: If your site uses the IFI WRITE function, establish usage
procedures and standards. Procedures ensure that the correct IFCIDs are active
when DB2 is performing the WRITE function. Standards determine the records and
record formats that a monitor program sends to DB2.
Recommendation: Because your site can use one IFCID to contain many different
records, place your site's record type and subtype in the first fields of the data
record.
The monitor program is responsible for allocating storage for the IFCA and
initializing it. The IFCA must be initialized to binary zeros and the eye catcher,
4-byte owner field, and length field must be set by the monitor program. Failure to
properly initialize the IFCA results in denying any IFI requests.
The monitor program is also responsible for checking the IFCA return code and
reason code fields to determine the status of the request.
Return area
You must specify a return area on all READA, READS, and COMMAND requests.
IFI uses the return area to return command responses, synchronous data, and
asynchronous data to the monitor program. Table 307 describes the return area.
Table 307. Return area
Hex offset Data type Description
0 Signed 4-byte integer The length of the return area, plus 4. This must be set by the
monitor program. The valid range for READA requests is 100 to
1048576 (X’00000064’ to X’00100000’). The valid range for READS
requests is 100 to 2147483647 (X’00000064’ to X’7FFFFFFF’).
# 4 Character, varying-length DB2 places as many varying-length records as it can fit into the
# area following the length field. The monitor program’s length field
# is not modified by DB2. Each varying-length trace record has a
# 2-byte or 4-byte length field, depending on the high-order bit. If the
# high-order bit is on, the length field is 4 bytes. If the high-order bit
# is off, the length field is the first 2 bytes. In this case, the third byte
# indicates whether the record is spanned, and the fourth byte is
# reserved.
The destination header for data that is returned on a READA or READS request is
mapped by macro DSNDQWIW or the header QW0306OF for IFCID 306 requests.
# Refer to prefix.SDSNIVPD(DSNWMSGS) for the format of the trace record.
The monitor program must compare the number of bytes moved (IFCABM in the
IFCA) to the sum of the record lengths to determine when all records have been
processed.
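A loop like the following sketch steps through the returned records. The labels
are ours, we assume that each record's length value includes the length field
itself, and we assume that addressability to the IFCA fields (mapping macro
DSNDIFCA) has been established so that IFCABM can be referenced directly:

         LA    R4,RETAREA+4         R4 -> FIRST RETURNED RECORD
         SR    R6,R6                R6 = BYTES PROCESSED SO FAR
NEXTREC  TM    0(R4),X'80'          HIGH-ORDER BIT ON?
         BO    LEN4                 YES, 4-BYTE LENGTH FIELD
         LH    R5,0(,R4)            NO, LENGTH IS THE FIRST 2 BYTES
         B     HAVELEN
LEN4     L     R5,0(,R4)            PICK UP 4-BYTE LENGTH FIELD
         N     R5,=X'7FFFFFFF'      DROP THE INDICATOR BIT
HAVELEN  BAL   R14,PROCESS          PROCESS ONE RECORD AT 0(R4)
         AR    R6,R5                ACCUMULATE BYTES PROCESSED
         AR    R4,R5                STEP TO THE NEXT RECORD
         C     R6,IFCABM            ALL IFCABM BYTES PROCESSED?
         BL    NEXTREC              NO, GET THE NEXT RECORD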
IFCID area
You must specify the IFCID area on READS and WRITE requests. The IFCID area
contains the IFCIDs to process. Table 309 shows the IFCID area.
Table 309. IFCID area
Hex Offset Data type Description
0 Signed two-byte integer Length of the IFCID area, plus 4. The length can range from X'0006'
to X'0044'. For WRITE requests, only one IFCID is allowed, so the
length must be set to X'0006'.
For READS requests, you can specify multiple IFCIDs. If so, you
must be aware that the returned records can be in a different
sequence than requested and some records can be missing.
2 Signed two-byte integer Reserved.
4 Hex, n fields of 2 bytes each The IFCIDs to be processed. Each IFCID is placed contiguous to the
previous IFCID for a READS request. The IFCIDs start at X'0000'
and progress upward. You can use X'FFFF' to signify the last IFCID
in the area to process.
Output area
The output area is used on COMMAND and WRITE requests. The first two bytes
contain the length of the monitor program’s record to write or the DB2 command
to be issued, plus 4 additional bytes. The next two bytes are reserved. You can
specify any length from 10 to 4096 (X'000A0000' to X'10000000'). The rest of the
area is the actual command or record text.
As with READA or READS requests for single DB2 subsystems, you need to issue
a START TRACE command before you issue the READA or READS request. You
can issue START TRACE with the parameter SCOPE(GROUP) to start the trace at
all members of the data sharing group. For READA requests, specify DEST(OPX)
in the START TRACE command. DB2 collects data from all data sharing members
and returns it to the OPX buffer for the member from which you issue the READA
request.
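For example, a command like the following sketch starts a group-scope monitor
trace whose READA output goes to an OPX buffer:

-START TRACE(MON) CLASS(1) DEST(OPX) SCOPE(GROUP)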
If a new member joins a data sharing group while a trace with SCOPE(GROUP) is
active, the trace starts at the new member.
After you issue a READS or READA call for all members of a data sharing group,
DB2 returns data from all members in the requesting program's return area. Data
from the local member is first, followed by the IFCA and data for all other
members.
Example: If the local DB2 is called DB2A, and the other two members in the group
are DB2B and DB2C, the return area looks like this:
Data for DB2A
IFCA for DB2B (DB2 sets IFCARMBR to DB2B)
Data for DB2B
IFCA for DB2C (DB2 sets IFCARMBR to DB2C)
Data for DB2C
If an IFI application requests data from a single other member of a data sharing
group (IFCADMBR contains a member name), the requesting program's return area
contains the IFCA and data for that member.
Because a READA or READS request for a data sharing group can generate much
more data than a READA or READS request for a single DB2, you need to increase
the size of your return area to accommodate the additional data.
For detailed information about the format of trace records and their mapping
macros, see Appendix D, “Interpreting DB2 trace output,” on page 1101, or see the
mapping macros in prefix.SDSNMACS.
Figure 163 on page 1148 shows the return area after a READS request successfully
executed.
Figure 163. Example of IFI return area after READS request (IFCID 106). This output was
assembled by a user-written routine and printed with the DFSERA10 print program of IMS.
For more information about IFCIDs and mapping macros, see “DB2 trace” on page
1155 and Appendix D, “Interpreting DB2 trace output,” on page 1101.
Figure 164 on page 1149 shows the return area after a START TRACE command
successfully executes.
Figure 164. Example of IFI return area after a START TRACE command. This output was
assembled with a user-written routine and printed with DFSERA10 program of IMS.
The IFCABM field in the IFCA would indicate that X'00000076' (C + E) bytes
have been moved to the return area.
The serialization techniques used to obtain data for a given READA request might
minimally degrade performance on processes that simultaneously store data into
the instrumentation facility buffer. Failures during the serialization process are
handled by DB2.
The DB2 structures that are searched on a READS request are validated before they
are used. If the DB2 structures are updated while being searched, inconsistent data
might be returned. If the structures are deleted while being searched, users might
access invalid storage areas, causing an abend. If an abend does occur, the
functional recovery routine of the instrumentation facility traps the abend and
returns information about it to the application program’s IFCA.
A program can issue SQL statements through an attachment facility and DB2
commands through IFI. This environment creates the potential for an application to
deadlock or time out with itself over DB2 locks acquired during the execution of
SQL statements and DB2 database commands. You should ensure that all DB2
locks acquired by preceding SQL statements are no longer held when the DB2
database command is issued. You can do this by:
v Binding the DB2 plan with ACQUIRE(USE) and RELEASE(COMMIT) bind
parameters
v Initiating a commit or rollback to free any locks your application is holding,
before issuing the DB2 command
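For example, the first of these approaches might use a bind like the following
sketch (the plan and member names are hypothetical):

BIND PLAN(MYPLAN) MEMBER(MYPGM) ACQUIRE(USE) RELEASE(COMMIT)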
If you use SQL in your application, the time between commit operations should be
short. For more information on locking, see Chapter 30, “Improving concurrency,”
on page 773.
Requests sent through IFI can fail for a variety of reasons, including:
v One or more parameters are invalid.
v The IFCA area is invalid.
v The specified OPn is in error.
v The requested information is not available.
v The return area is too small.
Return code and reason code information is stored in the IFCA in fields IFCARC1
and IFCARC2. Further return and reason code information is contained in Part 3 of
DB2 Codes.
Figure 165. Sources of DB2 monitoring data. The figure relates DB2 catalog queries,
CICS attachment facility statistics, and RMF monitor I, II, and III data, recorded
through SMF and GTF, to online and printed reports.
In addition, the CICS attachment facility DSNC DISPLAY command allows any
authorized CICS user to dynamically display statistical information related to
thread usage and situations when all threads are busy. For more information about
the DSNC DISPLAY command, see Chapter 2 of DB2 Command Reference.
Be sure that the number of threads reserved for specific transactions or for the pool
| is large enough to handle the actual load. You can dynamically modify the value
| specified in the CICS resource definition online (RDO) attribute ACCOUNTREC
| with the DSNC MODIFY TRANSACTION command. You might also need to
modify the maximum number of threads specified for the MAX USERS field on
installation panel DSNTIPE.
In addition, the DB2 IMS attachment facility allows you to use the DB2
DISPLAY THREAD command to dynamically observe DB2 performance.
MAJOR CHANGES:
DB2 application DEST07 moved to production
Figure 166. User-created system resources report
The RMF reports used to produce the information in Figure 166 were:
v The RMF CPU activity report, which lists TOTAL CPU busy and the TOTAL
I/Os per second.
v RMF paging activity report, which lists the TOTAL paging rate per second for
real storage.
v The RMF work load activity report, which is used to estimate where resources
are spent. Each address space or group of address spaces to be reported on
separately must have different SRM reporting or performance groups. The
following SRM reporting groups are considered:
– DB2 address spaces:
DB2 database address space (ssnmDBM1)
DB2 system services address space (ssnmMSTR)
Distributed data facility (ssnmDIST)
IRLM (IRLMPROC)
– IMS or CICS
– TSO-QMF
– DB2 batch and utility jobs
The CPU for each group is obtained using the ratio (A/B) × C, where:
A is the sum of CPU and service request block (SRB) service units for the
specific group
B is the sum of CPU and SRB service units for all the groups
C is the total processor utilization.
The CPU and SRB service units must have the same coefficient.
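For example (hypothetical figures): if a reporting group accumulates 2000 CPU and
SRB service units (A), all groups together accumulate 10000 (B), and total
processor utilization is 60% (C), the group is charged (2000 / 10000) × 60% = 12%
of the processor.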
You can use a similar approach for an I/O rate distribution.
In these reports:
v The transactions processed include DB2 and non-DB2 transactions.
v The transaction processor time includes the DB2 processor time for IMS but not
for CICS.
v The transaction transit response time includes the DB2 transit time.
A historical database is useful for saving monitoring data from different periods.
Such data can help you track the evolution of your system. You can use Tivoli
Decision Support for OS/390 or write your own application based on DB2 and
QMF when creating this database.
DB2 trace
The information under this heading, up to “Recording SMF trace data” on page
1159, is General-use Programming Interface and Associated Guidance Information,
as defined in “Notices” on page 1237.
DB2’s instrumentation facility component (IFC) provides a trace facility that you
can use to record DB2 data and events. With the IFC, however, analysis and
reporting of the trace records must take place outside of DB2. You can use
OMEGAMON to format, print, and interpret DB2 trace output. You can view an
online snapshot from trace records by using OMEGAMON or other online
monitors. For more information on OMEGAMON, see Using IBM Tivoli
OMEGAMON XE on z/OS. For the exact syntax of the trace commands see Chapter
2 of DB2 Command Reference.
Each trace class captures information on several subsystem events. These events are
identified by many instrumentation facility component identifiers (IFCIDs). The
IFCIDs are described by the comments in their mapping macros, contained in
prefix.SDSNMACS, which is shipped to you with DB2.
Types of traces
DB2 trace can record six types of data: statistics, accounting, audit, performance,
monitor, and global. The description of the START TRACE command in Chapter 2
of DB2 Command Reference indicates which IFCIDs are activated for the different
types of trace and the classes within those trace types. For details on what
information each IFCID returns, see the mapping macros in prefix.SDSNMACS.
The trace records are written using GTF or SMF records. See “Recording SMF trace
data” on page 1159 and “Recording GTF trace data” on page 1161 before starting
any traces. Trace records can also be written to storage, if you are using the
monitor trace class.
Statistics trace
The statistics trace reports information about how much the DB2 system services
and database services are used. It is a system-wide trace and should not be used
for chargeback accounting. Use the information the statistics trace provides to plan
DB2 capacity, or to tune the entire set of active DB2 programs.
| Statistics trace classes 1, 3, 4, 5, and 6 are the default classes for the statistics trace
if you specify YES in the SMF STATISTICS field on installation panel DSNTIPN. If
the statistics trace is started using the START TRACE command, then class 1 is the
default class.
v Class 1 provides information about system services and database statistics. It
also includes the system parameters that were in effect when the trace was
started.
v Class 3 provides information about deadlocks and timeouts.
v Class 4 provides information about exceptional conditions.
v Class 5 provides information about data sharing.
| v Class 6 provides storage statistics for the DBM1 address space.
If you specified YES in the SMF STATISTICS field on installation panel DSNTIPN,
the statistics trace starts automatically when you start DB2, sending class 1, 3, 4,
and 5 statistics data to SMF. SMF records statistics data in both SMF type 100 and
102 records. IFCIDs 0001, 0002, 0202, and 0230 are of SMF type 100. All other
IFCIDs in statistics trace classes are of SMF type 102. From installation panel
DSNTIPN, you can also control the statistics collection interval (STATISTICS TIME
field).
The statistics trace is written on an interval basis, and you can control the exact
time that statistics traces are taken.
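For example, if the statistics trace does not start automatically, you can start it
with a command like the following one (a sketch; adjust the classes and the
destination to your needs):
-START TRACE (STAT) CLASS (1,3,4,5,6) DEST (SMF)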
Accounting trace
The DB2 accounting trace provides information related to application programs,
including such data as start and stop times, commit and abort counts, and counts
of SQL statements issued and buffer pool requests.
DB2 trace begins collecting this data at successful thread allocation to DB2, and
writes a completed record when the thread terminates or when the authorization
ID changes.
During CICS thread reuse, a change in the authid or transaction code initiates the
sign-on process, which terminates the accounting interval and creates the
accounting record. TXIDSO=NO eliminates the sign-on process when only the
transaction code changes. When a thread is reused without initiating sign-on,
several transactions are accumulated into the same accounting record, which can
make it very difficult to analyze a specific transaction occurrence and correlate DB2
accounting with CICS accounting. However, applications that use
ACCOUNTREC(UOW) or ACCOUNTREC(TASK) in the DBENTRY RDO definition
initiate a “partial sign-on”, which creates an accounting record for each transaction.
You can use this data to perform program-related tuning and to assess and charge
DB2 costs.
On the other hand, when you start class 2, 3, 7, or 8, many additional trace points
are activated. Every occurrence of these events is traced internally by DB2 trace,
but these traces are not written to any external destination. Rather, the accounting
facility uses these traces to compute the additional total statistics that appear in the
accounting record, IFCID 0003, when class 2 or class 3 is activated. Accounting class
1 must be active to externalize the information.
To turn on accounting for packages and DBRMs, accounting trace classes 1 and 7
must be active. Though you can turn on class 7 while a plan is being executed,
accounting trace information is only gathered for packages or DBRMs executed
after class 7 is activated. Activate accounting trace class 8 with class 1 to collect
information about the amount of time an agent was suspended in DB2 for each
executed package. If accounting trace classes 2 and 3 are activated, there is
minimal additional performance cost for activating accounting trace classes 7 and
8.
If you want information from either, or both, accounting class 2 and 3, be sure to
activate class 2, class 3, or both classes before your application starts. If these
classes are activated during the application, the times gathered by DB2 trace are
only from the time the class was activated.
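For example, the following command (a sketch; choose only the classes that you
need) activates the commonly used accounting classes before the application
starts:
-START TRACE (ACCTG) CLASS (1,2,3,7,8) DEST (SMF)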
If you specified YES for SMF ACCOUNTING on installation panel DSNTIPN, the
accounting trace starts automatically when you start DB2, and sends IFCIDs that
| are of SMF type 101 to SMF. The accounting record IFCID 0003 is of SMF type 101.
Audit trace
The audit trace collects information about DB2 security controls and is used to
ensure that data access is allowed only for authorized purposes. On the CREATE
TABLE or ALTER TABLE statements, you can specify whether or not a table is to
be audited, and in what manner; you can also audit security information such as
any access denials, grants, or revokes for the table. The default causes no auditing
to take place. For descriptions of the available audit classes and the events they
trace, see “Audit class descriptions” on page 287.
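For example, the following statement (a sketch; DSN8810.EMP is the Version 8
sample employee table) makes the table auditable for both read and change
activity:
ALTER TABLE DSN8810.EMP AUDIT ALL;
Specifying AUDIT CHANGES instead limits auditing to insert, update, and delete
activity, and AUDIT NONE turns auditing off.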
If you specified YES for AUDIT TRACE on installation panel DSNTIPN, audit trace
class 1 starts automatically when you start DB2. By default, DB2 will send audit
data to SMF. SMF records audit data in type 102 records. When you invoke the
-START TRACE command, you can also specify GTF as a destination for audit
data. Chapter 13, “Auditing,” on page 285 describes the audit trace in detail.
Performance trace
The performance trace provides information about a variety of DB2 events,
including events related to distributed data processing. You can use this
information to further identify a suspected problem, or to tune DB2 programs and
resources for individual users or for DB2 as a whole.
You cannot automatically start collecting performance data when you install or
migrate DB2. To trace performance data, you must use the -START
TRACE(PERFM) command. For more information about the -START
TRACE(PERFM) command, refer to Chapter 2 of DB2 Command Reference.
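For example, the following command (a sketch; performance classes are costly, so
choose them carefully) starts performance trace classes 1, 2, and 3 and sends the
data to GTF:
-START TRACE (PERFM) CLASS (1,2,3) DEST (GTF)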
Monitor trace
The monitor trace records data for online monitoring with user-written programs.
This trace type has several predefined classes; those that are used explicitly for
monitoring are listed here:
v Class 1 (the default) allows any application program to issue an instrumentation
facility interface (IFI) READS request to the IFI facility. If monitor class 1 is
inactive, a READS request is denied. Activating class 1 has a minimal impact on
performance.
v Class 2 collects processor and elapsed time information. The information can be
obtained by issuing a READS request for IFCID 0147 or 0148. In addition,
monitor trace class 2 information is available in the accounting record, IFCID
0003. Monitor class 2 is equivalent to accounting class 2 and results in equivalent
overhead. Monitor class 2 times appear in IFCIDs 0147, 0148, and 0003 if either
monitor trace class 2 or accounting class 2 is active.
v Class 3 activates DB2 wait timing and saves information about the resource
causing the wait. The information can be obtained by issuing a READS request
for IFCID 0147 or 0148. In addition, monitor trace class 3 information is available
in the accounting record, IFCID 0003.
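For example, the following command (a sketch) activates the monitor trace
classes that are described above so that an online monitor can issue READS
requests:
-START TRACE (MON) CLASS (1,2,3)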
For more detailed information about the amount of processor resources consumed
by DB2 trace, see “Reducing processor resource consumption” on page 628.
If you are not using measured usage licensing, do not specify type 89 records;
otherwise, you incur the overhead of collecting that data.
You can use the SMF program IFASMFDP to dump these records to a sequential
data set. You might want to develop an application or use OMEGAMON to
process these records. For a sample DB2 trace record sent to SMF, see Figure 156 on
page 1103. For more information about SMF, refer to z/OS JES2 Initialization and
Tuning Guide.
Activating SMF
SMF must be running before you can send data to it. To make it operational,
update member SMFPRMxx of SYS1.PARMLIB, which indicates whether SMF is
active and which types of records SMF accepts. In the member name SMFPRMxx,
xx represents two user-defined alphanumeric characters. To update this member,
specify the ACTIVE parameter and the proper TYPE subparameter for SYS and
SUBSYS.
You can also code an IEFU84 SMF exit to process the records that are produced.
If an SMF buffer shortage occurs, SMF rejects any trace records sent to it. DB2
sends a message (DSNW133I) to the MVS operator when this occurs. DB2 treats
the error as temporary and remains active even though data could be lost. DB2
sends another message (DSNW123I) to the z/OS operator when the shortage has
been alleviated and trace recording has resumed.
You can determine if trace data has been lost by examining the DB2 statistics
records with an IFCID of 0001, as mapped by macro DSNQWST. These records
show:
v The number of trace records successfully written
v The number of trace records that could not be written
v The reason for the failure
If your location uses SMF for performance data or global trace data, be sure that:
v Your SMF data sets are large enough to hold the data.
v SMF is set up to accept record type 102. (Specify member SMFPRMxx, for which
’xx’ are two user-defined alphanumeric characters.)
v Your SMF buffers are large enough.
Specify SMF buffering on the VSAM BUFSP parameter of the access method
services DEFINE CLUSTER statement. Do not use the default settings if DB2
performance or global trace data is sent to SMF. Specify CISZ(4096) and a BUFSP
value that is large enough for the volume of trace data that you expect.
DB2 runs above the 16MB line of virtual storage in a cross-memory environment.
In any of those ways you can compare any report for a current day, week, or
month with an equivalent sample, as far back as you want to go. The samples
become more widely spaced but are still available for analysis.
If a GTF member exists in SYS1.PARMLIB, the GTF trace option USR might not be
in effect. When no other member exists in SYS1.PARMLIB, you are sure to have
only the USR option activated, and no other options that might add unwanted
data to the GTF trace.
When starting GTF, if you use the JOBNAMEP option to obtain only those trace
records written for a specific job, trace records written for other agents are not
written to the GTF data set. This means that a trace record that is written by a
system agent that is processing for an allied agent is discarded if the JOBNAMEP
option is used. For example, after a DB2 system agent performs an IDENTIFY
request for an allied agent, an IFCID record is written. If the JOBNAMEP keyword
is used to collect trace data for a specific job, however, the record for the
IDENTIFY request is not written to GTF, even if the IDENTIFY request was
performed for the job named on the JOBNAMEP keyword.
Trace records longer than the GTF limit of 256 bytes are spanned by DB2. For
instructions on how to process GTF records, refer to Appendix D, “Interpreting
DB2 trace output,” on page 1101.
| OMEGAMON
| OMEGAMON provides performance monitoring, reporting, buffer pool analysis,
| and a performance warehouse all in one tool:
| v OMEGAMON includes the function of DB2 Performance Monitor (DB2 PM),
| which is also available as a stand-alone product. Both products report DB2
| instrumentation in a form that is easy to understand and analyze. The
| instrumentation data is presented in the following ways:
| – The Batch report sets present the data you select in comprehensive reports or
| graphs containing system-wide and application-related information for both
| single DB2 subsystems and DB2 members of a data sharing group. You can
| combine instrumentation data from several different DB2 locations into one
| report.
| Batch reports can be used to examine performance problems and trends over
| a period of time.
| – The Online Monitor gives a current “snapshot” view of a running DB2
| subsystem, including applications that are running. Its history function
| displays information about subsystem and application activity in the recent
| past.
| Both a host-based and Workstation Online Monitor are provided. The
| Workstation Online Monitor substantially improves usability, simplifies online
| monitoring and problem analysis, and offers significant advantages. For
| example, from Workstation Online Monitor, you can launch Visual Explain so
| you can examine the access paths and processing methods chosen by DB2 for
| the currently executing SQL statement.
| For more information about the Workstation Online Monitor, see
| OMEGAMON Monitoring Performance from Performance Expert Client or
| OMEGAMON for z/OS and Multiplatforms Monitoring Performance from
| Workstation for z/OS and Multiplatforms.
| In addition, OMEGAMON contains a Performance Warehouse function that lets
| you:
| – Save DB2 trace and report data in a performance database for further
| investigation and trend analysis
| – Configure and schedule the report and load process from the workstation
| interface
| – Define and apply analysis functions to identify performance bottlenecks.
| v OMEGAMON also includes the function of DB2 Buffer Pool Analyzer, which is
| also available as a stand-alone product. Both products help you optimize buffer
| pool usage by offering comprehensive reporting of buffer pool activity,
| including:
| – Ordering by various identifiers such as buffer pool, plan, object, and primary
| authorization ID
| – Sorting by getpage, sequential prefetch, and synchronous read
| – Filtering capability
When considering the use of Tivoli Decision Support for OS/390, consider the
following:
v Tivoli Decision Support data collection and reporting are based on user
specifications. Therefore, an experienced user can produce more suitable reports
than the predefined reports produced by other tools.
v Tivoli Decision Support provides historical performance data that you can use to
compare a current situation with previous data.
v Tivoli Decision Support can be used very effectively for reports based on the
DB2 statistics and accounting records. When using it for the performance trace
consider that:
– Because of the large number of different DB2 performance records, a
substantial effort is required to define their formats to Tivoli Decision
Support. Changes in the records require review of the definitions.
– Tivoli Decision Support does not handle information from paired records,
such as “start event” and “end event.” These record pairs are used by
OMEGAMON to calculate elapsed times, such as the elapsed time of I/Os
and lock suspensions.
The general recommendation for Tivoli Decision Support and OMEGAMON use in
a DB2 subsystem is:
v If Tivoli Decision Support is already used or there is a plan to use it at the
location:
– Extend Tivoli Decision Support usage to the DB2 accounting and statistics
records.
– Use OMEGAMON for the DB2 performance trace.
v If Tivoli Decision Support is not used and there is no plan to use it:
– Use OMEGAMON for the statistics, accounting, and performance trace.
– Consider extending OMEGAMON with user applications based on DB2 and
QMF, to provide historical performance data.
DB2 collects statistics that you can use to determine when you need to perform
certain maintenance functions on your table spaces and index spaces.
DB2 collects the statistics in real time. You create tables into which DB2
periodically writes the statistics. You can then write applications that query the
statistics and help you decide when to run REORG, RUNSTATS, or COPY, or to
enlarge your data sets. Figure 167 shows an overview of the process of collecting
and using real-time statistics.
Figure 167. Collecting and using real-time statistics. (The figure shows DB2 writing
statistics to the real-time statistics tables, and an application program reading those
tables together with the DB2 catalog.)
The following sections provide detailed information about the real-time statistics
tables:
v “Setting up your system for real-time statistics”
v “Contents of the real-time statistics tables” on page 1167
v “Operating with real-time statistics” on page 1179
For information about a DB2-supplied stored procedure that queries the real-time
statistics tables, see “The DB2 real-time statistics stored procedure” on page 1193.
Before you can alter an object in the real-time statistics database, you must stop the
database. Otherwise, you receive an SQL error. Table 313 shows the DB2 objects for
storing real-time statistics.
Table 313. DB2 objects for storing real-time statistics
Object name Description
DSNRTSDB Database for real-time statistics objects
DSNRTSTS Table space for real-time statistics objects
SYSIBM.TABLESPACESTATS Table for statistics on table spaces and table space
partitions
SYSIBM.INDEXSPACESTATS Table for statistics on index spaces and index space
partitions
SYSIBM.TABLESPACESTATS_IX Unique index on SYSIBM.TABLESPACESTATS
(columns DBID, PSID, and PARTITION)
| SYSIBM.INDEXSPACESTATS_IX Unique index on SYSIBM.INDEXSPACESTATS
| (columns DBID, ISOBID, and PARTITION)
To create the real-time statistics objects, you need the authority to create tables and
indexes on behalf of the SYSIBM authorization ID.
DB2 inserts one row in the table for each partition or non-partitioned table space
or index space. You therefore need to calculate the amount of disk space that you
need for the real-time statistics tables based on the current number of table spaces
and indexes in your subsystem.
To determine the amount of storage that you need for the real-time statistics when
they are in memory, use the following formula:
Max_concurrent_objects_updated * 152 bytes = Storage_in_bytes
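For example, if a maximum of 100 000 objects are updated concurrently, you need
100 000 × 152 bytes, or approximately 15 MB, of storage for the in-memory
statistics. (The object count is illustrative only.)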
In a data sharing environment, each member has its own interval for writing
real-time statistics.
You must start the database in read-write mode so that DB2 can externalize
real-time statistics. See “When DB2 externalizes real-time statistics” on page 1179
for information about the conditions for which DB2 externalizes the statistics.
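For example, the following commands (a sketch) stop the real-time statistics
database before an ALTER and restart it in read-write mode afterward:
-STOP DATABASE (DSNRTSDB)
-START DATABASE (DSNRTSDB) SPACENAM (*) ACCESS (RW)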
Table 314 describes the columns of the TABLESPACESTATS table and explains how
you can use them in deciding when to run REORG, RUNSTATS, or COPY.
Table 314. Descriptions of columns in the TABLESPACESTATS table
Column name Data type Description
DBNAME CHAR(8) NOT The name of the database. This column is used to map a database
NULL to its statistics.
NAME CHAR(8) NOT The name of the table space. This column is used to map a table
NULL space to its statistics.
PARTITION SMALLINT NOT The data set number within the table space. This column is used to
NULL map a data set number in a table space to its statistics. For
partitioned table spaces, this value corresponds to the partition
number for a single partition. For nonpartitioned table spaces, this
value is 0.
TOTALROWS FLOAT The number of rows in the table space or partition.
If the table space contains more than one table, this value is the
sum of all rows in all tables. A null value means that the number
of rows is unknown or that REORG or LOAD has never been run.
Use the TOTALROWS value with the value of any column that
contains some affected rows to determine the percentage of rows
that are affected by a particular action.
NACTIVE INTEGER The number of active pages in the table space or partition.
Use the NACTIVE value with the value of any column that
contains some affected pages to determine the percentage of pages
that are affected by a particular action.
SPACE INTEGER The amount of space, in KB, that is allocated to the table space or
partition.
For multi-piece linear page sets, this value is the amount of space
in all data sets. A null value means the amount of space is
unknown.
LOADRLASTTIME TIMESTAMP The timestamp of the last LOAD REPLACE on the table space or
partition.
A null value means that LOAD REPLACE has never been run on
the table space or partition or that the timestamp of the last LOAD
REPLACE is unknown.
REORGLASTTIME TIMESTAMP The timestamp of the last REORG on the table space or partition.
A null value means REORG has never been run on the table space
or partition or that the timestamp of the last REORG is unknown.
REORGUPDATES INTEGER The number of rows that were updated since the last REORG or
LOAD REPLACE.
This value does not include LOB updates because LOB updates are
really deletions followed by insertions. A null value means that the
number of updated rows is unknown.
REORGDISORGLOB INTEGER The number of LOBs that were inserted since the last REORG or
LOAD REPLACE that are not perfectly chunked. A LOB is
perfectly chunked if the allocated pages are in the minimum
number of chunks. A null value means that the number of
imperfectly chunked LOBs is unknown.
Use this value to determine whether you need to run REORG. For
example, you might want to run REORG if the ratio of
REORGDISORGLOB to the total number of LOBs is greater than
10%:
((REORGDISORGLOB*100)/TOTALROWS)>10
REORGUNCLUSTINS INTEGER The number of records that were inserted since the last REORG or
LOAD REPLACE that are not well-clustered with respect to the
clustering index. A record is well-clustered if the record is inserted
into a page that is within 16 pages of the ideal candidate page. The
clustering index determines the ideal candidate page.
You can use this value to determine whether you need to run
REORG. For example, you might want to run REORG if the
following comparison is true:
((REORGUNCLUSTINS*100)/TOTALROWS)>10
REORGMASSDELETE INTEGER The number of mass deletes from a segmented or LOB table space,
or the number of dropped tables from a segmented table space,
since the last REORG or LOAD REPLACE.
REORGNEARINDREF INTEGER The number of overflow records that were created near the pointer
record since the last REORG or LOAD REPLACE.
A null value means that the number of overflow records near the
pointer record is unknown.
REORGFARINDREF INTEGER The number of overflow records that were created far from the
pointer record since the last REORG or LOAD REPLACE.
A null value means that the number of overflow records far from
the pointer record is unknown.
STATSLASTTIME TIMESTAMP The timestamp of the last RUNSTATS on the table space or
partition.
A null value means that RUNSTATS has never been run on the
table space or partition, or that the timestamp of the last
RUNSTATS is unknown.
STATSUPDATES INTEGER The number of rows that were updated since the last RUNSTATS.
This value does not include LOB updates because LOB updates are
really deletions followed by insertions. A null value means that the
number of updated rows is unknown.
COPYLASTTIME TIMESTAMP The timestamp of the last full image copy on the table space or
partition.
A null value means that COPY has never been run on the table
space or partition, or that the timestamp of the last full image copy
is unknown.
COPYUPDATEDPAGES INTEGER The number of distinct pages that were updated since the last
COPY.
You might want to take a full image copy when 20% of the pages
have changed:
((COPYUPDATEDPAGES*100)/NACTIVE)>20
COPYCHANGES INTEGER The number of insert, delete, and update operations since the last
COPY.
You might want to take a full image copy when DB2 processes
more than 10% of the rows from the logs:
((COPYCHANGES*100)/TOTALROWS)>10
Table 315 describes the columns of the INDEXSPACESTATS table and explains how
you can use them in deciding when to run REORG, RUNSTATS, or COPY.
Table 315. Descriptions of columns in the INDEXSPACESTATS table
Column name Data type Description
DBNAME CHAR(8) NOT NULL The name of the database. This column is used to map a
database to its statistics.
NAME CHAR(8) NOT NULL The name of the index space. This column is used to map an
index space to its statistics.
PARTITION SMALLINT NOT The data set number within the index space. This column is used
NULL to map a data set number in an index space to its statistics.
TOTALENTRIES FLOAT The number of entries, including duplicate entries, in the index
space or partition.
Use this value with the value of any column that contains a
number of affected index entries to determine the percentage of
index entries that are affected by a particular action.
NLEVELS SMALLINT The number of levels in the index tree.
NACTIVE INTEGER The number of active pages in the index space or partition.
Use this value with the value of any column that contains a
number of affected pages to determine the percentage of pages
that are affected by a particular action.
LOADRLASTTIME TIMESTAMP The timestamp of the last LOAD REPLACE on the index space or
partition.
If COPY YES was specified when the index was created (the
value of COPY is Y in SYSIBM.SYSINDEXES), you can compare
this timestamp to the timestamp of the last COPY on the same
object to determine when a COPY is needed. If the date of the
last LOAD REPLACE is more recent than the last COPY, you
might need to run COPY:
(JULIAN_DAY(LOADRLASTTIME)>JULIAN_DAY(COPYLASTTIME))
REBUILDLASTTIME TIMESTAMP The timestamp of the last REBUILD INDEX on the index space
or partition.
If COPY YES was specified when the index was created (the
value of COPY is Y in SYSIBM.SYSINDEXES), you can compare
this timestamp to the timestamp of the last COPY on the same
object to determine when a COPY is needed. If the date of the
last REBUILD INDEX is more recent than the last COPY, you
might need to run COPY:
(JULIAN_DAY(REBUILDLASTTIME)>JULIAN_DAY(COPYLASTTIME))
REORGLASTTIME TIMESTAMP The timestamp of the last REORG INDEX on the index space or
partition.
If COPY YES was specified when the index was created (the
value of COPY is Y in SYSIBM.SYSINDEXES), you can compare
this timestamp to the timestamp of the last COPY on the same
object to determine when a COPY is needed. If the date of the
last REORG INDEX is more recent than the last COPY, you
might need to run COPY:
(JULIAN_DAY(REORGLASTTIME)>JULIAN_DAY(COPYLASTTIME))
REORGLEAFNEAR INTEGER The number of index page splits that occurred since the last
REORG, REBUILD INDEX, or LOAD REPLACE in which the
higher part of the split page was near the location of the original
page. The higher part of a split page is near the original page if
the two page numbers differ by 16 or less.
A null value means that the number of split pages near their
original pages is unknown.
REORGLEAFFAR INTEGER The number of index page splits that occurred since the last
REORG, REBUILD INDEX, or LOAD REPLACE in which the
higher part of the split page was far from the location of the
original page. The higher part of a split page is far from the
original page if the two page numbers differ by more than 16.
A null value means that the number of split pages that are far
from their original pages is unknown.
If this value is less than zero, the index space contains empty
pages. Running REORG can save disk space and decrease index
sequential scan I/O time by eliminating those empty pages.
STATSLASTTIME TIMESTAMP The timestamp of the last RUNSTATS on the index space or
partition.
A null value means that RUNSTATS has never been run on the
index space or partition, or that the timestamp of the last
RUNSTATS is unknown.
COPYLASTTIME TIMESTAMP The timestamp of the last full image copy on the index space or
partition.
A null value means that COPY has never been run on the index
space or partition, or that the timestamp of the last full image
copy is unknown.
COPYUPDATEDPAGES INTEGER The number of distinct pages that were updated since the last
COPY.
For example, you might want to take a full image copy when
20% of the pages have changed:
((COPYUPDATEDPAGES*100)/NACTIVE)>20
COPYCHANGES INTEGER The number of insert or delete operations since the last COPY.
For example, you might want to take a full image copy when
DB2 processes more than 10% of the index entries from the logs:
((COPYCHANGES*100)/TOTALENTRIES)>10
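For example, an application can query the statistics tables directly. The following
query (a sketch that is based on the REORGUNCLUSTINS guideline shown
earlier) lists table spaces for which more than 10% of the inserted rows are not
well-clustered:
SELECT DBNAME, NAME, PARTITION
FROM SYSIBM.TABLESPACESTATS
WHERE TOTALROWS > 0
AND ((REORGUNCLUSTINS*100)/TOTALROWS) > 10;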
Table 317 shows how running LOAD affects the INDEXSPACESTATS statistics for
an index space or physical index partition.
Table 317. Changed INDEXSPACESTATS values during LOAD
Column name            Settings for LOAD REPLACE after BUILD phase
TOTALENTRIES Number of index entries added1
NLEVELS Actual value
NACTIVE Actual value
SPACE Actual value
EXTENTS Actual value
LOADRLASTTIME Current timestamp
REORGINSERTS 0
REORGDELETES 0
REORGAPPENDINSERT 0
REORGPSEUDODELETES 0
REORGMASSDELETE 0
REORGLEAFNEAR 0
REORGLEAFFAR 0
REORGNUMLEVELS 0
STATSLASTTIME Current timestamp2
STATSINSERTS 02
STATSDELETES 02
STATSMASSDELETE 02
COPYLASTTIME Current timestamp3
COPYUPDATEDPAGES 03
COPYCHANGES 03
COPYUPDATELRSN Null3
Table 319 shows how running REORG affects the INDEXSPACESTATS statistics for
an index space or physical index partition.
Table 319. Changed INDEXSPACESTATS values during REORG
Column name      Settings for REORG SHRLEVEL NONE      Settings for REORG SHRLEVEL REFERENCE
                 after RELOAD phase                    or CHANGE after SWITCH phase
TOTALENTRIES     Number of index entries added1        For SHRLEVEL REFERENCE: Number of index
                                                       entries added during BUILD phase
For a logical index partition, DB2 does not reset the nonpartitioned index when it
does a REORG on a partition. Therefore, DB2 does not reset the statistics for the
index. The REORG counters and REORGLASTTIME are relative to the last time the
entire nonpartitioned index is reorganized. In addition, the REORG counters might
be low because, due to the methodology, some index entries are changed during
REORG of a partition.
For a logical index partition, DB2 does not collect TOTALENTRIES statistics for the
entire nonpartitioned index when it runs REBUILD INDEX. Therefore, DB2 does
not reset the statistics for the index. The REORG counters from the last REORG are
still correct. DB2 updates REBUILDLASTTIME when the entire nonpartitioned
index is rebuilt.
Table 321 shows how running RUNSTATS UPDATE ALL on a table space or table
space partition affects the TABLESPACESTATS statistics.
Table 321. Changed TABLESPACESTATS values during RUNSTATS UPDATE ALL
Column name        During UTILINIT phase    After RUNSTATS phase
STATSLASTTIME      Current timestamp1       Timestamp of the start of RUNSTATS phase
STATSINSERTS       Actual value1            Actual value2
STATSDELETES       Actual value1            Actual value2
STATSUPDATES       Actual value1            Actual value2
STATSMASSDELETE    Actual value1            Actual value2
Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
Table 322 shows how running RUNSTATS UPDATE ALL on an index affects the
INDEXSPACESTATS statistics.
Table 322. Changed INDEXSPACESTATS values during RUNSTATS UPDATE ALL
Column name        During UTILINIT phase    After RUNSTATS phase
STATSLASTTIME      Current timestamp1       Timestamp of the start of RUNSTATS phase
STATSINSERTS       Actual value1            Actual value2
STATSDELETES       Actual value1            Actual value2
STATSMASSDELETE    Actual value1            Actual value2
Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
Table 323 shows how running COPY on a table space or table space partition
affects the TABLESPACESTATS statistics.
Table 323. Changed TABLESPACESTATS values during COPY
Column name          During UTILINIT phase    After COPY phase
COPYLASTTIME         Current timestamp1       Timestamp of the start of COPY phase
COPYUPDATEDPAGES     Actual value1            Actual value2
COPYCHANGES          Actual value1            Actual value2
COPYUPDATELRSN       Actual value1            Actual value3
COPYUPDATETIME       Actual value1            Actual value3
Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
3. This value is null for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
Table 324 shows how running COPY on an index affects the INDEXSPACESTATS
statistics.
Table 324. Changed INDEXSPACESTATS values during COPY
Column name          During UTILINIT phase    After COPY phase
COPYLASTTIME         Current timestamp1       Timestamp of the start of COPY phase
COPYUPDATEDPAGES     Actual value1            Actual value2
COPYCHANGES          Actual value1            Actual value2
COPYUPDATELRSN       Actual value1            Actual value3
COPYUPDATETIME       Actual value1            Actual value3
Notes:
1. DB2 externalizes the current in-memory values.
2. This value is 0 for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
3. This value is null for SHRLEVEL REFERENCE, or the actual value for SHRLEVEL
CHANGE.
If a row still exists in the real-time statistics tables for a dropped table space or
index, and if you create a new object with the same DBID and PSID as the
dropped object, DB2 reinitializes the row before it updates any values in that row.
INSERT: When you perform an INSERT, DB2 increments the insert counters. DB2
keeps separate counters for clustered and unclustered INSERTs.
DELETE: When you perform a DELETE, DB2 increments the delete counters.
| ROLLBACK: When you perform a ROLLBACK, DB2 increments the counter for
| the inverse operation. For example, if two INSERT statements are rolled back, the
| delete counter is incremented by 2.
UPDATE: When you perform an UPDATE, DB2 increments the update counters. If
an update to a partitioning key does not cause rows to move to a new partition,
the counts are accumulated as expected.
Mass DELETE: Performing a mass delete operation on a table space does not cause
DB2 to reset the counter columns in the real-time statistics tables. After a mass
delete operation, the value in a counter column includes the count from a time
prior to the mass delete operation, as well as the count after the mass delete
operation.
DB2 does locking based on the lock size of the DSNRTSDB.DSNRTSTS table space.
DB2 uses cursor stability isolation and CURRENTDATA(YES) when it reads the
statistics tables.
At the beginning of a RUNSTATS job, all data sharing members externalize their
statistics to the real-time statistics tables and reset their in-memory statistics. If all
members cannot externalize their statistics, DB2 sets STATSLASTTIME to null. An
error in gathering and externalizing statistics does not prevent RUNSTATS from
running.
At the beginning of a COPY job, all data sharing members externalize their
statistics to the real-time statistics tables and reset their in-memory statistics. If all
members cannot externalize their statistics, DB2 sets COPYLASTTIME to null. An
error in gathering and externalizing statistics does not prevent COPY from
running.
Utilities that reset page sets to empty can invalidate the in-memory statistics of
other DB2 members. The member that resets a page set notifies the other DB2
members that a page set has been reset to empty, and the in-memory statistics are
invalidated. If the notify process fails, the utility that resets the page set does not
fail. DB2 sets the appropriate timestamp (REORGLASTTIME, STATSLASTTIME, or
COPYLASTTIME) to null in the row for the empty page set to indicate that the
statistics for that page set are unknown.
Statistics accuracy
In general, the real-time statistics are accurate values. However, several factors can
affect the accuracy of the statistics:
v Certain utility restart scenarios
v Certain utility operations that leave indexes in a database restrictive state, such
as RECOVER-pending (RECP)
Always consider the database restrictive state of objects before accepting a utility
recommendation that is based on real-time statistics.
v A DB2 subsystem failure
If you think that some statistics values might be inaccurate, you can correct the
statistics by running REORG, RUNSTATS, or COPY on the objects for which DB2
generated the statistics.
DSNACCOR uses the set of criteria that are shown in “DSNACCOR formulas for
recommending actions” on page 1203 to evaluate table spaces and index spaces. By
default, DSNACCOR evaluates all table spaces and index spaces in the subsystem
that have entries in the real-time statistics tables. However, you can override this
default through input parameters.
DSNACCOR creates and uses declared temporary tables. Therefore, before you can
invoke DSNACCOR, you need to create a TEMP database and segmented table
spaces in the TEMP database. For information about creating TEMP databases and
table spaces, see CREATE DATABASE and CREATE TABLESPACE in Chapter 5 of
DB2 SQL Reference.
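For example, the following statements (a sketch; the names TEMPDB and
TEMPTS are illustrative only) create a TEMP database and a segmented table
space in it:
CREATE DATABASE TEMPDB AS TEMP;
CREATE TABLESPACE TEMPTS IN TEMPDB SEGSIZE 32 BUFFERPOOL BP0;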
You should bind the package for DSNACCOR with isolation UR to avoid lock
contention. You can find the installation steps for DSNACCOR in job DSNTIJSG.
The owner of the package or plan that contains the CALL statement must also
have:
v SELECT authority on the real-time statistics tables
v The DISPLAY system privilege
Figure 168 shows the formula that DSNACCOR uses to recommend a full image
copy on a table space.
Figure 168. DSNACCOR formula for recommending a full image copy on a table space
Figure 169. DSNACCOR formula for recommending a full image copy on an index space
Figure 170 shows the formula that DSNACCOR uses to recommend an incremental
image copy on a table space.
Figure 170. DSNACCOR formula for recommending an incremental image copy on a table
space
Figure 171 shows the formula that DSNACCOR uses to recommend a REORG on a
table space. If the table space is a LOB table space, and CHKLVL=1, the formula
does not include EXTENTS>ExtentLimit.
Figure 172 on page 1205 shows the formula that DSNACCOR uses to recommend a
REORG on an index space.
Figure 173 shows the formula that DSNACCOR uses to recommend RUNSTATS on
a table space.
Figure 174 shows the formula that DSNACCOR uses to recommend RUNSTATS on
an index space.
Figure 175 shows the formula that DSNACCOR uses to warn that too many index
space or table space extents have been used.
EXTENTS>ExtentLimit
Figure 175. DSNACCOR formula for warning that too many data set extents for a table space
or index space are used
Recommendation: If you plan to put many rows in the exception table, create a
nonunique index on DBNAME, NAME, and QUERYTYPE.
After you create the exception table, insert a row for each object for which you
want to include information in the INEXCEPTTABLE column.
Example: Suppose that you want the INEXCEPTTABLE column to contain the
string ’IRRELEVANT’ for table space STAFF in database DSNDB04. You also want
the INEXCEPTTABLE column to contain ’CURRENT’ for table space DSN8S81D in
database DSN8D81A. Execute these INSERT statements:
INSERT INTO DSNACC.EXCEPT_TBL VALUES(’DSNDB04 ’, ’STAFF ’, ’IRRELEVANT’);
INSERT INTO DSNACC.EXCEPT_TBL VALUES(’DSN8D81A’, ’DSN8S81D’, ’CURRENT’);
Example: Suppose that you want to include all rows for database DSNDB04 in the
recommendations result set, except for those rows that contain the string
’IRRELEVANT’ in the INEXCEPTTABLE column. You might include the following
search condition in your Criteria input parameter:
DBNAME=’DSNDB04’ AND INEXCEPTTABLE<>’IRRELEVANT’
PROCEDURE DIVISION.
...
*********************************************************
* SET VALUES FOR DSNACCOR INPUT PARAMETERS: *
* - USE THE CHKLVL PARAMETER TO CAUSE DSNACCOR TO CHECK *
* FOR ORPHANED OBJECTS AND INDEX SPACES WITHOUT *
* TABLE SPACES, BUT INCLUDE THOSE OBJECTS IN THE *
* RECOMMENDATIONS RESULT SET (CHKLVL=1+2+16=19) *
* - USE THE CRITERIA PARAMETER TO CAUSE DSNACCOR TO *
* MAKE RECOMMENDATIONS ONLY FOR OBJECTS IN DATABASES *
* DSN8D81A AND DSN8D81L. *
*********************************************************
DSNACCOR output
If DSNACCOR executes successfully, in addition to the output parameters
described in “DSNACCOR option descriptions” on page 1195, DSNACCOR returns
two result sets.
The first result set contains the results from IFI COMMAND calls that DSNACCOR
makes. Table 326 on page 1211 shows the format of the first result set.
The second result set contains DSNACCOR's recommendations. This result set
contains one or more rows for a table space or index space. A nonpartitioned table
space or nonpartitioning index space can have at most one row in the result set. A
partitioned table space or partitioning index space can have at most one row for
each partition. A table space, index space, or partition has a row in the result set if
both of the following conditions are true:
v If the Criteria input parameter contains a search condition, the search condition
is true for the table space, index space, or partition.
v DSNACCOR recommends at least one action for the table space, index space, or
partition.
Table 327 shows the columns of a result set row.
Table 327. Result set row for second DSNACCOR result set
Column name Data type Description
DBNAME CHAR(8) Name of the database that contains the object.
NAME CHAR(8) Table space or index space name.
PARTITION INTEGER Data set number or partition number.
OBJECTTYPE CHAR(2) DB2 object type:
v TS for a table space
v IX for an index space
| OBJECTSTATUS CHAR(36) Status of the object:
| v ORPHANED, if the object is an index space with no
| corresponding table space, or if the object does not exist
| v If the object is in a restricted state, one of the following
| values:
| – TS=restricted-state, if OBJECTTYPE is TS
| – IX=restricted-state, if OBJECTTYPE is IX
| restricted-state is one of the status codes that appear in
| DISPLAY DATABASE output. See Chapter 2 of DB2
| Command Reference for details.
| v A, if the object is in an advisory state.
| v L, if the object is a logical partition, but not in an advisory
| state.
| v AL, if the object is a logical partition and in an advisory
| state.
IMAGECOPY CHAR(3) COPY recommendation:
v If OBJECTTYPE is TS: FUL (full image copy), INC
(incremental image copy), or NO
v If OBJECTTYPE is IX: YES or NO
RUNSTATS CHAR(3) RUNSTATS recommendation: YES or NO.
EXTENTS CHAR(3) Indicates whether the data sets for the object have exceeded
ExtentLimit: YES or NO.
REORG CHAR(3) REORG recommendation: YES or NO.
The following syntax diagram shows the SQL CALL statement for invoking
WLM_REFRESH. The linkage convention for WLM_REFRESH is GENERAL WITH
NULLS.
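For example, an application might invoke WLM_REFRESH with a CALL
statement like the following sketch (the environment name WLMENV1 and
subsystem ID DB2A are illustrative, and the parameter order is an assumption:
environment name, subsystem ID, then the status-message and return-code
output parameters):
CALL SYSPROC.WLM_REFRESH (’WLMENV1’, ’DB2A’, :MSGAREA, :RC);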
If you use CICS Transaction Server for OS/390 Version 1 Release 3 or later, you can
register your CICS system as a resource manager with recoverable resource
management services (RRMS). When you do that, changes to DB2 databases that
are made by the program that calls DSNACICS and the CICS server program that
DSNACICS invokes are in the same two-phase commit scope. This means that
when the calling program performs an SQL COMMIT or ROLLBACK, DB2 and
RRS inform CICS about the COMMIT or ROLLBACK.
If the CICS server program that DSNACICS invokes accesses DB2 resources, the
server program runs under a separate unit of work from the original unit of work
that calls the stored procedure. This means that the CICS server program might
deadlock with locks that the client program acquires.
The CICS server program that DSNACICS calls runs under the same user ID as
DSNACICS. That user ID depends on the SECURITY parameter that you specify
when you define DSNACICS. See Part 2 of DB2 Installation Guide.
The DSNACICS caller also needs authorization from an external security system,
such as RACF, to use CICS resources. See Part 2 of DB2 Installation Guide.
Table 329 on page 1221 shows the contents of the DSNACICX exit parameter list,
XPL. Member DSNDXPL in data set prefix.SDSNMACS contains an assembler
language mapping macro for XPL. Sample exit routine DSNASCIO in data set
prefix.SDSNSAMP includes a COBOL mapping macro for XPL.
/***********************************************/
/* INDICATOR VARIABLES FOR DSNACICS PARAMETERS */
/***********************************************/
DECLARE 1 IND_VARS,
3 IND_PARM_LEVEL BIN FIXED(15),
3 IND_PGM_NAME BIN FIXED(15),
3 IND_CICS_APPLID BIN FIXED(15),
3 IND_CICS_LEVEL BIN FIXED(15),
3 IND_CONNECT_TYPE BIN FIXED(15),
3 IND_NETNAME BIN FIXED(15),
3 IND_MIRROR_TRANS BIN FIXED(15),
3 IND_COMMAREA BIN FIXED(15),
3 IND_COMMAREA_TOTAL_LEN BIN FIXED(15),
3 IND_SYNC_OPTS BIN FIXED(15),
3 IND_RETCODE BIN FIXED(15),
3 IND_MSG_AREA BIN FIXED(15);
/**************************/
/* LOCAL COPY OF COMMAREA */
/**************************/
DECLARE P1 POINTER;
DECLARE COMMAREA_STG CHAR(130) VARYING;
/**************************************************************/
/* ASSIGN VALUES TO INPUT PARAMETERS PARM_LEVEL, PGM_NAME, */
/* MIRROR_TRANS, COMMAREA, COMMAREA_TOTAL_LEN, AND SYNC_OPTS. */
/* SET THE OTHER INPUT PARAMETERS TO NULL. THE DSNACICX */
/* USER EXIT MUST ASSIGN VALUES FOR THOSE PARAMETERS. */
/**************************************************************/
PARM_LEVEL = 1;
IND_PARM_LEVEL = 0;
PGM_NAME = ’CICSPGM1’;
IND_PGM_NAME = 0 ;
MIRROR_TRANS = ’MIRT’;
IND_MIRROR_TRANS = 0;
P1 = ADDR(COMMAREA_STG);
COMMAREA_INPUT = ’THIS IS THE INPUT FOR CICSPGM1’;
COMMAREA_OUTPUT = ’ ’;
COMMAREA_LEN = LENGTH(COMMAREA_INPUT);
IND_COMMAREA = 0;
SYNC_OPTS = 1;
IND_SYNC_OPTS = 0;
IND_CICS_APPLID= -1;
IND_CICS_LEVEL = -1;
IND_CONNECT_TYPE = -1;
IND_NETNAME = -1;
/*****************************************/
/* INITIALIZE OUTPUT PARAMETERS TO NULL. */
/*****************************************/
IND_RETCODE = -1;
IND_MSG_AREA = -1;
DSNACICS output
DSNACICS places the return code from DSNACICS execution in the return-code
parameter. If the value of the return code is non-zero, DSNACICS puts its own
error messages and any error messages that are generated by CICS and the
DSNACICX user exit routine in the msg-area parameter.
The COMMAREA parameter contains the COMMAREA for the CICS server
program that DSNACICS calls. The COMMAREA parameter has a VARCHAR
type. Therefore, if the server program puts data other than character data in the
COMMAREA, that data can become corrupted by code page translation as it is
passed to the caller. To avoid code page translation, you can change the
COMMAREA parameter in the CREATE PROCEDURE statement for DSNACICS to
VARCHAR(32704) FOR BIT DATA. However, if you do so, the client program
might need to do code page translation on any character data in the COMMAREA
to make it readable.
DSNACICS restrictions
Because DSNACICS uses the distributed program link (DPL) function to invoke
CICS server programs, server programs that you invoke through DSNACICS can
contain only the CICS API commands that the DPL function supports. The list of
supported commands is documented in CICS Transaction Server for z/OS
Application Programming Reference.
DSNACICS debugging
If you receive errors when you call DSNACICS, ask your system administrator to
add a DSNDUMP DD statement in the startup procedure for the address space in
which DSNACICS runs. The DSNDUMP DD statement causes DB2 to generate an
SVC dump whenever DSNACICS issues an error message.
# The owner of the package or plan that contains the CALL statement must also
# have INSERT authority on SYSIBM.USERNAMES.
# CALL SYSPROC.DSNLEUSR ( ..., ReturnCode, MsgArea )
# DSNLEUSR output
# If DSNLEUSR executes successfully, it inserts a row into SYSIBM.USERNAMES
# with encrypted values for the NEWAUTHID and PASSWORD columns and returns
# 0 for the ReturnCode parameter value. If DSNLEUSR does not execute successfully,
# it returns a non-zero value for the ReturnCode value and additional diagnostic
# information for the MsgArea parameter value. See “DSNLEUSR option
# descriptions” on page 1224 for a description of the ReturnCode and MsgArea
# contents.
# The DSN8EXP stored procedure is a sample stored procedure that lets users
# perform an EXPLAIN on an SQL statement without having the authorization to
# execute that SQL statement.
# Environment
# DSN8EXP must run in a WLM-established stored procedure address space.
# Before you can invoke DSN8EXP, table sqlid.PLAN_TABLE must exist. sqlid is the
# value that you specify for the sqlid input parameter when you call DSN8EXP.
# Authorization required
# To execute the CALL DSN8.DSN8EXP statement, the owner of the package or plan
# that contains the CALL statement must have one or more of the following
# privileges on each package that the stored procedure uses:
# v The EXECUTE privilege on the package for DSN8EXP
# v Ownership of the package
# v PACKADM authority for the package collection
# v SYSADM authority
# In addition:
# v The SQL authorization ID of the process in which DSN8EXP is called must have
# the authority to execute SET CURRENT SQLID=sqlid.
# v The SQL authorization ID of the process must also have one of the following
# characteristics:
# – Be the owner of a plan table named PLAN_TABLE
# – Have an alias on a plan table named owner.PLAN_TABLE and have SELECT
# and INSERT privileges on the table
# DSN8EXP output
# If DSN8EXP executes successfully, sqlid.PLAN_TABLE contains the EXPLAIN
# output. A user with SELECT authority on sqlid.PLAN_TABLE can obtain the results
# of the EXPLAIN that was executed by DSN8EXP by executing this query:
# SELECT * FROM sqlid.PLAN_TABLE WHERE QUERYNO=queryno;
# If DSN8EXP does not execute successfully, sqlcode, sqlstate, and error-message contain
# error information.
The most rewarding task associated with a database management system is asking
questions of it and getting answers, the task called end use. Other tasks are also
necessary—defining the parameters of the system, putting the data in place, and so
on. The tasks that are associated with DB2 are grouped into the following major
categories. (Supplemental information relating to all of the following tasks for
new releases of DB2 can be found in DB2 Release Planning Guide.)
Installation: If you are involved with DB2 only to install the system, DB2
Installation Guide might be all you need.
If you will be using data sharing capabilities you also need DB2 Data Sharing:
Planning and Administration, which describes installation considerations for data
sharing.
If you want to set up a DB2 subsystem to meet the requirements of the Common
Criteria, you need DB2 Common Criteria Guide, which contains information that
supersedes other information in the DB2 UDB for z/OS library regarding Common
Criteria.
End use: End users issue SQL statements to retrieve data. They can also insert,
update, or delete data with SQL statements. They might need an introduction to
SQL, detailed instructions for using SPUFI, and an alphabetized reference to the
types of SQL statements. This information is found in DB2 Application Programming
and SQL Guide, and DB2 SQL Reference.
End users can also issue SQL statements through the DB2 Query Management
Facility (QMF) or some other program, and the library for that licensed program
might provide all the instruction or reference material they need. For a list of the
titles in the DB2 QMF library, see the bibliography at the end of this book.
Application programming: Some users access DB2 without knowing it, using
programs that contain SQL statements. DB2 application programmers write those
programs. Because they write SQL statements, they need the same resources that
end users do.
The material needed for writing a host program containing SQL is in DB2
Application Programming and SQL Guide and in DB2 Application Programming Guide
and Reference for Java. The material needed for writing applications that use DB2
ODBC or ODBC to access DB2 servers is in DB2 ODBC Guide and Reference. For
handling errors, see DB2 Codes.
If you will be working in a distributed environment, you will need DB2 Reference
for Remote DRDA Requesters and Servers.
If you will be using the RACF access control module for DB2 authorization
checking, you will need DB2 RACF Access Control Module Guide.
If you are involved with DB2 only to design the database, or plan operational
procedures, you need DB2 Administration Guide. If you also want to carry out your
own plans by creating DB2 objects, granting privileges, running utility jobs, and so
on, you also need:
v DB2 SQL Reference, which describes the SQL statements you use to create, alter,
and drop objects and grant and revoke privileges
v DB2 Utility Guide and Reference, which explains how to run utilities
v DB2 Command Reference, which explains how to run commands
If you will be using data sharing, you need DB2 Data Sharing: Planning and
Administration, which describes how to plan for and implement data sharing.
Diagnosis: Diagnosticians detect and describe errors in the DB2 program. They
might also recommend or apply a remedy. The documentation for this task is in
DB2 Diagnosis Guide and Reference, DB2 Messages, and DB2 Codes.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan
The following paragraph does not apply to the United Kingdom or any other
country where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION ″AS IS″ WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions, therefore, this statement may not apply
to you.
Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
Trademarks
The following terms are trademarks of International Business Machines
Corporation in the United States, other countries, or both:
3090, BookManager, CICS, DataPropagator, DataRefresher, DB2, DB2 Connect,
DB2 Universal Database, DFSMSdfp, DFSMSdss, DFSMShsm, Distributed
Relational Database Architecture, DRDA, Enterprise Storage Server, ES/3090,
ESCON, eServer, FICON, FlashCopy, IBM, IBM Registry, ibm.com, IMS, iSeries,
Language Environment, MQSeries, MVS, MVS/ESA, NetView, OS/390, Parallel
Sysplex, PR/SM, QMF, RACF, RAMAC, Redbooks, RETAIN, RMF, SecureWay,
SQL/DS, System/390, Tivoli, TME, TotalStorage, VTAM, WebSphere,
z/Architecture, z/OS, zSeries
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
because it is usually replaced at a later date by a more authorization ID. A string that can be verified for
permanent correction, such as a program temporary fix connection to DB2 and to which a set of privileges is
(PTF). allowed. It can represent an individual, an
organizational group, or a function, but DB2 does not
APF. Authorized program facility. determine this representation.
API. Application programming interface. authorized program analysis report (APAR). A report
of a problem that is caused by a suspected defect in a
APPL. A VTAM network definition statement that is current release of an IBM supplied program.
used to define DB2 to VTAM as an application program
that uses SNA LU 6.2 protocols. authorized program facility (APF). A facility that
permits the identification of programs that are
application. A program or set of programs that authorized to use restricted functions.
performs a task; for example, a payroll application.
| automatic query rewrite. A process that examines an
application-directed connection. A connection that an | SQL statement that refers to one or more base tables,
application manages using the SQL CONNECT | and, if appropriate, rewrites the query so that it
statement. | performs better. This process can also determine
| whether to rewrite a query so that it refers to one or more materialized query tables that are derived from the source tables.

application plan. The control structure that is produced during the bind process. DB2 uses the application plan to process SQL statements that it encounters during statement execution.

application process. The unit to which resources and locks are allocated. An application process involves the execution of one or more programs.

application programming interface (API). A functional interface that is supplied by the operating system or by a separately orderable licensed program that allows an application program that is written in a high-level language to use specific data or functions of the operating system or licensed program.

application requester. The component on a remote system that generates DRDA requests for data on behalf of an application. An application requester accesses a DB2 database server using the DRDA application-directed protocol.

application server. The target of a request from a remote application. In the DB2 environment, the application server function is provided by the distributed data facility and is used to access DB2 data from remote applications.

archive log. The portion of the DB2 log that contains log records that have been copied from the active log.

ASCII. An encoding scheme that is used to represent strings in many environments, typically on PCs and workstations. Contrast with EBCDIC and Unicode.

| ASID. Address space identifier.

attachment facility. An interface between DB2 and TSO, IMS, CICS, or batch address spaces. An attachment facility allows application programs to access DB2.

attribute. A characteristic of an entity. For example, in database design, the phone number of an employee is one of that employee’s attributes.

auxiliary index. An index on an auxiliary table in which each index entry refers to a LOB.

auxiliary table. A table that stores columns outside the table in which they are defined. Contrast with base table.

B

backout. The process of undoing uncommitted changes that an application process made. This might be necessary in the event of a failure on the part of an application process, or as a result of a deadlock situation.

backward log recovery. The fourth and final phase of restart processing during which DB2 scans the log in a backward direction to apply UNDO log records for all aborted changes.

base table. (1) A table that is created by the SQL CREATE TABLE statement and that holds persistent data. Contrast with result table and temporary table. (2) A table containing a LOB column definition. The actual LOB column data is not stored with the base table. The base table contains a row identifier for each row and an indicator column for each of its LOB columns. Contrast with auxiliary table.

base table space. A table space that contains base tables.

basic predicate. A predicate that compares two values.

basic sequential access method (BSAM). An access method for storing or retrieving data blocks in a continuous sequence, using either a sequential-access or a direct-access device.

# binary large object (BLOB). A sequence of bytes in which the size of the value ranges from 0 bytes to 2 GB−1. Such a string has a CCSID value of 65535.

binary string. A sequence of bytes that is not associated with a CCSID. For example, the BLOB data type is a binary string.

built-in function. A function that DB2 supplies. Contrast with user-defined function.

business dimension. A category of data, such as products or time periods, that an organization might want to analyze.
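To make the base table entry concrete: a base table is what an ordinary CREATE TABLE statement produces. A minimal sketch (the table and column names here are hypothetical, not taken from this guide):

   CREATE TABLE DEPT
     (DEPTNO   CHAR(3)     NOT NULL,
      DEPTNAME VARCHAR(36) NOT NULL);

The rows of DEPT persist until they are deleted or the table is dropped, unlike the rows of a result table or a temporary table.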
catalog. In DB2, a collection of tables that contains descriptions of objects such as tables, views, and indexes.

catalog table. Any table in the DB2 catalog.

CCSID. Coded character set identifier.

CDB. Communications database.

CDRA. Character Data Representation Architecture.

CEC. Central electronic complex. See central processor complex.

| check pending. A state of a table space or partition that prevents its use by some utilities and by some SQL statements because of rows that violate referential constraints, check constraints, or both.

checkpoint. A point at which DB2 records internal status information on the DB2 log; the recovery process uses this information if DB2 abnormally terminates.

| child lock. For explicit hierarchical locking, a lock that is held on either a table, page, row, or a large object (LOB). Each child lock has a parent lock. See also parent lock.

closed application. An application that requires exclusive use of certain statements on certain DB2 objects, so that the objects are managed solely through the application’s external interface.

CLPA. Create link pack area.

| clustering index. An index that determines how rows are physically ordered (clustered) in a table space. If a clustering index on a partitioned table is not a partitioning index, the rows are ordered in cluster sequence within each data partition instead of spanning partitions. Prior to Version 8 of DB2 UDB for z/OS, the partitioning index was required to be the clustering index.

coded character set. A set of unambiguous rules that establish a character set and the one-to-one relationships between the characters of the set and their coded representations.

coded character set identifier (CCSID). A 16-bit number that uniquely identifies a coded representation of graphic characters. It designates an encoding scheme identifier and one or more pairs consisting of a character set identifier and an associated code page identifier.

code page. A set of assignments of characters to code points. In EBCDIC, for example, the character 'A' is assigned code point X'C1', and character 'B' is assigned code point X'C2'. Within a code page, each code point has only one specific meaning.

code point. In CDRA, a unique bit pattern that represents a character in a code page.

# code unit. The fundamental binary width in a computer architecture that is used for representing character data, such as 7 bits, 8 bits, 16 bits, or 32 bits. Depending on the character encoding form that is used, each code point in a coded character set can be represented internally by one or more code units.

coexistence. During migration, the period of time in which two releases exist in the same data sharing group.

cold start. A process by which DB2 restarts without processing any log records. Contrast with warm start.

collection. A group of packages that have the same qualifier.

column. The vertical component of a table. A column has a name and a particular data type (for example, character, decimal, or integer).

# column function. See aggregate function.

"come from" checking. An LU 6.2 security option that defines a list of authorization IDs that are allowed to connect to DB2 from a partner LU.

command. A DB2 operator command or a DSN subcommand. A command is distinct from an SQL statement.

command prefix. A one- to eight-character command identifier. The command prefix distinguishes the command as belonging to an application or subsystem rather than to MVS.

command recognition character (CRC). A character that permits a z/OS console operator or an IMS subsystem user to route DB2 commands to specific DB2 subsystems.

command scope. The scope of command operation in a data sharing group. If a command has member scope, the command displays information only from the one member or affects only non-shared resources that are owned locally by that member. If a command has group scope, the command displays information from all members, affects non-shared resources that are owned locally by all members, displays information on sharable resources, or affects sharable resources.

commit. The operation that ends a unit of work by releasing locks so that the database changes that are made by that unit of work can be perceived by other processes.

commit point. A point in time when data is considered consistent.

committed phase. The second phase of the multisite update process that requests all participants to commit the effects of the logical unit of work.

common service area (CSA). In z/OS, a part of the common area that contains data areas that are addressable by all address spaces.

communications database (CDB). A set of tables in the DB2 catalog that are used to establish conversations with remote database management systems.

comparison operator. A token (such as =, >, or <) that is used to specify a relationship between two values.

composite key. An ordered set of key columns of the same table.

compression dictionary. The dictionary that controls the process of compression and decompression. This dictionary is created from the data in the table space or table space partition.

concurrency. The shared use of resources by more than one application process at the same time.

conditional restart. A DB2 restart that is directed by a user-defined conditional restart control record (CRCR).

connection. In SNA, the existence of a communication path between two partner LUs that allows information
to be exchanged (for example, two DB2 subsystems that are connected and communicating by way of a conversation).

connection context. In SQLJ, a Java object that represents a connection to a data source.

connection declaration clause. In SQLJ, a statement that declares a connection to a data source.

connection handle. The data object containing information that is associated with a connection that DB2 ODBC manages. This includes general status information, transaction status, and diagnostic information.

connection ID. An identifier that is supplied by the attachment facility and that is associated with a specific address space connection.

consistency token. A timestamp that is used to generate the version identifier for an application. See also version.

constant. A language element that specifies an unchanging value. Constants are classified as string constants or numeric constants. Contrast with variable.

constraint. A rule that limits the values that can be inserted, deleted, or updated in a table. See referential constraint, check constraint, and unique constraint.

context. The application’s logical connection to the data source and associated internal DB2 ODBC connection information that allows the application to direct its operations to a data source. A DB2 ODBC context represents a DB2 thread.

contracting conversion. A process that occurs when the length of a converted string is smaller than that of the source string. For example, this process occurs when an EBCDIC mixed-data string that contains DBCS characters is converted to ASCII mixed data; the converted string is shorter because of the removal of the shift codes.

control interval (CI). A fixed-length area of disk in which VSAM stores records and creates distributed free space. Also, in a key-sequenced data set or file, the set of records that an entry in the sequence-set index record points to. The control interval is the unit of information that VSAM transmits to or from disk. A control interval always includes an integral number of physical records.

control interval definition field (CIDF). In VSAM, a field that is located in the 4 bytes at the end of each control interval; it describes the free space, if any, in the control interval.

conversation. Communication, which is based on LU 6.2 or Advanced Program-to-Program Communication (APPC), between an application and a remote transaction program over an SNA logical unit-to-logical unit (LU-LU) session that allows communication while processing a transaction.

coordinator. The system component that coordinates the commit or rollback of a unit of work that includes work that is done on one or more other systems.

| copy pool. A named set of SMS storage groups that contains data that is to be copied collectively. A copy pool is an SMS construct that lets you define which storage groups are to be copied by using FlashCopy functions. HSM determines which volumes belong to a copy pool.

| copy target. A named set of SMS storage groups that are to be used as containers for copy pool volume copies. A copy target is an SMS construct that lets you define which storage groups are to be used as containers for volumes that are copied by using FlashCopy functions.

| copy version. A point-in-time FlashCopy copy that is managed by HSM. Each copy pool has a version parameter that specifies how many copy versions are maintained on disk.

correlated columns. A relationship between the value of one column and the value of another column.

correlated subquery. A subquery (part of a WHERE or HAVING clause) that is applied to a row or group of rows of a table or view that is named in an outer subselect statement.

correlation ID. An identifier that is associated with a specific thread. In TSO, it is either an authorization ID or the job name.

correlation name. An identifier that designates a table, a view, or individual rows of a table or view within a single SQL statement. It can be defined in any FROM clause or in the first clause of an UPDATE or DELETE statement.

cost category. A category into which DB2 places cost estimates for SQL statements at the time the statement is bound. A cost estimate can be placed in either of the following cost categories:
• A: Indicates that DB2 had enough information to make a cost estimate without using default values.
• B: Indicates that some condition exists for which DB2 was forced to use default values for its estimate.
The cost category is externalized in the COST_CATEGORY column of the DSN_STATEMNT_TABLE when a statement is explained.

coupling facility. A special PR/SM LPAR logical partition that runs the coupling facility control program and provides high-speed caching, list processing, and locking functions in a Parallel Sysplex.
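The correlated subquery and correlation name entries can be illustrated together. In this sketch (the EMP table and its columns are hypothetical), X is a correlation name that lets the inner subquery refer to the row that the outer SELECT is currently processing:

   SELECT EMPNO, SALARY
     FROM EMP X
     WHERE SALARY > (SELECT AVG(SALARY)
                       FROM EMP
                       WHERE WORKDEPT = X.WORKDEPT);

Conceptually, the subquery is re-evaluated for each department value X.WORKDEPT that the outer query supplies.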
| coupling facility resource management. A component of z/OS that provides the services to manage coupling facility resources in a Parallel Sysplex. This management includes the enforcement of CFRM policies to ensure that the coupling facility and structure requirements are satisfied.

CP. Central processor.

CPC. Central processor complex.

C++ member. A data object or function in a structure, union, or class.

C++ member function. An operator or function that is declared as a member of a class. A member function has access to the private and protected data members and to the member functions of objects in its class. Member functions are also called methods.

C++ object. (1) A region of storage. An object is created when a variable is defined or a new function is invoked. (2) An instance of a class.

CRC. Command recognition character.

CRCR. Conditional restart control record. See also conditional restart.

create link pack area (CLPA). An option that is used during IPL to initialize the link pack pageable area.

created temporary table. A table that holds temporary data and is defined with the SQL statement CREATE GLOBAL TEMPORARY TABLE. Information about created temporary tables is stored in the DB2 catalog, so this kind of table is persistent and can be shared across application processes. Contrast with declared temporary table. See also temporary table.

cross-memory linkage. A method for invoking a program in a different address space. The invocation is synchronous with respect to the caller.

cross-system coupling facility (XCF). A component of z/OS that provides functions to support cooperation between authorized programs that run within a Sysplex.

cross-system extended services (XES). A set of z/OS services that allow multiple instances of an application or subsystem, running on different systems in a Sysplex environment, to implement high-performance, high-availability data sharing by using a coupling facility.

CS. Cursor stability.

CSA. Common service area.

CT. Cursor table.

current data. Data within a host structure that is current with (identical to) the data within the base table.

current SQL ID. An ID that, at a single point in time, holds the privileges that are exercised when certain dynamic SQL statements run. The current SQL ID can be a primary authorization ID or a secondary authorization ID.

current status rebuild. The second phase of restart processing during which the status of the subsystem is reconstructed from information on the log.

cursor. A named control structure that an application program uses to point to a single row or multiple rows within some ordered set of rows of a result table. A cursor can be used to retrieve, update, or delete rows from a result table.

cursor sensitivity. The degree to which database updates are visible to the subsequent FETCH statements in a cursor. A cursor can be sensitive to changes that are made with positioned update and delete statements specifying the name of that cursor. A cursor can also be sensitive to changes that are made with searched update or delete statements, or with cursors other than this cursor. These changes can be made by this application process or by another application process.

cursor stability (CS). The isolation level that provides maximum concurrency without the ability to read uncommitted data. With cursor stability, a unit of work holds locks only on its uncommitted changes and on the current row of each of its cursors.

cursor table (CT). The copy of the skeleton cursor table that is used by an executing application process.

cycle. A set of tables that can be ordered so that each table is a descendent of the one before it, and the first table is a descendent of the last table. A self-referencing table is a cycle with a single member.

D

| DAD. See Document access definition.

| disk. A direct-access storage device that records data magnetically.

database. A collection of tables, or a collection of table spaces and index spaces.

database access thread. A thread that accesses data at the local subsystem on behalf of a remote subsystem.

database administrator (DBA). An individual who is responsible for designing, developing, operating, safeguarding, maintaining, and using a database.
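A minimal embedded-SQL sketch of the cursor entry (the host-variable and table names are hypothetical):

   EXEC SQL DECLARE C1 CURSOR FOR
     SELECT EMPNO, LASTNAME
       FROM EMP
       WHERE WORKDEPT = :DEPTNO;

   EXEC SQL OPEN C1;
   EXEC SQL FETCH C1 INTO :EMPNO, :LASTNAME;
   EXEC SQL CLOSE C1;

Each FETCH positions the cursor on one row of the result table and copies that row's column values into host variables.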
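To illustrate the command and command prefix entries above: an operator directs a DB2 command to a particular subsystem by typing that subsystem's command prefix in front of the command. A sketch, assuming a hypothetical subsystem whose command prefix is -DB2A:

   -DB2A DISPLAY THREAD(*)

Here DISPLAY THREAD is the DB2 command, and the -DB2A prefix routes it to the intended subsystem rather than to another application or subsystem.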
| database alias. The name of the target server if different from the location name. The database alias name is used to provide the name of the database server as it is known to the network. When a database alias name is defined, the location name is used by the application to reference the server, but the database alias name is used to identify the database server to be accessed. Any fully qualified object names within any SQL statements are not modified and are sent unchanged to the database server.

| database descriptor (DBD). An internal representation of a DB2 database definition, which reflects the data definition that is in the DB2 catalog. The objects that are defined in a database descriptor are table spaces, tables, indexes, index spaces, relationships, check constraints, and triggers. A DBD also contains information about accessing tables in the database.

database exception status. An indication that something is wrong with a database. All members of a data sharing group must know and share the exception status of databases.

| database identifier (DBID). An internal identifier of the database.

database management system (DBMS). A software system that controls the creation, organization, and modification of a database and the access to the data that is stored within it.

database request module (DBRM). A data set member that is created by the DB2 precompiler and that contains information about SQL statements. DBRMs are used in the bind process.

database server. The target of a request from a local application or an intermediate database server. In the DB2 environment, the database server function is provided by the distributed data facility to access DB2 data from local applications, or from a remote database server that acts as an intermediate database server.

data currency. The state in which data that is retrieved into a host variable in your program is a copy of data in the base table.

data definition name (ddname). The name of a data definition (DD) statement that corresponds to a data control block containing the same name.

data dictionary. A repository of information about an organization’s application programs, databases, logical data models, users, and authorizations. A data dictionary can be manual or automated.

data-driven business rules. Constraints on particular data values that exist as a result of requirements of the business.

Data Language/I (DL/I). The IMS data manipulation language; a common high-level interface between a user application and IMS.

data mart. A small data warehouse that applies to a single department or team. See also data warehouse.

data mining. The process of collecting critical business information from a data warehouse, correlating it, and uncovering associations, patterns, and trends.

data partition. A VSAM data set that is contained within a partitioned table space.

data-partitioned secondary index (DPSI). A secondary index that is partitioned. The index is partitioned according to the underlying data.

data sharing. The ability of two or more DB2 subsystems to directly access and change a single set of data.

data sharing group. A collection of one or more DB2 subsystems that directly access and change the same data while maintaining data integrity.

data sharing member. A DB2 subsystem that is assigned by XCF services to a data sharing group.

data source. A local or remote relational or non-relational data manager that is capable of supporting data access via an ODBC driver that supports the ODBC APIs. In the case of DB2 UDB for z/OS, the data sources are always relational database managers.

| data space. In releases prior to DB2 UDB for z/OS, Version 8, a range of up to 2 GB of contiguous virtual storage addresses that a program can directly manipulate. Unlike an address space, a data space can hold only data; it does not contain common areas, system data, or programs.

data type. An attribute of columns, literals, host variables, special registers, and the results of functions and expressions.

data warehouse. A system that provides critical business information to an organization. The data warehouse system cleanses the data for accuracy and currency, and then presents the data to decision makers so that they can interpret and use it effectively and efficiently.

date. A three-part value that designates a day, month, and year.

date duration. A decimal integer that represents a number of years, months, and days.

datetime value. A value of the data type DATE, TIME, or TIMESTAMP.

DBCLOB. Double-byte character large object.

DBCS. Double-byte character set.

DB2I Kanji Feature. The tape that contains the panels and jobs that allow a site to display DB2I panels in Kanji.

DB2 PM. DB2 Performance Monitor.

DB2 thread. The DB2 structure that describes an application’s connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services.

DCLGEN. Declarations generator.

DDF. Distributed data facility.

ddname. Data definition name.

deadlock. Unresolvable contention for the use of a resource, such as a table or an index.

declarations generator (DCLGEN). A subcomponent of DB2 that generates SQL table declarations and COBOL, C, or PL/I data structure declarations that conform to the table. The declarations are generated from DB2 system catalog information. DCLGEN is also a DSN subcommand.

declared temporary table. A table that holds temporary data and is defined with the SQL statement DECLARE GLOBAL TEMPORARY TABLE. Information about declared temporary tables is not stored in the DB2 catalog, so this kind of table is not persistent and can be used only by the application process that issued the DECLARE statement. Contrast with created temporary table. See also temporary table.

delete rule. The rule that tells DB2 what to do to a dependent row when a parent row is deleted. For each relationship, the rule might be CASCADE, RESTRICT, SET NULL, or NO ACTION.

delete trigger. A trigger that is defined with the triggering SQL operation DELETE.

delimited identifier. A sequence of characters that are enclosed within double quotation marks ("). The sequence must consist of a letter followed by zero or more characters, each of which is a letter, digit, or the underscore character (_).

delimiter token. A string constant, a delimited identifier, an operator symbol, or any of the special characters that are shown in DB2 syntax diagrams.

denormalization. A key step in the task of building a physical relational database design. Denormalization is the intentional duplication of columns in multiple tables, and the consequence is increased data redundancy. Denormalization is sometimes necessary to minimize performance problems. Contrast with normalization.

dependent. An object (row, table, or table space) that has at least one parent. The object is also said to be a dependent (row, table, or table space) of its parent. See also parent row, parent table, parent table space.
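The created temporary table and declared temporary table entries differ mainly in where the definition lives. A sketch, with hypothetical table and column names:

   -- Definition is stored in the DB2 catalog and can be shared:
   CREATE GLOBAL TEMPORARY TABLE SUMMARYTBL
     (ACCTID INTEGER NOT NULL,
      TOTAL  DECIMAL(11,2));

   -- Definition exists only for the declaring application process:
   DECLARE GLOBAL TEMPORARY TABLE WORKTBL
     (ACCTID INTEGER NOT NULL,
      TOTAL  DECIMAL(11,2));

In both cases the rows are temporary; only the persistence and visibility of the definition differ.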
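A sketch that ties the delete rule entry above to the foreign key entry later in this glossary, assuming a hypothetical parent table DEPT whose primary key is DEPTNO:

   CREATE TABLE EMP
     (EMPNO    CHAR(6) NOT NULL PRIMARY KEY,
      WORKDEPT CHAR(3),
      FOREIGN KEY (WORKDEPT) REFERENCES DEPT (DEPTNO)
        ON DELETE SET NULL);

WORKDEPT is the foreign key, and ON DELETE SET NULL is the delete rule: when a DEPT row is deleted, DB2 sets WORKDEPT to null in the dependent EMP rows.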
dependent row. A row that contains a foreign key that matches the value of a primary key in the parent row.

dependent table. A table that is a dependent in at least one referential constraint.

DES-based authenticator. An authenticator that is generated using the DES algorithm.

descendent. An object that is a dependent of an object or is the dependent of a descendent of an object.

descendent row. A row that is dependent on another row, or a row that is a descendent of a dependent row.

descendent table. A table that is a dependent of another table, or a table that is a descendent of a dependent table.

deterministic function. A user-defined function whose result is dependent on the values of the input arguments. That is, successive invocations with the same input values produce the same answer. Sometimes referred to as a not-variant function. Contrast this with a nondeterministic function (sometimes called a variant function), which might not always produce the same result for the same inputs.

DFP. Data Facility Product (in z/OS).

DFSMS. Data Facility Storage Management Subsystem (in z/OS). Also called Storage Management Subsystem (SMS).

| DFSMSdss. The data set services (dss) component of DFSMS (in z/OS).

| DFSMShsm. The hierarchical storage manager (hsm) component of DFSMS (in z/OS).

dimension. A data category such as time, products, or markets. The elements of a dimension are referred to as members. Dimensions offer a very concise, intuitive way of organizing and selecting data for retrieval, exploration, and analysis. See also dimension table.

dimension table. The representation of a dimension in a star schema. Each row in a dimension table represents all of the attributes for a particular member of the dimension. See also dimension, star schema, and star join.

directory. The DB2 system database that contains internal objects such as database descriptors and skeleton cursor tables.

# distinct predicate. In SQL, a predicate that ensures that two row values are not equal, and that both row values are not null.

distinct type. A user-defined data type that is internally represented as an existing type (its source type), but is considered to be a separate and incompatible type for semantic purposes.

distributed data. Data that resides on a DBMS other than the local system.

distributed data facility (DDF). A set of DB2 components through which DB2 communicates with another relational database management system.

Distributed Relational Database Architecture (DRDA). A connection protocol for distributed relational database processing that is used by IBM’s relational database products. DRDA includes protocols for communication between an application and a remote relational database management system, and for communication between relational database management systems. See also DRDA access.

DL/I. Data Language/I.

DNS. Domain name server.

| document access definition (DAD). Used to define the indexing scheme for an XML column or the mapping scheme of an XML collection. It can be used to enable an XML Extender column of an XML collection, which is XML formatted.

domain. The set of valid values for an attribute.

domain name. The name by which TCP/IP applications refer to a TCP/IP host within a TCP/IP network.

domain name server (DNS). A special TCP/IP network server that manages a distributed directory that is used to map TCP/IP host names to IP addresses.

double-byte character large object (DBCLOB). A sequence of bytes representing double-byte characters where the size of the values can be up to 2 GB. In general, DBCLOB values are used whenever a double-byte character string might exceed the limits of the VARGRAPHIC type.

double-byte character set (DBCS). A set of characters, which are used by national languages such as Japanese and Chinese, that have more symbols than can be represented by a single byte. Each character is 2 bytes in length. Contrast with single-byte character set and multibyte character set.

double-precision floating point number. A 64-bit approximate representation of a real number.

downstream. The set of nodes in the syncpoint tree that is connected to the local DBMS as a participant in the execution of a two-phase commit.

| DPSI. Data-partitioned secondary index.

drain. The act of acquiring a locked resource by quiescing access to that object.

drain lock. A lock on a claim class that prevents a claim from occurring.
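A sketch of the distinct type entry (the type, table, and column names are hypothetical):

   CREATE DISTINCT TYPE US_DOLLARS AS DECIMAL(9,2);

   CREATE TABLE INVOICE
     (INVNO  INTEGER NOT NULL,
      AMOUNT US_DOLLARS);

Although US_DOLLARS is stored as DECIMAL(9,2), DB2 treats it as a separate, incompatible type, so an AMOUNT value cannot be compared with an ordinary DECIMAL value without casting.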
execution context. In SQLJ, a Java object that can be used to control the execution of SQL statements.

exit routine. A user-written (or IBM-provided default) program that receives control from DB2 to perform specific functions. Exit routines run as extensions of DB2.

expanding conversion. A process that occurs when the length of a converted string is greater than that of the source string. For example, this process occurs when an ASCII mixed-data string that contains DBCS characters is converted to an EBCDIC mixed-data string; the converted string is longer because of the addition of shift codes.

explicit hierarchical locking. Locking that is used to make the parent-child relationship between resources known to IRLM. This kind of locking avoids global locking overhead when no inter-DB2 interest exists on a resource.

exposed name. A correlation name or a table or view name for which a correlation name is not specified. Names that are specified in a FROM clause are exposed or non-exposed.

expression. An operand or a collection of operators and operands that yields a single value.

extended recovery facility (XRF). A facility that minimizes the effect of failures in z/OS, VTAM, the host processor, or high-availability applications during sessions between high-availability applications and designated terminals. This facility provides an alternative subsystem to take over sessions from the failing subsystem.

Extensible Markup Language (XML). A standard metalanguage for defining markup languages that is a subset of Standardized General Markup Language (SGML). The less complex nature of XML makes it easier to write applications that handle document types, to author and manage structured information, and to transmit and share structured information across diverse computing environments.

external function. A function for which the body is written in a programming language that takes scalar argument values and produces a scalar result for each invocation. Contrast with sourced function, built-in function, and SQL function.

external procedure. A user-written application program, written in a programming language, that can be invoked with the SQL CALL statement. Contrast with SQL procedure.

external routine. A user-defined function or stored procedure that is based on code that is written in an external programming language.

external subsystem module table (ESMT). In IMS, the table that specifies which attachment modules must be loaded.

F

failed member state. A state of a member of a data sharing group. When a member fails, the XCF permanently records the failed member state. This state usually means that the member’s task, address space, or z/OS system terminated before the state changed from active to quiesced.

fallback. The process of returning to a previous release of DB2 after attempting or completing migration to a current release.

false global lock contention. A contention indication from the coupling facility when multiple lock names are hashed to the same indicator and when no real contention exists.

fan set. A direct physical access path to data, which is provided by an index, hash, or link; a fan set is the means by which the data manager supports the ordering of data.

federated database. The combination of a DB2 Universal Database server (in Linux, UNIX, and Windows environments) and multiple data sources to which the server sends queries. In a federated database system, a client application can use a single SQL statement to join data that is distributed across multiple database management systems and can view the data as if it were local.

fetch orientation. The specification of the desired placement of the cursor as part of a FETCH statement (for example, BEFORE, AFTER, NEXT, PRIOR, CURRENT, FIRST, LAST, ABSOLUTE, and RELATIVE).

field procedure. A user-written exit routine that is designed to receive a single value and transform (encode or decode) it in any way the user can specify.

filter factor. A number between zero and one that estimates the proportion of rows in a table for which a predicate is true.

fixed-length string. A character or graphic string whose length is specified and cannot be changed. Contrast with varying-length string.

FlashCopy. A function on the IBM Enterprise Storage Server that can create a point-in-time copy of data while an application is running.

foreign key. A column or set of columns in a dependent table of a constraint relationship. The key must have the same number of columns, with the same descriptions, as the primary key of the parent table. Each foreign key value must either match a parent key value in the related parent table or be null.

| forest. An ordered set of subtrees of XML nodes.

forget. In a two-phase commit operation, (1) the vote that is sent to the prepare phase when the participant has not modified any data. The forget vote allows a participant to release locks and forget about the logical unit of work. This is also referred to as the read-only vote. (2) The response to the committed request in the second phase of the operation.

forward log recovery. The third phase of restart processing during which DB2 processes the log in a forward direction to apply all REDO log records.

free space. The total amount of unused space in a page; that is, the space that is not used to store records or control information is free space.

full outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves the unmatched rows of both tables. See also join.

fullselect. A subselect, a values-clause, or a number of both that are combined by set operators. Fullselect specifies a result table. If UNION is not used, the result of the fullselect is the result of the specified subselect.

| fully escaped mapping. A mapping from an SQL identifier to an XML name when the SQL identifier is a column name.

# function. A mapping, which is embodied as a program (the function body) that is invocable by means of zero or more input values (arguments) to a single value (the result). See also aggregate function and scalar function.
# Functions can be user-defined, built-in, or generated by DB2. (See also built-in function, cast function, external function, sourced function, SQL function, and user-defined function.)

function definer. The authorization ID of the owner of the schema of the function that is specified in the CREATE FUNCTION statement.

function implementer. The authorization ID of the owner of the function program and function package.

function package. A package that results from binding the DBRM for a function program.

function package owner. The authorization ID of the user who binds the function program’s DBRM into a function package.

function resolution. The process, internal to the DBMS, by which a function invocation is bound to a particular function instance. This process uses the function name, the data types of the arguments, and a list of the applicable schema names (called the SQL path) to make the selection. This process is sometimes called function selection.

function selection. See function resolution.

function signature. The logical concatenation of a fully qualified function name with the data types of all of its parameters.

G

GB. Gigabyte (1 073 741 824 bytes).

GBP. Group buffer pool.

GBP-dependent. The status of a page set or page set partition that is dependent on the group buffer pool. Either read/write interest is active among DB2 subsystems for this page set, or the page set has changed pages in the group buffer pool that have not yet been cast out to disk.

generalized trace facility (GTF). A z/OS service program that records significant system events such as I/O interrupts, SVC interrupts, program interrupts, or external interrupts.

generic resource name. A name that VTAM uses to represent several application programs that provide the same function in order to handle session distribution and balancing in a Sysplex environment.

getpage. An operation in which DB2 accesses a data page.

global lock. A lock that provides concurrency control within and among DB2 subsystems. The scope of the lock is across all DB2 subsystems of a data sharing group.

global lock contention. Conflicts on locking requests between different DB2 members of a data sharing group when those members are trying to serialize shared resources.

governor. See resource limit facility.

graphic string. A sequence of DBCS characters.

gross lock. The shared, update, or exclusive mode locks on a table, partition, or table space.

group buffer pool (GBP). A coupling facility cache structure that is used by a data sharing group to cache data and to ensure that the data is consistent for all members.

group buffer pool duplexing. The ability to write data to two instances of a group buffer pool structure: a primary group buffer pool and a secondary group buffer
pool. z/OS publications refer to these instances as the "old" (for primary) and "new" (for secondary) structures.

group level. The release level of a data sharing group, which is established when the first member migrates to a new release.

group name. The z/OS XCF identifier for a data sharing group.

group restart. A restart of at least one member of a data sharing group after the loss of either locks or the shared communications area.

GTF. Generalized trace facility.

H

heuristic decision. A decision that forces indoubt resolution at a participant by means other than automatic resynchronization between coordinator and participant.

| hole. A row of the result table that cannot be accessed because of a delete or an update that has been performed on the row. See also delete hole and update hole.

home address space. The area of storage that z/OS currently recognizes as dispatched.

host. The set of programs and resources that are available on a given TCP/IP instance.

host expression. A Java variable or expression that is referenced by SQL clauses in an SQLJ application program.

host identifier. A name that is declared in the host program.

host language. A programming language in which you can embed SQL statements.

host program. An application program that is written in a host language and that contains embedded SQL statements.

host structure. In an application program, a structure that is referenced by embedded SQL statements.

host variable. In an application program, an application variable that is referenced by embedded SQL statements.

| host variable array. An array of elements, each of which corresponds to a value for a column. The dimension of the array determines the maximum number of rows for which the array can be used.

HSM. Hierarchical storage manager.

HTML. Hypertext Markup Language, a standard method for presenting Web data to users.

I

identify. A request that an attachment service program in an address space that is separate from DB2 issues through the z/OS subsystem interface to inform DB2 of its existence and to initiate the process of becoming connected to DB2.

identity column. A column that provides a way for DB2 to automatically generate a numeric value for each row. The generated values are unique if cycling is not used. Identity columns are defined with the AS IDENTITY clause. Uniqueness of values can be ensured by defining a unique index that contains only the identity column. A table can have no more than one identity column.

IFCID. Instrumentation facility component identifier.

IFI. Instrumentation facility interface.

IFI call. An invocation of the instrumentation facility interface (IFI) by means of one of its defined functions.

IFP. IMS Fast Path.

image copy. An exact reproduction of all or part of a table space. DB2 provides utility programs to make full image copies (to copy the entire table space) or incremental image copies (to copy only those pages that have been modified since the last image copy).

implied forget. In the presumed-abort protocol, an implied response of forget to the second-phase committed request from the coordinator. The response is implied when the participant responds to any subsequent request from the coordinator.

IMS. Information Management System.

IMS attachment facility. A DB2 subcomponent that uses z/OS subsystem interface (SSI) protocols and cross-memory linkage to process requests from IMS to DB2 and to coordinate resource commitment.

IMS DB. Information Management System Database.

IMS TM. Information Management System Transaction Manager.

in-abort. A status of a unit of recovery. If DB2 fails after a unit of recovery begins to be rolled back, but before the process is completed, DB2 continues to back out the changes during restart.

in-commit. A status of a unit of recovery. If DB2 fails after beginning its phase 2 commit processing, it "knows," when restarted, that changes made to data are consistent. Such units of recovery are termed in-commit.

independent. An object (row, table, or table space) that is neither a parent nor a dependent of another object.

index. A set of pointers that are logically ordered by the values of a key. Indexes can provide faster access to data and can enforce uniqueness on the rows in a table.

| index-controlled partitioning. A type of partitioning in which partition boundaries for a partitioned table are controlled by values that are specified on the CREATE INDEX statement. Partition limits are saved in the LIMITKEY column of the SYSIBM.SYSINDEXPART catalog table.

index key. The set of columns in a table that is used to determine the order of index entries.

index partition. A VSAM data set that is contained within a partitioning index space.

index space. A page set that is used to store the entries of one index.

indicator column. A 4-byte value that is stored in a base table in place of a LOB column.

indicator variable. A variable that is used to represent the null value in an application program. If the value for the selected column is null, a negative value is placed in the indicator variable.

indoubt. A status of a unit of recovery. If DB2 fails after it has finished its phase 1 commit processing and before it has started phase 2, only the commit coordinator knows if an individual unit of recovery is to be committed or rolled back. At emergency restart, if DB2 lacks the information it needs to make this decision, the status of the unit of recovery is indoubt until DB2 obtains this information from the coordinator. More than one unit of recovery can be indoubt at restart.

indoubt resolution. The process of resolving the status of an indoubt logical unit of work to either the committed or the rollback state.

inflight. A status of a unit of recovery. If DB2 fails before its unit of recovery completes phase 1 of the commit process, it merely backs out the updates of its unit of recovery at restart. These units of recovery are termed inflight.

inheritance. The passing downstream of class resources or attributes from a parent class in the class hierarchy to a child class.

initialization file. For DB2 ODBC applications, a file containing values that can be set to adjust the performance of the database manager.

inline copy. A copy that is produced by the LOAD or REORG utility. The data set that the inline copy produces is logically equivalent to a full image copy that is produced by running the COPY utility with read-only access (SHRLEVEL REFERENCE).

inner join. The result of a join operation that includes only the matched rows of both tables that are being joined. See also join.

inoperative package. A package that cannot be used because one or more user-defined functions or procedures that the package depends on were dropped. Such a package must be explicitly rebound. Contrast with invalid package.

| insensitive cursor. A cursor that is not sensitive to inserts, updates, or deletes that are made to the underlying rows of a result table after the result table has been materialized.

insert trigger. A trigger that is defined with the triggering SQL operation INSERT.

install. The process of preparing a DB2 subsystem to operate as a z/OS subsystem.

installation verification scenario. A sequence of operations that exercises the main DB2 functions and tests whether DB2 was correctly installed.

instrumentation facility component identifier (IFCID). A value that names and identifies a trace record of an event that can be traced. As a parameter on the START TRACE and MODIFY TRACE commands, it specifies that the corresponding event is to be traced.
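A sketch of the identity column entry (the table and column names are hypothetical):

   CREATE TABLE ORDERS
     (ORDERNO INTEGER GENERATED ALWAYS AS IDENTITY,
      CUSTNO  INTEGER NOT NULL);

DB2 generates a numeric ORDERNO value for each inserted row; defining a unique index on ORDERNO alone would guarantee that the values are unique.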
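The indicator variable entry in embedded-SQL form (hypothetical host variables; here :BONUSIND is the indicator variable for :BONUS):

   EXEC SQL SELECT SALARY, BONUS
     INTO :SAL, :BONUS :BONUSIND
     FROM EMP
     WHERE EMPNO = :EMPNO;

If BONUS is null for the selected row, DB2 places a negative value in BONUSIND, and the contents of :BONUS are not meaningful.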
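A sketch of the fullselect entry above, combining two subselects with a set operator (hypothetical tables):

   SELECT EMPNO FROM EMP
   UNION
   SELECT MGRNO FROM DEPT;

If the UNION were removed, the fullselect would simply be the result of the first subselect.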
Interactive System Productivity Facility (ISPF). An IBM licensed program that provides interactive dialog services in a z/OS environment.

inter-DB2 R/W interest. A property of data in a table space, index, or partition that has been opened by more than one member of a data sharing group and that has been opened for writing by at least one of those members.

intermediate database server. The target of a request from a local application or a remote application requester that is forwarded to another database server. In the DB2 environment, the remote request is forwarded transparently to another database server if the object that is referenced by a three-part name does not reference the local location.

internationalization. The support for an encoding scheme that is able to represent the code points of characters from many different geographies and languages. To support all geographies, the Unicode standard requires more than 1 byte to represent a single character. See also Unicode.

internal resource lock manager (IRLM). A z/OS subsystem that DB2 uses to control communication and database locking.

| International Organization for Standardization. An international body charged with creating standards to facilitate the exchange of goods and services as well as cooperation in intellectual, scientific, technological, and economic activity.

invalid package. A package that depends on an object (other than a user-defined function) that is dropped. Such a package is implicitly rebound on invocation. Contrast with inoperative package.

invariant character set. (1) A character set, such as the syntactic character set, whose code point assignments do not change from code page to code page. (2) A minimum set of characters that is available as part of all character sets.

ISO. International Organization for Standardization.

isolation level. The degree to which a unit of work is isolated from the updating operations of other units of work. See also cursor stability, read stability, repeatable read, and uncommitted read.

iterator. In SQLJ, an object that contains the result set of a query. An iterator is equivalent to a cursor in other host languages.

iterator declaration clause. In SQLJ, a statement that generates an iterator declaration class. An iterator is an object of an iterator declaration class.

J

| Japanese Industrial Standard. An encoding scheme that is used to process Japanese characters.

| JAR. Java Archive.

Java Archive (JAR). A file format that is used for aggregating many files into a single file.

JCL. Job control language.

JDBC. A Sun Microsystems database application programming interface (API) for Java that allows programs to access database management systems by using callable SQL. JDBC does not require the use of an SQL preprocessor. In addition, JDBC provides an architecture that lets users add modules called database drivers, which link the application to their choice of database management systems at run time.

JES. Job Entry Subsystem.

JIS. Japanese Industrial Standard.

job control language (JCL). A control language that is used to identify a job to an operating system and to describe the job’s requirements.

Job Entry Subsystem (JES). An IBM licensed program that receives jobs into the system and processes all output data that is produced by the jobs.

join. A relational operation that allows retrieval of data from two or more tables based on matching column values. See also equijoin, full outer join, inner join, left outer join, outer join, and right outer join.

K

Kerberos. A network authentication protocol that is designed to provide strong authentication for client/server applications by using secret-key cryptography.

Kerberos ticket. A transparent application mechanism that transmits the identity of an initiating principal to its target. A simple ticket contains the principal’s identity, a session key, a timestamp, and other information, which is sealed using the target’s secret key.

key. A column or an ordered collection of columns that is identified in the description of a table, index, or referential constraint. The same column can be part of more than one key.

key-sequenced data set (KSDS). A VSAM file or data set whose records are loaded in key sequence and controlled by an index.

keyword. In SQL, a name that identifies an option that is used in an SQL statement.

KSDS. Key-sequenced data set.

L

latch. A DB2 internal mechanism for controlling concurrent events or the use of system resources.

LCID. Log control interval definition.

LDS. Linear data set.

leaf page. A page that contains pairs of keys and RIDs and that points to actual data. Contrast with nonleaf page.

left outer join. The result of a join operation that includes the matched rows of both tables that are being joined, and that preserves the unmatched rows of the first table. See also join.

limit key. The highest value of the index key for a partition.

linear data set (LDS). A VSAM data set that contains data but no control information. A linear data set can be accessed as a byte-addressable string in virtual storage.

linkage editor. A computer program for creating load modules from one or more object modules or load modules by resolving cross references among the modules and, if necessary, adjusting addresses.

link-edit. The action of creating a loadable computer program using a linkage editor.

list. A type of object, which DB2 utilities can process, that identifies multiple table spaces, multiple index spaces, or both. A list is defined with the LISTDEF utility control statement.

list structure. A coupling facility structure that lets data be shared and manipulated as elements of a queue.

LLE. Load list element.

L-lock. Logical lock.

LOB lock. A lock on a LOB value.

LOB table space. A table space in an auxiliary table that contains all the data for a particular LOB column in the related base table.

local. A way of referring to any object that the local DB2 subsystem maintains. A local table, for example, is a table that is maintained by the local DB2 subsystem. Contrast with remote.

locale. The definition of a subset of a user’s environment that combines a CCSID and characters that are defined for a specific language and country.

local lock. A lock that provides intra-DB2 concurrency control, but not inter-DB2 concurrency control; that is, its scope is a single DB2.

local subsystem. The unique relational DBMS to which the user or application program is directly connected (in the case of DB2, by one of the DB2 attachment facilities).

| location. The unique name of a database server. An application uses the location name to access a DB2
database server. A database alias can be used to override the location name when accessing a remote server.

| location alias. Another name by which a database server identifies itself in the network. Applications can use this name to access a DB2 database server.

lock. A means of controlling concurrent events or access to data. DB2 locking is performed by the IRLM.

lock duration. The interval over which a DB2 lock is held.

lock escalation. The promotion of a lock from a row, page, or LOB lock to a table space lock because the number of page locks that are concurrently held on a given resource exceeds a preset limit.

locking. The process by which the integrity of data is ensured. Locking prevents concurrent users from accessing inconsistent data.

lock mode. A representation for the type of access that concurrently running programs can have to a resource that a DB2 lock is holding.

lock object. The resource that is controlled by a DB2 lock.

lock promotion. The process of changing the size or mode of a DB2 lock to a higher, more restrictive level.

lock size. The amount of data that is controlled by a DB2 lock on table data; the value can be a row, a page, a LOB, a partition, a table, or a table space.

lock structure. A coupling facility data structure that is composed of a series of lock entries to support shared and exclusive locking for logical resources.

log. A collection of records that describe the events that occur during DB2 execution and that indicate their sequence. The information thus recorded is used for recovery in the event of a failure during DB2 execution.

| log control interval definition. A suffix of the physical log record that tells how record segments are placed in the physical control interval.

logical claim. A claim on a logical partition of a nonpartitioning index.

logical data modeling. The process of documenting the comprehensive business information requirements in an accurate and consistent format. Data modeling is the first task of designing a database.

logical drain. A drain on a logical partition of a nonpartitioning index.

logical index partition. The set of all keys that reference the same data partition.

logical lock (L-lock). The lock type that transactions use to control intra- and inter-DB2 data concurrency between transactions. Contrast with physical lock (P-lock).

logically complete. A state in which the concurrent copy process is finished with the initialization of the target objects that are being copied. The target objects are available for update.

logical page list (LPL). A list of pages that are in error and that cannot be referenced by applications until the pages are recovered. The page is in logical error because the actual media (coupling facility or disk) might not contain any errors. Usually a connection to the media has been lost.

logical partition. A set of key or RID pairs in a nonpartitioning index that are associated with a particular partition.

logical recovery pending (LRECP). The state in which the data and the index keys that reference the data are inconsistent.

logical unit (LU). An access point through which an application program accesses the SNA network in order to communicate with another application program.

logical unit of work (LUW). The processing that a program performs between synchronization points.

logical unit of work identifier (LUWID). A name that uniquely identifies a thread within a network. This name consists of a fully-qualified LU network name, an LUW instance number, and an LUW sequence number.

log initialization. The first phase of restart processing during which DB2 attempts to locate the current end of the log.

log record header (LRH). A prefix, in every logical record, that contains control information.

log record sequence number (LRSN). A unique identifier for a log record that is associated with a data sharing member. DB2 uses the LRSN for recovery in the data sharing environment.

log truncation. A process by which an explicit starting RBA is established. This RBA is the point at which the next byte of log data is to be written.

LPL. Logical page list.

LRECP. Logical recovery pending.

LRH. Log record header.

LRSN. Log record sequence number.

LU. Logical unit.

LU name. Logical unit name, which is the name by which VTAM refers to a node in a network. Contrast with location name.

LUW. Logical unit of work.

LUWID. Logical unit of work identifier.

M

| materialized query table. A table that is used to contain information that is derived and can be summarized from one or more source tables.

MB. Megabyte (1 048 576 bytes).

MBCS. Multibyte character set. UTF-8 is an example of an MBCS. Characters in UTF-8 can range from 1 to 4 bytes in DB2.

member name. The z/OS XCF identifier for a particular DB2 subsystem in a data sharing group.

menu. A displayed list of available functions for selection by the operator. A menu is sometimes called a menu panel.

| metalanguage. A language that is used to create other specialized languages.

migration. The process of converting a subsystem with a previous release of DB2 to an updated or current release. In this process, you can acquire the functions of the updated or current release without losing the data that you created on the previous release.

mixed data string. A character string that can contain both single-byte and double-byte characters.

MLPA. Modified link pack area.

MODEENT. A VTAM macro instruction that associates a logon mode name with a set of parameters representing session protocols. A set of MODEENT macro instructions defines a logon mode table.

modeling database. A DB2 database that you create on your workstation that you use to model a DB2 UDB for z/OS subsystem, which can then be evaluated by the Index Advisor.

Multiple Virtual Storage. An element of the z/OS operating system. This element is also called the Base Control Program (BCP).

multisite update. Distributed relational database processing in which data is updated in more than one location within a single unit of work.

multithreading. Multiple TCBs that are executing one copy of DB2 ODBC code concurrently (sharing a processor) or in parallel (on separate central processors).

must-complete. A state during DB2 processing in which the entire operation must be completed to maintain data integrity.

mutex. Pthread mutual exclusion; a lock. A Pthread mutex variable is used as a locking mechanism to allow serialization of critical sections of code by temporarily blocking the execution of all but one thread.

| MVS. See Multiple Virtual Storage.

N

negotiable lock. A lock whose mode can be downgraded, by agreement among contending users, to be compatible to all. A physical lock is an example of a negotiable lock.
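A sketch of the materialized query table entry, using the Version 8 form of the CREATE TABLE statement and hypothetical names:

   CREATE TABLE TRANSCNT (ACCTID, LOCID, CNT) AS
     (SELECT ACCTID, LOCID, COUNT(*)
        FROM TRANS
        GROUP BY ACCTID, LOCID)
     DATA INITIALLY DEFERRED
     REFRESH DEFERRED
     MAINTAINED BY SYSTEM;

TRANSCNT holds summary data that is derived from the source table TRANS, and DB2 can consider it when deciding whether to rewrite a query that refers to TRANS.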
nested table expression. A fullselect in a FROM clause (surrounded by parentheses).

NRE. Network recovery element.

NUL. The null character ('\0'), which is represented by the value X'00'. In C, this character denotes the end of a string.

null. A special value that indicates the absence of information.

NULLIF. A scalar function that evaluates two passed expressions, returning either NULL if the arguments are equal or the value of the first argument if they are not.
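For example, assuming a hypothetical table EMP with a COMM column, the following query returns a null value for each row in which COMM equals zero:

   SELECT NULLIF(COMM, 0) FROM EMP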
null-terminated host variable. A varying-length host variable in which the end of the data is indicated by a null terminator.

null terminator. In C, the value that indicates the end of a string. For character strings, the null terminator is a single-byte value (X'00'). For Unicode UCS-2 (wide) strings, the null terminator is a double-byte value (X'0000').

O

originating task. In a parallel group, the primary agent that receives data from other execution units (referred to as parallel tasks) that are executing portions of the query in parallel.

OS/390. Operating System/390®.

outer join. The result of a join operation that includes the matched rows of both tables that are being joined and preserves some or all of the unmatched rows of the tables that are being joined. See also join.

overloaded function. A function name for which multiple function instances exist.
| partitioning index. An index in which the leftmost columns are the partitioning columns of the table. The index can be partitioned or nonpartitioned.

| partition pruning. The removal from consideration of inapplicable partitions through setting up predicates in a query on a partitioned table to access only certain partitions to satisfy the query.

partner logical unit. An access point in the SNA network that is connected to the local DB2 subsystem by way of a VTAM conversation.

path. See SQL path.

PCT. Program control table (in CICS).

PDS. Partitioned data set.

piece. A data set of a nonpartitioned page set.

physical claim. A claim on an entire nonpartitioning index.

physical consistency. The state of a page that is not in a partially changed state.

physical drain. A drain on an entire nonpartitioning index.

physical lock (P-lock). A type of lock that DB2 acquires to provide consistency of data that is cached in different DB2 subsystems. Physical locks are used only in data sharing environments. Contrast with logical lock (L-lock).

physical lock contention. Conflicting states of the requesters for a physical lock. See also negotiable lock.

physically complete. The state in which the concurrent copy process is completed and the output data set has been created.

plan. See application plan.

plan allocation. The process of allocating DB2 resources to a plan in preparation for execution.

plan member. The bound copy of a DBRM that is identified in the member clause.

plan name. The name of an application plan.

plan segmentation. The dividing of each plan into sections. When a section is needed, it is independently brought into the EDM pool.

P-lock. Physical lock.

PLT. Program list table (in CICS).

point of consistency. A time when all recoverable data that an application accesses is consistent with other data. The term point of consistency is synonymous with sync point or commit point.

policy. See CFRM policy.

Portable Operating System Interface (POSIX). The IEEE operating system interface standard, which defines the Pthread standard of threading. See also Pthread.

POSIX. Portable Operating System Interface.

postponed abort UR. A unit of recovery that was inflight or in-abort, was interrupted by system failure or cancellation, and did not complete backout during restart.

PPT. (1) Processing program table (in CICS). (2) Program properties table (in z/OS).

precision. In SQL, the total number of digits in a decimal number (called the size in the C language). In the C language, the number of digits to the right of the decimal point (called the scale in SQL). The DB2 library uses the SQL terms.

precompilation. A processing of application programs containing SQL statements that takes place before compilation. SQL statements are replaced with statements that are recognized by the host language compiler. Output from this precompilation includes source code that can be submitted to the compiler and the database request module (DBRM) that is input to the bind process.

predicate. An element of a search condition that expresses or implies a comparison operation.
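For example, in the following hypothetical query, SALARY > 50000 is a predicate:

   SELECT EMPNO FROM EMP WHERE SALARY > 50000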
prefix. A code at the beginning of a message or record.

preformat. The process of preparing a VSAM ESDS for DB2 use, by writing specific data patterns.

prepare. The first phase of a two-phase commit process in which all participants are requested to prepare for commit.

prepared SQL statement. A named object that is the executable form of an SQL statement that has been processed by the PREPARE statement.

presumed-abort. An optimization of the presumed-nothing two-phase commit protocol that reduces the number of recovery log records, the duration of state maintenance, and the number of messages between coordinator and participant. The optimization also modifies the indoubt resolution responsibility.

presumed-nothing. The standard two-phase commit protocol that defines coordinator and participant responsibilities, relative to logical unit of work states, recovery logging, and indoubt resolution.

primary authorization ID. The authorization ID that is used to identify the application process to DB2.
primary group buffer pool. For a duplexed group buffer pool, the structure that is used to maintain the coherency of cached data. This structure is used for page registration and cross-invalidation. The z/OS equivalent is old structure. Compare with secondary group buffer pool.

primary index. An index that enforces the uniqueness of a primary key.

primary key. In a relational database, a unique, nonnull key that is part of the definition of a table. A table cannot be defined as a parent unless it has a unique key or primary key.

principal. An entity that can communicate securely with another entity. In Kerberos, principals are represented as entries in the Kerberos registry database and include users, servers, computers, and others.

principal name. The name by which a principal is known to the DCE security services.

private connection. A communications connection that is specific to DB2.

private protocol access. A method of accessing distributed data by which you can direct a query to another DB2 system. Contrast with DRDA access.

private protocol connection. A DB2 private connection of the application process. See also private connection.

privilege. The capability of performing a specific function, sometimes on a specific object. The types of privileges are:
v explicit privileges, which have names and are held as the result of SQL GRANT and REVOKE statements. For example, the SELECT privilege.
v implicit privileges, which accompany the ownership of an object, such as the privilege to drop a synonym that one owns, or the holding of an authority, such as the privilege of SYSADM authority to terminate any utility job.
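For example, a sketch of granting an explicit privilege (the table name and authorization ID are hypothetical):

   GRANT SELECT ON TABLE DSN8810.EMP TO USER1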
privilege set. For the installation SYSADM ID, the set of all possible privileges. For any other authorization ID, the set of all privileges that are recorded for that ID in the DB2 catalog.

process. In DB2, the unit to which DB2 allocates resources and locks. Sometimes called an application process, a process involves the execution of one or more programs. The execution of an SQL statement is always associated with some process. The means of initiating and terminating a process are dependent on the environment.

program. A single, compilable collection of executable statements in a programming language.

program temporary fix (PTF). A solution or bypass of a problem that is diagnosed as a result of a defect in a current unaltered release of a licensed program. An authorized program analysis report (APAR) fix is corrective service for an existing problem. A PTF is preventive service for problems that might be encountered by other users of the product. A PTF is temporary, because a permanent fix is usually not incorporated into the product until its next release.

protected conversation. A VTAM conversation that supports two-phase commit flows.

PSRCP. Page set recovery pending.

PTF. Program temporary fix.

Pthread. The POSIX threading standard model for splitting an application into subtasks. The Pthread standard includes functions for creating threads, terminating threads, synchronizing threads through locking, and other thread control facilities.

Q

QMF. Query Management Facility.

QSAM. Queued sequential access method.

query. A component of certain SQL statements that specifies a result table.

query block. The part of a query that is represented by one of the FROM clauses. Each FROM clause can have multiple query blocks, depending on DB2's internal processing of the query.

query CP parallelism. Parallel execution of a single query, which is accomplished by using multiple tasks. See also Sysplex query parallelism.

query I/O parallelism. Parallel access of data, which is accomplished by triggering multiple I/O requests within a single query.

queued sequential access method (QSAM). An extended version of the basic sequential access method (BSAM). When this method is used, a queue of data blocks is formed. Input data blocks await processing, and output data blocks await transfer to auxiliary storage or to an output device.

quiesce point. A point at which data is consistent as a result of running the DB2 QUIESCE utility.

quiesced member state. A state of a member of a data sharing group. An active member becomes quiesced when a STOP DB2 command takes effect without a failure. If the member's task, address space, or z/OS system fails before the command takes effect, the member state is failed.
RCT. Resource control table (in CICS attachment facility).

RDB. Relational database.

RDBMS. Relational database management system.

RDBNAM. Relational database name.

RDF. Record definition field.

read stability (RS). An isolation level that is similar to repeatable read but does not completely isolate an application process from all other concurrently executing application processes. Under level RS, an application that issues the same query more than once might read additional rows that were inserted and committed by a concurrently executing application process.

rebind. The creation of a new application plan for an application program that has been bound previously. If, for example, you have added an index for a table that your application accesses, you must rebind the application in order to take advantage of that index.

rebuild. The process of reallocating a coupling facility structure. For the shared communications area (SCA) and lock structure, the structure is repopulated; for the group buffer pool, changed pages are usually cast out to disk, and the new structure is populated only with changed pages that were not successfully cast out.

RECFM. Record format.

record. The storage representation of a row or other data.

record identifier (RID). A unique identifier that DB2 uses internally to identify a row of data in a table. Compare with row ID.

| record identifier (RID) pool. An area of main storage that is used for sorting record identifiers during list-prefetch processing.

record length. The sum of the length of all the columns in a table, which is the length of the data as it is physically stored in the database. Records can be fixed length or varying length, depending on how the columns are defined. If all columns are fixed-length columns, the record is a fixed-length record; if one or more columns are varying-length columns, the record is a varying-length record.

recovery. The process of rebuilding databases after a system failure.

recovery log. A collection of records that describes the events that occur during DB2 execution and indicates their sequence. The recorded information is used for recovery in the event of a failure during DB2 execution.

recovery manager. (1) A subcomponent that supplies coordination services that control the interaction of DB2 resource managers during commit, abort, checkpoint, and restart processes. The recovery manager also supports the recovery mechanisms of other subsystems (for example, IMS) by acting as a participant in the other subsystem's process for protecting data that has reached a point of consistency. (2) A coordinator or a participant (or both), in the execution of a two-phase commit, that can access a recovery log that maintains the state of the logical unit of work and names the immediate upstream coordinator and downstream participants.

recovery pending (RECP). A condition that prevents SQL access to a table space that needs to be recovered.

recovery token. An identifier for an element that is used in recovery (for example, NID or URID).

RECP. Recovery pending.

redo. A state of a unit of recovery that indicates that changes are to be reapplied to the disk media to ensure data integrity.

reentrant. Executable code that can reside in storage as one shared copy for all threads. Reentrant code is not self-modifying and provides separate storage areas for each thread. Reentrancy is a compiler and operating system concept, and reentrancy alone is not enough to guarantee logically consistent results when multithreading. See also threadsafe.

referential constraint. The requirement that nonnull values of a designated foreign key are valid only if they equal values of the primary key of a designated table.
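For example, a sketch of a referential constraint under which each department row must name an existing employee as its manager (the table and column names are hypothetical):

   CREATE TABLE DEPT
     (DEPTNO CHAR(3) NOT NULL PRIMARY KEY,
      MGRNO CHAR(6),
      FOREIGN KEY (MGRNO) REFERENCES EMP (EMPNO))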
referential integrity. The state of a database in which all values of all foreign keys are valid. Maintaining referential integrity requires the enforcement of referential constraints on all operations that change the data in a table on which the referential constraints are defined.
referential structure. A set of tables and relationships that includes at least one table and, for every table in the set, all the relationships in which that table participates and all the tables to which it is related.

| refresh age. The time duration between the current time and the time during which a materialized query table was last refreshed.

registry. See registry database.

registry database. A database of security information about principals, groups, organizations, accounts, and security policies.

relational database (RDB). A database that can be perceived as a set of tables and manipulated in accordance with the relational model of data.

relational database management system (RDBMS). A collection of hardware and software that organizes and provides access to a relational database.

relational database name (RDBNAM). A unique identifier for an RDBMS within a network. In DB2, this must be the value in the LOCATION column of table SYSIBM.LOCATIONS in the CDB. DB2 publications refer to the name of another RDBMS as a LOCATION value or a location name.

relationship. A defined connection between the rows of a table or the rows of two tables. A relationship is the internal representation of a referential constraint.

relative byte address (RBA). The offset of a data record or control interval from the beginning of the storage space that is allocated to the data set or file to which it belongs.

remigration. The process of returning to a current release of DB2 following a fallback to a previous release. This procedure constitutes another migration process.

remote. Any object that is maintained by a remote DB2 subsystem (that is, by a DB2 subsystem other than the local one). A remote view, for example, is a view that is maintained by a remote DB2 subsystem. Contrast with local.

remote attach request. A request by a remote location to attach to the local DB2 subsystem. Specifically, the request that is sent is an SNA Function Management Header 5.

remote subsystem. Any relational DBMS, except the local subsystem, with which the user or application can communicate. The subsystem need not be remote in any physical sense, and might even operate on the same processor under the same z/OS system.

reoptimization. The DB2 process of reconsidering the access path of an SQL statement at run time; during reoptimization, DB2 uses the values of host variables, parameter markers, or special registers.

REORG pending (REORP). A condition that restricts SQL access and most utility access to an object that must be reorganized.

REORP. REORG pending.

repeatable read (RR). The isolation level that provides maximum protection from other executing application programs. When an application program executes with repeatable read protection, rows that the program references cannot be changed by other programs until the program reaches a commit point.
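For example, an application might request this isolation level when its plan is bound, as a sketch of the DSN subcommand (the plan name is hypothetical):

   BIND PLAN(PLANA) ISOLATION(RR)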
repeating group. A situation in which an entity includes multiple attributes that are inherently the same. The presence of a repeating group violates the requirement of first normal form. In an entity that satisfies the requirement of first normal form, each attribute is independent and unique in its meaning and its name. See also normalization.

replay detection mechanism. A method that allows a principal to detect whether a request is a valid request from a source that can be trusted or whether an untrustworthy entity has captured information from a previous exchange and is replaying the information exchange to gain access to the principal.

request commit. The vote that is submitted to the prepare phase if the participant has modified data and is prepared to commit or roll back.

requester. The source of a request to access data at a remote server. In the DB2 environment, the requester function is provided by the distributed data facility.

resource. The object of a lock or claim, which could be a table space, an index space, a data partition, an index partition, or a logical partition.

resource allocation. The part of plan allocation that deals specifically with the database resources.

resource control table (RCT). A construct of the CICS attachment facility, created by site-provided macro parameters, that defines authorization and access attributes for transactions or transaction groups.

resource definition online. A CICS feature that you use to define CICS resources online without assembling tables.

resource limit facility (RLF). A portion of DB2 code that prevents dynamic manipulative SQL statements from exceeding specified time limits. The resource limit facility is sometimes called the governor.

resource limit specification table (RLST). A site-defined table that specifies the limits to be enforced by the resource limit facility.
resource manager. (1) A function that is responsible for managing a particular resource and that guarantees the consistency of all updates made to recoverable resources within a logical unit of work. The resource that is being managed can be physical (for example, disk or main storage) or logical (for example, a particular type of system service). (2) A participant, in the execution of a two-phase commit, that has recoverable resources that could have been modified. The resource manager has access to a recovery log so that it can commit or roll back the effects of the logical unit of work to the recoverable resources.

restart pending (RESTP). A restrictive state of a page set or partition that indicates that restart (backout) work needs to be performed on the object. All access to the page set or partition is denied except for access by the:
v RECOVER POSTPONED command
v Automatic online backout (which DB2 invokes after restart if the system parameter LBACKOUT=AUTO)

result table. The set of rows that are specified by a SELECT statement.

retained lock. A MODIFY lock that a DB2 subsystem was holding at the time of a subsystem failure. The lock is retained in the coupling facility lock structure across a DB2 failure.

RID. Record identifier.

RID pool. Record identifier pool.

routine. A term that refers to either a user-defined function or a stored procedure.

row. The horizontal component of a table. A row consists of a sequence of values, one for each column of the table.

ROWID. Row identifier.

row identifier (ROWID). A value that uniquely identifies a row. This value is stored with the row and never changes.

row lock. A lock on a single row of data.

| rowset. A set of rows for which a cursor position is established.

| rowset cursor. A cursor that is defined so that one or more rows can be returned as a rowset for a single FETCH statement, and the cursor is positioned on the set of rows that is fetched.
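For example, a sketch of a rowset cursor that fetches ten rows at a time into a hypothetical host-variable array:

   DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
     SELECT EMPNO FROM EMP
   FETCH NEXT ROWSET FROM C1 FOR 10 ROWS INTO :HVEMPNO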
RRE. Residual recovery entry (in IMS).

RRSAF. Recoverable Resource Manager Services attachment facility.

RS. Read stability.

RTT. Resource translation table.

RURE. Restart URE.
scale. In SQL, the number of digits to the right of the decimal point (called the precision in the C language). The DB2 library uses the SQL definition.

| schema. (1) The organization or structure of a database. (2) A logical grouping for user-defined functions, distinct types, triggers, and stored procedures. When an object of one of these types is created, it is assigned to one schema, which is determined by the name of the object. For example, the following statement creates a distinct type T in schema C:

   CREATE DISTINCT TYPE C.T ...

| scrollability. The ability to use a cursor to fetch in either a forward or backward direction. The FETCH statement supports multiple fetch orientations to indicate the new position of the cursor. See also fetch orientation.

scrollable cursor. A cursor that can be moved in both a forward and a backward direction.
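For example, a sketch of a scrollable cursor (the cursor, table, and host variable names are hypothetical):

   DECLARE C1 INSENSITIVE SCROLL CURSOR FOR
     SELECT EMPNO FROM EMP
   FETCH PRIOR FROM C1 INTO :HVEMPNO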
SDWA. System diagnostic work area.

search condition. A criterion for selecting rows from a table. A search condition consists of one or more predicates.

secondary authorization ID. An authorization ID that has been associated with a primary authorization ID by an authorization exit routine.

secondary group buffer pool. For a duplexed group buffer pool, the structure that is used to back up changed pages that are written to the primary group buffer pool. No page registration or cross-invalidation occurs using the secondary group buffer pool. The z/OS equivalent is new structure.

| secondary index. A nonpartitioning index on a partitioned table.

section. The segment of a plan or package that contains the executable structures for a single SQL statement. For most SQL statements, one section in the plan exists for each SQL statement in the source program. However, for cursor-related statements, the DECLARE, OPEN, FETCH, and CLOSE statements reference the same section because they each refer to the SELECT statement that is named in the DECLARE CURSOR statement. SQL statements such as COMMIT, ROLLBACK, and some SET statements do not use a section.

segment. A group of pages that holds rows of a single table. See also segmented table space.

segmented table space. A table space that is divided into equal-sized groups of pages called segments. Segments are assigned to tables so that rows of different tables are never stored in the same segment.

self-referencing constraint. A referential constraint that defines a relationship in which a table is a dependent of itself.

self-referencing table. A table with a self-referencing constraint.

| sensitive cursor. A cursor that is sensitive to changes that are made to the database after the result table has been materialized.

| sequence. A user-defined object that generates a sequence of numeric values according to user specifications.
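For example, a sketch of creating a sequence and referring to its next value (the sequence name is hypothetical):

   CREATE SEQUENCE ORDER_SEQ START WITH 1 INCREMENT BY 1
   SELECT NEXT VALUE FOR ORDER_SEQ FROM SYSIBM.SYSDUMMY1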
sequential data set. A non-DB2 data set whose records are organized on the basis of their successive physical positions, such as on magnetic tape. Several of the DB2 database utilities require sequential data sets.

sequential prefetch. A mechanism that triggers consecutive asynchronous I/O operations. Pages are fetched before they are required, and several pages are read with a single I/O operation.

serial cursor. A cursor that can be moved only in a forward direction.

serialized profile. A Java object that contains SQL statements and descriptions of host variables. The SQLJ translator produces a serialized profile for each connection context.

server. The target of a request from a remote requester. In the DB2 environment, the server function is provided by the distributed data facility, which is used to access DB2 data from remote applications.

server-side programming. A method for adding DB2 data into dynamic Web pages.

service class. An eight-character identifier that is used by the z/OS Workload Manager to associate user performance goals with a particular DDF thread or stored procedure. A service class is also used to classify work on parallelism assistants.

service request block. A unit of work that is scheduled to execute in another address space.

session. A link between two nodes in a VTAM network.

session protocols. The available set of SNA communication requests and responses.

shared communications area (SCA). A coupling facility list structure that a DB2 data sharing group uses for inter-DB2 communication.

share lock. A lock that prevents concurrently executing application processes from changing data, but not from reading data. Contrast with exclusive lock.
shift-in character. A special control character (X'0F') that is used in EBCDIC systems to denote that the subsequent bytes represent SBCS characters. See also shift-out character.

shift-out character. A special control character (X'0E') that is used in EBCDIC systems to denote that the subsequent bytes, up to the next shift-in control character, represent DBCS characters. See also shift-in character.

sign-on. A request that is made on behalf of an individual CICS or IMS application process by an attachment facility to enable DB2 to verify that it is authorized to use DB2 resources.

simple page set. A nonpartitioned page set. A simple page set initially consists of a single data set (page set piece). If and when that data set is extended to 2 GB, another data set is created, and so on, up to a total of 32 data sets. DB2 considers the data sets to be a single contiguous linear address space containing a maximum of 64 GB. Data is stored in the next available location within this address space without regard to any partitioning scheme.

simple table space. A table space that is neither partitioned nor segmented.

single-byte character set (SBCS). A set of characters in which each character is represented by a single byte. Contrast with double-byte character set or multibyte character set.

single-precision floating point number. A 32-bit approximate representation of a real number.

size. In the C language, the total number of digits in a decimal number (called the precision in SQL). The DB2 library uses the SQL term.

SMF. System Management Facilities.

SMP/E. System Modification Program/Extended.

SMS. Storage Management Subsystem.

SNA. Systems Network Architecture.

SNA network. The part of a network that conforms to the formats and protocols of Systems Network Architecture (SNA).

socket. A callable TCP/IP programming interface that TCP/IP network applications use to communicate with remote TCP/IP partners.

sourced function. A function that is implemented by another built-in or user-defined function that is already known to the database manager. This function can be a scalar function or a column (aggregating) function; it returns a single value from a set of values (for example, MAX or AVG). Contrast with built-in function, external function, and SQL function.

source program. A set of host language statements and SQL statements that is processed by an SQL precompiler.

| source table. A table that can be a base table, a view, a table expression, or a user-defined table function.

source type. An existing type that DB2 uses to internally represent a distinct type.

space. A sequence of one or more blank characters.

special register. A storage area that DB2 defines for an application process to use for storing information that can be referenced in SQL statements. Examples of special registers are USER and CURRENT DATE.
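For example, a query can refer to a special register directly:

   SELECT CURRENT DATE FROM SYSIBM.SYSDUMMY1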
specific function name. A particular user-defined function that is known to the database manager by its specific name. Many specific user-defined functions can have the same function name. When a user-defined function is defined to the database, every function is assigned a specific name that is unique within its schema. Either the user can provide this name, or a default name is used.

SPUFI. SQL Processor Using File Input.

SQL. Structured Query Language.

SQL authorization ID (SQL ID). The authorization ID that is used for checking dynamic SQL statements in some situations.

SQLCA. SQL communication area.

SQL communication area (SQLCA). A structure that is used to provide an application program with information about the execution of its SQL statements.

SQL connection. An association between an application process and a local or remote application server or database server.

SQLDA. SQL descriptor area.

SQL descriptor area (SQLDA). A structure that describes input variables, output variables, or the columns of a result table.

SQL escape character. The symbol that is used to enclose an SQL delimited identifier. This symbol is the double quotation mark ("). See also escape character.

SQL function. A user-defined function in which the CREATE FUNCTION statement contains the source code. The source code is a single SQL expression that evaluates to a single value. The SQL user-defined function can return only one parameter.
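For example, a minimal sketch of an SQL function (the function name is hypothetical):

   CREATE FUNCTION KM_TO_MILES(KM DOUBLE)
     RETURNS DOUBLE
     LANGUAGE SQL
     RETURN KM * 0.621371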
SQL ID. SQL authorization ID.

SQLJ. Structured Query Language (SQL) that is embedded in the Java programming language.
SQL path. An ordered list of schema names that are used in the resolution of unqualified references to user-defined functions, distinct types, and stored procedures. In dynamic SQL, the current path is found in the CURRENT PATH special register. In static SQL, it is defined in the PATH bind option.

SQL procedure. A user-written program that can be invoked with the SQL CALL statement. Contrast with external procedure.

SQL processing conversation. Any conversation that requires access of DB2 data, either through an application or by dynamic query requests.

SQL Processor Using File Input (SPUFI). A facility of the TSO attachment subcomponent that enables the DB2I user to execute SQL statements without embedding them in an application program.

SQL return code. Either SQLCODE or SQLSTATE.

SQL routine. A user-defined function or stored procedure that is based on code that is written in SQL.

SQL statement coprocessor. An alternative to the DB2 precompiler that lets the user process SQL statements at compile time. The user invokes an SQL statement coprocessor by specifying a compiler option.

SQL string delimiter. A symbol that is used to enclose an SQL string constant. The SQL string delimiter is the apostrophe ('), except in COBOL applications, where the user assigns the symbol, which is either an apostrophe or a double quotation mark (").

SRB. Service request block.

SSI. Subsystem interface (in z/OS).

SSM. Subsystem member (in IMS).

stand-alone. An attribute of a program that means that it is capable of executing separately from DB2, without using DB2 services.

star join. A method of joining a dimension column of a fact table to the key column of the corresponding dimension table. See also join, dimension, and star schema.

star schema. The combination of a fact table (which contains most of the data) and a number of dimension tables. See also star join, dimension, and dimension table.

statement handle. In DB2 ODBC, the data object that contains information about an SQL statement that is managed by DB2 ODBC. This includes information such as dynamic arguments, bindings for dynamic arguments and columns, cursor information, result values, and status information. Each statement handle is associated with the connection handle.

statement string. For a dynamic SQL statement, the character string form of the statement.

statement trigger. A trigger that is defined with the trigger granularity FOR EACH STATEMENT.

| static cursor. A named control structure that does not change the size of the result table or the order of its rows after an application opens the cursor. Contrast with dynamic cursor.

static SQL. SQL statements, embedded within a program, that are prepared during the program preparation process (before the program is executed). After being prepared, the SQL statement does not change (although values of host variables that are specified by the statement might change).

storage group. A named set of disks on which DB2 data can be stored.

stored procedure. A user-written application program that can be invoked through the use of the SQL CALL statement.
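For example, a sketch of invoking a stored procedure (the procedure name and parameters are hypothetical):

   CALL PAYROLL.RAISE_SALARY(:EMPNO, :PERCENT)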
string. See character string or graphic string.

strong typing. A process that guarantees that only user-defined functions and operations that are defined on a distinct type can be applied to that type. For example, you cannot directly compare two currency types, such as Canadian dollars and U.S. dollars. But you can provide a user-defined function to convert one currency to the other and then do the comparison.

structure. (1) A name that refers collectively to different types of DB2 objects, such as tables, databases, views, indexes, and table spaces. (2) A construct that uses z/OS to map and manage storage on a coupling facility. See also cache structure, list structure, or lock structure.

Structured Query Language (SQL). A standardized language for defining and manipulating data in a relational database.

structure owner. In relation to group buffer pools, the DB2 member that is responsible for the following activities:
v Coordinating rebuild, checkpoint, and damage assessment processing
v Monitoring the group buffer pool threshold and notifying castout owners when the threshold has been reached

subcomponent. A group of closely related DB2 modules that work together to provide a general function.

subject table. The table for which a trigger is created. When the defined triggering event occurs on this table, the trigger is activated.
subpage. The unit into which a physical index page can be divided.

subquery. A SELECT statement within the WHERE or HAVING clause of another SQL statement; a nested SQL statement.
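For example, the inner SELECT in the following hypothetical query is a subquery:

   SELECT EMPNO FROM EMP
     WHERE SALARY > (SELECT AVG(SALARY) FROM EMP)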
subselect. That form of a query that does not include an ORDER BY clause, an UPDATE clause, or UNION operators.

substitution character. A unique character that is substituted during character conversion for any characters in the source program that do not have a match in the target coding representation.

Sysplex. See Parallel Sysplex.

Sysplex query parallelism. Parallel execution of a single query that is accomplished by using multiple tasks on more than one DB2 subsystem. See also query CP parallelism.

system administrator. The person at a computer installation who designs, controls, and manages the use of the computer system.

system agent. A work request that DB2 creates internally such as prefetch processing, deferred writes, and service tasks.

system conversation. The conversation that two DB2 subsystems must establish to process system messages before any distributed processing can begin.

system diagnostic work area (SDWA). The data that is recorded in a SYS1.LOGREC entry that describes a program or hardware error.

system-directed connection. A connection that a relational DBMS manages by processing SQL statements with three-part names.

T

table locator. A mechanism that allows access to trigger transition tables in the FROM clause of SELECT statements, in the subselect of INSERT statements, or from within user-defined functions. A table locator is a fullword integer value that represents a transition table.

table space. A page set that is used to store the records in one or more tables.
table space set. A set of table spaces and partitions that should be recovered together for one of these reasons:
v Each of them contains a table that is a parent or descendent of a table in one of the others.
v The set contains a base table and associated auxiliary tables.
A table space set can contain both types of relationships.

task control block (TCB). A z/OS control block that is used to communicate information about tasks within an address space that are connected to DB2. See also address space connection.

TB. Terabyte (1 099 511 627 776 bytes).

TCB. Task control block (in z/OS).

TCP/IP. A network communication protocol that computer systems use to exchange information across telecommunication links.

TCP/IP port. A 2-byte value that identifies an end user or a TCP/IP network application within a TCP/IP host.

template. A DB2 utilities output data set descriptor that is used for dynamic allocation. A template is defined by the TEMPLATE utility control statement.

temporary table. A table that holds temporary data. Temporary tables are useful for holding or sorting intermediate results from queries that contain a large number of rows. The two types of temporary table, which are created by different SQL statements, are the created temporary table and the declared temporary table. Contrast with result table. See also created temporary table and declared temporary table.
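For example, a sketch of a declared temporary table (the table and column names are hypothetical):

   DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMP_EMP
     (EMPNO CHAR(6), SALARY DECIMAL(9,2))
     ON COMMIT PRESERVE ROWS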
Terminal Monitor Program (TMP). A program that provides an interface between terminal users and command processors and has access to many system services (in z/OS).

thread. The DB2 structure that describes an application's connection, traces its progress, processes resource functions, and delimits its accessibility to DB2 resources and services. Most DB2 functions execute under a thread structure. See also allied thread and database access thread.

threadsafe. A characteristic of code that allows multithreading both by providing private storage areas for each thread, and by properly serializing shared (global) storage areas.

three-part name. The full name of a table, view, or alias. It consists of a location name, authorization ID, and an object name, separated by a period.
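For example, a hypothetical three-part name that refers to table EMP, owned by authorization ID DSN8810, at location SANJOSE:

   SANJOSE.DSN8810.EMP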
time. A three-part value that designates a time of day in hours, minutes, and seconds.

time duration. A decimal integer that represents a number of hours, minutes, and seconds.

timeout. Abnormal termination of either the DB2 subsystem or of an application because of the unavailability of resources. Installation specifications are set to determine both the amount of time DB2 is to wait for IRLM services after starting, and the amount of time IRLM is to wait if a resource that an application requests is unavailable. If either of these time specifications is exceeded, a timeout is declared.

Time-Sharing Option (TSO). An option in MVS that provides interactive time sharing from remote terminals.

timestamp. A seven-part value that consists of a date and time. The timestamp is expressed in years, months, days, hours, minutes, seconds, and microseconds.

TMP. Terminal Monitor Program.

to-do. A state of a unit of recovery that indicates that the unit of recovery's changes to recoverable DB2 resources are indoubt and must either be applied to the disk media or backed out, as determined by the commit coordinator.

trace. A DB2 facility that provides the ability to monitor and collect DB2 monitoring, auditing, performance, accounting, statistics, and serviceability (global) data.

transaction lock. A lock that is used to control concurrent execution of SQL statements.

transaction program name. In SNA LU 6.2 conversations, the name of the program at the remote logical unit that is to be the other half of the conversation.

| transient XML data type. A data type for XML values that exists only during query processing.

transition table. A temporary table that contains all the affected rows of the subject table in their state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the table of changed rows in the old state or the new state.

transition variable. A variable that contains a column value of the affected row of the subject table in its state before or after the triggering event occurs. Triggered SQL statements in the trigger definition can reference the set of old values or the set of new values.

| tree structure. A data structure that represents entities in nodes, with at most one parent node for each node, and with only one root node.
trigger. A set of SQL statements that are stored in a DB2 database and executed when a certain event occurs in a DB2 table.
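For example, a minimal sketch of a trigger that maintains a count of employees (the table and trigger names are hypothetical):

   CREATE TRIGGER NEWHIRE
     AFTER INSERT ON EMP
     FOR EACH ROW MODE DB2SQL
     UPDATE COMPANY_STATS SET NBEMP = NBEMP + 1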
trigger activation. The process that occurs when the trigger event that is defined in a trigger definition is executed. Trigger activation consists of the evaluation of the triggered action condition and conditional execution of the triggered SQL statements.

trigger activation time. An indication in the trigger definition of whether the trigger should be activated before or after the triggered event.

trigger body. The set of SQL statements that is executed when a trigger is activated and its triggered action condition evaluates to true. A trigger body is also called triggered SQL statements.

triggering SQL operation. The SQL operation that causes a trigger to be activated when performed on the subject table.

trigger package. A package that is created when a CREATE TRIGGER statement is executed. The package is executed when the trigger is activated.

TSO. Time-Sharing Option.

TSO attachment facility. A DB2 facility consisting of the DSN command processor and DB2I. Applications that are not written for the CICS or IMS environments can run under the TSO attachment facility.

typed parameter marker. A parameter marker that is specified along with its target data type. It has the general form:

   CAST(? AS data-type)

type 1 indexes. Indexes that were created by a release of DB2 before DB2 Version 4 or that are specified as type 1 indexes in Version 4. Contrast with type 2 indexes. As of Version 8, type 1 indexes are no longer supported.

type 2 indexes. Indexes that are created on a release of DB2 after Version 7 or that are specified as type 2 indexes in Version 4 or later.

U

union. An SQL operation that combines the results of two SELECT statements. Unions are often used to merge lists of values that are obtained from several tables.
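For example, a sketch that merges name lists from two hypothetical tables:

   SELECT LASTNAME FROM EMP
   UNION
   SELECT LASTNAME FROM RETIREE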
unique constraint. An SQL rule that no two values in a primary key, or in the key of a unique index, can be the same.

unique index. An index that ensures that no identical key values are stored in a column or a set of columns in a table.
unit of recovery. A recoverable sequence of operations within a single resource manager, such as an instance of DB2. Contrast with unit of work.

unit of recovery identifier (URID). The LOGRBA of the first log record for a unit of recovery. The URID also appears in all subsequent log records for that unit of recovery.

unit of work. A recoverable sequence of operations within an application process. At any time, an application process is a single unit of work, but the life of an application process can involve many units of work as a result of commit or rollback operations. In a multisite update operation, a single unit of work can include several units of recovery. Contrast with unit of recovery.

Universal Unique Identifier (UUID). An identifier that is immutable and unique across time and space (in z/OS).

unlock. The act of releasing an object or system resource that was previously locked and returning it to general availability within DB2.

untyped parameter marker. A parameter marker that is specified without its target data type. It has the form of a single question mark (?).
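For example, a dynamically prepared statement can use an untyped parameter marker in place of a value that is supplied later, on the EXECUTE statement (the table and column names are hypothetical):

   UPDATE EMP SET SALARY = SALARY * 1.1 WHERE EMPNO = ?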
updatability. The ability of a cursor to perform positioned updates and deletes. The updatability of a cursor can be influenced by the SELECT statement and the cursor sensitivity option that is specified on the DECLARE CURSOR statement.

update hole. The location on which a cursor is positioned when a row in a result table is fetched again and the new values no longer satisfy the search condition. DB2 marks a row in the result table as an update hole when an update to the corresponding row in the database causes that row to no longer qualify for the result table.

update trigger. A trigger that is defined with the triggering SQL operation UPDATE.

upstream. The node in the syncpoint tree that is responsible, in addition to other recovery or resource managers, for coordinating the execution of a two-phase commit.

UR. Uncommitted read.

URE. Unit of recovery element.

URID. Unit of recovery identifier.

URL. Uniform resource locator.

user-defined data type (UDT). See distinct type.

user-defined function (UDF). A function that is defined to DB2 by using the CREATE FUNCTION statement and that can be referenced thereafter in SQL statements. A user-defined function can be an external function, a sourced function, or an SQL function. Contrast with built-in function.

user view. In logical data modeling, a model or representation of critical information that the business requires.

UTF-8. Unicode Transformation Format, 8-bit encoding form, which is designed for ease of use with existing ASCII-based systems. The CCSID value for data in UTF-8 format is 1208. DB2 UDB for z/OS supports UTF-8 in mixed data fields.

UTF-16. Unicode Transformation Format, 16-bit encoding form, which is designed to provide code values for over a million characters and a superset of UCS-2. The CCSID value for data in UTF-16 format is 1200. DB2 UDB for z/OS supports UTF-16 in graphic data fields.

UUID. Universal Unique Identifier.

V

value. The smallest unit of data that is manipulated in SQL.

variable. A data element that specifies a value that can be changed. A COBOL elementary data item is an example of a variable. Contrast with constant.

variant function. See nondeterministic function.

varying-length string. A character or graphic string whose length varies within set limits. Contrast with fixed-length string.

version. A member of a set of similar programs, DBRMs, packages, or LOBs.
v A version of a program is the source code that is produced by precompiling the program. The program version is identified by the program name and a timestamp (consistency token).
v A version of a DBRM is the DBRM that is produced by precompiling a program. The DBRM version is identified by the same program name and timestamp as a corresponding program version.
v A version of a package is the result of binding a DBRM within a particular database system. The package version is identified by the same program name and consistency token as the DBRM.
v A version of a LOB is a copy of a LOB value at a point in time. The version number for a LOB is stored in the auxiliary index entry for the LOB.

view. An alternative representation of data from one or more tables. A view can include all or some of the columns that are contained in tables on which it is defined.
view check option. An option that specifies whether every row that is inserted or updated through a view must conform to the definition of that view. A view check option can be specified with the WITH CASCADED CHECK OPTION, WITH CHECK OPTION, or WITH LOCAL CHECK OPTION clauses of the CREATE VIEW statement.
Virtual Storage Access Method (VSAM). An access method for direct or sequential processing of fixed- and varying-length records on disk devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry-sequence), or by relative-record number (in z/OS).

Virtual Telecommunications Access Method (VTAM). An IBM licensed program that controls communication and the flow of data in an SNA network (in z/OS).

| volatile table. A table for which SQL operations choose index access whenever possible.

VSAM. Virtual Storage Access Method.

X

XCF. See cross-system coupling facility.

| XML attribute. A name-value pair within a tagged XML element that modifies certain features of the element.

# XML element. A logical structure in an XML document that is delimited by a start and an end tag. Anything between the start tag and the end tag is the content of the element.

| XML node. The smallest unit of valid, complete structure in a document. For example, a node can represent an element, an attribute, or a text string.

| XML publishing functions. Functions that return XML values from SQL values.

X/Open. An independent, worldwide open systems organization that is supported by most of the world's largest information systems suppliers, user organizations, and software companies. X/Open's goal is to increase the portability of applications by combining existing and emerging standards.

XRF. Extended recovery facility.
v Open Group Technical Standard; the Open Group presently makes the following DRDA books available through its Web site at www.opengroup.org
  – Open Group Technical Standard, DRDA Version 3 Vol. 1: Distributed Relational Database Architecture
  – Open Group Technical Standard, DRDA Version 3 Vol. 2: Formatted Data Object Content Architecture
  – Open Group Technical Standard, DRDA Version 3 Vol. 3: Distributed Data Management Architecture

Domain Name System
v DNS and BIND, Third Edition, Paul Albitz and Cricket Liu, O'Reilly, ISBN 0-59600-158-4

IMS
v IMS Application Programming: Database Manager, SC27-1286
v IMS Application Programming: Design Guide, SC27-1287
v IMS Application Programming: Transaction Manager, SC27-1289
v IMS Command Reference, SC27-1291
v IMS Customization Guide, SC27-1294
v IMS Install Volume 1: Installation Verification, GC27-1297
v IMS Install Volume 2: System Definition and Tailoring, GC27-1298
v IMS Messages and Codes Volumes 1 and 2, GC27-1301 and GC27-1302
v IMS Open Transaction Manager Access Guide and Reference, SC18-7829
v IMS Utilities Reference: System, SC27-1309
WebSphere family
v WebSphere MQ Integrator Broker: Administration Guide, SC34-6171
v WebSphere MQ Integrator Broker for z/OS: Customization and Administration Guide, SC34-6175
v WebSphere MQ Integrator Broker: Introduction and Planning, GC34-5599
v WebSphere MQ Integrator Broker: Using the Control Center, SC34-6168

z/Architecture
v z/Architecture Principles of Operation, SA22-7832

z/OS
v z/OS C/C++ Programming Guide, SC09-4765
v z/OS C/C++ Run-Time Library Reference, SA22-7821
v z/OS C/C++ User's Guide, SC09-4767
v z/OS Communications Server: IP Configuration Guide, SC31-8875
v z/OS DCE Administration Guide, SC24-5904
v z/OS DCE Introduction, GC24-5911
v z/OS DCE Messages and Codes, SC24-5912
v z/OS Information Roadmap, SA22-7500
v z/OS Introduction and Release Guide, GA22-7502
v z/OS JES2 Initialization and Tuning Guide, SA22-7532
v z/OS JES3 Initialization and Tuning Guide, SA22-7549
v z/OS Language Environment Concepts Guide, SA22-7567
v z/OS Language Environment Customization, SA22-7564
v z/OS Language Environment Debugging Guide, GA22-7560
v z/OS Language Environment Programming Guide, SA22-7561
v z/OS Language Environment Programming Reference, SA22-7562
v z/OS Managed System Infrastructure for Setup User's Guide, SC33-7985
v z/OS MVS Diagnosis: Procedures, GA22-7587
v z/OS MVS Diagnosis: Reference, GA22-7588
v z/OS MVS Diagnosis: Tools and Service Aids, GA22-7589
v z/OS MVS Initialization and Tuning Guide, SA22-7591
v z/OS MVS Initialization and Tuning Reference, SA22-7592
v z/OS MVS Installation Exits, SA22-7593
v z/OS MVS JCL Reference, SA22-7597
v z/OS MVS JCL User's Guide, SA22-7598
v z/OS MVS Planning: Global Resource Serialization, SA22-7600
v z/OS MVS Planning: Operations, SA22-7601
v z/OS MVS Planning: Workload Management, SA22-7602
v z/OS MVS Programming: Assembler Services Guide, SA22-7605
v z/OS MVS Programming: Assembler Services Reference, Volumes 1 and 2, SA22-7606 and SA22-7607
v z/OS MVS Programming: Authorized Assembler Services Guide, SA22-7608
v z/OS MVS Programming: Authorized Assembler Services Reference Volumes 1-4, SA22-7609, SA22-7610, SA22-7611, and SA22-7612
v z/OS MVS Programming: Callable Services for High-Level Languages, SA22-7613
v z/OS MVS Programming: Extended Addressability Guide, SA22-7614
v z/OS MVS Programming: Sysplex Services Guide, SA22-7617
v z/OS MVS Programming: Sysplex Services Reference, SA22-7618
v z/OS MVS Programming: Workload Management Services, SA22-7619
v z/OS MVS Recovery and Reconfiguration Guide, SA22-7623
v z/OS MVS Routing and Descriptor Codes, SA22-7624
v z/OS MVS Setting Up a Sysplex, SA22-7625
v z/OS MVS System Codes, SA22-7626
v z/OS MVS System Commands, SA22-7627
v z/OS MVS System Messages Volumes 1-10, SA22-7631, SA22-7632, SA22-7633, SA22-7634, SA22-7635, SA22-7636, SA22-7637, SA22-7638, SA22-7639, and SA22-7640
v z/OS MVS Using the Subsystem Interface, SA22-7642
v z/OS Planning for Multilevel Security and the Common Criteria, SA22-7509
v z/OS RMF User's Guide, SC33-7990
v z/OS Security Server Network Authentication Server Administration, SC24-5926
v z/OS Security Server RACF Auditor's Guide, SA22-7684
v z/OS Security Server RACF Command Language Reference, SA22-7687
v z/OS Security Server RACF Macros and Interfaces, SA22-7682
v z/OS Security Server RACF Security Administrator's Guide, SA22-7683
v z/OS Security Server RACF System Programmer's Guide, SA22-7681
v z/OS Security Server RACROUTE Macro Reference, SA22-7692
v z/OS Support for Unicode: Using Conversion Services, SA22-7649
v z/OS TSO/E CLISTs, SA22-7781
v z/OS TSO/E Command Reference, SA22-7782
Index
Special characters
_ (underscore)
  in DDL registration tables 219
% (percent sign)
  in DDL registration tables 219, 874

Numerics
16-KB page size 113
32-KB page size 113
8-KB page size 113

A
abend
  AEY9 493
  after SQLCODE -923 498
  ASP7 493
  backward log recovery 575
  CICS
    abnormal termination of application 493
    scenario 498
    transaction abends when disconnecting from DB2 356, 357
    waits 493
  current status rebuild 563
  disconnects DB2 366
  DXR122E 485
  effects of 412
  forward log recovery 570
  IMS
    U3047 492
    U3051 492
  IMS, scenario 490, 492
  IRLM
    scenario 485
    stop command 348
    stop DB2 347
  log
    damage 559
    initialization 562
    lost information 581
    page problem 580
    restart 561
  starting DB2 after 324
  VVDS (VSAM volume data set)
    destroyed 515
    out of space 515
acceptance option 243
access control
  authorization exit routine 1025
  closed application 215, 227
  DB2 subsystem
    local 131, 232
    process overview 231
    RACF 131
    remote 132, 238
  external DB2 data sets 132
  field level 143
  internal DB2 data 129
access method services
  bootstrap data set definition 404
  commands
    ALTER 37, 518
    ALTER ADDVOLUMES 31, 37
    ALTER REMOVEVOLUMES 31
    DEFINE 37, 581
    DEFINE CLUSTER 37, 39, 40, 625
    DELETE CLUSTER 37
    EXPORT 451
    IMPORT 106, 579
    PRINT 472
    REPRO 472, 507
  data set management 29, 37
  delete damaged BSDS 506
  redefine user work file 461
  rename damaged BSDS 506
  table space re-creation 581
access path
  direct row access 906
  hints 766
  index access 880
  index-only access 905
  low cluster ratio
    effects of 881
    suggests table space scan 911
    with list prefetch 935
  multiple index access
    description 916
    disabling 651
    PLAN_TABLE 904
  selection
    influencing with SQL 754
    problems 711
    queries containing host variables 739
    Visual Explain 754, 891
  table space scan 911
  unique index with matching value 918
access profile, in RACF 266
accounting
  elapsed times 630
  trace
    description 1156
ACQUIRE
  option of BIND PLAN subcommand
    locking tables and table spaces 807
    thread creation 696
active log
  data set
    changing high-level qualifier for 99
    copying with IDCAMS REPRO statement 405
    effect of stopped 503
    offloaded to archive log 395
    placement 660
    VSAM linear 1082
  description 8
  dual logging 396
  offloading 396
  problems 499
backup (continued)
  database (continued)
    DSN1COPY 466
    image copies 457
    planning 439
    system procedures 439
BACKUP SYSTEM utility 36
backward log recovery phase
  recovery scenario 575
  restart 416
base table
  distinctions from temporary tables 48
basic direct access method (BDAM)
  See BDAM (basic direct access method)
basic sequential access method (BSAM)
  See BSAM (basic sequential access method)
batch message processing (BMP) program
  See BMP (batch message processing) program
batch processing
  TSO 327
BDAM (basic direct access method) 398
BIND PACKAGE subcommand of DSN
  options
    DISABLE 154
    ENABLE 154
    ISOLATION 810
    OWNER 148
    RELEASE 807
    REOPT(ALWAYS) 739
    REOPT(NONE) 739
    REOPT(ONCE) 739
  privileges for remote bind 154
BIND PLAN subcommand of DSN
  options
    ACQUIRE 807
    DISABLE 154
    ENABLE 154
    ISOLATION 810
    OWNER 148
    RELEASE 807
    REOPT(ALWAYS) 739
    REOPT(NONE) 739
    REOPT(ONCE) 739
BIND privilege
  description 137
BINDADD privilege
  description 136
BINDAGENT privilege
  description 136
  naming plan or package owner 148
binding
  privileges needed 164
bit data
  altering subtype 87
blank
  column with a field procedure 1054
block fetch
  description 969
  enabling 971
  LOB data impact 971
  scrollable cursors 971
BMP (batch message processing) program
  connecting from dependent regions 364
bootstrap data set (BSDS)
  See BSDS (bootstrap data set)
BSAM (basic sequential access method)
  reading archive log data sets 398
BSDS (bootstrap data set)
  archive log information 404
  changing high-level qualifier of 99
  changing log inventory 405
  defining 404
  description 9
  dual copies 403
  dual recovery 507
  failure symptoms 561
  logging considerations 664
  managing 403
  recovery scenario 506, 578
  registers log data 404
  restart use 412
  restoring from the archive log 507
  single recovery 507
  stand-alone log services role 1092
BSDS privilege
  description 136
buffer information area used in IFI 1121
buffer pool
  advantages of large pools 641
  advantages of multiple pools 641
  allocating storage 641
  altering attributes 643
  available pages 634
  considerations 675
  description 9
  displaying current status 643
  hit ratio 639
  immediate writes 647
  in-use pages 634
  long-term page fix option 643
  monitoring 645, 1162
  page-stealing algorithm 642
  read operations 634
  size 640, 698
  statistics 645
  thresholds 635, 647
  update efficiency 646
  updated pages 634
  use in logging 395
  write efficiency 646
  write operations 634
BUFFERPOOL clause
  ALTER INDEX statement 635
  ALTER TABLESPACE statement 635
  CREATE DATABASE statement 635
  CREATE INDEX statement 635
  CREATE TABLESPACE statement 635
BUFFERPOOL privilege
  description 138
built-in functions for encryption
  See encryption

C
cache
  dynamic SQL
    effect of RELEASE(DEALLOCATE) 808
cache controller 664
cache for authorization IDs 152
CAF (call attachment facility)
  application program
    running 328
    submitting 329
  description 18
CDB (communications database)
   backing up 441
   changing high-level qualifier 102
   description 8
   updating tables 247
CHANGE command of IMS
   purging residual recovery entries 358
change log inventory utility
   changing BSDS 346, 405
   control of data set access 282
change number of sessions (CNOS)
   See CNOS (change number of sessions)
CHANGE SUBSYS command of IMS 363
CHARACTER data type
   altering 87
CHECK DATA utility
   checks referential constraints 297
CHECK INDEX utility
   checks consistency of indexes 297
checkpoint
   log records 1077, 1081
   queue 422
CHECKPOINT FREQ field of panel DSNTIPN 666
CI (control interval)
   description 395, 398
   reading 1092
CICS
   commands
      accessing databases 354
      DSNC DISCONNECT 356
      DSNC DISPLAY PLAN 356
      DSNC DISPLAY TRANSACTION 356
      DSNC STOP 357
      response destination 320
      used in DB2 environment 315
   connecting to
      controlling 358
      disconnecting applications 356
      thread 355
   connecting to DB2
      authorization IDs 327
      connection processing 233
      controlling 353
      disconnecting applications 391
      sample authorization routines 235
      security 280
      sign-on processing 236
      supplying secondary IDs 233
   correlating DB2 and CICS accounting records 618
   description, attachment facility 15
   disconnecting from DB2 357
   dynamic plan selection
      exit routine 1066
   dynamic plan switching 1066
   facilities 1066
      diagnostic trace 390
      monitoring facility (CMF) 610, 1151
      tools 1153
   language interface module (DSNCLI)
      IFI entry point 1119
      running CICS applications 327
   operating
      entering DB2 commands 319
      identify outstanding indoubt units 429
      recovery from system failure 16
      terminates AEY9 499
   planning
      DB2 considerations 15
      environment 327
   programming
      applications 327
   recovery scenarios
      application failure 493
      attachment facility failure 498
      CICS not operational 493
      DB2 connection failure 494
      indoubt resolution failure 495
   starting a connection 354
   statistics 1151
   system administration 16
   two-phase commit 423
   XRF (extended recovery facility) 16
CICS transaction invocation stored procedure 1069
claim
   class 828
   definition 828
   effect of cursor WITH HOLD 820
Class 1 elapsed time 610
CLOSE
   clause of CREATE INDEX statement
      effect on virtual storage use 674
   clause of CREATE TABLESPACE statement
      deferred close 659
      effect on virtual storage use 674
closed application
   controlling access 215, 227
   definition 215
cluster ratio
   description 881
   effects
      low cluster ratio 881
      table space scan 911
      with list prefetch 935
CLUSTERED column of SYSINDEXES catalog table
   data collected by RUNSTATS utility 865
CLUSTERING column
   SYSINDEXES_HIST catalog table 873
CLUSTERING column of SYSINDEXES catalog table
   access path selection 865
CLUSTERRATIO column
   SYSINDEXSTATS_HIST catalog table 874
CLUSTERRATIOF column
   SYSINDEXES catalog table
      data collected by RUNSTATS utility 866
   SYSINDEXES_HIST catalog table 873
   SYSINDEXSTATS catalog table
      access path selection 867
CNOS (change number of sessions)
   failure 523
coding
   exit routines
      general rules 1069
      parameters 1071
COLCARD column of SYSCOLSTATS catalog table
   data collected by RUNSTATS utility 865
   updating 878
COLCARDDATA column of SYSCOLSTATS catalog table 865
COLCARDF column
   SYSCOLUMNS catalog table 865
   SYSCOLUMNS_HIST catalog table 873
COLCARDF column of SYSCOLUMNS catalog table
   statistics not exact 870
conversation-level security 243 created temporary table
conversion procedure distinctions from base tables 48
description 1049 table space scan 911
writing 1049 CREATEDBA privilege
coordinator description 136
in multi-site update 435 CREATEDBC privilege
in two-phase commit 423 description 136
copy pools CREATEIN privilege
creating 37 description 136
SMS construct 36 CREATESG privilege
COPY privilege description 136
description 136 CREATETAB privilege
COPY utility description 136
backing up 466 CREATETMTAB privilege
copying data from table space 457 description 136
DFSMSdss concurrent copy 449, 458 CREATETS privilege
effect on real-time statistics 1186 description 136
restoring data 466 CS (cursor stability)
COPY-pending status claim class 828
resetting 63 distributed environment 810
copying drain lock 829
a DB2 subsystem 109 effect on locking 810
a package, privileges for 154, 164 optimistic concurrency control 812
a relational database 108 page and row locking 812
correlated subqueries 745 CURRENDATA option of BIND
correlation ID plan and package options differ 819
CICS 496 CURRENT MAINTAINED TABLE TYPES FOR
duplicate 362, 497 OPTIMIZATION special register 854
identifier for connections from TSO 352 CURRENT REFRESH AGE special register 854
IMS 362 current status rebuild
outstanding unit of recovery 414 phase of restart 414
RECOVER INDOUBT command 355, 361, 369 recovery scenario 561
COST_CATEGORY_B column of RLST 688 CURRENTDATA option
CP processing, disabling parallel operations 637 BIND PACKAGE subcommand
CRC (command recognition character) enabling block fetch 972
description 319 BIND PLAN subcommand 972
CREATE DATABASE statement cursor
description 43 ambiguous 819, 972
privileges required 164 defined WITH HOLD, subsystem parameter to release
CREATE GLOBAL TEMPORARY TABLE statement locks 805
distinctions from base tables 48 WITH HOLD
CREATE IN privilege claims 820
description 136 locks 820
CREATE INDEX statement Customer Information Control System (CICS)
privileges required 164 See CICS
USING clause 41 CVD (column value descriptor) 1055, 1057
CREATE SCHEMA statement 57
CREATE STOGROUP statement
description 29
privileges required 164
D
damage, heuristic 432
VOLUMES(’*’) attribute 30, 35
data
CREATE TABLE statement
See also mixed data
AUDIT clause 288
access control
creating a table space implicitly 45
description 128
privileges required 164
field-level 143
test table 64
using option of START DB2 323
CREATE TABLESPACE statement
backing up 466
creating a table space explicitly 44
checking consistency of updates 297
deferring allocation of data sets 30
coding
DEFINE NO clause 30, 44
conversion procedures 1049
DSSIZE option 41
date and time exit routines 1046
privileges required 164
edit exit routines 1040
USING STOGROUP clause 30, 44
field procedures 1052
CREATE VIEW statement
compression
privileges required 164
See data compression
CREATEALIAS privilege
consistency
description 136
ensuring 293
database (continued) DB2 Interactive (DB2I)
recovery See DB2I (DB2 Interactive)
description 459 DB2 Performance Monitor (DB2 PM)
failure scenarios 510 See DB2 Performance Expert
planning 439 DB2 PM (DB2 Performance Monitor)
RECOVER TOCOPY 468 EXPLAIN 889
RECOVER TOLOGPOINT 468 DB2 private protocol access
RECOVER TORBA 468 description 967
starting 334 resource limit facility 693
starting and stopping as unit 43 DB2-managed objects, changing data set high-level
status information 335 qualifier 104
stopping 341 DB2I (DB2 Interactive)
TEMP for declared temporary tables 44 description 11, 325
users who need their own 44 panels
database controller privileges 176 description 17
database descriptor (DBD) used to connect from TSO 351
See DBD (database descriptor) DBA (database administrator)
database exception table, log records description 171
exception states 1078 sample privileges 176
image copies of special table spaces 1078 DBADM authority
LPL 1082 description 141
WEPR 1082 DBCTRL authority
DataPropagator NonRelational (DPropNR) description 140
See DPropNR (DataPropagator NonRelational) DataRefresher 65
DataRefresher 65 contents 8
DATE FORMAT field of panel DSNTIPF 1047 EDM pool 647, 649
date routine freeing 699
DATE FORMAT field at installation 1047 load
description 1046 in EDM pool 697
LOCAL DATE LENGTH field at installation 1047 using ACQUIRE(ALLOCATE) 696
writing 1046 locks on 789
datetime use count 699
exit routine for. DBD01 directory table space
See date routine contents 8
See time routine placement of data sets 660
format quiescing 450
table 1046 recovery after conditional restart 464
DB2 Buffer Pool Analyzer recovery information 448
description 1162 DBFULTA0 (Fast Path Log Analysis Utility) 1151
DB2 coded format for numeric data 1075 DBMAINT authority
DB2 commands description 140
authority 321 DD limit
authorized for SYSOPR 322 See DSMAX
commands DDCS (data definition control support)
RECOVER INDOUBT 433 database 9
RESET INDOUBT 434 DDF (distributed data facility)
START DB2 323 block fetch 969
START DDF 371 controlling connections 370
STOP DDF 387 description 18
STOP DDF MODE(SUSPEND) 371 resuming 371
description 316 suspending 371
destination of responses 320 DDL, controlling usage of
entering from See data definition control support
CICS 319 deadlock
DSN session 326 description 775
IMS 318 detection scenarios 840
TSO 319 example 775
z/OS 318 recommendation for avoiding 778
issuing from IFI 1120, 1122 row vs. page locks 803
users authorized to enter 321 wait time calculation 798
DB2 Connect 18 with RELEASE(DEALLOCATE) 779
DB2 data set statistics X’00C90088’ reason code in SQLCA 776
obtaining through IFCID 0199 1134 DEADLOCK TIME field of panel DSNTIPJ 797
DB2 DataPropagator DEADLOK option of START irlmproc command 796
altering a table for 87 decision, heuristic 432
DB2 decoded procedure for numeric data 1075 DECLARE GLOBAL TEMPORARY TABLE statement
distinctions from base tables 48
DISPLAY THREAD command (continued) DROP
shows IMS threads 359, 364 statement
shows parallel tasks 962 TABLE 89
DISPLAY TRACE command TABLESPACE 69
AUDIT option 286 DROP privilege
DISPLAY UTILITY command description 136
data set control log record 1077 DROPIN privilege
DISPLAYDB privilege description 136
description 136 dropping
displaying columns from a table 88
buffer pool information 643 database 68
indoubt units of recovery 360, 496 privileges needed for package 164
information about table spaces 69
originating threads 352 tables 89
parallel threads 352 views 96
postponed units of recovery 361 volumes from a storage group 67
distinct type DSMAX
privileges of ownership 146 calculating 656
DISTINCT TYPE privilege, description 138 limit on open data sets 655
distributed data DSN command of TSO
controlling connections 370 command processor
DB2 private protocol access connecting from TSO 351
See DB2 private protocol description 18
DRDA protocol invoked by TSO batch work 328
See DRDA access invoking 17
operating issues commands 326
displaying status 1134 running TSO programs 325
in an overloaded network 598 subcommands
performance considerations 968 END 353
programming DSN command processor
block fetch 969 See DSN command of TSO
FOR FETCH ONLY 971 DSN message prefix 329
FOR READ ONLY 971 DSN_STATEMNT_TABLE table
resource limit facility 692 column descriptions 946
server-elapsed time monitoring 981 DSN1CHKR utility
tuning 968 control of data set access 282
distributed data facility (DDF) DSN1COMP utility
See DDF (distributed data facility) description 672
Distributed Relational Database Architecture (DRDA) 18 DSN1COPY utility
distribution statistics 879 control of data set access 282
DL/I resetting log RBA 589
batch restoring data 466
features 17 DSN1LOGP utility
loading data 65 control of data set access 282
DL/I BATCH TIMEOUT field of installation panel example 569
DSNTIPI 798 extract log records 1077
DMTH (data management threshold) 636 JCL
double-hop situation 150 sample 566
down-level detection limitations 586
controlling 512 print log records 1077
LEVEL UPDATE FREQ field of panel DSNTIPN 512 shows lost work 559
down-level page sets 511 DSN1PRNT utility
DPropNR (DataPropagator NonRelational) 17 description 282
DPSI DSN3@ATH connection exit routine.
performance considerations 752 See connection exit routine
drain DSN3@SGN sign-on exit routine.
definition 828 See sign-on exit routine
DRAIN ALL 831 DSN6SPRM macro
wait calculation 800 RELCURHL parameter 805
drain lock DSN6SYSP macro
description 773, 829 PCLOSEN parameter 659
types 829 PCLOSET parameter 659
wait calculation 800 DSN8EAE1 exit routine 1040
DRDA access DSN8EXP stored procedure
description 967 description 1231
resource limit facility 692 example call 1232
security mechanisms 238 option descriptions 1232
dynamic SQL (continued) escape character
example 168 example 223
privileges required 164 in DDL registration tables 219
skeletons, EDM pool 647 EVALUATE UNCOMMITTED field of panel DSNTIP4 805
DYNAMICRULES EXCLUSIVE
description 164 lock mode
example 168 effect on resources 786
LOB 825
page 785
E row 785
table, partition, and table space 785
EA-enabled page sets 41
EXECUTE privilege
edit procedure, changing 87
after BIND REPLACE 153
edit routine
description 134, 137
description 294, 1040
effect 148
ensuring data accuracy 294
exit parameter list (EXPL) 1071
row formats 1072
exit point
specified by EDITPROC option 1040
authorization routines 1017
writing 1040
connection routine 1017
EDITPROC clause
conversion procedure 1050
exit points 1041
date and time routines 1047
specifies edit exit routine 1041
edit routine 1041
EDM pool
field procedure 1054
DBD freeing 699
plan selection exit routine 1068
description 647
sign-on routine 1017
EDPROC column of SYSTABLES catalog table 869
validation routine 1044
employee photo and resume sample table 1001
exit routine
employee sample table 998
authorization control 1025
employee-to-project-activity sample table 1004
determining if active 1039
ENABLE
DSNACICX 1219
option of BIND PLAN subcommand 154
general considerations 1069
enclave 705
writing 1015
ENCRYPT 208, 209
exit routine.
encrypting
See also connection exit routine
data 1040
See also conversion procedure
passwords from workstation 264
See also date routine
passwords on attachment requests 244, 262
See also edit routine
encryption 206
See also field procedure
built-in functions for 206
See also log capture exit routine
column level 208
See also sign-on exit routine
data 206
See also time routine
defining columns for 207
See validation routine
non-character values 211
EXPL (exit parameter list) 1071
password hints 209, 210
EXPLAIN
performance recommendations 211
report of outer join 920
predicate evaluation 210
statement
value level 209
alternative using IFI 1118
with views 209
description 891
END
executing under DB2 QMF 901
subcommand of DSN
index scans 905
disconnecting from TSO 353
interpreting output 903
Enterprise Storage Server
investigating SQL processing 891
backup 459
EXPLAIN PROCESSING field of panel DSNTIPO
environment, operating
overhead 901
CICS 327
EXPORT command of access method services 106, 451
DB2 19
Extended Remote Copy (XRC) 549
IMS 327
EXTENDED SECURITY field of panel DSNTIPR 239
TSO 325
extending a data set, procedure 518
z/OS 19
EXTENTS column
ERRDEST option
SYSINDEXPART catalog table
DSNC MODIFY 354
data collected by RUNSTATS utility 866
unsolicited CICS messages 330
SYSINDEXPART_HIST catalog table 874
error
SYSTABLEPART catalog table
application program 488
data collected by RUNSTATS utility 868
IFI (instrumentation facility interface) 1150
SYSTABLEPART_HIST catalog table 875
physical RW 339
external storage
SQL query 297
See auxiliary storage
escalation, lock 793
FREEPAGE (continued) HIGH2KEY column (continued)
clause of CREATE INDEX statement SYSCOLUMNS catalog table
effect on DB2 speed 620 access path selection 865
clause of CREATE TABLESPACE statement recommendation for updating 878
effect on DB2 speed 620 SYSCOLUMNS_HIST catalog table 873
FREESPACE column HIGHKEY column of SYSCOLSTATS catalog table 865
SYSLOBSTATS catalog table 867 hints, optimization 766
SYSLOBSTATS_HIST catalog table 874 HMIGRATE command of DFSMShsm (Hierarchical Storage
FREQUENCYF column Manager) 107
SYSCOLDIST catalog table hop situation 151
access path selection 864 host variable
SYSCOLDIST_HIST catalog table 873 example query 739
SYSCOLDISTSTATS catalog table 864 impact on access path selection 739
full image copy in equal predicate 742
use after LOAD 667 tuning queries 739
use after REORG 667 HRECALL command of DFSMShsm (Hierarchical Storage
FULLKEYCARDDATA column Manager) 107
SYSINDEXSTATS catalog table 867 Huffman compression.
FULLKEYCARDF column See also data compression, Huffman
SYSINDEXES catalog table exit routine 1040
data collected by RUNSTATS utility 866 hybrid join
SYSINDEXES_HIST catalog table 873 description 925
SYSINDEXSTATS catalog table 867 disabling 651
SYSINDEXSTATS_HIST catalog table 874
function
column
when evaluated 910
I
I/O activity, monitoring by data set 661
FUNCTION privilege, description 136
I/O error
function, user-defined 155
catalog 514
FVD (field value descriptor) 1055, 1057
directory 514
occurrence 403
table spaces 513
G I/O processing
generalized trace facility (GTF). minimizing contention 625, 678
See GTF (generalized trace facility) parallel
GETHINT 209 disabling 637
global transaction queries 953
definition of 429 identity column
glossary 1241 altering attributes 88
governor (resource limit facility) loading data into 62
See resource limit facility (governor) identity columns
GRANT statement conditional restart 420
examples 174, 180 IEFSSNxx member of SYS1.PARMLIB
format 174 IRLM 346
privileges required 164 IFCA (instrumentation facility communication area)
granting privileges and authorities 174 command request 1121
GROUP BY clause description 1140
effect on OPTIMIZE clause 756 field descriptions 1141
GROUP DD statement for stand-alone log services OPEN IFI READS request 1123
request 1093 READA request of IFI 1137
GTF (generalized trace facility) WRITE request of IFI 1139
event identifiers 1161 IFCID (instrumentation facility component identifier)
format of trace records 1101 0199 645, 661
interpreting trace records 1106 0330 396, 500
recording trace records 1161 area
description 1145
READS request of IFI 1124
H WRITE request of IFI 1139
description 1102
help
identifiers by number
DB2 UTILITIES panel 11
0001 977, 1133, 1156
heuristic damage 432
0002 1133
heuristic decision 432
0015 696
Hierarchical Storage Manager (DFSMShsm)
0021 698
See DFSMShsm (Hierarchical Storage Manager)
0032 698
HIGH2KEY column
0033 698
SYSCOLSTATS catalog table 865
0038 698
IMS (continued) index (continued)
programming types (continued)
application 16 data-partitioned secondary 55
error checking 327 nonpartitioned secondary 55
recovery partitioning 54
resolution of indoubt units of recovery 428 secondary 55
recovery scenarios 489, 490 unique 54
system administration 17 versions 94
thread 359, 360 recycling version numbers 95
two-phase commit 423 INDEX privilege
using with DB2 16 description 134
IMS BMP TIMEOUT field of panel DSNTIPI 798 index structure
IMS Performance Analyzer (IMS PA) root page
description 1151 leaf pages 117
IMS transit times 610 index-only access
IMS transactions stored procedure NOT PADDED attribute 57
multiple connections 1230 INDEXSPACESTATS
option descriptions 1227 contents 1173
syntax diagram 1227 real-time statistics table 1166
IMS.PROCLIB library indoubt thread
connecting from dependent regions 363 displaying information about 432
inactive connections 702 recovering 433
index resetting status 433
access methods resolving 550
access path selection 914 information center consultant 171
by nonmatching index 915 INITIAL_INSTS column of SYSROUTINES catalog table 868
description 912 INITIAL_IOS column of SYSROUTINES catalog table 868
IN-list index scan 915 INLISTP 766
matching index columns 905 INSERT privilege
matching index description 914 description 134
multiple 916 INSERT processing, effect of MEMBER CLUSTER option of
one-fetch index scan 917 CREATE TABLESPACE 777
altering INSERT statement
ALTER INDEX statement 92 example 64
effects of dropping 94 load data 61, 63
backward index scan 57 installation
copying 457 macros
costs 912 automatic IRLM start 347
description 6 installation SYSADM authority
evaluating effectiveness 673 privileges 142
implementing 53 use of RACF profiles 283
locking 788 installation SYSOPR authority
NOT PADDED privilege 140
advantages of using 56 use of RACF profiles 283
disadvantages of using 57 instrumentation facility communication area (IFCA)
index-only access 56 See IFCA (instrumentation facility communication area)
varying-length column 56 instrumentation facility interface (IFI)
ordered data See IFI (instrumentation facility interface)
backward scan 57 INSTS_PER_INVOC column of SYSROUTINES catalog
forward scan 57 table 868
ownership 146 integrated catalog facility
privileges of ownership 146 changing alias name for DB2 data sets 98
reasons for using 912 controlling storage 29
reorganizing 95 integrity
space IFI data 1149
description 6 reports 298
estimating size 118, 119 INTENT EXCLUSIVE lock mode 786, 825
recovery scenario 513 INTENT SHARE lock mode 786, 825
storage allocated 41 Interactive System Productivity Facility (ISPF)
structure See ISPF (Interactive System Productivity Facility)
index tree 117 internal resource lock manager (IRLM)
leaf pages 117 See IRLM (internal resource lock manager)
overall 118 invalid LOB, recovering 512
root page 117 invoker, description 155
subpages 117 invoking
types 53 DSN command processor 17
clustering 54 IOS_PER_INVOC column of SYSROUTINES catalog table 868
LOB (large object) (continued) lock (continued)
lock duration 825 options affecting (continued)
LOCK TABLE statement 827 repeatable read 815
locking 823 uncommitted read 814
LOCKSIZE clause of CREATE or ALTER page locks
TABLESPACE 827 commit duration 698
modes of LOB locks 825 CS, RS, and RR compared 815
modes of table space locks 825 description 781
recommendations for buffer pool DWQT threshold 639 performance 834
recovering invalid 512 promotion 793
when to reorganize 886 recommendations for concurrency 776
local attachment request 242 row locks
LOCAL DATE LENGTH compared to page 803
field of panel DSNTIPF 1047 size
LOCAL TIME LENGTH controlling 802, 803
field of panel DSNTIPF 1047 page 781
lock partition 781
avoidance 805, 818 table 781
benefits 774 table space 781
class storage needed 796
drain 773 suspension time 837
transaction 773 table of modes acquired 790
compatibility 787 trace records 697
DB2 installation options 796 LOCK TABLE statement
description 773 effect on auxiliary tables 827
drain effect on locks 822
description 829 lock/latch suspension time 611
types 829 LOCKMAX clause
wait calculation 800 effect of options 803
duration LOCKPART clause of CREATE and ALTER TABLESPACE
controlling 807 effect on locking 782
description 784 LOCKS PER TABLE(SPACE) field of panel DSNTIPJ 804
LOBs 825 LOCKS PER USER field of panel DSNTIPJ 801
page locks 698 LOCKSIZE clause
effect of cursor WITH HOLD 820 effect of options 802, 827
effects recommendations 777
deadlock 775 log
deadlock wait calculation 798 buffer
suspension 774 creating log records 395
timeout 775 retrieving log records 395
timeout periods 797 size 662
escalation capture exit routine 1077, 1100
description 793 changing BSDS inventory 405
OMEGAMON reports 834 checkpoint records 1081
hierarchy contents 1077
description 781 deciding how long to keep 405
LOB locks 823 determining size of active logs 666
LOB table space, LOCKSIZE clause 827 dual
maximum number 801 active copy 396
mode 785 archive logs 404
modes for various processes 795 synchronization 396
object to minimize restart effort 578
DB2 catalog 788 effects of data compression 1078
DBD 789 excessive loss 581
description 787 failure
indexes 788 recovery scenario 499, 503
LOCKMAX clause 803 symptoms 561
LOCKSIZE clause 802 total loss 581
SKCT (skeleton cursor table) 789 hierarchy 395
SKPT (skeleton package table) 789 implementing logging 400
options affecting initialization phase
bind 807 failure scenario 561
cursor stability 812 process 413
IFI (instrumentation facility interface) 1149 operation 298
IRLM 796 performance
program 806 considerations 662
read stability 815 recommendations 663
message by identifier (continued)
   DSN1160I 569, 577
   DSN1162I 569, 576
   DSN1213I 583
   DSN2001I 495
   DSN2025I 498
   DSN2034I 495
   DSN2035I 495
   DSN2036I 495
   DSN3100I 322, 324, 498
   DSN3104I 324, 498
   DSN3201I 494
   DSN9032I 371
   DSNB204I 510
   DSNB207I 510
   DSNB232I 511
   DSNB440I 962
   DSNC012I 357
   DSNC016I 429
   DSNC025I 357
   DSNI006I 340
   DSNI021I 340
   DSNI103I 793
   DSNJ001I 323, 397, 414, 560, 561
   DSNJ002I 397
   DSNJ003I 397, 507
   DSNJ004I 397, 501
   DSNJ005I 397
   DSNJ007I 563, 566, 574
   DSNJ008E 397
   DSNJ012I 563, 564, 572
   DSNJ072E 400
   DSNJ099I 323
   DSNJ100I 506, 560, 578
   DSNJ103I 503, 563, 565, 572, 574
   DSNJ104I 503, 563
   DSNJ105I 501
   DSNJ106I 501, 563, 564, 572
   DSNJ107I 506, 560, 578
   DSNJ108I 506
   DSNJ110E 396, 500
   DSNJ111E 396, 500
   DSNJ113E 563, 565, 572, 573, 578
   DSNJ114I 504
   DSNJ115I 503
   DSNJ119I 560, 578
   DSNJ120I 413, 507
   DSNJ123E 506
   DSNJ124I 502
   DSNJ125I 405, 506
   DSNJ126I 506
   DSNJ127I 323
   DSNJ128I 504
   DSNJ130I 413
   DSNJ139I 397
   DSNJ301I 506
   DSNJ302I 506
   DSNJ303I 506
   DSNJ304I 506
   DSNJ305I 506
   DSNJ306I 506
   DSNJ307I 506
   DSNJ311E 402
   DSNJ312I 402
   DSNJ317I 402
   DSNJ318I 402
   DSNJ319I 402
   DSNL001I 371
   DSNL002I 388
   DSNL003I 371
   DSNL004I 371
   DSNL005I 388
   DSNL006I 388
   DSNL009I 381
   DSNL010I 381
   DSNL030I 524
   DSNL080I 372, 373
   DSNL200I 373
   DSNL432I 388
   DSNL433I 388
   DSNL500I 523
   DSNL501I 521, 523
   DSNL502I 521, 523
   DSNL700I 522
   DSNL701I 522
   DSNL702I 522
   DSNL703I 522
   DSNL704I 522
   DSNL705I 522
   DSNM001I 359, 366
   DSNM002I 366, 490, 498
   DSNM003I 359, 366
   DSNM004I 428, 490
   DSNM005I 362, 428, 491
   DSNP001I 516, 517
   DSNP007I 516
   DSNP012I 515
   DSNR001I 323
   DSNR002I 323, 560
   DSNR003I 323, 407, 574, 575, 576
   DSNR004I 323, 414, 416, 560, 561, 570
   DSNR005I 323, 416, 561, 575
   DSNR006I 323, 417, 561
   DSNR007I 323, 414, 416
   DSNR031I 416
   DSNT360I 335, 338, 341
   DSNT361I 335, 338, 341
   DSNT362I 335, 338, 341
   DSNT392I 341, 1079
   DSNT397I 338, 341
   DSNU086I 513, 514
   DSNU234I 672
   DSNU244I 672
   DSNU561I 520
   DSNU563I 520
   DSNV086E 498
   DSNV400I 402
   DSNV401I 355, 360, 361, 402, 496
   DSNV402I 319, 350, 364, 376, 402
   DSNV404I 352, 364
   DSNV406I 350, 355, 360, 361, 496
   DSNV407I 350
   DSNV408I 355, 361, 368, 421, 496
   DSNV414I 355, 361, 368, 497
   DSNV415I 355, 361, 369, 497
   DSNV431I 355
   DSNV435I 421
   DSNX940I 383
   DSNY001I 323
   DSNY002I 324
   DSNZ002I 323
   DXR105E 348
NOT NULL clause optimistic concurrency control 812
CREATE TABLE statement optimization hints 766
requires presence of data 293 OPTIMIZE FOR n ROWS clause 755
notices, legal 1237 interaction with FETCH FIRST clause 755
NPAGES column OPTIMIZE FOR n ROWS clause
SYSTABLES catalog table 869 effect on distributed performance 974, 975
SYSTABSTATS catalog table 870 interaction with FETCH FIRST clause 975
SYSTABSTATS_HIST catalog table 875 ORDER BY clause
NPAGESF column effect on OPTIMIZE clause 756
SYSTABLES catalog table ORGRATIO column
data collected by RUNSTATS utility 869
SYSTABLES_HIST catalog table 875 SYSLOBSTATS_HIST catalog table 874
null value originating sequence number (OASN)
effect on storage space 1072 See OASN (originating sequence number)
NUMBER OF LOGS field of panel DSNTIPL 666 originating task 954
NUMCOLUMNS column ORT (object registration table)
SYSCOLDIST catalog table See registration tables for DDL
access path selection 864 OS/390 environment 13
SYSCOLDIST_HIST catalog table 873 outer join
EXPLAIN report 920
SYSCOLDISTSTATS catalog table 864 materialization 921
numeric output area used in IFI
data command request 1121
format in storage 1075 description 1145
example 1122
WRITE request 1139
O output, unsolicited
CICS 330
OASN (originating sequence number)
operational control 330
indoubt threads 491
subsystem messages 330
part of the NID 362
overflow 1080
object
OWNER
controlling access to 133, 191
qualifies names in plan or package 145
ownership 145, 148
ownership
object of a lock 787
changing 148
object registration table (ORT)
ownership of objects
See registration tables for DDL
establishing 145, 146
objects
privileges 146
recovering dropped objects 472
offloading
active log 396
description 395 P
messages 397 PACKADM authority
trigger events 396 description 140
OMEGAMON package
accounting report accounting trace 1157
concurrency scenario 836 administrator 171, 175
overview 608 authorization to execute SQL in 149
description 1151, 1162 binding
scenario using reports 835 EXPLAIN option for remote 901
statistics report PLAN_TABLE 893
buffer pools 645 controlling use of DDL 215, 227
DB2 log 663 inoperative, when privilege is revoked 185
EDM pool 649 invalidated
thread queuing 709 dropping a view 96
online monitor program using IFI 1117 dropping an index 94
OPEN when privilege is revoked 185
statement when table is dropped 89
performance 938 list
operation privilege needed to include package 164
continuous 12 privileges needed to bind 154
description 333 monitoring 1163
log 298 privileges
operator description 130
CICS 16 explicit 137
commands 315 for copying 154
not required for IMS start 16 of ownership 146
START command 18 remote bind 154
PIECESIZE clause (continued) primary authorization ID
CREATE INDEX statement See authorization ID, primary
recommendations 623 PRIMARY_ACCESSTYPE column of PLAN_TABLE 906
relation to PRIQTY 627 PRINT
PLAN command of access method services 472
option of DSNC DISPLAY command 356 print log map utility
plan selection exit routine before fall back 579
description 1066 control of data set access 282
execution environment 1067 prints contents of BSDS 346, 408
sample routine 1067 prioritizing resources 682
writing 1066 privilege
PLAN_TABLE table description 133
column descriptions 893 executing an application plan 130
report of outer join 920 exercised by type of ID 161
plan, application exercised through a plan or package 148, 154
See application plan explicitly granted 133, 143
planning granting 130, 173, 180, 187
auditing 127 implicitly held 145, 148
security 127 needed for various roles 171
POE ownership 146
See port of entry remote bind 154
point in time recovery remote users 174
catalog and directory 461 retrieving catalog information 187, 191
description 468 revoking 181
point of consistency routine plans, packages 155
CICS 423 types 134, 138
description 393 used in different jobs 171
IMS 423 privilege selection, sample security plan 302
recovering data 462 problem determination
single system 423 using OMEGAMON 1162
pointer, overflow 1080 PROCEDURE privilege 136
pool, inactive connections 702 process
populating description 128
tables 61 processing
port of entry 246, 252 attachment requests 245, 258
RACF APPCPORT class 274 connection requests 233, 235
RACF SERVAUTH class 274 sign-on requests 236, 238
postponed abort unit of recovery 426 processing speed
power failure recovery scenario, z/OS 486 processor resources consumed
PQTY column accounting trace 612, 1159
SYSINDEXPART catalog table buffer pool 641
data collected by RUNSTATS utility 867 fixed-length records 629
SYSINDEXPART_HIST catalog table 874 thread creation 699
SYSTABLEPART catalog table thread reuse 628
data collected by RUNSTATS utility 869 traces 628
SYSTABLEPART_HIST catalog table 875 transaction manager 1155
predicate RMF reports 1154
description 715 time needed to perform I/O operations 622
evaluation rules 719 PROCLIM option of IMS TRANSACTION macro 709
filter factor 726 production binder
generation 735 description 171
impact on access paths 715 privileges 178
indexable 717 project activity sample table 1003
join 716 project sample table 1002
local 716 protocols
modification 735 SNA 242
properties 715 TCP/IP 249
stage 1 (sargable) 717 PSB name, IMS 327
stage 2 PSEUDO_DELETED_ENTRIES column
evaluated 717 SYSINDEXPART catalog table 867
influencing creation 761 SYSINDEXPART_HIST catalog table 874
subquery 716 PSRCP (page set recovery pending) status
PREFORMAT description 63
option of LOAD utility 626 PSTOP transaction type 364
option of REORG TABLESPACE utility 626 PUBLIC AT ALL LOCATIONS clause
preformatting space for data sets 626 GRANT statement 174
RECOVER INDOUBT command recovery (continued)
free locked resources 496 table space (continued)
recover indoubt thread 433 dropped 475
RECOVER privilege DSN1COPY 471
description 136 point in time 450
RECOVER TABLESPACE utility QUIESCE 450
DFSMSdss concurrent copy 458 RECOVER TOCOPY 468
recovers data modified after shutdown 580 RECOVER TOLOGPOINT 468
RECOVER utility RECOVER TORBA 468
cannot use with work file table space 460 scenario 513
catalog and directory tables 461 work file table space 461
data inconsistency problems 454 recovery log
deferred objects during restart 419 description 8
functions 459 record formats 1085
kinds of objects 459 RECOVERY option of REPORT utility 488
messages issued 459 recovery scenarios
options application program error 488
TOCOPY 468 CICS-related failures
TOLOGPOINT 468 application failure 493
TOLOGPOINT in application program error 488 attachment facility failure 498
TORBA 468 manually recovering indoubt units of recovery 495
problem on DSNDB07 461 not operational 493
recovers pages in error 340 DB2-related failures
running in parallel 456 active log failure 499
use of fast log apply during processing 456 archive log failure 503
RECOVER utility, DFSMSdss RESTORE command 36 BSDS 506
RECOVERDB privilege catalog or directory I/O errors 514
description 135 database failures 510
recovery subsystem termination 498
BSDS 507 system resource failures 499
catalog and directory 461 table space I/O errors 513
data set disk failure 486
using DFSMS 458 failure during log initialization or current status
using DFSMShsm 445 rebuild 561
using non-DB2 dump and restore 472 IMS-related failures 489
database application failure 492
active log 1077 control region failure 490
using a backup copy 441 fails during indoubt resolution 490
using RECOVER TOCOPY 468 indoubt threads 550
using RECOVER TOLOGPOINT 468 integrated catalog facility catalog VVDS failure 515
using RECOVER TORBA 468 invalid LOB 512
down-level page sets 511 IRLM failure 485
dropped objects 472 out of space 516
dropped table 473 restart 559
dropped table space 475 starting 322
IFI calls 1150 z/OS failure 486
indexes 441 RECP (RECOVERY-pending) status
indoubt threads 550 description 63
indoubt units of recovery redefining an index-based partition 519
CICS 355, 495 redefining an table-based partition 519
IMS 361 redo log records 1078
media 460 REFERENCES privilege
minimizing outages 445 description 134
multiple systems environment 426 referential constraint
operation 442 adding to existing table 77
point in time 468 data consistency 294
prior point of consistency 462 recovering from violating 520
real-time statistics tables 1189 referential structure, maintaining consistency for recovery 455
reducing time 444 refresh age 854
reporting information 448 REFRESH TABLE statement 851
restart 450, 579 registering a base table as 848
scenarios registration tables for DDL
See recovery scenarios adding columns 216, 228
subsystem 1077 CREATE statements 226
system procedures 439 creating 216
table space escape character 219
COPY 471 examples 220, 225
RLFASUWARN column of RLST 688 RUNSTATS utility
RLST (resource limit specification table) aggregate statistics 876
columns 686 effect on real-time statistics 1185
creating 684 timestamp 879
distributed processing 692 use
precedence of entries 688 tuning DB2 619
RMF (Resource Measurement Facility) 1151, 1154 tuning queries 875
RO SWITCH CHKPTS field of installation panel RVA (RAMAC Virtual Array)
DSNTIPN 659 backup 459
RO SWITCH TIME field of installation panel DSNTIPN 659
rollback
effect on performance 665
maintaining consistency 425
S
sample application
unit of recovery 394
databases, for 1012
root page
structure of 1011
description 117
sample exit routine
index 117
CICS dynamic plan selection 1067
route codes for messages 321
connection
router table in RACF 266, 267
location 1016
routine
processing 1021
example, authorization 157
supplies secondary IDs 234
plans, packages 155
edit 1040
retrieving information about authorization IDs 189
sign-on
routine privileges 136
location 1016
row
processing 1021
formats for exit routines 1072
supplies secondary IDs 237
validating 1043
sample library
row-level security
See SDSNSAMP library
security label column 197
sample security plan
using SQL statements 197
new application 174, 180
ROWID
sample table 995
index-only access 906
DSN8810.ACT (activity) 995
ROWID column
DSN8810.DEMO_UNICODE (Unicode sample) 1005
inserting 65
DSN8810.DEPT (department) 996
loading data into 62
DSN8810.EMP (employee) 998
RR (repeatable read)
DSN8810.EMP_PHOTO_RESUME (employee photo and
claim class 828
resume) 1001
drain lock 829
DSN8810.EMPPROJACT (employee-to-project
effect on locking 811
activity) 1004
how locks are held (figure) 815
DSN8810.PROJ (project) 1002
page and row locking 815
PROJACT (project activity) 1003
RRDF (Remote Recovery Data Facility)
views on 1006
altering a table for 87
SBCS data
RRE (residual recovery entry)
altering subtype 87
detect 362
schema
logged at IMS checkpoint 428
privileges 136
not resolved 428
schema definition
purge 362
authorization to process 58
RRSAF (Recoverable Resource Manager Services attachment
description 57
facility)
example of processor input 58
application program
processing 59
authorization 151
processor 58
transactions
scope of a lock 781
using global transactions 781
SCOPE option
RRSAF (Resource Recovery Services attachment facility)
START irlmproc command 796
application program
scrollable cursor
running 329
block fetching 971
RS (read stability)
optimistic concurrency control 812
claim class 828
performance considerations 750
effect on locking 811
SCT02 table space
page and row locking (figure) 815
description 8
RTT (resource translation table)
placement of data sets 660
transaction type 364
SDSNLOAD library
RUN
loading 363
subcommand of DSN
SDSNSAMP library
example 325
processing schema definitions 59
SECACPT option of APPL statement 243
sort (continued) SSR command of IMS
program (continued) entering 318
RIDs (record identifiers) 938 prefix 333
when performed 938 stand-alone utilities
removing duplicates 938 recommendation 346
shown in PLAN_TABLE 937 standard, SQL (ANSI/ISO)
SORT POOL SIZE field of panel DSNTIPC 652 schemas 57
sorting sequence, altering by a field procedure 1052 star join 926
space attributes 69 dedicated virtual memory pool 931
specifying 81 star schema
SPACE column defining indexes for 761
SYSTABLEPART catalog table 869 START DATABASE command
SPACE column of SYSTABLESPACE catalog table example 334
data collected by RUNSTATS utility 870 problem on DSNDB07 461
space reservation options 620 SPACENAM option 335
SPACEF column START DB2 command
SYSINDEXES catalog table 866 description 323
SYSINDEXPART catalog table 867 entered from z/OS console 322
SYSINDEXPART_HIST catalog table 874 mode identified by reason code 366
SYSTABLEPART catalog table 869 PARM option 323
SYSTABLEPART_HIST catalog table 875 restart 419
SYSTABLES catalog table 869 START FUNCTION SPECIFIC command
SPACEF column of SYSTABLESPACE catalog table starting user-defined functions 343
data collected by RUNSTATS utility 870 START REGION command of IMS 366
SPACENAM option START SUBSYS command of IMS 358
DISPLAY DATABASE command 338 START TRACE command
START DATABASE command 335 AUDIT option 286
speed, tuning DB2 619 controlling data 389
SPT01 table space 8 STARTDB privilege
SPTH (sequential prefetch threshold) 636 description 135
SPUFI started procedures table in RACF 271
disconnecting 353 started-task address space 268
resource limit facility (governor) 690 starting
SQL (Structured Query Language) audit trace 286
performance trace 697 databases 334
statement cost 698 DB2
statements after an abend 324
See SQL statements process 322
performance factors 698 IRLM
transaction unit of recovery 393 process 347
SQL authorization ID table space or index space having restrictions 335
See authorization ID, SQL user-defined functions 343
SQL statements state
DECLARE CURSOR of a lock 785
to ensure block fetching 971 statement table
EXPLAIN column descriptions 946
monitor access paths 891 static SQL
RELEASE 969 privileges required 164
SET CURRENT DEGREE 957 statistics
SQLCA (SQL communication area) aggregate 876
reason code for deadlock 776 created temporary tables 872
reason code for timeout 775 distribution 879
SQLCODE filter factor 870
-30082 239 history catalog tables 872, 876
-510 819 materialized query tables 852
-905 689 partitioned table spaces 872
SQLSTATE trace
'08001' 239 class 4 977
'57014' 689 description 1156
SQTY column STATS privilege
SYSINDEXPART catalog table 867 description 135
SYSTABLEPART catalog table 869 STATSTIME column
SSM (subsystem member) use by RUNSTATS 864
error options 364 status
specified on EXEC parameter 363 CHECK-pending
thread reuse 708 resetting 63
COPY-pending, resetting 63
SYSIBM.LUNAMES table of CDB (continued) table (continued)
remote request processing 240, 253 recovery of dropped 473
sample entries 247 registration, for DDL 215, 227
translating inbound IDs 247 retrieving
translating outbound IDs 240, 253 IDs allowed to access 188
verifying attachment requests 243 plans and packages that can access 190
SYSIBM.USERNAMES table of CDB types 48
managing inbound remote IDs 243 table check constraints
remote request processing 240, 253 adding 80
sample entries for inbound translation 247 dropping 80
sample entries for outbound translation 261 table expressions, nested
translating inbound and outbound IDs 240, 253 materialization 940
SYSLGRNX directory table table space
information from the REPORT utility 448 compressing data 670
table space copying 457
description 8 creating
retaining records 478 default database 45
SYSOPR authority default name 45
description 140 default space allocation 45
usage 322 default storage group 45
Sysplex query parallelism description 44
disabling Sysplex query parallelism 965 explicitly 44
disabling using buffer pool threshold 637 implicitly 45
processing across a data sharing group 955 deferring allocation of data sets 30
splitting large queries across DB2 members 951 description 5
system dropping 69
management functions, controlling 388 EA-enabled 41
privileges 136 for sample application 1012
structures 7 loading data into 61
system administrator locks
description 171 control structures 697
privileges 175 description 781
System Management Facility (SMF) maximum addressable range 44
See SMF (System Management Facility) privileges of ownership 146
quiescing 450
re-creating 69
system monitoring recovery
monitoring tools See recovery, table space
DB2 trace 1155 recovery of dropped 475
system operator reorganizing 76
See SYSOPR authority scans
system programmer 172 access path 911
system-directed access determined by EXPLAIN 892
authorization at second server 150 versions 75
SYSUTILX directory table space 8 recycling version numbers 76
table-controlled partitioning
automatic conversion to 51
T contrasted with index-controlled partitioning 51
implementing 50
table
using nullable partitioning columns 52
altering
tables used in examples 995
adding a column 71
TABLESPACE privilege
auditing 288
description 138
creating
TABLESPACESET option of REPORT utility 488
description 48
TABLESPACESTATS
description 6
contents 1167
dropping
real-time statistics table 1166
implications 89
TCP/IP
estimating storage 112
authorizing DDF to connect 279
expression, nested
keep_alive interval 704
processing 939
protocols 249
locks 781
temporary table
ownership 146
monitoring 661
populating
thread reuse 699
loading data into 61
temporary work file
privileges 134, 146
See work file
qualified name 146
re-creating 90
TSO (continued) unit of recovery (continued)
connections (continued) postponed
tuning 709 displaying 361
DB2 considerations 17 postponed abort 426
DSNELI language interface module rollback 394, 425
IFI 1119 SQL transaction 393
link editing 325 unit of recovery ID (URID) 1085
entering DB2 commands 319 UNLOAD utility
environment 325 delimited files 62
foreground 699 unqualified objects, ownership 145
requirement 17 unsolicited output
resource limit facility (governor) 682 CICS 321, 330
running SQL 699 IMS 321
tuning operational control 330
DB2 subsystem messages 330
active log size 666 UPDATE
catalog location 661 lock mode
catalog size 661 page 785
disk utilization 669 row 785
queries containing host variables 739 table, partition, and table space 785
speed 619 update efficiency 646
virtual storage utilization 674 UPDATE privilege
two-phase commit description 134
illustration 423 updating
process 423 registration tables for DDL 228
TYPE column UR (uncommitted read)
SYSCOLDIST_HIST catalog table 873
SYSCOLDIST catalog table concurrent access restrictions 816
access path selection 864 effect on locking 811
SYSCOLDISTSTATS catalog table 864 effect on reading LOBs 824
page and row locking 814
recommendation 780
U URID (unit of recovery ID).
See unit of recovery
undo log records 1078
USAGE privilege
Unicode
distinct type 138
sample table 1005
Java class 138
UNION clause
sequence 138
effect on OPTIMIZE clause 756
USE AND KEEP EXCLUSIVE LOCKS option of WITH
removing duplicates with sort 938
clause 821
unit of recovery
USE AND KEEP SHARE LOCKS option of WITH clause 821
description 393
USE AND KEEP UPDATE LOCKS option of WITH
ID 1085
clause 821
illustration 393
USE OF privileges 138
in-abort
user analyst 171
backward log recovery 416
user-defined function
description 426
controlling 343
excluded in forward log recovery 415
START FUNCTION SPECIFIC command 343
in-commit
example, authorization 157
description 425
monitoring 344
included in forward log recovery 415
privileges of ownership 146
indoubt
providing access cost 985
causes inconsistent state 412
starting 343
definition 324
stopping 344
description 425
user-defined functions
displaying 360, 496
altering 96
included in forward log recovery 415
controlling 343
recovering CICS 355
user-defined table function
recovering IMS 361
improving query performance 758
recovery in CICS 495
user-managed data sets
recovery scenario 490
changing high-level qualifier 104
resolving 428, 429, 434
extending 40
inflight
name format 38
backward log recovery 416
requirements 38
description 425
specifying data class 41
excluded in forward log recovery 415
utilities
log records 1078
access status needed 345
work file database (continued)
minimizing I/O contention 625
problems 460
starting 334
used by sort 674
Workload Manager 705
WQAxxx fields of qualification area 1090, 1125
write claim class 828
write drain lock 829
write efficiency 646
write error page range (WEPR) 339
WRITE function of IFI 1139
WRITE TO OPER field of panel DSNTIPA 397
write-down control 194
X
XLKUPDLT subsystem parameter 805
XRC (Extended Remote Copy) 549
XRF (extended recovery facility)
CICS toleration 440
IMS toleration 440
Z
z/OS
command group authorization level (SYS) 318, 321
commands
MODIFY irlmproc 348
STOP irlmproc 348
DB2 considerations 13
entering DB2 commands 318, 321
environment 13
IRLM commands control 316
performance options 679
power failure recovery scenario 486
workload manager 705