DB2 for z/OS and WebSphere Integration for Enterprise Java Applications
Paolo Bruni
Zhen Hua Dong
Josef Klitsch
Maggie Lin
Rajesh Ramachandran
Bart Steegmans
Andreas Thiele
ibm.com/redbooks
International Technical Support Organization
August 2013
SG24-8074-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xxvii.
This edition applies to IBM DB2 Version 10.1 for z/OS (program number 5605-DB2) and IBM WebSphere
Application Server for z/OS Version 8.5 (program number 5655-W65).
Contents

Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxviii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxix
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxi
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxi
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxxi
4.3.7 IBM Data Server Driver for JDBC and SQLJ . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
4.3.8 JDBC type 2 DLL and the SDSNLOD2 library . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4.3.9 Bind JDBC packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
4.3.10 UNIX System Services command line processor configuration . . . . . . . . . . . . 167
4.3.11 Using the TestJDBC Java sample . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4.3.12 DB2 security considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4.3.13 Trusted context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
4.3.14 Trusted context application scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
4.3.15 DayTrader-EE6 application using JDBC connections. . . . . . . . . . . . . . . . . . . . 173
4.3.16 Data Web Service servlet with trusted context AUTHID switch . . . . . . . . . . . . 175
4.3.17 Using DB2 profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
4.3.18 Using profiles to optimize and monitor threads and connections . . . . . . . . . . . 181
4.3.19 Configure thread monitoring for the DayTrader-EE6 application . . . . . . . . . . . 187
4.3.20 Using profiles to keep track of DRDA client levels . . . . . . . . . . . . . . . . . . . . . . 189
4.3.21 Using profiles to disable idle thread timeout at application level. . . . . . . . . . . . 194
4.3.22 Using profiles for remote connection monitoring. . . . . . . . . . . . . . . . . . . . . . . . 195
4.3.23 SYSPROC.ADMIN_DS_LIST stored procedure . . . . . . . . . . . . . . . . . . . . . . . . 197
4.3.24 DB2 real time statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
4.3.25 Using RTS to obtain COPY, REORG and RUNSTATS recommendations. . . . 201
4.4 Tivoli OMEGAMON XE for DB2 Performance Expert for z/OS . . . . . . . . . . . . . . . . . . 201
4.4.1 Extract, transform, and load DB2 accounting FILE and statistics information . . 202
4.4.2 Extract, transform and load DB2 accounting SAVE information . . . . . . . . . . . . . 202
4.4.3 Querying the performance database tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
4.4.4 Additional information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
4.5 DB2 database and application design considerations . . . . . . . . . . . . . . . . . . . . . . . . 204
5.11.2 currentPackagePath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
5.11.3 pkList. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
5.11.4 keepDynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Chapter 7. Java Platform, Enterprise Edition with WebSphere Application Server and
DB2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
7.1 Java Platform, Enterprise Edition with WebSphere Application Server and DB2 . . . . 338
7.2 Implementation version of JPA inside WebSphere Application Server . . . . . . . . . . . . 339
7.2.1 The goals of the Java Persistence API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
7.2.2 OpenJPA and JDBC interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
7.2.3 Agile JPA development with a WebSphere Application Server embeddable EJB
container and DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
7.2.4 Use of alternative JPA persistence providers . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
7.2.5 Usage of Non-JTA data sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.2.6 Data source resource definition in applications. . . . . . . . . . . . . . . . . . . . . . . . . . 354
7.2.7 Definition of the IBM DB2 Driver in WebSphere Application Server V8.5 Liberty
Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
7.2.8 LOB streaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
7.2.9 XML JPA column mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
7.3 Preferred practices of Java Platform, Enterprise Edition and DB2 . . . . . . . . . . . . . . . 358
7.3.1 Using resource references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
7.3.2 Providing a JDBC driver in your application libraries . . . . . . . . . . . . . . . . . . . . . 358
7.3.3 Resetting the database for each test run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
7.3.4 Optimizing generated SQL from persistence frameworks. . . . . . . . . . . . . . . . . . 359
7.4 Known issues with OpenJPA 2.2 and DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
8.2.2 Using client information strings to classify work in WLM and RMF reporting . . . 369
8.2.3 Other techniques to segregate/correlate work . . . . . . . . . . . . . . . . . . . . . . . . . . 375
8.3 Monitoring from WebSphere Application Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
8.3.1 Using SMF 120 records . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
8.3.2 WebSphere Application Server Performance Monitoring Infrastructure . . . . . . . 386
8.4 Monitoring from the DB2 side . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
8.4.1 Which information to gather . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
8.4.2 Creating DB2 accounting records at a transaction boundary . . . . . . . . . . . . . . . 396
8.4.3 DB2 rollup accounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
8.4.4 Analyzing DB2 statistics data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
8.4.5 Analyzing DB2 accounting data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
8.4.6 Monitoring threads and connections by using profiles . . . . . . . . . . . . . . . . . . . . 433
8.5 Using the performance database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
8.5.1 Querying aggregated JDBC type 2 accounting information . . . . . . . . . . . . . . . . 435
8.5.2 Querying aggregated JDBC type 4 accounting information . . . . . . . . . . . . . . . . 437
8.5.3 Using RTS to identify DB2 tables that are involved in DML operations . . . . . . . 437
8.6 Monitoring from the z/OS side with RMF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
8.6.1 Workload activity when using a type 4 connection . . . . . . . . . . . . . . . . . . . . . . . 444
8.6.2 Workload activity when using a type 2 connection . . . . . . . . . . . . . . . . . . . . . . . 448
B.2 The DayTrader application workload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
B.2.1 The IBM DayTrader performance benchmark sample for WebSphere Application
Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
B.3 Using the DayTrader application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
Appendix F. Sample IBM Data Server Driver for JDBC and SQLJ trace . . . . . . . . . . 555
System requirements for downloading the web material . . . . . . . . . . . . . . . . . . . . . . . 587
Downloading and extracting the web material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 588
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
Figures
4-48 Local caching, CACHEDYN = NO and KEEPDYNAMIC = YES . . . . . . . . . . . . . . . . 144
4-49 Global caching, CACHEDYN = YES and KEEPDYNAMIC = NO . . . . . . . . . . . . . . . 146
4-50 Full caching, CACHEDYN = YES, KEEPDYNAMIC = YES and MAXKEEPD > 0 . . . 147
4-51 Information about the dynamic SQL statement in the statistics report . . . . . . . . . . . 148
4-52 DB2 secure port and BINDSPECIFIC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4-53 Display DDF alias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4-54 DB2 DDF startup messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4-55 DB2 display DDF command output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
4-56 Overview of applications using JDBC type 2 and type 4 . . . . . . . . . . . . . . . . . . . . . 162
4-57 JDBC /etc/profile changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
4-58 JDBC type 2 DLL external links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4-59 JDBC type 2 DLL in SDSNLOD2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
4-60 UNIX System Services SDSNLOD2 external link definition . . . . . . . . . . . . . . . . . . . 165
4-61 JDBC packages bound by DB2Binder utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
4-62 DB2 CLP to check out JDBC configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
4-63 TestJDBC samples directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4-64 Invoke TestJDBC application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
4-65 WebSphere deployment manager data source test error message . . . . . . . . . . . . . 171
4-66 WebSphere deployment manager AUTHFAIL audit report. . . . . . . . . . . . . . . . . . . . 172
4-67 WebSphere deployment manager data source test successful . . . . . . . . . . . . . . . . 172
4-68 JDBC type 4 trusted context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
4-69 DWS trusted context display thread output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
4-70 Trusted context IFCID 269 record trace with SQLCODE -20361 . . . . . . . . . . . . . . . 180
4-71 Failure of trusted user switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
4-72 DSN_PROFILE_TABLE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
4-73 DSN_PROFILE_ATTRIBUTES table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
4-74 DSN_PROFILE_HISTORY table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
4-75 DSN_PROFILE_ATTRIBUTES_HISTORY table . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
4-76 START PROFILE command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
4-77 DSNT772I active thread monitoring warning message. . . . . . . . . . . . . . . . . . . . . . . 189
4-78 DB2 Client configuration to directly access DB2 for z/OS. . . . . . . . . . . . . . . . . . . . . 190
4-79 DB2 client configuration for DB2 direct access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
4-80 DISPLAY LOCATION with PRDID information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
4-81 Use PDB to query PRDIDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
4-82 DSNT772I PRDID monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
4-83 Message DSNT772I for threshold exceeded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
4-84 ETL accounting FILE and statistics data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
4-85 ETL accounting SAVE data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5-1 WebSphere Application Server Network Deployment configuration . . . . . . . . . . . . . . 209
5-2 WebSphere navigation window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5-3 Existing JDBC providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5-4 New JDBC provider definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5-5 Class path definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
5-6 Summary window for JDBC provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5-7 Environment window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
5-8 List of WebSphere variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
5-9 Filtering variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5-10 List of DB2 related variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5-11 Variable and scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5-12 Variable for DB2UNIVERSAL_JDBC_DRIVER_PATH. . . . . . . . . . . . . . . . . . . . . . . 216
5-13 Location of the IBM Data Server Driver for JDBC and SQLJ classes. . . . . . . . . . . . 217
5-14 WebSphere navigation window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5-15 JDBC data sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5-16 Data source definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5-17 Selecting the JDBC provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5-18 Database properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
5-19 Security alias setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
5-20 Summary of data source definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
5-21 The administration console window of WebSphere Application Server . . . . . . . . . . 223
5-22 List of existing JDBC providers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
5-23 JDBC provider that is defined with the cell scope . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
5-24 New JDBC provider definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
5-25 Driver classes location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5-26 Summary of new JDBC provider definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5-27 Environment window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5-28 List of WebSphere variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
5-29 Filter variables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5-30 List of available variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5-31 Variable cell scope mzcell. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
5-32 DB2UNIVERSAL_JDBC_DRIVER_PATH variable. . . . . . . . . . . . . . . . . . . . . . . . . . 231
5-33 Location of the driver classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
5-34 Location of the native libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
5-35 Administration window for WebSphere . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
5-36 List of JDBC data sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5-37 Window for entering data source information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5-38 Defining the data source and JNDI names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5-39 Selecting the JDBC type 2 Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
5-40 Database properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5-41 Security aliases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5-42 Summary of the type 2 Driver setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5-43 Administrative console of the WebSphere Application Server . . . . . . . . . . . . . . . . . 239
5-44 JDBC type 2 data source selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
5-45 Selecting Custom properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
5-46 List of custom properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
5-47 General properties definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
5-48 Administrative console of the WebSphere Application Server . . . . . . . . . . . . . . . . . 244
5-49 List of existing JDBC data sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
5-50 List of custom properties that are available to the data source. . . . . . . . . . . . . . . . . 245
5-51 Adding the enableSysplexWLB property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
5-52 Administrative console of the WebSphere Application Server . . . . . . . . . . . . . . . . . 247
5-53 List of existing JDBC data sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5-54 TradeDatasourceXA data source is accessed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
5-55 Available properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5-56 Set a value for clientAccountingInformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
5-57 Application identification string . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5-58 Properties values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
5-59 Administration console of the WebSphere Application Server . . . . . . . . . . . . . . . . . 251
5-60 List all the WebSphere installed applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
5-61 Information of the D0ZG_WASTestClientInfo application . . . . . . . . . . . . . . . . . . . . . 253
5-62 Resource reference for the chosen application . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
5-63 Selecting the module that is used by the application . . . . . . . . . . . . . . . . . . . . . . . . 254
5-64 Extended properties panel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
5-65 Entering the application properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
5-66 Rational Application Developer ClientInfo project . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
5-67 Servlet ClientInfoJDBC40API result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
5-68 Servlet ClientInfoJDBC40API display thread output . . . . . . . . . . . . . . . . . . . . . . . . . 261
5-69 Servlet ClientInfoJDBC30API result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
5-70 Servlet ClientInfoJDBC30API display thread output . . . . . . . . . . . . . . . . . . . . . . . . . 263
5-71 Servlet ClientInfoWSAPI result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5-72 Servlet ClientInfoWSAPI display thread output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
5-73 Servlet ClientInfoWLM result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
5-74 Servlet ClientInfoWLM display thread output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
5-75 Administrative console of the WebSphere Application Server . . . . . . . . . . . . . . . . . 268
5-76 List of existing JDBC data sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5-77 JDBC TradeDatasourceXA resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
5-78 Data source properties window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
5-79 WebSphere navigation window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
5-80 Global security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
5-81 J2C authentication data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
5-82 J2C authentication input definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
5-83 WebSphere navigation window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
5-84 Data source and JDBC provider association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5-85 Data source and provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
5-86 Connection pool properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
5-87 Administration console of WebSphere Application Server . . . . . . . . . . . . . . . . . . . . 276
5-88 List of installed applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
5-89 D0ZG_WASTestClientInfo.properties definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
5-90 Resource reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5-91 Selecting the jdbc/Josef module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
5-92 Resource Authentication definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
5-93 JAAS alias trusted connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
5-94 Trusted context enabled . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
5-95 Administration console of WebSphere Application Server . . . . . . . . . . . . . . . . . . . . 283
5-96 List of available servers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
5-97 Properties of the MZSR014 server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
5-98 Server process definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
5-99 Configuring the process definition of the application server . . . . . . . . . . . . . . . . . . . 285
5-100 Java Virtual Machine for the application server . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
5-101 JVM custom properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
5-102 New custom property for JVM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
5-103 Application server defined. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
5-104 Administrative console of the WebSphere Application Server . . . . . . . . . . . . . . . . 288
5-105 List of existing JDBC data sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 289
5-106 Data source TradeDatasourceXA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
5-107 List of the custom properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
5-108 Isolation level definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
5-109 Custom property for default isolation level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
5-110 No default for currentPackagePath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
5-111 currentPackagePath . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 293
5-112 pkList. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
5-113 Property keepDynamic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
5-114 Custom property keepDynamic. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
6-1 The flow from application to database using pureQuery. . . . . . . . . . . . . . . . . . . . . . . 303
6-2 Add data access management support to the project. . . . . . . . . . . . . . . . . . . . . . . . . 304
6-3 Generate pdqxml files with IBM Data Studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
6-4 Work with jpa_db2.pdqxml after generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
6-5 Create a data source connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
6-6 Check whether you can connect to the sample database . . . . . . . . . . . . . . . . . . . . . 315
6-7 Create a JPA project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
6-8 Project structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
6-9 Select a table for the generation of JPA entities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317
6-10 Select the DEPT table in the DSN81010 schema . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6-11 Relationships to other classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6-12 Generated class characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6-13 Creation of the JUnit test class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6-14 Specify the JPA enhancement javaagent for the unit test . . . . . . . . . . . . . . . . . . . . 325
6-15 JUnit test success message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326
6-16 Data source connection test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
7-1 Insert and delete a table row with embeddable EJB container - successful test . . . . 351
7-2 Specify an alternative default persistence provider . . . . . . . . . . . . . . . . . . . . . . . . . . 353
7-3 Non-transactional data source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
8-1 Data source Custom properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366
8-2 Specifying client information as data source custom properties . . . . . . . . . . . . . . . . . 367
8-3 Using the enableClientInformation Custom property . . . . . . . . . . . . . . . . . . . . . . . . . 368
8-4 Application Resource reference window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
8-5 Specifying client information as an extended data source property . . . . . . . . . . . . . . 369
8-6 Classifying DDF work by using the subsystem and process name. . . . . . . . . . . . . . . 371
8-7 WebSphere Application Server classification document wlm.xml. . . . . . . . . . . . . . . . 371
8-8 Selecting the application’s deployment descriptor . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
8-9 DayTrader-EE6 deployment descriptor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
8-10 Setting the wlm_classification_file variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
8-11 Current wlm_classification_file that is being used at start. . . . . . . . . . . . . . . . . . . . . 373
8-12 Changing and displaying the workload classification file . . . . . . . . . . . . . . . . . . . . . 374
8-13 WebSphere work classification using transaction classes . . . . . . . . . . . . . . . . . . . . 375
8-14 Setting currentPackageSet property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 376
8-15 Specifying the planName data source property . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377
8-16 Java and Process Management option. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
8-17 Adding an SMF property . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
8-18 SMF recording properties that are set through the administration console. . . . . . . . 380
8-19 Start PMI collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
8-20 PMI collection that is activated for the application server . . . . . . . . . . . . . . . . . . . . . 388
8-21 Tivoli Performance Viewer - Servlet Summary Report . . . . . . . . . . . . . . . . . . . . . . . 389
8-22 JDBC Connection Pool statistics at startup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
8-23 Connection pool properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
8-24 Advisor output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
8-25 Tuning advice TUNE0201W . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
8-26 Connection pool - PrepStmtCacheDiscardCount . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
8-27 Data source statement cache size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
8-28 PrepStmtCacheDiscardCount after the change . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
8-29 Specifying the accountingInterval custom property . . . . . . . . . . . . . . . . . . . . . . . . . 397
8-30 Thread activity time reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
8-31 Accounting class 1, 2, and 3 time reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 427
8-32 Message DSNT772I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
8-33 IFCID 402 record trace report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
8-34 PDB query JDBC type 2 aggregated accounting data . . . . . . . . . . . . . . . . . . . . . . . 436
8-35 PDB query JDBC type 4 aggregated accounting data . . . . . . . . . . . . . . . . . . . . . . . 437
8-36 Using RTS to determine workload-related table changes. . . . . . . . . . . . . . . . . . . . . 439
8-37 Performance indicators JDBC type 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
8-38 Performance indicators JDBC type 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
9-1 Specifying JCC trace parameters at the data source level . . . . . . . . . . . . . . . . . . . . . 460
9-2 Specify only traceLevel at the data source level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
9-3 Set the log detail level in WebSphere Application Server . . . . . . . . . . . . . . . . . . . . . . 462
9-4 Specifying db2.jcc.propertiesFile as a custom property . . . . . . . . . . . . . . . . . . . . . . . 464
9-5 DB2SystemMonitor information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
A-1 ADMT data sharing overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 484
A-2 Overview of the admin scheduler installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
A-3 ADMT start messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
A-4 ADMT DB2 unavailable message. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
A-5 Query DB2START events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 493
A-6 Administrative scheduler DB2START messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
A-7 Administrative scheduler DB2START trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
A-8 Query DB2START processing status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 496
A-9 Query DB2START history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
A-10 Query DB2STOP events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
A-11 Administrative scheduler DB2STOP messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . 501
A-12 Administrative scheduler DB2STOP trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 502
A-13 Query the DB2STOP processing status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
A-14 Query the DB2STOP history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 503
A-15 Object interactions for autonomic statistics maintenance in DB2 . . . . . . . . . . . . . . . 504
A-16 Query statistics monitoring tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
A-17 ADMIN_UTL_MONITOR ADMT trace information . . . . . . . . . . . . . . . . . . . . . . . . . . 507
B-1 Our DB2 for z/OS configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 512
B-2 DayTrader overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 513
B-3 WebSphere Application Server admin console after installation . . . . . . . . . . . . . . . . 515
B-4 JDBC Provider that is defined by the configuration script. . . . . . . . . . . . . . . . . . . . . . 516
B-5 TradeDataSource from WebSphere Application Server administration console . . . . 516
B-6 Modify the data source for the type 4 connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
B-7 Finish installation by populating the DayTrader database . . . . . . . . . . . . . . . . . . . . . 518
B-8 Go Trade! window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
B-9 Verify your installation by logging in to the DayTrader application . . . . . . . . . . . . . . . 519
B-10 DayTrader Home window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
B-11 Test Trade scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
C-1 Install the WebSphere Application Server Developer Tools. . . . . . . . . . . . . . . . . . . . 525
D-1 OMEGAMON PDB ETL overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 528
D-2 PDB structure accounting tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
D-3 PDB structure statistics tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
D-4 Customize a create table DDL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
H-1 ClientInfo project that is shown in the Java EE perspective . . . . . . . . . . . . . . . . . . . . 574
H-2 Opening the ClientInfo.war file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575
H-3 Install ClientInfo application from local file system . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
H-4 How to install the application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 576
H-5 Step 1: Select installation options window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
H-6 Step 2: Map modules to servers window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
H-7 Step 3: Map context roots for Web modules window . . . . . . . . . . . . . . . . . . . . . . . . . 577
H-8 Step 4: Metadata for modules window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
H-9 Step 5: Summary window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 578
H-10 Application Clientinfo_war installed successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
H-11 Synchronize changes with nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 579
H-12 ClientInfo application installed successfully . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
H-13 Panel 1 starting the ClientInfo Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 580
H-14 Servant region application start messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
H-15 Testing the ClientInfoJDBC30API servlet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 582
H-16 Testing the ClientInfoJDBC40API servlet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 583
H-17 setClientInfo SQLFeatureNotSupportedException . . . . . . . . . . . . . . . . . . . . . . . . . . 583
H-18 Testing the ClientInfoWSAPI servlet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 584
H-19 Testing the ClientInfoWLM servlet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Tables

Examples
7-5 Example call of a stored procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
7-6 DB2 data source definitions for the WebSphere embeddable EJB container. . . . . . . 345
7-7 The Employee class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
7-8 JUnit test driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
7-9 Sample session EJB for SELECT, INSERT, and DELETE of a JPA entity. . . . . . . . . 349
7-10 Persistence.xml of the sample program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
7-11 wsenhancer command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
7-12 Data source definition with Java annotations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
7-13 Server and data source definitions for Liberty Profile . . . . . . . . . . . . . . . . . . . . . . . . 355
7-14 LOB streaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
7-15 Applying a third-party XML mapping tool using JPA annotations . . . . . . . . . . . . . . . 357
7-16 Sample JAXB object to be included into a JPA entity . . . . . . . . . . . . . . . . . . . . . . . . 357
8-1 D SMF,O output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 378
8-2 Using MVS commands to activate SMF 120 type 9 recording . . . . . . . . . . . . . . . . . . 381
8-3 Subtype 9 summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
8-4 Subtype 9 detailed output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
8-5 Create a DB2 statistics report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
8-6 Statistics report - highlights section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
8-7 Statistics report - SQL DML section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
8-8 Statistics report - dynamic SQL statements section . . . . . . . . . . . . . . . . . . . . . . . . . . 400
8-9 Statistics report - subsystem services section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
8-10 Statistics report - DRDA remote locations section . . . . . . . . . . . . . . . . . . . . . . . . . . 401
8-11 Statistics report - global DDF activity section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
8-12 Statistics report - subsystem services - T2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
8-13 Statistics report - global DDF activity section - T2 . . . . . . . . . . . . . . . . . . . . . . . . . . 405
8-14 Statistics report - locking and data sharing locking sections. . . . . . . . . . . . . . . . . . . 406
8-15 Statistics report - buffer pool section. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
8-16 Statistics report - group buffer pool section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
8-17 Statistics report - CPU Times section . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
8-18 Statistics report - RMF CPU and storage metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
8-19 Create a DB2 accounting report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
8-20 Accounting report - identification elapsed time and class 2 time distribution . . . . . . 419
8-21 Accounting report - highlights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
8-22 Accounting report - normal termination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
8-23 Accounting report - SQL DML . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
8-24 Accounting report - DYNAMIC SQL STMT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
8-25 Accounting report - LOCKING and DATA SHARING . . . . . . . . . . . . . . . . . . . . . . . . 422
8-26 Accounting report - buffer pool and group buffer pool . . . . . . . . . . . . . . . . . . . . . . . 424
8-27 Accounting report - Class 1, 2, and 3 times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
8-28 Accounting report - Distributed activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
8-29 Accounting report - Package level information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
8-30 Accounting report - Class 1, 2, and 3 times for T2 . . . . . . . . . . . . . . . . . . . . . . . . . . 431
8-31 Accounting report - Normal Term . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
8-32 Accounting report - Package information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
8-33 Using the SQL table UDF to query JDBC type 2 accounting information . . . . . . . . 435
8-34 JCL that is used to create the postprocessor workload activity report . . . . . . . . . . . 443
8-35 Workload activity - reporting class RTRADE0Z period 1 . . . . . . . . . . . . . . . . . . . . . 444
8-36 Workload activity - reporting class RTRADE0Z period 2 . . . . . . . . . . . . . . . . . . . . . 445
8-37 Workload activity - reporting class RTRADE0Z total. . . . . . . . . . . . . . . . . . . . . . . . . 446
8-38 Workload activity - reporting class RTRADE period 1. . . . . . . . . . . . . . . . . . . . . . . . 447
8-39 Duration report SYSIN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
8-40 Workload activity - reporting class RTRADE period 1. . . . . . . . . . . . . . . . . . . . . . . . 449
8-41 Workload activity - reporting class RTRADE period 2. . . . . . . . . . . . . . . . . . . . . . . . 449
8-42 Workload activity - reporting class for trade (DRDA) . . . . . . . . . . . . . . . . . . . . . . . . 450
9-1 Example of processing an SQLWarning and SQLError . . . . . . . . . . . . . . . . . . . . . . . 452
9-2 The output of warning, error, and stack trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
9-3 Processing SQLException and format SQLCA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
9-4 The output of JDBC program SQLCA formatting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
9-5 Handling chained exceptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
9-6 Combined WebSphere and JCC trace to SYSOUT DD statement . . . . . . . . . . . . . . . 462
9-7 jcc.properties file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
9-8 JCC trace excerpt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
9-9 TRACE_CONNECT entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
9-10 DRDA flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 468
9-11 DB2 correlation information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469
9-12 DB2 accounting record that matches the JCC trace . . . . . . . . . . . . . . . . . . . . . . . . . 470
9-13 JCC trace with SystemMonitor active . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
9-14 -DIS THREAD(*) SCOPE(GROUP) output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
9-15 Resource unavailable at ALTER TABLE time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 476
9-16 -DIS DB(DSN00023) SP(ACT) USE output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
9-17 -DIS THREAD(*) SCOPE(GROUP) LUWID(140295) output . . . . . . . . . . . . . . . . . . 477
9-18 WebSphere Application Server Servant log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
9-19 Message in DB2 MSTR JOBLOG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 478
9-20 Deadlock lockout trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479
9-21 Identify deadlocked dynamic SQL from DSN_STATEMENT_CACHE_TABLE . . . . 481
A-1 ADMT STC JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 485
A-2 ADMT TASKLIST data set - DEFINE CLUSTER . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
A-3 Create ADMT user IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
A-4 RACF started class for ADMT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
A-5 RACF program control for ADMT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
A-6 RACF passtickets for ADMT STCs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 488
A-7 ADMT parameter STOPONDB2STOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
A-8 Commands operating ADMT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
A-9 ADMT DB2START ADMIN_TASK_ADD invocation . . . . . . . . . . . . . . . . . . . . . . . . . . 493
A-10 Define JCL data set alias using SYMBOLICRELATE . . . . . . . . . . . . . . . . . . . . . . . 494
A-11 DB2START D0Z1STRT JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
A-12 JCL template D0Z1STRT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
A-13 ADMT DB2STOP ADMIN_TASK_ADD invocation . . . . . . . . . . . . . . . . . . . . . . . . . . 497
A-14 DB2STOP D0Z1STOP JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 498
A-15 JCL template D0ZASTOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
A-16 CMDIN DB2 console commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
A-17 @OSCMD REXX program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
A-18 Statistics monitoring user objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
A-19 Statistics monitoring DSNDB06.SYSTSKEY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 505
A-20 Query for verifying the status of a task . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
A-21 SQL table UDF to obtain the RUNSTATS output . . . . . . . . . . . . . . . . . . . . . . . . . . . 508
A-22 Query for recent RUNSTATS for table space DSNADMDB.DSNADMTS . . . . . . . . 508
D-1 PDB create DB2 z/OS database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
D-2 PDB generate create table DDL data set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
D-3 Create table space template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
D-4 Batch JCL PDB accounting and statistics table creation . . . . . . . . . . . . . . . . . . . . . . 535
D-5 OMPE extract and transform DB2 trace data into FILE format . . . . . . . . . . . . . . . . . 536
D-6 Extract and transform accounting SAVE format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
D-7 Merge statistics and accounting file load utility control statements . . . . . . . . . . . . . . 538
D-8 Merge accounting save load utility control statements . . . . . . . . . . . . . . . . . . . . . . . . 538
D-9 Image copy batch JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
D-10 Reorg batch JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
D-11 OMPE SQL table UDF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
D-12 Starting UDF for JDBC driver type 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 542
E-1 Subtype 1 summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
E-2 Subtype 1 detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
E-3 Subtype 3 summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
E-4 Subtype 3 detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 548
E-5 Subtype 7 summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
E-6 Subtype 7 detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
E-7 Subtype 8 summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 550
E-8 Subtype 8 detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551
F-1 Sample JCC trace of a single (short) transaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 555
G-1 DDL for UDF GRACFGRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
G-2 Assembler listing of GRACFGRP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 564
G-3 DDL for UDF BIGINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 568
G-4 COBOL listing for UDF BIGINT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, CICS®, DB2®, DB2 Connect™, developerWorks®, Distributed Relational Database Architecture™, DRDA®,
DS8000®, Enterprise Storage Server®, eServer™, FICON®, FlashCopy®, GDPS®, Geographically Dispersed
Parallel Sysplex™, HiperSockets™, IBM®, IMS™, iSeries®, Language Environment®, MVS™, OMEGAMON®, Optim™,
OS/390®, Parallel Sysplex®, pureQuery®, pureXML®, RACF®, Rational®, Redbooks®, Redbooks (logo)®,
Resource Measurement Facility™, RMF™, System Storage®, System z®, System z9®, Tivoli®, VIA®, VTAM®,
WebSphere®, z/Architecture®, z/OS®, z/VM®, z/VSE®, z9®, zEnterprise®, zSeries®
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
IBM® DB2® for z/OS® is a high-performance database management system (DBMS) with a
strong reputation in traditional high-volume transaction workloads that are based on relational
technology. IBM WebSphere® Application Server is web application server software that runs
on most platforms with a web server and is used to deploy, integrate, execute, and manage
Java Platform, Enterprise Edition applications. In this IBM Redbooks® publication, we
describe the application architecture evolution focusing on the value of having DB2 for z/OS
as the data server and IBM z/OS as the platform for traditional and for modern applications.
This book provides background technical information about DB2 and WebSphere features and
demonstrates their applicability by presenting a scenario for configuring WebSphere Application
Server Version 8.5 on z/OS with type 2 and type 4 connectivity (including XA transaction support)
for accessing a DB2 for z/OS database server, taking high-availability requirements into account.
DB2 database administrators, WebSphere specialists, and Java application developers will
appreciate the holistic approach of this document.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.
Paolo Bruni is an ITSO Project Leader who is based in the Silicon Valley Lab in San Jose,
CA. Since 1998, Paolo has authored Redbooks publications about DB2 for z/OS, IBM IMS™, and
related tools and has conducted workshops worldwide. During his many years with IBM in
development and in the field, Paolo's work has been mostly related to database systems on
IBM System z®.
Zhen Hua Dong is an IBM Advisory Software Engineer for DB2 for z/OS Worldwide Level 2
support, China Development Lab. He has worked in the DB2 for z/OS team for six years. His
primary focus is the RDS component in DB2 for z/OS, including SQL processing, optimizer,
CCSID, and so on. He also participated in several projects for local customers, such as DB2
migration and core-banking system implementation.
Josef Klitsch is a Senior IT Specialist for z/OS Problem Determination Tools with IBM
Software Group, Switzerland. After he joined IBM in 2001, he provided DB2 consultancy and
technical support to Swiss DB2 for z/OS customers and worked as a DB2 subject matter
expert for IBM China and as a DB2 for z/OS technical resource for IBM France in Montpellier.
Before his IBM employment, Josef worked, for more than 15 years, for several European
customers as an Application Developer, Database Administrator, and DB2 Systems
Programmer with a focus on DB2 for z/OS and its interfaces. His preferred area of expertise in
DB2 for z/OS is stored procedures programming and administration. He co-authored the IBM
Redbooks publications DB2 9 for z/OS: Deploying SOA Solutions, SG24-7663 and DB2 10 for
z/OS Technical Overview, SG24-7892.
Bart Steegmans is a Consulting DB2 Product Support Specialist from IBM Belgium,
currently working remotely for the Silicon Valley Laboratory in San Jose, providing technical
support for DB2 for z/OS performance problems. Bart was on assignment as a Data
Management for z/OS Project Leader at the ITSO, San Jose Center, 2001 - 2004. He has
over 23 years of experience in DB2. Before joining IBM in 1997, Bart worked as a DB2
system administrator at a banking and insurance group. His areas of expertise include DB2
performance, database administration, and backup and recovery.
Richard Conway
Bob Haimowitz
Linda Robinson
International Technical Support Organization
Mark Rader
ATS Dallas, IBM US
Gareth Jones
IBM UK
David Follis
z/OS WebSphere Development, Poughkeepsie, IBM US
Don Bagwell
ATS Tucson, IBM US
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
The value proposition of System z and z/OS is centered around efficient sharing of resources.
Benefits are derived either from simply running on the platform or from direct exploitation of
platform qualities and attributes by the code beneath the specification interfaces.
The mainframe owes much of its popularity and longevity to its inherent reliability and stability,
a result of continuous technological advances since the introduction of the IBM System/360 in
1964. No other computer architecture in existence can claim as much continuous,
evolutionary improvement, while maintaining compatibility with existing applications.
The term mainframe has gradually moved from a physical description of the larger IBM
computers to the categorization of a style of computing. One defining characteristic of the
mainframe has been continuing compatibility.
One key advantage of mainframe systems is their ability to process terabytes of data from
high-speed storage devices and produce valuable output. For example, mainframe systems
make it possible for banks and other financial institutions to perform end-of-quarter
processing and produce reports that are necessary to customers (for example, quarterly
stock statements or pension statements) or to the government (for example, financial results).
Mainframe workloads fall into one of two categories: Batch processing or online transaction
processing, which includes web-based applications:
With mainframe systems, retail stores can generate and consolidate nightly sales reports
for review by regional sales managers. The applications that produce these statements
are batch applications.
In contrast to batch processing, transaction processing occurs interactively with the user.
Typically, mainframes serve a vast number of transaction systems. These systems are
often mission-critical applications that businesses depend on for their core functions.
Transaction systems must be able to support an unpredictable number of concurrent users
and transaction types. Most transactions run in short time periods (fractions of a
second in some cases).
The IBM relational database management system (RDBMS) offered by System z is DB2 for
z/OS. It is a member of the DB2 family of databases and uses the strengths of that family and
the strength of the System z platform.
DB2 for z/OS data can be accessed in various ways, such as:
Transactions from IMS TM or CICS
Application servers using SQLJ or JDBC (such as WebSphere Application Server)
IBM Distributed Relational Database Architecture™ (IBM DRDA®) protocol
1.2 The System z platform
Infrastructure simplification is key to solving many IT problems. Simplification can be achieved
by resource sharing among servers. It is all about sharing data, sharing applications, and
simplified operational controls. The System z platform, along with its highly advanced
operating systems, provides standard format, protocols, and programming interfaces that
enable resource sharing among applications that are running on the mainframe or a set of
clustered mainframes.
Resource sharing is intended to help reduce redundancy that often comes from maintaining
multiple copies of duplicate data on multiple servers. Sharing can also improve privacy
management by enabling better control and enforcing privacy regulations for data sources.
Sharing data can help simplify disaster recovery scenarios because fewer servers are being
deployed; therefore, sharing data means that less data must be protected during periodic
back-up operations (for example, daily or weekly maintenance) compared to having multiple
copies. But most of all, infrastructure simplification helps a business assess its entire
computing capabilities to determine the best directions and strategy for overall, integrated
workflow, and in doing so, helps to better take advantage of existing assets and drive higher
returns on IT investments.
Processing power can also be turned on (or activated) when needed and turned off when it is
no longer needed. This is useful in cases of seasonal peaks or disaster recovery situations.
Adding processing power and centralizing applications represents one strategy to help control
the cost and complexity of an infrastructure. This approach can also provide a highly effective
way to maximize control while minimizing server sprawl, in essence, reducing the number of
single-application servers that are operating in uncontrolled environments. A number of
single-application servers can typically be deployed to support business processes in both
production and supporting test environments. Hot stand-by failover servers, quality assurance
servers, backup servers, and training, development, and test servers are some of the types of
resources that are required to support a given application. A System z server can help reduce
the numbers of those servers by its ability to scale out.
The term “scale out” describes how the virtualization technology of the System z server lets
users define and provision virtual servers that have all of the characteristics of distributed
servers, except they do not require dedicated hardware. They coexist, in total isolation,
sharing the resources of the System z server.
Virtual servers on System z can communicate with each other by using an inter-server
communication technology that is called IBM HiperSockets™. This technology uses memory as its
transport medium without the need to go out of the server into a real network, eliminating the
need for cables, routers, or switches to communicate between the virtual servers.
Availability
One of the basic requirements for today’s IT infrastructure is to provide continuous business
operations in the event of planned or unplanned disruptions. The availability of the
installation’s mission-critical applications, which are based on a highly available platform,
directly correlates to successful business operations.
System z hardware, operating systems, and middleware elements have been designed to
work together closely, providing an application environment with a high level of availability.
The System z environment approaches application availability with an integrated and
cohesive strategy that encompasses single-server, multi-server, and multi-site environments.
The System z hardware itself is a highly available server. From its inception, all of the
hardware elements have had internal redundancy. From the power components to the central
processors, these redundant elements can be switched in automatically in the event of an error.
As a result of this redundancy, it is possible to make fixes or changes to a failing element
without stopping the machine from working and providing service to the customers.
The System z operating system that sits on top of the hardware has traditionally provided the
best protection and recovery from failure. For example, z/OS, the flagship operating system of
the System z platform, was built to mask a failure from the application. In severe cases, z/OS
can recover through a graceful degradation rather than end in a complete failure. Operating
system maintenance and release change can be done in most cases without stopping the
environment.
Middleware running on z/OS is built to take advantage of both the hardware and operating
system availability capabilities. IBM middleware such as IBM DB2 for z/OS, IBM CICS
products, IBM WebSphere Application Server, and IBM IMS can provide an excellent solution
for an available business application.
The IBM Parallel Sysplex® architecture on System z allows clustered System z servers to
provide resource sharing, workload balancing, and data sharing capabilities for the IT,
delivering ultimate flexibility when supporting different middleware applications. Although
System z hardware, operating systems, and middleware have long supported multiple
applications on a single server, Parallel Sysplex clustering enables multiple applications to
communicate across servers, and even supports the concept of a large, single application
that spans multiple servers, resulting in optimal availability characteristics for that application.
Parallel Sysplex is a cluster solution that is implemented from the IBM hardware to the
middleware layer and, as a consequence, does not have to be designed and developed in the
application layer.
With Parallel Sysplex and its ability to support data sharing across servers, IT architects can
design and develop applications that have a single, integrated view of a shared data store.
System z shared databases also provide high-quality services to protect data integrity.
Figure 1-1 shows the System z high availability family solution, from single system to the IBM
Geographically Dispersed Parallel Sysplex™ (IBM GDPS®).
GDPS technology provides a total business continuity solution for the z/OS environment.
GDPS is a sysplex that spans multiple sites, with disaster recovery capability, which is based
on advanced automation techniques. The GDPS solution allows the installation to manage
remote copy configuration and storage subsystems, automate Parallel Sysplex operation
tasks, and perform failure recovery from a single point of control.
GDPS extends the resource sharing, workload balancing, and continuous availability benefits
of a Parallel Sysplex environment. It also significantly enhances the capability of an enterprise
to recover from disasters and other failures, and to manage planned exception conditions,
enabling businesses to achieve their own continuous availability and disaster recovery goals.
IBM has introduced several “specialty engines”: Processors that can help users expand the
use of the mainframe for new workloads, while helping to lower cost of ownership.
The System Assist Processor (SAP) is standard on IBM System z servers and is a
dedicated I/O processor to help improve efficiencies and reduce the impact of I/O
processing of every IBM System z logical partition regardless of the operating system
(z/OS, IBM z/VM®, Linux, IBM z/VSE® and z/TPF).
The IBM Integrated Facility for Linux (IFL) is another processor that enables the Linux on
System z operating system to run on System z hardware.
The IBM System Integrated Information Processor (zIIP) is designed to help improve
resource optimization for running database workloads in z/OS. DB2 for z/OS can reroute
queries, DRDA activity, utilities, and asynchronous I/O to the zIIP engines.
The IBM System z Application Assist Processor (zAAP) is used by the z/OS Java virtual
machine. z/OS can shift Java workloads to this new zAAP, letting the CP focus on other
non-Java workloads. zAAP can also be used for XML parsing.
Processors such as zAAP and zIIP can lower the software cost of the platform, making it
more cost effective.
System z servers running a single z/OS image or z/OS images in Parallel Sysplex can take
advantage of the Workload Manager (WLM) function. The overall mission of these advanced
workload management technologies is to use established policy and business priorities to
direct resources to key applications when needed. These policies are set by the user based
on the needs of the individual business. These time-tested workload management features
provide the System z environment with the capability to effectively operate at average usage
levels exceeding 70% and sustained peak usage levels of 100% without degradation to
high-priority workloads.
Figure 1-2 shows the effect of processor sharing on a System z server with multiple and
different workloads running concurrently. In an environment that is not constrained for CPU,
the response time for each application is not affected by the other applications running at the
same time.
Figure 1-2 Processor utilization percentage over 24 hours (00:00 - 24:00) on a System z server running web serving, business intelligence and data mining, and SAP batch workloads concurrently
The higher degree of workload management represents a key System z advantage. Workload
management can start at the virtual server level and drill down to the transaction level,
enabling the business to decide which transaction belonging to which customer has a higher
priority over others.
The Intelligent Resource Director (IRD) is a technology that extends the WLM concept to
virtual servers on a System z server. IRD, a combination of System z hardware and z/OS
technology that is tightly integrated with WLM, is designed to dynamically move server
resources to the systems that are processing the highest priority work.
1.2.4 Security
For a business to remain flexible and responsive, it must be able to give access to its systems
to existing customers and suppliers as well as to new customers, while still requiring the
correct authorization to access e-commerce systems and data. The business must provide
access to the data that is required for the business transaction, but also be able to secure
other data from unauthorized access. The business must prevent rogue data from being
replicated throughout the system and to protect the data of the trusted partners. In summary,
the business must be open and secure at the same time.
The System z environment, as with previous mainframe generations, has security concepts that
are designed deeply into the operating system. The ability to run multiple applications
concurrently on the same server demands isolating and protecting each application
environment. The system must be able to control access, allowing users to get to only the
applications and data that they need, not to those that they are not authorized to use.
Hardware components, such as those for the cryptographic function that is implemented on
each central processor, deliver support to the System z platform for encryption and
decryption of data, and for scaling up the security throughput of the system.
In addition, other security components such as IBM RACF® (Resource Access Control
Facility) provide centralized security functions such as user identification and authentication,
access control to specific resources, and the auditing functions that can help provide
protection and meet the business security objectives.
Figure: IBM Language Environment. Source code in Fortran, PL/I, COBOL, C/C++, and Assembler (Assembler does not require a run-time library) uses the language-specific run-time libraries and the common execution library (CEL), which form the operating environment on top of the operating system.
1.3.2 Java
Java is an object-oriented programming language that was originally developed by Sun Microsystems, Inc.
Java can be used for developing traditional mainframe commercial applications as well as
Internet and intranet applications that use standard interfaces.
Java is an increasingly popular programming language that is used for many applications
across multiple operating systems. IBM is a major supporter and user of Java across all of the
IBM computing platforms, including z/OS. The z/OS Java products provide the same,
full-function Java APIs as on all other IBM platforms. In addition, the z/OS Java licensed
programs have been enhanced to allow Java access to z/OS unique file systems.
Programming languages such as Enterprise COBOL and Enterprise PL/I in z/OS provide
interfaces to programs written in Java.
The various Java Software Development Kit (SDK) licensed programs for z/OS help
application developers use the Java APIs for z/OS, write or run applications across multiple
platforms, or use Java to access data that is on the mainframe. Some of these products allow
Java applications to run in only a 31-bit addressing environment. However, with 64-bit SDKs
for z/OS, pure Java applications that were previously storage-constrained by 31-bit
addressing can run in a 64-bit environment. System z processors support zAAP for running
Java applications. Using a zAAP engine adds capacity to the platform without increasing
software charges. Java programs can be run interactively through z/OS UNIX or in batch.
Enterprise Generation Language (EGL) is designed to help the traditional developer take
advantage of all of the benefits of Java and COBOL, yet avoid learning all of its details. EGL is
a simplified high-level programming language that enables you to quickly write full-function
applications that are based on Java and modern web technologies. For example, developers
write their business logic in EGL source code, and from there, the EGL tools generate Java or
COBOL code, along with all of the runtime artifacts that are needed to deploy the application to
the desired execution platform.
EGL hides the details of the Java and COBOL platform and associated middleware
programming mechanisms. This frees developers to focus on the business problem rather
than on the underlying implementation technologies. Developers who have little or no
experience with Java and web technologies can use EGL to create enterprise-class
applications quickly and easily.
IBM Rational® COBOL Generation Extension for System z provides the ability to continue
reaping the benefits of the highly scalable, 24x7 availability of the System z platform by
enabling procedural business developers to write full-function applications quickly while
focusing on the business aspect and logic and not the underlying technology, infrastructure,
or platform plumbing.
Built on open standards, Rational COBOL Generation for System z adds valuable
enhancements to the IBM Software Development Platform so you can:
Provide an alternative path to COBOL adoption.
Construct first-class services for the creation and consumption of web services for
service-oriented architecture.
Hide middleware and runtime complexities.
Achieve the highest levels of productivity.
Migrate from existing technologies to a modern development platform.
Deliver applications that are based on industry standards that interoperate with existing
systems.
The classic applications that are written in languages such as COBOL or PL/I run within a
classic z/OS transaction manager such as CICS or IMS.
New applications can also be written in Java and run within CICS or IMS, or they can benefit
from WebSphere Application Server for z/OS, a certified Java Platform, Enterprise Edition
application server that runs on the System z platform.
Figure 1-4 illustrates data consolidation on the System z platform.
Figure 1-4 Data consolidation on the System z platform: many distributed application and database servers consolidated onto z/OS database servers
With the change in virtual storage in DB2 10, more work can run in one DB2 subsystem,
allowing a consolidation of LPARs as well as DB2 members, and storage monitoring is also
reduced. The net result for this virtual storage constraint relief is reduced cost, improved
productivity, easier management, and the ability to scale DB2 much more easily.
DB2 10 increases the limits for the CTHREAD, MAXDBAT, IDFORE, IDBACK, and MAXOFILR
subsystem parameters. Specifically, the improvement allows a 10 times increase in the number of
these threads (meaning 10 times the value that is currently supported at your installation, not
necessarily 10 times 2000). So, for example, if your installation can support 300-400 concurrently
active threads based on your workload, you might now be able to support 3000-4000
concurrently active threads.
Figure: Before and after IBM mainframe integration. A networked three-tier topology (client, web serving application server, and database and transaction server) is consolidated into a zIIP- and zAAP-enabled, integrated z/OS application and database server running WebSphere Application Server, DB2, CICS, and IMS.
Database Server
This situation is obvious for customers who are already running applications on the z/OS
platform and must extend them. It represents a good move in other cases where enterprises
can benefit from the portability of Java Platform, Enterprise Edition distributed applications to
WebSphere Application Server on z/OS.
This solution increases the benefits that we already stated, and adds new ones:
In this environment, the management of identities is more consistent, and the solution
enhances auditability.
The z/OS system is optimized for efficient use of the resources it is allowed to use.
Transaction processing and batch work can be done at the same time on the same data: it
improves availability and versatility.
If an issue occurs, the integrated problem determination and diagnosis tools quickly help
solve it.
Automatic recovery and rollback ensure a superior level of transactional integrity.
The Java workload that is created by Java Platform, Enterprise Edition applications can
benefit from the System z Application Assist Processor (zAAP) specialty processor.
As a result, DB2 for z/OS delivers important benefits that are not possible from other
relational database management systems on other platforms. It is this integration that
enables System z servers to provide the highest levels of availability, reliability, scalability,
security, and utilization capabilities as seen by the application users. That solid foundation is
critical for data servers because they are at the center of enterprise applications. Any
weaknesses in the underlying infrastructure are reflected all the way through the applications
to users.
DB2 for z/OS Version 8, available since March 2004, was redesigned to take advantage of the
64-bit virtual addressing capabilities that are provided by the architecture of the System z
hardware platform since 2000 and of z/OS since IBM OS/390® Version 10. It benefits from a
much larger virtual storage. The internal management tasks for large databases have been
modified to take advantage of this enhanced virtual storage to again improve scalability
and availability.
Parallel Sysplex
The advanced clustering functions of the System z platform, the Parallel Sysplex, are based
on the concept of “share everything”, in contrast to other clustering environments that are
based on the “share nothing” approach. In this latter approach, some processing power is tied
to a fraction of the data. In Parallel Sysplex systems, all of the DB2 data included in a DB2
group can be accessible by all of the system images participating in the cluster.
This approach is backed up by efficient locking mechanisms that allow data that is accessed
by several instances of an application running in different operating system images to be read
or modified consistently.
DB2 data sharing support allows multiple DB2 subsystems within a sysplex to concurrently
access and update shared databases. DB2 data sharing uses the coupling facility to
efficiently lock data to ensure consistency, and to buffer shared data. DB2 serializes data
access across the sysplex through locking. DB2 uses coupling facility cache structures to
manage the consistency of the shared data. DB2 cache structures are also used to buffer
shared data within a sysplex for improved sysplex efficiency.
Accessibility
Unicode handling
To handle the peculiarities of the different languages of the world (accented letters, special
characters, and so forth), computer users use different sets of characters, named code
pages. This creates many difficulties when exchanging data internationally.
Unicode (http://www.unicode.org) is a set of standards that provides a consistent way to
encode multilingual plain text.
DB2 for z/OS understands Unicode, and users do not have to convert existing data. DB2
can integrate newer Unicode data with existing data and handle the translations. The
synergy between DB2 and z/OS Unicode Conversion Services helps this process to be
high performing.
IBM z/Architecture® instructions exist that are designed just for Unicode conversions.
There have been significant Unicode functional and performance enhancements in the
System z platform starting with z/OS 1.4, z990, and DB2 Version 8.
Networking capabilities
The System z platform supports the TCP/IP V6 standard, which is the new de facto standard
for interactions between nodes in a network. This capability strengthens the role of this
platform as a data serving hub.
DB2 for z/OS uses the zIIP processor starting from z/OS V1R6.
The following types of workloads are eligible for the zIIP processor:
Network-connected applications
An application (running on UNIX, Linux, Intel, Linux on System z, or z/OS) might access a
DB2 for z/OS database that is hosted on a System z server. The work that is eligible to be
directed to the zIIP consists of portions of those requests that are made from the application
server to the host as SQL calls over a DRDA connection that uses TCP/IP (such as with
IBM DB2 Connect™).
DB2 for z/OS gives z/OS the necessary information to direct portions of the eligible work
to the zIIP. Examples of workloads that might be running on the server that is connected
through DRDA over TCP/IP to the System z9 can include Business Intelligence, ERP, or
CRM application serving.
1 An enclave is a specific “business transaction” without address space boundaries. It is dispatchable by the operating system. It can be of system or sysplex scope.
Database workloads such as CICS, IMS, WebSphere for z/OS with local JDBC type 2
access, stored procedures, and batch have become increasingly efficient and cost
effective on the mainframe and do not use the zIIP. One key objective of the
zIIP is to help bring the costs of network access to DB2 for z/OS more closely in line with
the costs of running similar workloads under CICS, IMS, or batch on the System z
platform.
Figure 1-6 illustrates the way zIIP helps reduce the workload of general processors on the
System z platform for eligible workloads.
Figure 1-6 DB2 DRDA requests arriving through TCP/IP (via the network or HiperSockets): portions of the eligible DB2 enclave SRB workload are executed on the zIIP, reducing utilization of the general-purpose CPs
Because I/O rates are increasing, existing applications must perform according to SLA
expectations. To support existing SLA requirements in an environment of rapidly increasing
data volumes and I/O rates, DB2 for z/OS uses features in the Data Facility Storage
Management Subsystem (DFSMS) that help to benefit from performance improvements in
DFSMS software and hardware interfaces:
DB2 uses Parallel Access Volume and Multiple Allegiance features of the IBM
TotalStorage Enterprise Storage Server® (ESS) and IBM System Storage® DS8000®.
IBM FlashCopy® on ESS and DS8000 increases the availability of your data while running
DB2 utilities.
DB2 integrates with z/OS to deliver solutions applicable to recovery, disaster recovery, or
environment cloning needs.
Larger control interval sizes help performance with table space scans, and resolve some
data integrity issues.
The MIDAW function improves FICON performance by reducing channel utilization and
increasing throughput for parallel access streams.
Support for solid-state drives and the row-level sequential detection algorithm help to reduce
the need for reorganizations (REORG).
Higher processor capacity requires greater I/O bandwidth and efficiency. High
Performance FICON (zHPF) enhances the IBM z/Architecture and the FICON interface
architecture to provide greater I/O efficiency. zHPF is a data transfer protocol that is
optionally employed for accessing data from an IBM DS8000 storage subsystem. Both the
DS8800 and the zHPF provide great improvements when used with DB2 for z/OS.
DB2, in combination with z/OS and System z functions, can use Extended Address
Volumes for all types of data sets, and by using Extended Addressability for the
SMS-managed catalog, can allocate DSSIZE greater than 4 GB.
This internal change benefits new and existing workloads where distributed communications
are configured to another logical partition (LPAR) or to an application running on the
System z platform.
DB2 has extensive auditing features. For example, you can answer such questions as, “Who
is privileged to access which objects?” and “Who has accessed the data?”
The catalog tables describe the DB2 objects, such as tables, views, table spaces, packages,
and plans. Other catalog tables hold records of every granted privilege or authority. Every
catalog record of a grant contains information such as name of the object, type of privilege,
IDs that receive the privilege, ID that grants the privilege, and time of the grant.
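As an illustration of how such a question can be answered from an application, the following Java sketch queries the SYSIBM.SYSTABAUTH catalog table through JDBC for the table privileges that are recorded for one authorization ID. It assumes an existing connection with SELECT access to the catalog and shows only a subset of the available columns.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class GrantReport {
    // Lists the table privileges that the DB2 catalog records for one grantee.
    static void printTablePrivileges(Connection con, String grantee) throws Exception {
        String sql = "SELECT GRANTOR, TCREATOR, TTNAME, "
                   + "       SELECTAUTH, INSERTAUTH, UPDATEAUTH, DELETEAUTH "
                   + "FROM SYSIBM.SYSTABAUTH "
                   + "WHERE GRANTEE = ? "
                   + "ORDER BY TCREATOR, TTNAME";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, grantee);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s.%s granted by %s (S=%s I=%s U=%s D=%s)%n",
                            rs.getString("TCREATOR"), rs.getString("TTNAME"),
                            rs.getString("GRANTOR"), rs.getString("SELECTAUTH"),
                            rs.getString("INSERTAUTH"), rs.getString("UPDATEAUTH"),
                            rs.getString("DELETEAUTH"));
                }
            }
        }
    }
}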
The audit trace records changes in authorization IDs, changes to the structure of data,
changes to values (updates, deletes, and inserts), access attempts by unauthorized IDs,
results of GRANT and REVOKE statements, and other activities that are of interest to
auditors.
You can use the System z platform Security Server (also known as Resource Access Control
Facility (RACF)) or equivalent to:
Control access to the DB2 environment
Facilitate granting and revoking to groups of users
Ease the implementation of multilevel security in DB2 (see details below)
Fully control all access to data objects in DB2
DB2 defines sets of related privileges, called administrative authorities. You can effectively
grant many privileges by granting one administrative authority.
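For example, a single GRANT of the DBADM administrative authority on a database replaces many individual grants on the objects in that database. The following minimal sketch issues such a grant through JDBC; the database name DAYTRDB and the authorization ID TRADMIN are hypothetical names used only for illustration.

import java.sql.Connection;
import java.sql.Statement;

public class GrantAuthoritySketch {
    // Grants one administrative authority (DBADM) on a database to an
    // authorization ID instead of granting many individual privileges.
    static void grantDbadm(Connection con) throws Exception {
        try (Statement stmt = con.createStatement()) {
            stmt.execute("GRANT DBADM ON DATABASE DAYTRDB TO TRADMIN");
        }
    }
}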
Security-related events and auditing records from RACF and DB2 can be loaded into DB2
databases for analysis. The DB2 Instrumentation Facility Component can also provide
accounting and performance-related data. This kind of data can be loaded into a standard set
of DB2 tables (definitions provided). Security and auditing specialists can query this data
easily to review all security events.
For regulatory compliance reasons (for example, Basel II, Sarbanes-Oxley, EU Data
Protection Directive), and other reasons such as accountability, auditability, increased
privacy, and security requirements, many organizations focus on security functions when
designing their IT systems. DB2 10 for z/OS provides a large set of options that improve and
further secure access to data held in DB2 for z/OS to address these challenges.
Separating the duties of database administrators from security administrators
Protecting sensitive business data against security threats from insiders, such as
database administrators, application programmers, and performance analysts
Further protecting sensitive business data against security threats from powerful insiders
such as SYSADM by using row-level and column-level access controls
Using the RACF profiles to manage the administrative authorities
For details about DB2 security functions, see Security Functions of IBM DB2 10 for z/OS,
SG24-7959.
Data encryption
System z servers have implemented leading-edge technologies such as high-performance
cryptography, large-scale digital certificate support, continued excellence in Secure Sockets
Layer (SSL) performance, and advanced resource access control function.
DB2 ships a number of built-in functions that enable you to encrypt and decrypt data. IBM
offers an encryption tool that is called the IBM Data Encryption for IMS and DB2 Databases,
program number 5799-GWD. This section introduces both DB2 encryption and the IBM Data
Encryption tool. It also describes recent hardware enhancements that improve
encryption performance.
Data encryption has several challenges. These include changing your application to encrypt
and decrypt the data, encryption key management, and the performance impact
of encryption.
DB2 encryption is available at the column level and at the row level.
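As a minimal sketch of the built-in functions, the following Java fragment sets an encryption password for the connection, encrypts a value with ENCRYPT_TDES on INSERT, and decrypts it with DECRYPT_CHAR on SELECT. The CUSTOMER table, its columns, and the password literal are hypothetical; the encrypted column must be defined as VARCHAR FOR BIT DATA with enough extra length to hold the encrypted value, and a real application would obtain the password from a secure source rather than a literal.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class ColumnEncryptionSketch {
    // Encrypts one column value on INSERT and decrypts it on SELECT.
    static void storeAndRead(Connection con, int id, String cardNo) throws Exception {
        try (Statement s = con.createStatement()) {
            // Password that ENCRYPT_TDES and DECRYPT_CHAR use for this connection.
            s.execute("SET ENCRYPTION PASSWORD = 'Secr3tPw'");
        }
        try (PreparedStatement ins = con.prepareStatement(
                "INSERT INTO CUSTOMER (ID, CARDNO) "
              + "VALUES (?, ENCRYPT_TDES(CAST(? AS VARCHAR(32))))")) {
            ins.setInt(1, id);
            ins.setString(2, cardNo);
            ins.executeUpdate();
        }
        try (PreparedStatement sel = con.prepareStatement(
                "SELECT DECRYPT_CHAR(CARDNO) FROM CUSTOMER WHERE ID = ?")) {
            sel.setInt(1, id);
            try (ResultSet rs = sel.executeQuery()) {
                if (rs.next()) {
                    System.out.println("Decrypted value: " + rs.getString(1));
                }
            }
        }
    }
}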
However, the removable media storage, such as cartridges, that are used for back-up copies
often contain enterprise data in readable format. If these media are stolen, enterprise data is
at risk.
The System z platform provides efficient ways to secure external media storage based on
hardware and software facilities.
Security certifications
The data-serving environment that is based on the System z platform benefits from the use of
the following security certifications.
Java applications
The Java programming language is the language of choice for portable applications that can
run on multiple platforms. The System z platform has been optimized to provide an efficient
Java virtual machine.
The IBM Data Server Driver for JDBC and SQLJ is a single driver that includes JDBC type 2
and JDBC type 4 behavior. When an application loads the IBM Data Server Driver for JDBC
and SQLJ, a single driver instance is loaded for type 2 and type 4 implementations.
The driver has a common code base for Linux, UNIX, Windows, and z/OS. This largely
improves DB2 family compatibility. For example, it enables users to develop on Linux, UNIX,
and Windows, and to deploy on z/OS without having to make any change.
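As an illustration of this single-driver behavior, the following minimal Java sketch obtains a connection and runs a trivial query. The host name, port, location name, and credentials are placeholders, and the IBM Data Server Driver for JDBC and SQLJ (db2jcc4.jar) is assumed to be on the class path. Only the connection URL changes between type 4 (DRDA over TCP/IP) and type 2 (local attach on z/OS); the application code is otherwise identical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DriverTypeSketch {
    public static void main(String[] args) throws Exception {
        // Type 4 connectivity: DRDA over TCP/IP to the DB2 for z/OS location.
        String type4Url = "jdbc:db2://db2host.example.com:446/DB2LOCN";

        // Type 2 connectivity (z/OS only): local attach, no network hop.
        // Swap type2Url in for type4Url when running locally on z/OS.
        String type2Url = "jdbc:db2:DB2LOCN";

        try (Connection con = DriverManager.getConnection(type4Url, "dbuser", "dbpasswd");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1")) {
            while (rs.next()) {
                System.out.println("DB2 server time: " + rs.getString(1));
            }
        }
    }
}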
For more information, refer to WebSphere Application Server V8.5 Concepts, Planning, and
Design Guide, SG24-8022.
Application Server
An application server provides a set of services that business applications can use, and
serves as a platform to develop and deploy these applications. The application server acts as
middleware between back-end systems and clients. It provides a programming model, an
infrastructure framework, and a set of standards for a consistently designed link between them.
As business needs evolve, new technology standards become available. Since 1998,
WebSphere Application Server has grown and adapted itself to new technologies and to new
standards. It provides an innovative and cutting-edge environment so that you can design
fully integrated solutions and run your business applications.
WebSphere Application Server is a key SOA building block, providing the role of the business
application services (circled in Figure 2-2) in the SOA reference architecture.
Figure 2-2 SOA reference architecture: business services support enterprise business processes and goals through business functional services, on top of apps and infrastructure services
From an SOA perspective, you can perform the following functions with WebSphere
Application Server:
Build and deploy reusable application services quickly and easily
Run services in a secure, scalable, highly available environment
Connect software assets and extend their reach
Manage applications effortlessly
Grow as your needs evolve, reusing core skills and assets
The packaging options available for WebSphere Application Server provide a level of
application server capabilities to meet the requirements of various application scenarios.
Although these options share a common foundation, each provides unique benefits to meet
the needs of applications and the infrastructure that supports them. At least one WebSphere
Application Server product fulfills the requirements of any particular project and its supporting
infrastructure. As your business grows, the WebSphere Application Server family provides a
migration path to more complex configurations.
Figure: WebSphere Application Server management and packaging options, showing the administrative agent, the job manager, WebSphere Application Server Network Deployment (clustered, multimachine), and workload management and sysplex integration
WebSphere Application Server Community Edition is a powerful alternative to open source
application servers and has the following features:
Brings together the best related technologies across the broader open source community
to support Java EE specifications such as the following examples:
– Apache Aries
– Apache MyFaces
– Apache OpenEJB
– Apache Open JPA
– Apache ActiveMQ
– TranQL
Includes support for Java EE 6 and Java SE 6
Supports the JDK from IBM and Oracle
Can be used as a run time for Eclipse with its plug-in
Includes an open source Apache Derby database, which is a small-footprint database
server with full transactional capability
Contains an easy-to-use administrative console application
Supports product binary files and source code as no-charge downloads from the IBM
website
Provides optional fee-based support for WebSphere Application Server Community
Edition from IBM Technical support teams
Can be included in advanced topologies and managed with the Intelligent Management
functionality of WebSphere Application Server V8.5
For more information and the option to download WebSphere Application Server Community
Edition, see:
http://www.ibm.com/software/webservers/appserv/community/
Rational Application Developer for WebSphere Software includes the following functions:
Concurrent support for Java Platform, Enterprise Edition 1.2, 1.3, 1.4, Java EE 5, and Java
EE 6 specifications and support for building applications with JDK 5 and JRE 1.6
EJB 3.1 productivity features
Visual editors such as:
– Domain modeling
– UML modeling
– Web development
Web services and XML productivity features
Portlet development tools
Relational data tools
WebSphere Application Server V6.1, V7, V8, and V8.5 test servers
Web 2.0 development features for visual development of responsive Rich Internet
Applications with Ajax and Dojo
Integration with the Rational Unified Process and the Rational tool set, which provides the
end-to-end application development lifecycle
Application analysis tools to check code for coding practices
Examples are provided for preferred practices and issue resolution.
Enhanced runtime analysis tools, such as memory leak detection, thread lock detection,
user-defined probes, and code coverage
Component test automation tools to automate test creation and manage test cases
WebSphere Adapters support, including CICS, IBM IMS, SAP, Siebel, JD Edwards,
Oracle, and PeopleSoft
Support for Linux and Microsoft Windows operating systems.
For more information about Rational Application Developer for WebSphere Software V8, see:
http://www.ibm.com/software/awdtools/developer/application/
Application servers
Profiles
Nodes, node agents, and node groups
Cells
Deployment manager
This section provides information about these concepts. You can find additional concepts
about WebSphere Application Server that build on these core concepts in 2.4, “Server
configurations” on page 41.
2.3.1 Applications
At the heart of WebSphere Application Server is the ability to run applications, including the
following types:
Enterprise
Business-level
Middleware
Figure 2-6 illustrates the applications that run in the Java virtual machine (JVM) of
WebSphere Application Server.
Figure 2-6 Applications running on the framework layer inside the JVM of WebSphere Application Server
WebSphere Application Server V8.5 supports the Java EE 6 specification. New and existing
enterprise applications can take advantage of the capabilities added by Java EE 6. If you
decide not to use the Java EE 6 capabilities, portable applications continue to work with
identical behavior on the current version of the platform.
The Java EE programming model has the following types of application components:
Enterprise JavaBeans (EJB)
Servlets and JavaServer Pages (JSP) files
Application clients (Java Web Start Architecture 1.4.2)
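For illustration, the following minimal servlet shows the annotation-based style that Java EE 6 (Servlet 3.0) introduces for web components; the class name and URL pattern are hypothetical. Because the servlet is declared with the @WebServlet annotation, no web.xml entry is required, and the class is packaged in a web module inside the application EAR file.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A Servlet 3.0 component: declared by annotation, no deployment descriptor entry needed.
@WebServlet("/hello")
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello from WebSphere Application Server");
    }
}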
The primary development tool for WebSphere Application Server Java EE 6 applications is
IBM Rational Application Developer for WebSphere V8.5. It contains tools to create, test, and
deploy Java EE 6 applications. Java EE applications are packaged as enterprise archive
(EAR) files.
For more information about Java EE 6 supported specifications, see the JSR page on the
Java Community Process website at:
http://jcp.org/en/jsr/detail?id=316
For more information about web application specifications, see the following resources:
JSR 154, 53, and 315 (Java Servlet 3.0 specification)
http://jcp.org/en/jsr/detail?id=315
JSR 252 and 127 (Apache MyFaces JSF 2.0 specification)
http://jcp.org/en/jsr/detail?id=314
JSR 318 (EJB 3.1 specification)
http://jcp.org/en/jsr/detail?id=318
2.3.2 Containers
Containers are specialized to run specific types of applications and can interact with other
containers by sharing session management, security, and other attributes. Figure 2-7
illustrates applications that run in different containers inside the JVM. Containers provide
runtime support for applications.
Figure 2-7 Containers inside the JVM: the web container (servlets and sessions), SIP container, portlet container, EJB container, OSGi bundle framework, and batch container
The following packaging options of the WebSphere Application Server family are presented in
this section:
IBM WebSphere Application Server Express V8.5, referred to as Express
IBM WebSphere Application Server V8.5, referred to as Base
IBM WebSphere Application Server Network Deployment V8.5, referred to as Network
Deployment or ND
IBM WebSphere Application Server Hypervisor Edition V7, referred to as
Hypervisor Edition
IBM WebSphere Application Server for z/OS V8.5, referred to as WebSphere Application
Server for z/OS
Each member has essentially the same main architectural structure that is shown in
Figure 2-9. They are built on a common code base. The difference between the options
involves licensing terms and platform support.
Figure 2-9 WebSphere Application Server architecture for Base and Express
These advantages are important for mission-critical applications. You can also manage
multiple base profiles centrally, but you do not have workload management and the same
capabilities for those base profiles.
Figure 2-10 WebSphere Application Server architecture for Network Deployment, adding workload management and high availability
Stand-alone application servers
All WebSphere Application Server packages support a single stand-alone server
environment. With a stand-alone configuration, each application server acts as a unique
entity, functioning independently from other application servers. An application server runs
one or more applications, and provides the services that are required to run these
applications. Each stand-alone server is created by defining an application server profile
(Figure 2-11).
Figure 2-11 Stand-alone application servers, each with its own administrative console, on a single system
A stand-alone server can be managed from its own administrative console. You can also use
the wsadmin scripting facility in WebSphere Application Server to perform every function that
is available in the administrative console application.
Multiple stand-alone application servers can exist on a system. You can either use
independent installations of the WebSphere Application Server product binary files, or create
multiple application server profiles within one installation. However, stand-alone application
servers do not provide workload management or failover capabilities. They are isolated from
each other.
With WebSphere Application Server for z/OS, you can use workload balancing and
response time goals on a transactional basis. You can also use a special clustering
mechanism, multiple servant regions, to balance work within a stand-alone application server.
Remember: With WebSphere Application Server V8.5, you can manage stand-alone
servers from a central point by using administrative agents and a job manager.
Figure 2-12 Distributed application servers with WebSphere Application Server V8.5
With a distributed server configuration, you can create multiple application servers to run
unique sets of applications, and manage those applications from a central location. More
importantly, you can cluster application servers to allow for workload management and
failover capabilities. Applications that are installed in the cluster are replicated across the
application servers. The cluster can be configured so when one server fails, another server in
the cluster continues processing. Workload is distributed among containers in a cluster by
using a weighted round-robin scheme.
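As a conceptual illustration only (this is not the WebSphere Application Server implementation, and the class and member names are invented), a weighted round-robin selection can be sketched as follows: over one selection cycle, each cluster member is chosen a number of times that is proportional to its weight.

import java.util.ArrayList;
import java.util.List;

public class WeightedRoundRobin {
    static final class Member {
        final String name;
        final int weight;
        int credits;
        Member(String name, int weight) { this.name = name; this.weight = weight; }
    }

    private final List<Member> members = new ArrayList<>();
    private int cursor;

    void addMember(String name, int weight) {
        if (weight <= 0) {
            throw new IllegalArgumentException("weight must be positive");
        }
        members.add(new Member(name, weight));
    }

    synchronized Member next() {
        if (members.isEmpty()) {
            throw new IllegalStateException("no members defined");
        }
        // Refill every member's credits after a full cycle has been consumed.
        if (members.stream().allMatch(m -> m.credits == 0)) {
            members.forEach(m -> m.credits = m.weight);
        }
        // Walk the ring until a member with remaining credits is found.
        while (true) {
            Member m = members.get(cursor);
            cursor = (cursor + 1) % members.size();
            if (m.credits > 0) {
                m.credits--;
                return m;
            }
        }
    }
}

With weights of 3 and 1, for example, the two members receive requests in roughly a 3:1 ratio over each cycle; WebSphere Application Server applies the same idea by using the weights that are defined for the cluster members in its configuration.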
Tip for z/OS: The weighted round-robin mechanism is replaced by the integration of
WebSphere Application Server for z/OS in the Workload Manager (WLM). The WLM is a
part of the operating system. Requests can be dispatched by using this configuration to a
cluster member according to real-time load and regardless of whether the member
reaches its defined response time goals.
With the mixed server environment and mixed node definitions, other existing server types
can be added and administered. These types include external WebSphere application
servers, Apache Server, and Custom HTTP Server.
2.3.4 Profiles
WebSphere Application Server runtime environments are built by creating a set of configuration
files, named profiles, that represent a WebSphere Application Server configuration. The
following categories of WebSphere Application Server files are available, as illustrated in
Figure 2-13:
Product files are a set of read-only static files or product binary files that are shared by any
instances of WebSphere Application Server.
Configuration files (profiles) are a set of user-customizable data files. This file set
includes WebSphere configuration, installed applications, resource adapters, properties,
and log files.
Figure 2-13 WebSphere Application Server V8.5 core product files plus WebSphere Application Server V8.5 user files (profile) equals a complete WebSphere Application Server V8.5 installation
The Customization Toolbox allows you to create separate environments, such as for
development or testing, without a separate product installation for each environment. Different
profile templates are available in WebSphere Application Server V8.5 through the
Customization Toolbox Profile Management Tool (PMT):
Cell
A cell template contains a federated application server node and a deployment manager.
Deployment manager
The Network Deployment profile provides the necessary configuration for starting and
managing the deployment manager server.
Default profile (for stand-alone servers)
This default profile provides the necessary configuration files for starting and
managing an application server, and all the resources that are needed to run
enterprise applications.
Administrative agent
This profile is used to create the administrative agent to administer multiple stand-alone
application servers.
Default secure proxy
This profile is available when you install the DMZ secure proxy server feature.
The Liberty profile: Do not confuse the Liberty profile with the concept of a profile that is
created by the PMT in previous versions of WebSphere Application Server. The Liberty
profile provides a composable and dynamic application server runtime environment on
WebSphere Application Server V8.5. The Liberty profile is a subset of base functions of
the WebSphere Application Server, which is installed separately.
You can create compressed files that contain all or subsets of the Liberty profile server
installation. You can then extract these files on other target hosts as a substitute for the
product installation.
With a simpler configuration model based on XML, you do not need to create a profile by
using the PMT to create Liberty profile application servers.
Each profile contains files that are specific to that run time (such as logs and configuration
files). You can create profiles during and after installation. After you create the profiles, you
can perform further configuration and administration by using WebSphere
administrative tools.
Each profile is stored in a unique directory path (Figure 2-14), which is selected by the user
when the profile is created. Profiles are stored in a subdirectory of the installation directory by
default, but can be located anywhere.
Figure 2-14 Profiles directory structure of WebSphere Application Server V8.5 on a Windows system
By creating various profiles, you can create a distributed server configuration by using one of
the following methods:
Create a deployment manager profile to define the deployment manager, and then create
one or more custom node profiles. The nodes that are defined by each custom profile can
be federated into the cell that is managed by the deployment manager. You can federate
these nodes during profile creation, or manually later. The custom nodes can exist inside
the same operating system image as the deployment manager or in another operating
system instance. You can then create application servers by using the administrative
console or wsadmin scripts.
This method is useful when you want to create multiple nodes, multiple application servers
on a node, or clusters.
Create a deployment manager profile to define the deployment manager. Then, create
one or more application server profiles, and federate these profiles into the cell that is
managed by the deployment manager. This process adds both nodes and application
servers into the cell. The application server profiles can exist on the deployment manager
system or on multiple separate systems or z/OS images.
This method is useful in development or small configurations. Creating an application
server profile gives you the option of having the sample applications installed on the
server. When you federate the server and node to the cell, any installed applications can
be carried into the cell with the server.
Create a cell profile. This method creates both a deployment manager profile and an
application server profile. The application server node is federated to the cell. Both profiles
are on the same system.
This method is useful in a development or test environment. Creating a single profile
provides a simple distributed system on a single server or z/OS image.
Nodes
A node is an administrative grouping of application servers for configuration and operational
management within one operating system instance. You can create multiple nodes inside one
operating system instance, but a node cannot leave the operating system boundaries. A
stand-alone application server configuration has only one node. With Network Deployment,
you can configure a distributed server environment that consists of multiple nodes that are
managed from one central administration server.
From the administrative console, you can also configure middleware nodes (defined into a
generic server cluster) to manage middleware servers by using a remote agent.
Figure 2-15 Node concept - WebSphere Application Server Network Deployment configuration
Node agents
In distributed server configurations, each node has a node agent that works with the
deployment manager to manage administration processes. A node agent is created
automatically when you add (federate) a stand-alone application server node to a cell. Node
agents are not included in the Base and Express configurations because a deployment
manager is not needed in these architectures. In Figure 2-15, each node has its own node
agent that communicates directly or remotely with the deployment manager. The node agent
is an administrative server that runs on the same system as the node. It monitors the
application servers on that node, routing administrative requests from the deployment
manager to those application servers.
Node groups
A node group is a collection of nodes within a cell that have similar capabilities in terms of
installed software, available resources, and configuration. A node group is used to define a
boundary for server cluster formation so that the servers on the same node group host the
same applications.
A node group validates that the node can run certain functions before allowing them. For
example, a cluster cannot contain both z/OS nodes and non-z/OS nodes. In this case, you
can define multiple node groups, one for the z/OS nodes and one for non-z/OS nodes. A
DefaultNodeGroup is created automatically. The DefaultNodeGroup contains the deployment
manager and any new nodes with the same platform type. A node can be a member of more
than one node group.
Sysplex on z/OS: On the z/OS platform, a node must be a member of a system complex
(sysplex) node group. Nodes in the same sysplex must be in the same sysplex node group.
A node can be in one sysplex node group only. A sysplex is the z/OS implementation of a
cluster. This technique uses distributed members and a central point in the cluster. It uses
a coupling facility for caching, locking, and listing. The coupling facility runs special
firmware, the Coupling Facility Control Code (CFCC). The members and the coupling
facility communicate with each other by using a high-speed InfiniBand memory-to-memory
connection of up to 120 Gbps.
Figure 2-16 shows a single cell that contains multiple nodes and node groups.
Figure 2-16 A single cell that contains multiple nodes and node groups
2.3.6 Cells
A cell is a grouping of nodes into a single administrative domain. A cell encompasses the
entire management domain. In the Base and Express configurations, a cell contains one
node, and that node contains one server. The left side of Figure 2-17 on page 40 illustrates a
system with two cells that are each accessed by their own administrative console. Each cell
has a node and a stand-alone application server.
Figure 2-17 Cell configurations: stand-alone application server cells (left) and a cell in a distributed server configuration (right)
A cell configuration that contains nodes that are running on the same platform is called a
homogeneous cell.
It is also possible to configure a cell that consists of nodes on mixed platforms. With this
configuration, other operating systems can exist in the same WebSphere Application Server
cell. Cells can span z/OS sysplex environments and other operating systems. For example,
z/OS nodes, Linux nodes, UNIX nodes, and Windows system nodes can exist in the same
WebSphere Application Server cell. This configuration is called a heterogeneous cell. A
heterogeneous cell requires significant planning.
Figure 2-18 shows a heterogeneous cell, where node groups are defined for different
operating systems.
Figure 2-18 A heterogeneous cell with the coexistence of distributed and z/OS nodes
The configuration and application files for all nodes in the cell are centralized into the master
repository. This centralized repository is managed by the deployment manager and regularly
synchronized with local copies that are held on each of the nodes. If the deployment manager
is not available in the cell, the node agents and the application servers cannot synchronize
configuration changes with the master repository. This limitation continues until the
connection with the deployment manager is reestablished.
With WebSphere Application Server, you can create two types of configurations in a single
cell environment:
Single system configurations
Multiple systems configurations
Figure 2-19 Single system cell configuration with Base and Express: one cell, one node, one application server
Single system is the only configuration option with Base and Express. The cell is created
when you create the stand-alone application server profile.
A node agent at each node is the contact point for the deployment manager during cell
administration. A single system configuration in a distributed environment includes all
processes in one system as illustrated in Figure 2-20.
Figure 2-20 Single system configuration in a distributed environment: deployment manager and nodes on System A
Multiple system configurations
A Network Deployment environment allows you to install the WebSphere Application Server
components on systems and locations that suit your requirements. With the Network
Deployment package, you can create multiple systems configurations.
Figure 2-21 shows the deployment manager that is installed on one system (System A) and
each node on a different system (System B and System C). The servers can be mixed
platforms or the same platform. In this example, System A can be an IBM AIX® system,
System B can be a Windows operating system, and System C can be a z/OS image.
Figure 2-21 Multiple systems configuration: deployment manager on System A and nodes on System B and System C
Using the same logic, other combinations can be installed. For example, you can install the
deployment manager and a node on one system with additional nodes installed on
separate systems.
WebSphere Application Server provides clustering support for the following types of servers:
Application server clusters
Proxy server clusters
Generic server clusters
Dynamic clusters
Application servers that are a part of a cluster are called cluster members. When you install,
update, or delete an application, the updates (changes) are distributed automatically to all
cluster members. By using the rollout update option, you can update and restart the
application servers on each node. This process can be done one node at a time, providing
continuous availability of the application to the user.
Figure 2-22 Vertical cluster: multiple cluster members on one system behind an HTTP server and the web server plug-in
Vertical clusters offer fail over support within one operating system image, provide processor
level fail over, and increase resource usage.
2.5.2 Horizontal cluster
Horizontal scaling or horizontal clustering refers to cluster members that are spread across
different server systems and operating system types (Figure 2-23). In this topology, each
system has a node in the cell that is holding a cluster member. The combination of vertical
and horizontal scaling is also possible.
Figure 2-23 Horizontal cluster: cluster members spread across System A and System B behind an HTTP server
Horizontal clusters increase availability by removing the bottleneck of using only one physical
system and increasing the scalability of the environment. Horizontal clusters also support
system fail over.
Figure 2-24 Combined vertical and horizontal clustering: two cluster members on each of System A and System B
Using an HTTP traffic-handling device, such as IBM HTTP Server and the web server plug-in,
is a simple and efficient way to front end the WebSphere HTTP transport.
Instead of using a static round-robin procedure, workload management on the z/OS platform
introduces a finer granularity and the use of real-time performance data. You can use these
features to determine which member processes a transaction.
You can classify incoming requests according to their importance. For example, requests that
come from a platinum-ranked customer can be processed with higher importance (and
therefore faster) than requests from a silver-ranked customer.
When resource constraints exist, the WLM component can ensure that the member that
processes a higher prioritized request gets additional resources. This system protects the
response time of your most important work.
WLM changes: The WLM component can change the amount of processor, I/O, and
memory resources that are assigned to the different operating system processes (the
address spaces). To decide whether a process is eligible for receiving additional resources,
the system checks whether the process meets its defined performance targets, and
whether more important work is in the system. This technique is run dynamically so that
there is no need for manual interaction after the definitions are made by the system
administrator (the system programmer).
Accessing data from a Java EE environment (which WebSphere Application Server is)
involves a key concept that is important to understand: the specifics of the actual data
system are hidden from the application behind a standardized layer of abstraction. The
Java EE specification provides a standardized API for accessing data, and the vendor of the
actual data system is responsible for implementing the code behind the API layer. The
implementation that is offered by the vendors is called a “connector”, as shown in Figure 2-26.
Figure 2-26 A Java application accesses the actual data system through open standard interfaces defined by Java and vendor-supplied code (the connector) behind them
The standardized API for relational databases is defined by what is called the Java Database
Connectivity (JDBC) specification.
Type 2
Type 2 drivers are written partly in the Java programming language and partly in native code.
These drivers use a native client library specific to the data source to which they connect.
JDBC type 2 connectivity should be used only when Java applications (whether stand-alone
Java applications or applications that run in WebSphere Application Server on z/OS) run on
z/OS and access DB2 data in the same LPAR. This type of connectivity is recommended
when the applications are deployed in WebSphere Application Server on z/OS and access
data in DB2 for z/OS on the same LPAR.
Type 4
Type 4 drivers are written in pure Java and implement the database protocol for a specific
data source. The client connects directly to the data source. DRDA is the protocol that is used
when connecting to a DB2 system as a data source. The type 4 driver is fully portable
because it is written purely in Java.
The IBM implementation of these drivers is called IBM Data Server Driver for JDBC and
SQLJ. For details, see 3.2, “IBM Data Server Drivers and Clients” on page 87.
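As a simple illustration of the difference, the following minimal sketch (the host name, port, location name, and credentials are placeholders, not values from this book) shows the JDBC URL forms that select type 4 and type 2 connectivity with the IBM Data Server Driver for JDBC and SQLJ.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class DriverTypeUrls {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dbuser");          // placeholder credentials
        props.setProperty("password", "dbpassword");

        // Type 4: pure Java, DRDA over TCP/IP; connect to the group DVIPA and DRDA port
        try (Connection t4 = DriverManager.getConnection(
                "jdbc:db2://d0zg.example.com:446/DB0ZLOC", props)) {
            // remote access from any platform
        }

        // Type 2 on z/OS: local attach to a DB2 subsystem in the same LPAR (no host or port)
        try (Connection t2 = DriverManager.getConnection("jdbc:db2:DB0ZLOC", props)) {
            // local access only
        }
    }
}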
The DB2 Universal JDBC Driver Provider (XA) should be used only if applications running in
WebSphere Application Server meet both of the following criteria:
Require global transaction support
Want to use JDBC type 4 access to DB2 for z/OS
The DB2 Universal JDBC Driver Provider should be used only if applications running in
WebSphere Application Server meet either of the following criteria:
Applications that run on WebSphere Application Server on z/OS and access DB2 for z/OS
on the same LPAR by using JDBC type 2 access. This provider supports both 1-phase and
2-phase commit processing.
Applications that run on WebSphere Application Server (irrespective of platform), need
access to DB2 for z/OS, and do not require global transaction support.
For details, see 5.2, “Configuring WebSphere Application Server for JDBC type 4 XA access”
on page 209 and 5.3, “Configuring WebSphere Application Server for JDBC type 2 access”
on page 222.
2.6.4 WebSphere Application Server connection pooling
Connection pooling can improve the response time of any application requiring connections
to access a data source, especially web-based applications. To avoid the impact of acquiring
and closing connections, WebSphere Application Server provides connection pooling for
connection reuse (caching of JDBC connections). WebSphere Application Server enables
administrators to establish a pool of database connections that can be reused. They are
defined with the panel shown in Figure 2-27.
To get the most out of connection pooling, consider the following items:
If an application creates a resource, the application should explicitly close it after the
resource is no longer being used.
All JDBC resources that have been obtained by an application should be explicitly closed
by the same application. These include connections, CallableStatements,
PreparedStatements, ResultSets, and others. Be sure to close resources even in the case
of a failure. For example, each PreparedStatement object in the cache can have one or
more result sets associated with it. If a result set is opened and not closed, even though
you close the connection, that result set is still associated with the prepared statement in
the cache. Each of the result sets has a unique JDBC cursor that is attached to it. This
cursor is kept by the statement and is not released until the prepared statement is cleared
from the WebSphere Application Server prepared statement cache.
Obtain and close the connection in the same method.
When possible, we recommend that an application obtains and closes its connection in the
same method in which the connection is requested, as shown in the sketch after this list.
This keeps the application from
holding resources that are not being used, and leaves more available connections in the
pool for other applications. Additionally, it removes the temptation to use the same
connection for multiple transactions. There might be times in which this is not feasible,
such as when using BMP.
Do not reuse the statement handle without closing it first.
To prevent resource leakage, close prepared statements before reusing the statement
handle to prepare a different SQL statement with the same connection.
Set WebSphere Application Server connection Unused Timeout to a value smaller than
DB2 for z/OS idle thread timeout to avoid stale connection conditions.
Consider setting the minimum connections setting to 0 (zero).
Consider setting the WebSphere Application Server Aged Timeout property to less than
5 minutes (120 seconds is recommended) to reduce the exposure of long-living threads.
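The following minimal sketch (the DataSource lookup, table, and column names are placeholders) shows one way to apply these guidelines with try-with-resources, so that the connection, prepared statement, and result set are obtained and closed in the same method even when an exception occurs.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao {
    private final DataSource ds;

    public CustomerDao(DataSource ds) {
        this.ds = ds;
    }

    // Obtain and close the connection, statement, and result set in the same method;
    // try-with-resources closes them in reverse order, even on failure.
    public String findName(int id) throws SQLException {
        String sql = "SELECT NAME FROM CUSTOMER WHERE ID = ?";   // placeholder table
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }   // the result set and its DB2 cursor are released here
        }       // the statement returns to the cache and the connection to the pool
    }
}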
The core properties, such as the following properties, must be identical:
– Username
– Host and port
The following topic in the WebSphere Application Server information center explains the extended
properties that can be set for each application:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/topic/com.ibm.websphere.nd.multiplatform.doc/ae/tdat_heteropool.html
Applications that are deployed in WebSphere Application Server, use the IBM Data Server
Driver for JDBC and SQLJ, and use a JDBC type 4 connection to DB2 for z/OS can be
enabled to be sysplex aware.
Figure 2-28 Sysplex workload balancing: logical connections from the JCA connection pool are associated with pooled transports to the DB2 data sharing group and are disconnected at commit or rollback
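To see the effect outside the application server, the following minimal sketch (URL values and credentials are placeholders) shows the driver property that turns on sysplex workload balancing; in WebSphere Application Server, the same enableSysplexWLB property is set as a data source custom property.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SysplexAwareConnection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dbuser");                 // placeholder credentials
        props.setProperty("password", "dbpassword");
        props.setProperty("enableSysplexWLB", "true");       // balance work across DB2 members

        // Connect to the group DVIPA so that transports can be pooled to all members
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://d0zg.example.com:446/DB0ZLOC", props)) {
            // logical connections are multiplexed over pooled transports at transaction boundaries
        }
    }
}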
When connection errors happen, the following behavior occurs:
If the first SQL statement in a transaction fails and reuse is OK:
– No errors are reported back to the application
– SET statements that are associated with the logical connection are replayed with the first
SQL statement on another transport
If a subsequent SQL statement fails and reuse is OK:
– A -30108 reuse error is returned to the application (the transaction is rolled back and reconnected)
– SET statements are replayed on another transport to recover the connection state
– It is up to the application to retry the transaction (see the sketch after this list)
If a subsequent SQL statement fails and reuse is not OK:
– A -30081 connection failed error is returned to the application
– The connection is returned to its initial (default) state
– The application needs to reestablish the connection state and retry the transaction
If all members in the member list are tried and none seems to be available, the initial data
source group DVIPA address is retried to make sure that really no member is available.
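The retry responsibility mentioned above can be handled with a small wrapper such as the following minimal sketch (the doTransaction() method and the retry limit are hypothetical; with the IBM Data Server Driver for JDBC and SQLJ, SQLException.getErrorCode() returns the SQL code).

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class RetryOnReroute {
    private static final int MAX_RETRIES = 2;   // hypothetical limit

    static void runWithRetry(DataSource ds) throws SQLException {
        for (int attempt = 0; ; attempt++) {
            try (Connection con = ds.getConnection()) {
                doTransaction(con);   // one unit of work, ending with a commit
                return;               // success
            } catch (SQLException e) {
                // -30108: the transaction was rolled back and the connection was
                // re-established on another transport; it is safe to simply retry.
                if (e.getErrorCode() == -30108 && attempt < MAX_RETRIES) {
                    continue;
                }
                throw e;              // -30081 or any other error: surface to the caller
            }
        }
    }

    private static void doTransaction(Connection con) throws SQLException {
        // application-specific SQL and commit go here
    }
}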
Figure 2-30 Disabling properties Reap Time, Unused Timeout, and Aged Timeout
Figure 2-31 WebSphere Application Server: caching the prepared statement object
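The ProgramA flow that Figure 2-31 depicts corresponds to Java code of roughly the following shape (a minimal sketch outside a container-managed transaction; the table and column names are placeholders).

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ProgramA {
    static void run(DataSource ds) throws SQLException {
        try (Connection c1 = ds.getConnection()) {
            c1.setAutoCommit(false);
            // The first prepareStatement() builds a Java preparedStatement object and
            // places it in the WebSphere statement cache for this pooled connection.
            try (PreparedStatement p1 = c1.prepareStatement(
                    "SELECT BALANCE FROM ACCOUNT WHERE ID = ?")) {
                p1.setInt(1, 42);
                try (ResultSet rs1 = p1.executeQuery()) {
                    while (rs1.next()) {
                        // process the row
                    }
                }
            }   // close() returns the preparedStatement object to the WebSphere cache
            c1.commit();   // without KEEPDYNAMIC, DB2 discards its thread-local prepare artifacts
        }       // close() returns the connection to the pool
    }
}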
When the application runs the prepareStatement() JDBC API, WebSphere Application Server
looks for the Java preparedStatement object in the statement cache that exists in
WebSphere Application Server. This cache is unique to each connection in the connection
pool. Remember that this is a Java preparedStatement object; it has nothing to do
with the prepare that happens in DB2 for z/OS. In this case, because this statement is
prepared for the first time, WebSphere Application Server cannot find it in the cache. It
creates a Java preparedStatement object and stores it in its cache.
When using JDBC type 2 connectivity to DB2 for z/OS, the driver will immediately send the
SQL statement to DB2 to be prepared. DB2 will first look in “local cache” to see whether it
can find the SQL statement. In this case, it does not exist. DB2 then looks for the
statement in global dynamic statement cache. In this case, the statement is not found in
the global dynamic statement cache. DB2 does what is called a “full prepare” during which
it checks the validity of the SQL, determines the access path to be used, and so on.
If the SQL statement is valid, DB2 then stores the statement in the global statement
cache, which is called “global cache” in Figure 2-31 on page 58. DB2 also stores information
about the prepared statement in thread storage that is created in DB2. Then, it returns to
WebSphere Application Server, which then returns the Java prepared statement object
back to the application. The application then runs the statement and then issues a commit.
When the commit is issued, the prepared statement artifacts that are stored in the DB2
thread storage that is known as “local cache” are also deleted, and the DB2 thread is ready
for reuse.
When using JDBC type 4 connectivity to DB2 for z/OS, the driver by default does not send
the SQL statement immediately to DB2 for z/OS. Instead, WebSphere Application
Server returns a Java preparedStatement object to the application. This behavior is
controlled by a JDBC property called “deferPrepares”. By default, this property is set to true
and is only valid for JDBC type 4 connectivity to DB2 on z/OS. This helps to optimize
the number of trips to DB2 on z/OS over the network. When the application issues the
preparedStatement.execute command, the JDBC driver then sends the SQL statement
to DB2 on z/OS. DB2 looks for the statement in the global dynamic statement cache. In this
case, the statement is not found in the global dynamic statement cache. DB2 does what is
called a “full prepare” during which it checks the validity of the SQL, determines the
access path to be used, and so on.
If the SQL statement is valid, DB2 then stores the statement in the global statement cache
called “global cache” in Figure 2-31 on page 58. DB2 also stores information about the
prepared statement in thread storage that is created in DB2. This is called “local cache”.
DB2 then runs the SQL statement and then returns control back to the WebSphere
Application Server and to the application. The application then issues a commit. When the
commit is issued, the prepared statement artifacts that are stored in the DB2 thread
storage (“local cache”) are also deleted, and the DB2 thread is ready for reuse.
Now the next application thread comes along, and the same code to prepare the SQL
statement is run. WebSphere Application Server looks in its statement cache. In this case,
it finds the preparedStatement object in the cache, and the object construction is avoided.
When using JDBC type 2 connectivity to DB2 on z/OS, the driver immediately sends the
SQL statement to DB2 to be prepared. DB2 first looks in “local cache” to see whether it
can find the SQL statement. In this case, it does not exist. DB2 then looks for the
statement in global dynamic statement cache. In this case, the statement is found in the
global dynamic statement cache. DB2 does what is called a “short prepare” during which it
actually copies the artifacts from the global statement cache to the thread (“local cache”).
Then, it returns to WebSphere Application Server, which then returns the Java prepared
statement object back to the application. The application then runs the statement and then
issues a commit. When the commit is issued, the prepared statement artifacts that are
stored in the DB2 thread storage that is known as “local cache” are also deleted and the
DB2 thread is ready for reuse.
When using JDBC type 4 connectivity to DB2 for z/OS, the driver by default does not send
the SQL statement immediately to DB2 for z/OS. Instead, the WebSphere Application
Server returns a Java preparedStatement object to the application. This behavior is
controlled by a JDBC property called “deferPrepares”. By default this property is set to
true and is only valid for the JDBC type 4 connectivity to DB2 for z/OS. This helps to
optimize the number of trips to DB2 on z/OS over the network. When the application
issues the preparedStatement.execute command, the JDBC driver then sends the SQL
statement to DB2 for z/OS. DB2 first looks in “local cache” to see whether it can find the
SQL statement.
This behavior of copying artifacts from global dynamic statement cache to local cache is also
followed when using static SQL applications. Instead of copying from global statement cache,
the artifacts are copied from the static application packages in the EDM pool to
thread storage.
This shows the benefits of having a prepared statement cache in WebSphere Application
Server and also how it works with DB2 global dynamic statement cache.
The local cache is associated with an individual thread in DB2. When an application runs a
SQL statement, the contents of the global dynamic statement cache are copied into the local
cache in DB2 thread storage. This cache is “destroyed” when the application issues a commit.
When the application runs the SQL statement again, the process of copying is repeated, and
after the application issues a commit, the “destroy” is also repeated. This copying of contents
from the global dynamic statement cache into local cache becomes expensive (CPU time) if
this happens over and over again.
To keep the local cache across commit boundaries, the following steps must be completed:
DB2 provides a bind option called KEEPDYNAMIC. The JDBC/SQLJ packages that are
provided by IBM must be bound with the KEEPDYNAMIC(YES) bind option. Typically, you
should bind these packages to a different collection than the ones that are used for
applications that do not use the KEEPDYNAMIC option. For example, let the collection
name be “MYCOLL1”.
If you use SQLJ / IBM pureQuery® applications, then those application packages also
must be bound with KEEPDYNAMIC option to a different collection name. For example, let
the collection name be “MYCOLL2”.
If the application is using JDBC type 4 connectivity, the above collection names must be
specified as part of the “currentPackagePath” data source property. For example, the
value specified in the “currentPackagePath” property looks like “MYCOLL1.*,MYCOLL2.*”.
If the application is using JDBC type 2 connectivity, then the above collection names must
be specified as part of the “pkList” data source property. For example, the value specified
in the “pkList” property looks like “MYCOLL1.*,MYCOLL2.*”.
Specify the “keepDynamic” property as a data source custom property and set the value
to 1 (see the sketch after this list).
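The following minimal sketch shows the driver properties that the data source settings above map to for JDBC type 4 connectivity (the collection names MYCOLL1 and MYCOLL2 follow the example in the text; the URL values and credentials are placeholders). In WebSphere Application Server, these are set as data source custom properties rather than in code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class KeepDynamicProperties {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "dbuser");                              // placeholder credentials
        props.setProperty("password", "dbpassword");
        // Collections whose packages were bound with KEEPDYNAMIC(YES)
        props.setProperty("currentPackagePath", "MYCOLL1.*,MYCOLL2.*");
        // 1 = keep prepared statements (the DB2 "local cache") across commits
        props.setProperty("keepDynamic", "1");

        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://d0zg.example.com:446/DB0ZLOC", props)) {
            // statements prepared on this connection survive commit boundaries in DB2
        }
    }
}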
After the above steps are completed, the WebSphere prepared statement cache, the DB2 “local
cache” (which is simply thread storage in DB2), and the DB2 global dynamic statement
cache work together as shown in Figure 2-32.
Figure 2-32 Caching the prepared statement object with the keepDynamic property enabled: object construction and prepares are avoided on reuse
As we can see, in the first case the application issues the prepareStatement() JDBC API.
WebSphere Application Server looks for the Java preparedStatement object in the cache. If it
does not find it, a new preparedStatement object is constructed and put in the cache. Then,
the SQL statement is sent to DB2 for z/OS. DB2 looks for the SQL statement in the thread
storage. Because it is not found in thread storage, DB2 then looks for the SQL statement in
the global statement cache. Because it is not found there either, DB2 then does a “full
prepare”, which entails validating the SQL statement and coming up with an optimal access
path. DB2 then puts this SQL statement in the global statement cache. It also stores the
artifacts in the DB2 thread storage. Then, control is returned to the application. The
application then runs the statement and then issues a commit.
Notice that because keepDynamic is enabled, the information in thread storage is not
destroyed, and the DB2 thread is still available for reuse. The same thread is used for this
work in DB2. Now the application again issues a preparedStatement with the same SQL
statement. WebSphere Application Server finds the java preparedStatement object in cache.
It then sends the SQL statement to DB2 for prepare. Because “keepDynamic” is enabled, the
SQL statement is found in thread storage and DB2 then returns control back to the
application, which then runs SQL statement.
It is not a problem if the Java application issues the prepare again: the statement is
“absorbed” by the driver and not routed to DB2. This is different from other languages, such as
COBOL, where you cannot issue the prepare again after the commit. If you do, the locally
cached copy is destroyed and you do not get the benefit of keepDynamic.
The keepDynamic option is best for applications that have a limited number of SQL
statements that are used heavily.
2.6.7 Trusted context support in WebSphere Application Server
Figure 2-33 User authentication in WebSphere Application Server: the application user is authenticated by WebSphere Application Server, but DB2 sees only the JAAS alias user ID
The DB2 Trusted Context support in WebSphere Application Server provides an elegant
solution for this problem. A trusted context is an object the database administrator defines
that contains a system authorization ID and a set of trust attributes. The relationship between
a database connection and a trusted context is established when the connection to the
database server is first created, and that relationship remains for the life of the database
connection. This feature allows WebSphere Application Server to use the trusted DB2
connection under a different user without reauthenticating the new user at the database
server (assuming that the trusted context is created without the authentication option being
required).
There are two ways to set this up:
There is nothing to be done from a WebSphere Application Server perspective. In DB2, a Trusted
Context is created with the system authid. This user ID is granted only the privilege to connect
to DB2. This user ID is used in the JAAS alias that the data source uses to connect to
DB2. A ROLE is created in DB2 that has the privileges that the application needs
to access data in DB2. The user ID is then granted the role.
Benefits of this approach
– The user ID and password in the JAAS alias can be used only to access DB2 data from
the WebSphere Application Server
– Nothing needs to be configured in WebSphere Application Server
Cons of this approach
– The end user ID is still not available in DB2
Configure WebSphere Application Server to pass the user ID to DB2 for z/OS. In DB2, a
Trusted Context is created with the system authid. This user ID is granted only the privilege to
connect to DB2. This user ID is used in the JAAS alias that the data source uses to
connect to DB2. In the trusted context definition in DB2, the user IDs or groups should be added,
and authentication requirements can be added to the definition. A ROLE is
created in DB2 that has the privileges that the application needs to access data
in DB2. The added user IDs or groups are then granted the role (see the sketch after this list).
The application should use resource references. In the resource definition panel, as part
of the Modify Resource Authentication Method option, Use Trusted Connection can be selected,
together with a JAAS alias that specifies the system user ID from the trusted context definition
in DB2.
Benefits of this approach
– The user ID and password in the JAAS alias can be used only to access DB2 data from
the WebSphere Application Server
– The end user ID is available in DB2
Cons of this approach
– The end user ID must be defined in the SAF environment
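The following minimal sketch outlines the kind of DB2 definitions behind the second approach (the context, role, table, and IP address values are illustrative; MZADMIN and WASTEST are the IDs used in the validation later in this chapter). The statements are shown executed through JDBC only to keep the examples in one language; a DBA would normally run them from any SQL interface with the appropriate authority.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class TrustedContextSetup {
    static void define(Connection adminCon) throws SQLException {
        try (Statement stmt = adminCon.createStatement()) {
            // Role that carries the privileges the application needs (illustrative table name)
            stmt.execute("CREATE ROLE TRADER_ROLE");
            stmt.execute("GRANT SELECT, INSERT, UPDATE, DELETE "
                       + "ON TABLE MYSCHEMA.MYTABLE TO ROLE TRADER_ROLE");
            // Connections by MZADMIN from the listed address are trusted,
            // and WASTEST can be used on that trusted connection
            stmt.execute("CREATE TRUSTED CONTEXT CTXWAS "
                       + "BASED UPON CONNECTION USING SYSTEM AUTHID MZADMIN "
                       + "ATTRIBUTES (ADDRESS '9.12.4.142') "
                       + "DEFAULT ROLE TRADER_ROLE "
                       + "ENABLE "
                       + "WITH USE FOR WASTEST");
        }
    }
}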
The default isolation level that is used in WebSphere Application Server 8.5 when accessing
DB2 for z/OS is Read Stability (RS). To customize the default isolation level, you can use the
webSphereDefaultIsolationLevel custom property for the data source.
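As a reference (a sketch; verify the mapping for your driver level), the numeric values that the webSphereDefaultIsolationLevel custom property accepts are the java.sql.Connection isolation constants, which map to DB2 for z/OS isolation levels as follows.

import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // JDBC constant -> DB2 for z/OS isolation level
        System.out.println("UR: " + Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println("CS: " + Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println("RS: " + Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println("RR: " + Connection.TRANSACTION_SERIALIZABLE);     // 8
        // The WebSphere default of RS corresponds to TRANSACTION_REPEATABLE_READ (4);
        // setting webSphereDefaultIsolationLevel to 2, for example, defaults to cursor stability.
    }
}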
2.7.1 WebSphere Application Server - DB2 for z/OS recommended high
availability configuration when using JDBC type 4 connectivity
A WebSphere Application Server Network Deployment configuration (on all platforms) is
recommended to be set up for high availability and scalability. We can have as many nodes as
required in a single cell to meet availability and scalability requirements.
Table 2-1 lists the implementation steps and provides a link to where the steps are described
in this book.
Table 2-1 List of sections describing the JDBC type 4 implementation steps
JDBC type 4 - Implementation step number and definition, with a cross-reference to where it is described:
WebSphere
2. Build a DB2 for z/OS data sharing environment with at least two members that are spread across two LPARs on z/OS.
3. Configure either a DB2 Universal JDBC Provider or a DB2 Universal JDBC Provider (XA), depending on transaction requirements. Use the DB2 Universal JDBC Provider (XA) if the application requires global transaction support using a JDBC type 4 connection only. See 5.2.1, “Defining a DB2 JDBC XA provider” on page 210.
4. During the definition of the data source, make sure to provide the group DVIPA address for the server name property. This is important for high availability and scalability. See 5.2.3, “Defining a JDBC type 4 XA data source” on page 218.
5. Configure WebSphere Application Server data source connection pool properties depending on application requirements. See 2.6.4, “WebSphere Application Server connection pooling” on page 51, and 5.8, “Configuring connection pool sizes on data sources in WebSphere Application Server” on page 273.
7. Use the high performance DBAT features available in DB2 10. Bind the JDBC packages to a different collection name with the RELEASE(DEALLOCATE) option. Configure the data source to use this collection. If the application uses SQLJ or pureQuery and uses static SQL, remember to bind those packages as well with RELEASE(DEALLOCATE) and provide those collection names as well in the data source custom property. See 5.11.2, “currentPackagePath” on page 292.
WebSphere
8. Set client accounting information on the data source custom properties as shown, at a minimum. This helps identify the connection on which the SQL statements come into DB2. See 5.5.1, “Setting client information on a data source” on page 247.
9. Define trusted context and roles in DB2. Define only the connection privilege to the user ID that is specified on the data source. Define the required privileges to the role in DB2. See 2.6.7, “Trusted context support in WebSphere Application Server” on page 62, and 5.9, “Enabling trusted context for applications that are deployed in WebSphere Application Server” on page 276.
10. Set up a profile table in DB2 (if using DB2 10) to monitor and control connections coming into DB2 from WebSphere Application Server. Start at 4.3.17, “Using DB2 profiles” on page 180.
12. Configure the prepared statement cache in WebSphere Application Server. See 5.6, “Configuring the prepared statement cache in WebSphere Application Server” on page 268.
By doing all of the steps above, you get the following capabilities:
High availability
Scalability
Workload balancing
Ability to track which SQL is coming from which application
Ability to better classify individual application workload to WLM on z/OS
Security, because the data source user ID cannot be misused
We set up the environment as recommended above and validated the following items:
Trusted Context
Sysplex Workload Balancing
Client connection Strings
High Availability
Figure 2-34 on page 67 represents the HA configuration that we built for JDBC type 4
connectivity following the recommendation above.
Figure 2-34 High availability configuration for JDBC type 4 connectivity across LPARs SC64 and SC63
It is difficult to show that workload is truly balanced. We captured the output from the DISPLAY
DDF DETAIL command in both the DB2 members. We saw the workload was distributed
between both the data sharing members. See Example 2-1.
We had set the max connections in the data source connection pool property to be 50. We
started a workload that had 50 clients. The JDBC type 4 driver opens 50 transports to
each data sharing member. The DISPLAY LOCATION report in Example 2-2 shows you how
many transports have been created. We can see that for the member D0Z2 we created 50
connections from a client at location 9.12.4.142. All 50 connections are workload balanced,
as shown by the WLB indicator. It also shows that all 50 connections were coming
from an XA driver.
We looked at the DISPLAY DDF output of Example 2-3 to validate the thread information.
– The difference between ADBAT and DSCDBAT tells you how many threads are currently
active in the DB2 subsystem. For D0Z1, we see it is 29 - 23, which is 6.
– We see that the weights (WT) returned by WLM are almost the same for both
members.
DSNL100I LOCATION SERVER LIST:
DSNL101I WT IPADDR IPADDR
DSNL102I 33 ::9.12.4.138
DSNL102I 31 ::9.12.4.142
DSNL105I CURRENT DDF OPTIONS ARE:
DSNL106I PKGREL = COMMIT
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
We had set the max connections in the data source connection pool property to be 50. We
started a workload that had 50 clients. The JDBC type 4 driver opens 50 transports to
each data sharing member. The DISPLAY LOCATION report in Example 2-4 shows you
how many transports have been created. We can see that for the member D0Z1 we
created 50 connections from a client at location 9.12.4.142. Of these 50 connections, all 50 are
workload balanced, as shown by the WLB indicator. It also shows that all 50 connections were
coming from an XA driver.
We saw the text in Example 2-5 in the WebSphere Application Server log. The JDBC driver
automatic client reroute feature kicks in and it follows the behavior described earlier. We get
an SQL code of -30108. This tells us that the current transaction failed and the application
has the option to retry the logic (if the application was written to do so). Our DayTrader
application was not written to handle the -30108 error code and hence some transactions
failed, but the workload continued successfully to the other member.
For details about setting up trusted context, see Chapter 4, “DB2 infrastructure setup” on
page 99.
It is important to note that the connection does not fail back after the original failed member
comes back up again. Also note that there is no workload balancing between two DB2
members on the same LPAR when we use JDBC type 2 connectivity; the IBM Data Server
Driver for JDBC and SQLJ picks one member randomly and does not have any special
algorithm available.
Figure 2-35 represents a common highly available clustered WebSphere Application
Server configuration with JDBC type 2 connectivity to DB2 for z/OS.
Figure 2-35 Highly available WebSphere Application Server with JDBC type 2
When one of the DB2 members fails, as shown in Figure 2-36, there is a potential outage
because the front-end router does not know that DB2 is down.
Figure 2-36 A DB2 member failure with JDBC type 2 connectivity
The solution is to configure what is called an alternate JDBC type 4 data source. WebSphere
Application Server is smart enough to know that DB2 is down and starts to use the type 4
connection to the second DB2 member of the same data sharing group in the other LPAR.
Figure 2-37 An alternate JDBC type 4 connection factory configured alongside the JDBC type 2 connection factory
Then Figure 2-38 shows how, when the DB2 member goes down, existing connections and
transactions fail but new connections use the alternate JDBC type 4 connection to available
DB2 data sharing members and work is not affected.
Figure 2-38 Alternate JDBC type 4 connection used to surviving DB2 member
Now when the failed DB2 member is brought up, WebSphere Application Server is smart
enough to know that DB2 is back up, and it starts using the JDBC type 2 connection again. In this
case, it does not fail any existing transactions on the JDBC type 4 connection;
instead, it quiesces the current work and starts to use the JDBC type 2 connection for new
work. A custom property, resourceAvailabilityTestRetryInterval, can be configured
to tell WebSphere Application Server how often to check whether the failed DB2 member is up. See
Figure 2-39.
Figure 2-39 When the failed DB2 member restarts, work on the alternate JDBC type 4 connection is quiesced and new work switches back to the JDBC type 2 connection
Detailed step-by-step guidance can be found in the document at this URL:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102033
If the failureNotificationActionCode is set to 3, then WebSphere Application Server stops all
applications that access that specific DB2 for z/OS subsystem. This means that all other
applications, which do not access that DB2 for z/OS subsystem, remain available.
A highly intelligent front end router such as the On Demand Router (a WebSphere Application
Server feature available in V8.5) is needed to recognize that the application is stopped and
stop routing work to that server. Normal HTTP servers are not smart enough to know that
applications are stopped in the servers; they only know whether a server is stopped. Figure 2-42
shows what happens when the failureNotificationActionCode is set to 3.
Figure 2-42 With failureNotificationActionCode set to 3, applications that access the failed DB2 subsystem are stopped while other applications remain available
These factors must be taken into consideration before selecting this option.
Information about these properties can be found in the WebSphere Application Server
information center at:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/index.jsp?topic=%2Fcom.ibm.websphere.nd.multiplatform.doc%2Fae%2Frdat_conpoolcustprops.html
All three options are viable and depend on each customer's environment and requirements.
Hence, it is difficult to pick one as the recommendation.
Table 2-2 List of sections describing the JDBC type 2 implementation steps
JDBC type 2 - Implementation step number and definition, with a cross-reference to where it is described:
WebSphere
1. A WebSphere Application Server Network Deployment configuration (on all platforms) is recommended to be set up for high availability and scalability. We can have as many nodes as required in a single cell to meet availability and scalability requirements. Build at a minimum a WebSphere Application Server Network Deployment configuration spread across two nodes/LPARs following the best practice information in 5.1, “Configuring WebSphere Application Server Network Deployment on z/OS” on page 208.
2. Build a DB2 for z/OS data sharing environment with at least two members spread across two LPARs on z/OS.
5. Define the ssid custom property. Make sure to give the group attach name as the value. See 5.3.4, “Configuring a subsystem ID on the data source” on page 238.
6. Configure WebSphere Application Server data source connection pool properties depending on application requirements. See 5.8, “Configuring connection pool sizes on data sources in WebSphere Application Server” on page 273.
7. Configure the WebSphere Application Server data source prepared statement cache size depending on application requirements. See 5.6, “Configuring the prepared statement cache in WebSphere Application Server” on page 268.
8. Set client accounting information on the data source custom properties as shown, at a minimum. This will help identify the connection on which the SQL statements come into DB2. See 5.1, “Configuring WebSphere Application Server Network Deployment on z/OS” on page 208.
9. Define trusted context and roles in DB2. Define only the connection privilege to the user ID that is specified on the data source. Define the required privileges to the role in DB2. See 5.9, “Enabling trusted context for applications that are deployed in WebSphere Application Server” on page 276.
By doing all the steps above, you get the following capabilities:
High availability
Scalability
Ability to track which SQL is coming from which application
Ability to better classify individual application workload to WLM on z/OS
Security, because the data source user ID cannot be misused
The following validation sections used the HA configuration we built for JDBC type 2
connectivity following the recommendation above. Because we had only two data sharing
members, we brought up both on the same LPAR to validate fail over.
We issued the -DISPLAY THREAD(*) SCOPE(GROUP) command when we started the workload.
We noticed from the output listed in Example 2-7 that all JDBC type 2 connections went to a
single member (D0Z2) as expected. D0Z1 did not have any connections from the
DayTrader application.
We then brought down D0Z2 and validated the fail over by issuing the -DISPLAY
THREAD(*) SCOPE(GROUP) command again. In Example 2-8, we see that all the connections
now go to member D0Z1.
Example 2-8 Validating the fail over with -DISPLAY THREAD(*) SCOPE(GROUP)
DSNV401I -D0Z1 DISPLAY THREAD REPORT FOLLOWS -
DSNV402I -D0Z1 ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
RRSAF T * 4888 MZSR014S RAJESH ?RRSAF 00B2 45
V437-WORKSTATION=TraderClientWorkst, USERID=TraderClientUser,
APPLICATION NAME=TraderClientApplication
RRSAF T * 6539 MZSR014S RAJESH ?RRSAF 00B2 46
V437-WORKSTATION=TraderClientWorkst, USERID=TraderClientUser,
APPLICATION NAME=TraderClientApplication
RRSAF T * 10223 MZSR014S RAJESH ?RRSAF 00B2 47
V437-WORKSTATION=TraderClientWorkst, USERID=TraderClientUser,
APPLICATION NAME=TraderClientApplication
RRSAF T * 6274 MZSR014S RAJESH ?RRSAF 00B2 48
V437-WORKSTATION=TraderClientWorkst, USERID=TraderClientUser,
APPLICATION NAME=TraderClientApplication
RRSAF T * 9043 MZSR014S RAJESH ?RRSAF 00B2 49
V437-WORKSTATION=TraderClientWorkst, USERID=TraderClientUser,
APPLICATION NAME=TraderClientApplication
RRSAF T * 6176 MZSR014S RAJESH ?RRSAF 00B2 50
V437-WORKSTATION=TraderClientWorkst, USERID=TraderClientUser,
APPLICATION NAME=TraderClientApplication
RRSAF T * 8188 MZSR014S RAJESH ?RRSAF 00B2 51
V437-WORKSTATION=TraderClientWorkst, USERID=TraderClientUser,
***
Validating trusted context
We used the DayTrader-EE6 application to validate trusted context. We configured it to use a
JDBC type 2 data source following the best practice configuration. The JAAS alias we used
had a user ID MZADMIN. This user ID had only connect privileges to DB2. It had a DB2 Role
assigned to it that gave it the required privileges to access the application tables. We set the
client application information to TraderClientApplication1.
In DB2 we created a trusted context. Then we ran the application. It prompted us for a user
ID. We used “wastest”. We then captured the output from a -DIS THREAD(*) command.
Example 2-9 shows the output from the command.
The data sharing group uses coupling facilities as hardware assist for efficient concurrency
and coherency control. One or more coupling facilities provide high-speed caching and lock
processing for the data sharing group. The Sysplex, together with the Workload Manager
(WLM), dynamic virtual IP address (DVIPA), and the Sysplex Distributor, allow a client to
access a DB2 for z/OS database over TCP/IP with network resilience, and distribute the work
among the DB2 subsystems within the data sharing group.
Figure 3-1 DRDA clients (.NET provider, ODBC driver, and type 4 driver) access the DB2 data sharing group over the network, directly or through a DB2 Connect server
This section provides recommendations for configuring the TCP/IP network and the DB2
subsystems.
3.1.1 Configuring the TCP/IP network
DB2 requires that all members of a data sharing group use the same port number to receive
incoming SQL requests. The well-known DB2 registered port 446 is the recommended DRDA
port to use for SQL processing. Additionally, DB2 requires that each member of a data sharing
group has a resynchronization port number that is unique within the Parallel Sysplex. The
resync port is used by a requester in two situations. One is when the SQL connection fails
leaving in-doubt threads, and the requester and server need to resynchronize after the error.
The other one is for other connections used to interrupt SQL processing on a different
application connection. Obviously, resynchronization needs to occur with the specific DB2
member with which the requester was in session, so this member must be reachable through
a specific IP address (the member-specific DVIPA in this case).
In Figure 3-2 on page 84, there are three DB2 members DB2A, DB2B, and DB2C in the data
sharing group, with the group location named DB2LOC. The resynchronization ports are
5001, 5002, and 5003 for the three DB2 members DB2A, DB2B, and DB2C, respectively.
Example 3-1 on page 84 first shows how to register the well-known DRDA port 446 and a unique
resynchronization port with TCP/IP on each member’s z/OS system by using the TCP/IP
PORT configuration profile statement. On each z/OS system where a DB2 member resides,
replicate the TCP/IP PORT configuration profile statement.
Secondly, it shows the VIPADYNAMIC statement to define the group DVIPA for the DB2 data
sharing group. The group DVIPA must be defined with the VIPADEFINE and
VIPADISTRIBUTE statements on the TCP/IP stacks that are associated with the z/OS
systems on which the Sysplex Distributor executes.
The group DVIPA must be defined with the VIPABACKUP statement on the TCP/IP stacks for
DVIPA takeover. Note that the VIPABACKUP statements are coded with the MOVEABLE
IMMEDIATE keywords, and that the VIPADISTRIBUTE statements are also specified on the
backup TCP/IP stacks. This allows for the group DVIPA to be activated on one of the backup
stacks if it is not active anywhere else in the Sysplex. For example, if z/OS-1 has not been
started when z/OS-2 or z/OS-3 start, then group DVIPA is activated on one of the backup
stacks. To allow for failover, the member-specific DVIPAs are defined with the VIPARANGE
statement on all TCP/IP stacks.
Figure 3-2 shows three DB2 members and configured resync addresses with unique port
numbers for location DB2LOC.
Figure 3-2 DB2 members and configured resync addresses with unique port numbers
Example 3-1 Port and VIPA definitions for three DB2 members
z/OS-1 TCP/IP configuration setting
PORT
446 TCP DB2ADIST SHAREPORT
446 TCP DB2BDIST SHAREPORT
446 TCP DB2CDIST SHAREPORT
5001 TCP DB2ADIST
5002 TCP DB2BDIST
5003 TCP DB2CDIST
VIPADYNAMIC
VIPARANGE 255.255.255.255 V1
VIPARANGE 255.255.255.255 V2
VIPARANGE 255.255.255.255 V3
VIPADEFINE 255.255.255.255 Vx
VIPADISTRIBUTE DEFINE Vx
PORT 446
DESTIP ALL
ENDVIPADYNAMIC
DB2 needs to enable threads to be pooled by setting the CMTSTAT subsystem parameter to INACTIVE. An inactive
connection uses less storage and frees up DB2 resources associated with the transaction
when a thread commits a transaction. When connections are disassociated from the thread,
the thread is allowed to be pooled and reused for other connections. This provides better
resource utilization because there are typically a small number of threads that can be used to
service a large number of connections. You can allow threads to be pooled to
improve performance.
MAXDBAT constrains the total number of threads available to process remote SQL requests.
If a request for a new connection to DB2 is received and MAXDBAT has been reached, the
request is queued, waiting for a thread to become available to process the request.
MAXDBAT generally should be set conservatively. It is usually constrained by the available
DBM1 storage.
Specify the maximum number of concurrent remote connections by setting the CONDBAT
installation parameter. This value must be greater than or equal to MAXDBAT. When a
request to allocate a new connection to DB2 is received, and CONDBAT has been reached,
the connection request is rejected. The value should be the largest number of pooled
connections that would connect to the DB2 member at any point in time. Active threads that
have not committed their work in a timely fashion are canceled after IDTHTOIN expires; locks
and cursors are released. Inactive connections and in-doubt threads are not subject to
time-out. Threads are checked every two minutes to see if they have exceeded the time-out
value. If the timeout value is less than two minutes, the thread might not be canceled if it has
been inactive for more than the time-out value but less than two minutes.
The quicker DB2 can detect the communication error and return the thread to the pool, the
lower the chance to reach MAXDBAT. In cases where the z/OS TCP/IP KeepAlive value in the
TCP/IP configuration is not appropriate for the DB2 subsystem, you can use the TCPKPALV
as an override.
In addition to defining the IP addresses to TCP/IP, the host names that correspond to the
member and group DVIPAs must be defined before DDF is started. DDF recovery
processing might require the use of these names during in-doubt resolution after a subsystem
failure. You define the host names by configuring the hlq.HOSTS.LOCAL data set, the
/etc/hosts file in the hierarchical file system (HFS), or the domain name server (DNS).
For a more general description of DB2 set up, see 4.3.1, “DB2 connectivity installation
parameters” on page 138.
For adding more granular functions to the system level parameters, see 4.3.17, “Using DB2
profiles” on page 180.
With DB2 for LUW Version 9.5 Fix Pack 3 and later, you can implement the DRDA requester
functions for your distributed applications with varying degrees of granularity. Instead of the
full function and large footprint of DB2 Connect, several types of IBM data server clients and
drivers are available. Each provides a particular type of support.
The IBM data server client and driver types are as follows:
IBM Data Server Driver Package
IBM Data Server Driver for JDBC and SQLJ
IBM Data Server Driver for ODBC and CLI
IBM Data Server Runtime Client
IBM Data Server Client
In this book we discuss the IBM Data Server Driver for JDBC and SQLJ for Java applications.
We describe the main connectivity options for using the IBM Data Server Driver for JDBC and
SQLJ from WebSphere Application Server to connect to a DB2 for z/OS system.
You can download the IBM data server drivers and clients from the IBM download site
(http://www.ibm.com/support/docview.wss?rs=4020&uid=swg21385217). Table 3-2 can help
you identify the package that you need.
Table 3-2 IBM Data Server Client Packages: Latest downloads (DB2 10)
IBM Data Server Driver Package (DS Driver)
This package contains drivers and libraries for various programming language environments.
It provides support for Java (JDBC and SQLJ), C/C++ (ODBC and CLI), and .NET drivers, and
database drivers for open source languages such as PHP and Ruby. It also includes an
interactive client tool called CLPPlus that is capable of executing SQL statements and scripts,
and can generate custom reports.
IBM Data Server Driver for JDBC and SQLJ (JCC Driver)
Provides support for JDBC and SQLJ for client applications developed in Java. Supports the
JDBC 3 and JDBC 4 standards. Also called the JCC driver.
IBM Data Server Driver for ODBC and CLI (CLI Driver)
This is the smallest of all the client packages and provides the Open Database Connectivity
(ODBC) and Call Level Interface (CLI) libraries for C/C++ client applications.
IBM Data Server Runtime Client
This package is a superset of the Data Server Driver Package. It includes many DB2 specific
utilities and libraries, including the DB2 Command Line Processor (CLP) tool.
IBM Data Server Client
This is the all-in-one client package and includes all the client tools and libraries available. It
includes DB2 Control Center, a graphical client tool that can be used to manage DB2 servers.
It also includes add-ins for Visual Studio.
IBM Database Add-Ins for Visual Studio
This package contains the add-ins for Visual Studio for .NET tooling support.
3.2.1 Connectivity options for IBM Data Server Driver for JDBC and SQLJ
IBM Data Server Driver for JDBC and SQLJ supports two types of connectivity: type 4
connectivity and type 2 connectivity.
For the DriverManager interface, you specify the type of connectivity through the URL in the
DriverManager.getConnection method. For the DataSource interface, you specify the type of
connectivity through the driverType property.
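As a minimal sketch, the following Java fragment shows both ways of selecting the connectivity type; the host name, location name, and credentials are placeholders rather than values from our environment.
import java.sql.Connection;
import java.sql.DriverManager;
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class ConnectivityTypeSketch {
    public static void main(String[] args) throws Exception {
        // DriverManager interface: the URL form selects the connectivity type.
        // jdbc:db2://host:port/location selects type 4; jdbc:db2:location selects type 2.
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        Connection c1 = DriverManager.getConnection(
            "jdbc:db2://myhost.example.com:446/DB2LOC", "USERID", "PASSWORD");

        // DataSource interface: the driverType property selects the connectivity type.
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setDriverType(4);                    // 2 would request type 2 connectivity
        ds.setServerName("myhost.example.com"); // placeholder host name
        ds.setPortNumber(446);                  // DRDA port, as in Example 3-1
        ds.setDatabaseName("DB2LOC");           // DB2 location name
        Connection c2 = ds.getConnection("USERID", "PASSWORD");

        c1.close();
        c2.close();
    }
}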
Connecting to DB2 using IBM Data Server Driver for JDBC and SQLJ
type 4 connectivity
This configuration option is recommended for Java applications that run on a distributed
platform and access DB2 data remotely.
The type 4 driver is written entirely in Java, which provides portability and platform
independence, and it offers better performance for remote Java applications. The type 4
driver accesses the DB2 system through TCP/IP and provides sysplex workload
balancing support.
IBM ships two streams of the type 4 driver with the IBM Data Server Driver for JDBC and
SQLJ product:
1. Version 3.5x is JDBC 3.0-compliant. It is packaged as db2jcc.jar and sqlj.zip and provides
JDBC 3.0 and earlier support.
2. Version 4.x is JDBC 3.0-compliant and supports some JDBC 4.0 functions. It is packaged
as db2jcc4.jar and sqlj4.zip.
The type 4 driver provides support for distributed transaction management. This support
implements the Java 2 Platform, Enterprise Edition (J2EE), Java Transaction Service (JTS),
and Java Transaction API (JTA) specifications, which conform to the X/Open standard for
distributed transactions (Distributed Transaction Processing: The XA Specification).
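As an illustration of this support, the following hedged sketch obtains an XAResource from the driver's XA-capable data source. In a WebSphere Application Server environment the transaction manager enlists and drives the XAResource on the application's behalf, so the direct calls are shown only to make the API visible; the host, port, location name, and credentials are placeholders.
import javax.sql.XAConnection;
import javax.transaction.xa.XAResource;
import com.ibm.db2.jcc.DB2XADataSource;

public class XaSketch {
    public static void main(String[] args) throws Exception {
        DB2XADataSource xaDs = new DB2XADataSource();
        xaDs.setDriverType(4);                     // type 4 connectivity
        xaDs.setServerName("myhost.example.com");  // placeholder host name
        xaDs.setPortNumber(446);
        xaDs.setDatabaseName("DB2LOC");            // placeholder location name

        // The application server normally enlists this resource in the global
        // JTA transaction; we only obtain it here to show the objects involved.
        XAConnection xaCon = xaDs.getXAConnection("USERID", "PASSWORD");
        XAResource xaRes = xaCon.getXAResource();
        System.out.println("XAResource obtained: " + xaRes);
        xaCon.close();
    }
}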
Figure 3-3 Various type 4 connectivity with IBM Data Server Driver for JDBC and SQLJ
Connecting to DB2 using IBM Data Server Driver for JDBC and SQLJ
type 2 connectivity
This configuration option is suitable especially for Java applications that run on the same
z/OS system or System z logical partition (LPAR) and access DB2 data locally.
Type 2 driver is needed for running Java stored procedures on DB2 for z/OS.
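For illustration only, the following sketch shows the body of a hypothetical Java stored procedure (the table, column, and procedure names are made up). Inside the procedure, the driver supplies the caller's connection through the default connection URL, and that request is serviced locally by type 2 connectivity.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class QuoteProcedures {
    // PARAMETER STYLE JAVA: OUT parameters are passed as single-element arrays.
    public static void getQuoteCount(String symbol, int[] count) throws Exception {
        // The default connection is the caller's own connection to DB2 for z/OS.
        Connection con = DriverManager.getConnection("jdbc:default:connection");
        PreparedStatement ps =
            con.prepareStatement("SELECT COUNT(*) FROM QUOTE WHERE SYMBOL = ?");
        ps.setString(1, symbol);
        ResultSet rs = ps.executeQuery();
        if (rs.next()) {
            count[0] = rs.getInt(1);
        }
        rs.close();
        ps.close();
        // Do not close the default connection; DB2 manages its lifetime.
    }
}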
The DB2 JDBC type 2 Driver for LUW (DB2 JDBC type 2 Driver) is deprecated. Move your
Java applications to use the IBM Data Server Driver for JDBC and SQLJ.
Figure 3-4 shows types of type 2 connectivity with IBM Data Server Driver for JDBC and
SQLJ.
Figure 3-4 Type 2 connectivity with IBM Data Server Driver for JDBC and SQLJ
These improvements were not available to local Java and ODBC applications, which
therefore did not always perform faster than the same application called remotely. The
improvements for remote Java applications are described in DB2 9 for z/OS Performance
Topics, SG24-7473, and DB2 Version 9.1 for z/OS Application Programming and SQL Guide,
SC18-9841. Refer to those documents for details about LOB progressive streaming and
implicit CLOSE.
With DB2 10, many of these improvements are implemented for local Java applications using
ODBC or JDBC. You can expect significant performance improvement for applications with
the following queries:
Queries that return more than 1 row
Queries that return LOBs
The number of rows returned per call depends on the buffer size (32767 to 262143 bytes with
DB2 10), which is controlled by the queryDataSize property. queryDataSize specifies a hint
that is used to control the amount of query data, in bytes, that is returned from the data
source on each fetch operation. This value can be used to optimize the application by
controlling the number of trips to the data source that are required to retrieve data.
Regression is possible for simple OLTP transactions with single-row result sets. In this case,
limited block fetch (LBF) can be disabled through the configuration keyword
db2.jcc.override.enableT2zosLBF=2.
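The following sketch shows where these two settings might be applied in a simple stand-alone Java program. The db2.jcc.override keyword is normally placed in a DB2JccConfiguration.properties file or supplied as a JVM system property, and the location name and credentials shown are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class FetchTuningSketch {
    public static void main(String[] args) throws Exception {
        // Driver-wide override, equivalent to an entry in DB2JccConfiguration.properties
        // or a -D JVM argument; the value 2 disables LBF as described in the text.
        System.setProperty("db2.jcc.override.enableT2zosLBF", "2");

        // queryDataSize is a connection/data source property; larger values mean
        // fewer trips to the data source for multi-row and LOB result sets.
        Properties props = new Properties();
        props.setProperty("user", "USERID");           // placeholder credentials
        props.setProperty("password", "PASSWORD");
        props.setProperty("queryDataSize", "262143");  // upper limit with DB2 10

        Class.forName("com.ibm.db2.jcc.DB2Driver");
        Connection con = DriverManager.getConnection("jdbc:db2:DB2LOC", props); // type 2 URL
        con.close();
    }
}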
A client configuration parameter can be set to ensure the list stays current. The default life
span of the cached server list is 10 seconds. This list contains the member IP address and
WLM weight for each data sharing group member. With this information, the client distributes
transactions in a balanced manner, and seamlessly reroutes work even when there is a
network failure, a member failure, a member slowdown, or when a member is quiesced
for maintenance.
3.3.2 The difference between connections and transports
The sysplex workload balancing (WLB) feature supports transaction-level workload balancing
for connections that access a DB2 data sharing group. When a client is enabled for sysplex
workload balancing, balancing decisions are made at the start of each transaction. If only the
z/OS Sysplex Distributor is used, balancing decisions are made at the start of each
connection. Connection-level balancing is typically not effective for most DB2 applications
because connections are long lived.
After sysplex WLB is enabled in the data server driver client, application connections are no
longer dedicated physical connections to DB2. A physical connection to DB2 is active only
while a connection is in use. While a connection is not in use, the driver pools the physical
connections; this pool of driver-maintained connections is called the transport pool, and its
members are called transports. A transport is associated with an application connection only
when a new transaction starts, so a single transport can serve many application connections
over time. DB2 identifies unused transports as inactive connections. When a transaction
starts, DB2 associates the inactive connection with a thread called a database access
thread (DBAT), and the connection becomes active.
Figure 3-5 shows an example for the Java type 4 driver, but both client types provide the same feature.
Figure 3-5 Logical connections in the type 4 driver mapped to pooled transports that connect to members of the DB2 data sharing group; transports are disassociated at commit or rollback
At the start of each new transaction, the client reads the cached server list to identify a
member that has unused capacity, and looks in the transport pool for an idle transport that is
tied to the member. An idle transport is a transport that has no associated connection. If an
idle transport is available, the client associates the connection with the transport. If after a
user-configurable timeout period (db2.jcc.maxTransportObjectWaitTime for a Java client or
maxTransportWaitTime for other clients), no idle transport is available in the transport pool
and no new transport can be allocated because the transport pool reached its limit, an
exception is returned to the application.
When the transaction runs, it accesses the server that is tied to the transport. When the
transaction ends, the client verifies with the server that transport reuse is allowed for the
connection. If the server identifies that the transport reuse is allowed, the server returns a list
of SET statements for special registers that apply to the execution environment for the
connection. The client caches these as SQL SET statements, which it replays to reconstruct
the execution environment when the connection is associated with a new transport.
Generally, we recommend that the application always review and set the client information
properties. Setting this information properly allows better isolation of problems and better
classification of work, which in turn allows workload management to operate more efficiently.
The DBA can quickly use the client information to isolate issues to a specific client and even
to a specific transaction.
Client information properties are managed by the application and need to be set before the
first SQL statement in each transaction runs if you want to use the client strings for WLM
classification. The more granular you are in setting and managing these properties, the more
effective they are in managing the workload. On DB2, the client information can be used by
WLM to classify work; it is displayed in DB2 messages and included in DB2 accounting data.
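As a sketch of the application side, the following fragment uses the standard JDBC 4.0 client information property names, which the IBM Data Server Driver maps to the DB2 client strings. In WebSphere Application Server the same values are more commonly supplied through data source custom properties such as clientApplicationInformation, and the host name and accounting string shown here are placeholders.
import java.sql.Connection;
import java.sql.SQLClientInfoException;

public class ClientInfoSketch {
    // Call this before the first SQL statement of each transaction so that the
    // values are available for WLM classification, messages, and accounting.
    static void tagConnection(Connection con, String endUser) throws SQLClientInfoException {
        con.setClientInfo("ApplicationName", "TraderClientApplication");
        con.setClientInfo("ClientUser", endUser);
        con.setClientInfo("ClientHostname", "webtier01");              // placeholder
        con.setClientInfo("ClientAccountingInformation", "TRADE-ONL"); // placeholder
    }
}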
We recommend that you do not use client affinities when accessing DB2 for z/OS. Client
affinities are not applicable to a DB2 data sharing environment, because all members of a
data sharing group can access the data concurrently.
Table 3-3 shows the suggested property values for a Java client enabled with the sysplex
feature. For details, see DB2 10 for z/OS Application Programming Guide and Reference for
Java, SC19-2970.
maxTransportObjects
Suggested value: the number of concurrent transactions times the number of DB2 members.
Description: maximum number of connections that the requester can make to the data
sharing group.
queryCloseImplicit
Suggested value: QUERY_CLOSE_IMPLICIT_COMMIT (3).
Description: closes the cursor at the server after all the result sets are exhausted.
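A minimal client-side sketch follows, assuming a stand-alone Java program; in WebSphere Application Server these values are set as data source custom properties or in the JCC properties file instead (see 5.10), and the host name, location name, and numeric values here are illustrative only.
import java.sql.Connection;
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class SysplexWlbSketch {
    public static void main(String[] args) throws Exception {
        // Global transport pool limits; these normally live in
        // DB2JccConfiguration.properties or are passed as JVM system properties.
        System.setProperty("db2.jcc.maxTransportObjects", "1000");     // illustrative value
        System.setProperty("db2.jcc.maxTransportObjectWaitTime", "1"); // illustrative wait time

        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setDriverType(4);                         // sysplex WLB requires type 4 connectivity
        ds.setServerName("group-dvipa.example.com"); // group DVIPA host name (placeholder)
        ds.setPortNumber(446);
        ds.setDatabaseName("DB2LOC");                // placeholder location name
        ds.setEnableSysplexWLB(true);                // transaction-level balancing

        Connection con = ds.getConnection("USERID", "PASSWORD");
        con.close();
    }
}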
The following is the explanation of various messages extracted from the -DISPLAY DDF DETAIL
command above:
DSNL083I STLEC1 Group location
DSNL085I IPADDR=::9.30.119.22 Group distributed DVIPA address
DSNL084I TCPPORT=446 RESPORT=5001 Group port and the resync port
DSNL088I STLEC1ALIASSUB12 5052 0 STATIC Defined DB2 location alias and port
DSNL089I MEMBER IPADDR=::9.30.119.23 Member IP address in the location alias
See 5.10, “Configuring the JCC properties file in WebSphere Application Server” on page 282
for the WebSphere properties.
ds.setPortNumber(12345);
ds.setDatabaseName("DB2ServerName");
ds.setUser("USERID");
ds.setPassword("PASSWORD");
try
{
    DB2Connection con = (DB2Connection) ds.getConnection();
    // Thread utilization properties
    con.setAutoCommit(false);
    String insertsql = "INSERT INTO TABLE1 VALUES (?,?)";
    PreparedStatement ps = con.prepareStatement(insertsql);
    for (int i = 1; i <= 200; i++) {
        ps.setInt(1, i + 1);
        ps.setString(2, i + "Test Sample : This is a Long Test String" + i);
        ps.addBatch();      // Add to the batch
    }
    ps.executeBatch();      // Execute batch processing
    ps.close();
    // Read the inserted rows back; the SELECT is added here so that the
    // result set used below is defined within this fragment
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("SELECT * FROM TABLE1");
    while (rs.next()) {
        // fetch to the end
    }
    rs.close();
    stmt.close();
    con.commit();
    con.close();
}
catch (SQLException e)
{
    e.printStackTrace();
}
For setting the data source properties in WebSphere Application Server, see 5.3.3, “Defining
a JDBC type 2 data source” on page 233.
In this part of the documentation we outline the Parallel Sysplex resources that are required
for DB2 data sharing, illustrate the Parallel Sysplex configuration used in our environment,
and discuss important aspects we want you to consider.
For more information and suggested practices on how to set up and tune DB2 data sharing,
refer to the following resources.
Part 4, "DB2 sysplex best practices", of System z Parallel Sysplex Best Practices,
SG24-7817
DB2 10 for z/OS Data Sharing: Planning and Administration, SC19-2973
IBM developerWorks DB2 for z/OS best practices presentations, available at
http://www.ibm.com/developerworks/data/bestpractices/db2zos/
Coupling Facility resources
For our DB2 data sharing group we created the coupling facility (CF) structures shown in the
IBM RMF™ III Coupling Facility Activity report in Figure 4-1.
Our data sharing environment was configured for function testing rather than for performance
and scalability testing, so we were happy to accept a minimum configuration in terms of CF
structure sizes. On top of that, we implemented common best practice recommendations,
such as structure duplexing and structure failure isolation, to support high availability.
Under normal circumstances, you need to plan your CF structure implementation to make
sure the structures are appropriately sized and implemented to support your
availability requirements.
If for some reason you are not able to provide the input parameters required by the DB2
CFSIZER tool, you can use the DB2 minimum structure sizing recommendations given in
Chapter 8, "Best practices", in DB2 for z/OS: Data Sharing in a Nutshell, SG24-7322.
In either case, the most important thing is to get the failed DB2 back up and running as
quickly as possible. The best way to achieve this is to use the IBM MVS™ Automatic Restart
Manager (ARM). Many automation products provide support for ARM. This means that they
manage DB2 for normal startup, shutdown, monitoring, and so on. However, if DB2 fails, they
understand that they must allow ARM to take responsibility for restarting DB2.
If the failure was just in DB2, and the system it was running on is still available, restart DB2 in
the same LPAR, with a normal start. DB2 automatically releases any retained locks as part of
the restart.
If the system that DB2 was running on is unavailable, start DB2 on another system in the
sysplex as quickly as possible. The reason is that DB2 comes up and cleans up its retained
locks far faster than it could if you had to wait for z/OS to be IPLed and brought back up.
Furthermore, if DB2 is started on another system in the Sysplex, you really only want it to
release any locks that it was holding. More than likely, there is another member of the
data-sharing group already running on that system. If you specify the LIGHT(YES) option on
the START DB2 command, DB2 starts with the sole purpose of cleaning up its locks. In this
mode, it only communicates with address spaces that it was connected to before the failure,
and that have indoubt units of work outstanding. As soon as DB2 completes its cleanup, the
address space automatically shuts itself down. Hopefully, the failed system is on its way back
up by this time, and DB2 can be brought up with a normal start in its normal location.
In addition to restarting DB2 using ARM and Restart Light, also define a restart group to ARM
so that it also restarts any subsystems that were connected to DB2 prior to the failure. By
restarting all the connected subsystems, any indoubt units of recovery can be cleaned up.
Note that when the Restart Light capability was introduced by DB2 V7, it did not handle
cleanup for any INDOUBT units of work. However, in DB2 V8 the Restart Light capability was
enhanced so that it cleans up any INDOUBT units of work, assuming that the associated
address space is also restarted on the same system. If you do not want to have DB2 resolve
the INDOUBT units of work, or if you do not plan to restart the connected address spaces on
the other system, start DB2 with the NOINDOUBT option.
Suggestion: Use ARM to restart DB2 following a DB2 failure.
If only DB2 failed, ARM should do a normal restart of DB2 on the same z/OS system; it
should not do a RESTART LIGHT of DB2 on a different system from the one on which it was
running previously.
If the system failed, ARM should do a Restart Light for DB2 on another system in the
sysplex. Also, define a restart group so that ARM can also restart any related subsystems
together with the restarted DB2.
Deviating from the default ARM registration, we recommend implementing the following ARM
policy changes to be in line with best practice recommendations:
Set up your location service daemons for restart in place. If the location service daemon
attempts to restart on an alternate system, it fails.
Set up your node agents for restart in place. If a node agent restarts on an alternate
system, it has no recovery work to do.
For more information about how to configure ARM with WebSphere Application Server for
z/OS, refer to the IBM WebSphere Application Server Information Center, Automatic restart
management, at http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/index.jsp.
T2 = JDBC type 2
T4 = JDBC type 4
Figure 4-2 WebSphere Application Server for z/OS and DB2 for z/OS infrastructure
To provide the recommended level of availability for our application server environment we
configured ARM to use the restart policy shown in Example 4-1. Through this policy the DB2
member and its related application server instance will be restarted in the same LPAR in case
a subsystem failure occurs. In case of a system failure ARM restarts the DB2 member and its
related application server instance either in system SC63 or SC64 depending on system
availability.
RESTART_METHOD(ELEMTERM,PERSIST)
RESTART_METHOD(SYSTERM,STC,
'-D0Z2 STA DB2,LIGHT(YES)')
ELEMENT(MZCELLMZSR013)
RESTART_ATTEMPTS(3,)
RESTART_METHOD(ELEMTERM,PERSIST)
RESTART_METHOD(SYSTERM,STC,
'S MZACR3,'
'JOBNAME=MZSR013,ENV=MZCELL.MZNODE3.MZSR013')
ELEMENT(MZCELLMZSR014)
RESTART_ATTEMPTS(3,)
RESTART_METHOD(ELEMTERM,PERSIST)
RESTART_METHOD(SYSTERM,STC,
'S MZACR4,'
'JOBNAME=MZSR014,ENV=MZCELL.MZNODE4.MZSR014')
After the policy shown in Example 4-1 on page 104 was activated we used the operating
system command interface to confirm that our ARM policy was used for DB2 and its related
application servers. The operating system command output is provided in Figure 4-3.
After we had run one of our Java EE sample applications we verified the status of the
procedures used by the JDBC driver for metadata retrieval. To perform this verification we
executed a DISPLAY PROCEDURE command shown in Figure 4-5. For each procedure the
command output confirms the procedure status and the WLM application environment name.
-dis proc
DSNX940I -D0Z1 DSNX9DIS DISPLAY PROCEDURE REPORT FOLLOWS -
------- SCHEMA=SYSIBM
PROCEDURE STATUS ACTIVE QUED MAXQ TIMEOUT FAIL WLM_ENV
SQLCOLUMNS
STARTED 0 0 1 0 0 DSNWLMDB0Z_GENERAL
SQLCAMESSAGE
STARTED 0 0 1 0 0 DSNWLMDB0Z_GENERAL
Figure 4-5 DISPLAY PROCEDURE output
Next we executed the DB2 command shown in Example 4-2 to verify the status and the JCL
procedure name of the DSNWLMDB0Z_GENERAL WLM application environment. The
command output confirms the application environment availability and the name of the JCL
procedure used by WLM to start the WLM stored procedure address space (WLM SPAS).
Response time goals are appropriate for user applications. User applications in the context of
this book are WebSphere Application Server for z/OS Java applications connecting to DB2 for
z/OS using JDBC type 2 and type 4 connections.
For the DB2 system address spaces, velocity goals are more appropriate. Only a small
amount of the work done in DB2 is counted toward this velocity goal. Most of the work done in
DB2 counts towards the user goal.
Your performance goals are implemented through WLM service classes. You create your
WLM service classes using the attributes that are required to meet your service level
agreement objective. WLM classes are categorized by subsystem types. WLM uses the
subsystem type specific classification rules to assign service classes to incoming workloads.
For simplification we use the term service classification to refer to the process of service class
assignment by WLM.
(Figure: WLM subsystems follow one of three transaction type models: address space
oriented (for example, STC for the DB2 address spaces) with response time, execution
velocity, or discretionary goals; enclave oriented (for example, DDF) with response time,
execution velocity, or discretionary goals; and CICS/IMS with a single-period response time
goal. You need to understand how the model affects the values shown in the workload
activity report. SYSH is used for LPAR load balancing.)
The WLM subsystem types relevant to our WebSphere Application Server environment are
STC (started task control)
Subsystem type for the service classification of DB2 and WebSphere Application Server
system address spaces. In this part of the book we only discuss the classification of the
DB2 system address spaces.
DDF
Subsystem type for the service classification of transaction type enclave workloads that
arrive in DB2 through the DB2 DIST address space over JDBC type 4 connections.
CB
Subsystem type for service classification of transaction type enclave Java workloads that
run in WebSphere Application Server for z/OS regardless of the JDBC driver type being
used.
Service classification for subsystem type DB2 is only relevant for workloads related to DB2
Parallel Sysplex query parallelism, which is not used in our scenario.
(WLM ISPF application panel: Modify Rules for the Subsystem Type)
After we had started DB2 we used the ISPF SDSF application to verify that z/OS used our
WLM configuration for the DB2 started tasks. The SDSF display active output that we
obtained for verification is shown in Figure 4-8.
In our scenario the D0Z1 and D0Z2 DIST address spaces run in service class SCTHI which
represents a performance goal that is as high as the goal for the DB2 database services
address spaces. Classifying the DIST address spaces appropriately is important as the
service class determines how quickly the DIST address space is able to perform operations
associated with managing the distributed DB2 workload. Such operations include adding
new users or removing users that have terminated their JDBC type 4 connections.
(Figure: a Java EE application in WebSphere Application Server connects through the JDBC
driver and TCP/IP to the DB2 DDF address space; the DDF work is classified by WLM under
subsystem type DDF and runs in an enclave SRB.)
In our environment we use two WebSphere Application Server applications to illustrate WLM
service classification. For WLM classification each application provides the DB2
clientApplicationInformation data source custom property shown in Table 4-1 on page 111
when connecting to DB2.
Table 4-1 clientApplicationInformation
Application Context root clientApplicationInformation
We used the DB2 clientApplicationInformation listed in Table 4-1 to define the service
classification rules that are shown in Figure 4-10.
1. Type SSC (subsystem collection) contains the data sharing group name, which is not to be
confused with the group attach name. You can determine the group name by running the
command shown in Figure 4-11.
-dis group
DSN7100I -D0Z2 DSN7GCMD
*** BEGIN DISPLAY OF GROUP(DB0ZG ) CATALOG LEVEL(101) MODE(NFM )
Figure 4-11 DB2 Display group output
2. Under data sharing group DB0ZG we use request type PC (process name) to assign WLM
service and report classes based upon the DB2 client application information that is used by
our Java applications. The DB2 client information provided by our Java applications is:
– clientApplicationInformation:TraderClientApplication
• matches WLM process name Trade*
• assigns WLM service class DDFONL
• assigns WLM report class RTRADE0Z
– clientApplicationInformation: dwsClientinformationDS
• matches WLM process name dwsClie*
• assigns WLM service class DDFONL
• assigns WLM report class RDWS0Z
(Figure: a DDF request carrying the DB2 client application information (process names
Trade* and dwsClie*) is classified by the DDF rules into service class DDFONL (response
time goal of 90% within 0.5 seconds, importance 2); DDF default requests fall into service
class DDFDEF; the enclave SRB is non-swappable, issues PC-calls to DBM1, and is
reported in SMF 72 records.)
1. When our WebSphere Application Server application connects to DB2 it provides its client
application information referred to in Figure 4-10 on page 111. The DB2 distributed
address space creates an enclave and schedules an enclave SRB. The enclave SRB uses
a program call instruction to trigger processing in the DB2 database manager address
space.
2. WLM considers the client application information to assign the performance goal referred
to in service class DDFONL. WLM furthermore assigns the report class defined in the
WLM policy shown in Figure 4-10 on page 111 which is useful when it comes to creating
RMF workload activity reports.
3. All other DB2 DBATs will be classified using the data sharing group default service class
and report class configuration referred to in the WLM policy shown in Figure 4-10 on
page 111 under rule type SSC (subsystem collection).
4. Requests falling in rule type SSC will be classified using service class DDFDEF and report
class RD0ZGDEF.
5. The DB2 system address spaces are classified using classification rules defined in
subsystem type STC. For our environment these classification rules are discussed in “DB2
system service classification” on page 108.
When we tested the D0ZG_WASTestClientInfo data web service application we captured the
SDSF enclave display output shown in Figure 4-13 to confirm that the WLM classification rule
shown in Figure 4-10 on page 111 was correctly used by our runtime environment.
From the SDSF panel provided in Figure 4-13 we issued the SDSF action character shown to
obtain additional information about the enclave. This took us to the panel shown in Figure 4-14.
The information that we obtained through Figure 4-14 confirmed that the following runtime
attributes were used because of our WLM service classification:
Subsystem type DDF and subsystem name D0Z1 - the enclave was managed by the
D0Z1DIST address space
Subsystem collection DB0ZG - data sharing group name
Process name dwsClientinformationDS - derived from DB2 clientApplicationInformation
data source custom property setting.
Important: Always provide a classification rule in WLM. If you do not classify your DDF
workload, your DDF transactions run unclassified in service class SYSOTHER, which has
the lowest execution priority in your system. As a consequence, the transaction throughput
of those applications suffers.
(Figure: a Java EE application in WebSphere Application Server for z/OS accesses DB2
through the JDBC driver and the DB2 RRSAF interface; the work is classified by WLM under
subsystem type CB.)
For the WebSphere settings, see 5.3, "Configuring WebSphere Application Server for JDBC
type 2 access" on page 222.
RRS is a prerequisite for DB2 for z/OS availability in a WebSphere Application Server
environment. Therefore, you must not shut down RRS while DB2 and WebSphere
Application Server are running; if you do, WebSphere Application Server terminates and
cannot be restarted until RRS has been restarted. As a result, uncommitted units of
recovery (URs) cannot be resolved in DB2 for as long as WebSphere Application Server is
down. For this reason, perform an RRS shutdown only after resource managers such as
DB2 and WebSphere Application Server have been quiesced. In case of an RRS subsystem
failure, RRS must be restarted as quickly as possible, which is usually ensured through the
z/OS Automatic Restart Manager (ARM) or by other means of system automation.
For information about using and implementing RRS for high availability, see z/OS MVS
Programming: Resource Recovery, SA22-7616-11, available at
http://www.ibm.com/systems/z/os/zos/bkserv/r13pdf.
Important: DB2 external stored procedures access DB2 through the RRSAF attachment
interface which requires RRS to be available.
The DB2 JDBC driver (regardless of whether you use JDBC type 2 or type 4 connections)
transparently calls DB2-provided external stored procedures for metadata retrieval.
Therefore, accessing DB2 for z/OS through JDBC already requires the RRS subsystem to be
available, regardless of the JDBC connection type being used and regardless of the runtime
environment in which your Java application runs.
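For illustration, the following hedged fragment shows a standard JDBC metadata call of the kind that causes the driver to invoke DB2-supplied stored procedures such as SYSIBM.SQLCOLUMNS (shown earlier in Figure 4-5); the URL, credentials, schema, and table name are placeholders.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class MetadataSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        Connection con = DriverManager.getConnection(
            "jdbc:db2:DB2LOC", "USERID", "PASSWORD"); // placeholder type 2 URL

        // This standard metadata request is serviced by DB2-supplied external
        // stored procedures, which run through RRSAF and therefore need RRS.
        DatabaseMetaData md = con.getMetaData();
        ResultSet rs = md.getColumns(null, "DB2R3", "ACCOUNT", "%"); // placeholder names
        while (rs.next()) {
            System.out.println(rs.getString("COLUMN_NAME") + " " + rs.getString("TYPE_NAME"));
        }
        rs.close();
        con.close();
    }
}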
DB2 startup
External stored procedures and JDBC type 2 based applications communicate with RRS
through the DB2 RRSAF attachment interface, which ensures data integrity when resource
changes in other z/OS resource managers, such as IMS, CICS, and WebSphere MQ, are
performed within the same unit of recovery; this always applies to type 2 connections. To
cater for this RRS requirement, DB2 verifies the availability of RRS during startup and issues
the messages shown in Figure 4-16.
D RRS,RM,SUM
ATR602I 18.01.34 RRS RM SUMMARY 351
RM NAME STATE SYSTEM GNAME
DSN.RRSATF.IBM.D0Z1 Run SC63 SANDBOX
DSN.RRSPAS.IBM.D0Z1 Run SC63 SANDBOX
Figure 4-17 DB2 start RRS RM state
We then verified the RRS state of the DB2 resource managers shown in Figure 4-18. As you
can see in Figure 4-19 stopping DB2 member D0Z1 set the RRS resource manager state to a
value of Reset.
D RRS,RM,SUM
ATR602I 18.29.44 RRS RM SUMMARY
RM NAME STATE SYSTEM GNAME
DSN.RRSATF.IBM.D0Z1 Reset SC63 SANDBOX
DSN.RRSPAS.IBM.D0Z1 Reset SC63 SANDBOX
Figure 4-19 RRS RM state upon DB2 shut down
Stopping RRS
If you stop RRS it deregisters from ARM and issues the system message shown in
Figure 4-20.
When a DB2 RRSAF application accesses DB2 while RRS is unavailable, the DB2 Resource
Recovery Services attachment facility (RRSAF) interface returns error code and reason code
information so that the application can handle the situation. The reason codes that an
application needs to handle in case of RRS unavailability are shown in Table 4-2 on page 117.
Table 4-2 RRSAF reason codes
RRSAF reason code Description
00C12219 The application program issued an SQL or IFI function request without
completing CREATE THREAD processing. SQL or IFI requests cannot be
issued until CREATE THREAD processing is complete.
For Java applications or DB2 DRDA workloads, you might want to consider adding zIIP or
zAAP processor capacity to satisfy the additional processor requirement and to benefit
financially from using these specialty engines. During pre-production stress testing, you
carefully monitor and tune your application, aiming at reaching production-like application
throughput rates. As a result of pre-production stress testing and tuning, you know which
additional memory, processor, and disk resources are required to run your application in your
production environment. After these additional resources have been allocated to your
production environment, you are ready to promote your application to production level.
In the environment of our scenario we implemented DB2 and application objects of the
WebSphere Daytrader application which we downloaded from
http://www14.software.ibm.com/webapp/download/preconfig.jsp?id=2011-06-08+10%3A34%
3A22.216702R&cat=webservers&fam=&s=&S_TACT=&S_CMP=&st=52&sp=20
In the following sections we outline the major steps required for external storage
configuration. For details, see DB2 9 for z/OS and Storage Management, SG24-7823.
Estimating space requirement
Before you create new tables and indexes in DB2 you should have an idea of the amount of
disk space these new objects are going to use. For capacity planning purposes these space
requirements should be discussed with your storage administrator so that your volume pool
configuration can be changed to provide the additional disk space.
After the objects have been created and are operational you can use the tooling provided by
DB2 to monitor space growth. The tools provided by DB2 supporting you in performing these
tasks are:
External stored procedure SYSPROC.ADMIN_DS_LIST
DB2 real time statistics (RTS)
External stored procedure SYSPROC.DSNACCOX
For a discussion of these tools, refer to 4.3, "DB2 for z/OS configuration" on page 137.
We created SMS storage group DB0ZDATA to provide a volume pool for the disk space
required to store the Daytrader DB2 tables and indexes. The other storage groups shown are
for DB2 archive log data sets, image copy data sets, active log data sets as well as for other
runtime data sets. We defined the DB0ZCPB COPY POOL BACKUP pool to provide the
infrastructure for DB2 system backup and restore. We then used the ISMF LISTVOL
command to obtain a list of volumes available in storage group DB0ZDATA.
From the volume list shown in Figure 4-23, we then issued the user command lds (list data
set) to display the data sets stored on volume 0Z9B86. lds is a user-provided REXX program
that uses ISPF services to display a volume-related data set list, which can be useful if you
want to check whether the volume data set placement works as planned. The REXX
source is shown in Example 4-3.
DB2 storage group and data set HLQ usage
Table space and index space creation triggers VSAM LDS data set creation in DB2. For data
set creation DB2 obtains the data set HLQ from the DB2 storage group referenced in the
create table space and create index DDL statement. In our environment we use DB2 storage
group GR248074 through which DB2 uses a data set HLQ of DB0ZD for VSAM LDS data set
creation. The DDL that we used to create our DB2 storage group is shown in Example 4-4.
Our DFSMS configuration places data sets with an HLQ of DB0ZD on one of the volumes
available in DFSMS storage group DB0ZDATA. If the volume chosen becomes full DB2
automatically adds a volume to the VSAM LDS data set definition allowing the data set to
extend to the additional volume. If all volumes available in DFSMS storage group DB0ZDATA
become full DFSMS configuration options can be used to overflow to another volume pool or
to perform an online volume pool change to supply additional disk space to support high
availability.
If you want to read more about DB2 and DFSMS storage, refer to DB2 9 for z/OS and Storage
Management, SG24-7823, available at http://www.redbooks.ibm.com/abstracts/sg247823.html?Open.
SMS storage group ACS routine SGACTIVE (extract):
WHEN (&STORCLAS='DB0ZDATA')
SET &STORGRP = 'DB0ZDATA'
Figure 4-25 DFSMS ACS routine processing
1. The CREATE TABLESPACE DDL triggers the creation of a VSAM LDS cluster. The cluster
name uses the data set HLQ provided by DB2 storage group GR248074 which causes the
data class ACS routine to assign the DB0Z data class. The DB0Z data class provides data
set attributes that are required to support VSAM extended format and extended
addressability. These attributes are recommended to provide high availability and to
support new features available with modern disk technology.
2. Next the storage class ACS routine receives control and assigns storage class
DB0ZDATA. A DFSMS storage class controls data set level usage of storage performance
attributes provided by DFSMS. For instance, in our environment the DB0ZDATA storage
class assures the use of parallel access volumes (PAV), which is highly recommended to
alleviate I/O queuing in case of high I/O concurrency on the same physical volume. After a
non-null storage class value has been assigned, the data set to be created is DFSMS
managed.
3. Next the management class ACS routine receives control. The management class
controls the actions that are to be taken by the DFHSM space management cycle. The
management class used for our table and index spaces ensures that no DFHSM space
management activity is taken that can have a negative impact on data availability and data
integrity. For instance, the management class used ensures that our table and index space
related VSAM LDS data sets are not migrated, deleted or backed up by DFHSM during
space management cycle.
4. Finally the storage group ACS routine receives control. As mentioned before the DFSMS
storage group provides a group of volumes the data set creation process can
transparently choose from.
4.1.7 UNIX System Services file system configuration
If your production environment depends on the IBM Data Server Driver for JDBC to be
available on z/OS you need to design your infrastructure to provide high availability for this
infrastructure component.
SMP/E installs the DB2 command line processor (CLP) and IBM Data Server Driver for JDBC
and SQLJ related UNIX System Services files into IBM eServer™ zSeries® File System
(zFS) data sets. Copies of these SMP/E controlled zFS data sets are rolled out into target
runtime environments to provide software upgrades or to participate in rolling maintenance
processes.
Because you should not replace the IBM Data Server Driver for JDBC installation while it is
being used by applications you need to design your UNIX System Services related DB2
software update strategy to support seamless software updates for installing and backing out
JDBC driver changes. To address this problem we carried out the following activities:
Provide one file system directory structure for each JDBC driver level we want to support
Use UNIX System Services symbolic links to connect the appropriate JDBC driver level
with a logical path name our application uses to load the JDBC driver
For an up to date list of driver levels currently supported, refer to the following information:
DB2 for z/OS: http://www-01.ibm.com/support/docview.wss?uid=swg21428742
DB2 LUW: http://www-01.ibm.com/support/docview.wss?uid=swg21363866
/pp/db2v10/
+---d110809/ <---- zFS file: OMVS.DSNA10.BASE.D110809
+---d120320/ <---- zFS file: OMVS.DSNA10.BASE.D120320
+---d120719/ <---- zFS file: OMVS.DSNA10.BASE.D120719
Figure 4-26 UNIX System Services directories for JDBC driver level related rollout
Under directory /pp/db2v10 we created directories d110809, d120320, d120719 each of them
representing a different software maintenance level. We then used these directories as mount
point directories for mounting the corresponding zFS file data sets. The zFS files shown in
Figure 4-26 are data sets that we previously copied from our SMP/E environment. In our
runtime environment each of these mount point directories contains the directory structure
shown in Figure 4-27.
/pp/db2v10/
+---d120719/
+-----base <---- DB2 command line processor
+-----jdbc <---- IBM Data Server Driver for JDBC
+-----mql <---- MQ listener
Figure 4-27 DB2 product related directories
For each mounted zFS file, the command output shown in Example 4-5 confirms the mount
status, the zFS file data set name, the mount point, and the mount mode. In our environment
we mounted each of the DB2 zFS files read-only, which is recommended for performance
reasons when no write access is required.
To address this problem, our applications use a data sharing group related path name to load
the JDBC driver. We ran the command shown in Example 4-6 to create a UNIX System
Services symbolic link that connects the current JDBC driver installation directory with the
data sharing group related logical path name.
We then ran the command shown in Figure 4-28 to verify which installation directory our
data sharing group logical path name is connected to.
ls -l /usr/lpp/db2/d0zg
lrwxrwxrwx /usr/lpp/db2/d0zg -> /pp/db2v10/d120719
Figure 4-28 Verify JDBC symbolic link
In case we need to fall back to the previous JDBC driver level we simply swap the symbolic
link as shown in Example 4-7 on page 125. z/OS JDBC applications do not need to change
their JDBC configuration because they use the data sharing group related path name for
loading the JDBC driver.
Example 4-7 JDBC swap symbolic link
rm /usr/lpp/db2/d0zg
ln -s /pp/db2v10/d120320 /usr/lpp/db2/d0zg
For more information about performing this configuration, refer to 5.2.2, “Defining
environment variables at the location of the IBM Data Server Driver for JDBC and SQLJ
classes for JDBC type 4 connectivity” on page 213 and 5.3.2, “Defining environment variables
to the location of the IBM Data Server Driver for JDBC and SQLJ classes for JDBC type 2
connectivity” on page 228.
If you run multiple instances of WebSphere Application Server for z/OS, you might want to
consider using application server specific symbolic links for loading the JDBC driver. This
provides the flexibility that you might need if an application server instance is bound to a
specific JDBC driver level because of failures that were introduced by a new JDBC
driver installation.
To support DB2 system and application monitoring we implemented the following monitoring
infrastructure:
Software stack
For reporting and analysis we installed the following software:
IBM OMEGAMON XE for DB2 PE on z/OS (OMPE). We describe this topic in 4.4, “Tivoli
OMEGAMON XE for DB2 Performance Expert for z/OS” on page 201.
IBM Resource Measurement Facility™ (RMF) on z/OS
SMF Browser for WebSphere Application Server for z/OS to report on SMF type 120 records,
which you can download from
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=zosos390.
For details, see Appendix E, “SMF 120 records subtypes 1, 3, 7, and 8” on page 545.
DB2 trace IFCID 318 to be activated during DB2 member startup to enable the collection
of global statement cache statistics. We configured the administrative scheduler to issue
the DB2 command shown in Example 4-9 within DB2 start processing.
We examine:
Authentication in a three-tier architecture
Authentication in a three-tier architecture using DB2 trusted context
Figure 4-29 on page 127 visualizes the process of client authentication in a three-tier
WebSphere Application Server environment.
In the scenario illustrated in Figure 4-29, end-user WASUSER has been authenticated by the
application server and is connected to DB2. The DB2 connection uses the application
server's credentials (user ID WASSRV, provided in the data source authentication related
properties) for DB2 authentication and authorization checking. Therefore, SQL requests
submitted by WASUSER are executed in DB2 using the application server's credentials
(user ID WASSRV).
Because all SQL access is performed under the middle tier’s user ID, the three-tier application
model causes the following issues:
Loss of end-user identity in DB2.
Loss of control over end-user access of the database.
Diminished DB2 accountability.
The middleware server’s authorization ID (AUTHID WASSRV) needs the privileges to
perform all requests from all end-users.
If the middleware server’s security is compromised, so is that of the database server.
Re-establishing a new connection every time the user ID changes is not a feasible solution
because of the high performance overhead that this would cause.
We then created the DB2 trusted context by running the SQL DDL shown in Figure 4-31 on
page 129.
CREATE ROLE WASSRVROLE; 1
CREATE ROLE WASUSERROLE;
GRANT EXECUTE ON FUNCTION DB2R3.GRACFGRP TO ROLE WASUSERROLE; 2
CREATE TRUSTED CONTEXT CTXWASSRV
BASED UPON CONNECTION USING SYSTEM AUTHID WASSRV 3
DEFAULT ROLE WASSRVROLE
WITHOUT ROLE AS OBJECT OWNER
ENABLE
NO DEFAULT SECURITY LABEL
ATTRIBUTES (
ENCRYPTION 'NONE',
ADDRESS 'wtsc64.itso.ibm.com', 3
ADDRESS 'd0z1.itso.ibm.com',
ADDRESS 'wtsc63.itso.ibm.com',
ADDRESS 'd0z2.itso.ibm.com'
)
WITH USE FOR WASUSER ROLE WASUSERROLE WITHOUT AUTHENTICATION; 4
Figure 4-31 Create trusted context
1. The roles used in the trusted context must exist prior to trusted context creation.
2. Role WASUSERROLE is granted to execute function DB2R3.GRACFGRP, because the
UDF is invoked in our application scenario. User ID WASSRV does not require the execute
or any other DB2 object privilege, because we use WASSRV just for DB2 connection
creation. Any DB2 object privilege required within the trusted context needs to be granted
to role WASUSERROLE. Role WASSRVROLE is not supposed to access any DB2 object
and therefore holds no privilege in DB2.
3. The trusted context shown in Figure 4-31 is for WebSphere Application Server JDBC type
4 connections where the connection user ID (provided in the data source authentication
property) matches the SYSTEM AUTHID of WASSRV and where the external entity runs
on one of the IP hosts referred to by the domain names provided in the trusted context
ADDRESS attributes.
4. Because the authenticated user matches authorization ID WASUSER, DB2 assigns role
WASUSERROLE. If the application server asks DB2 to perform an authorization ID switch
for a user that does not match one of the user IDs specified in the trusted context WITH
USE FOR clause, the request fails with DB2 SQLCODE -20361. In that situation, we
observed the WebSphere Application Server error message shown in Figure 4-32.
3. The application server requests a DB2 connection using user ID WASSRV and
its password (Figure 4-35 on page 131).
7. DB2 looks for a trusted context with system authorization id WASSRV and validates the
attributes of the context (for instance, SERVAUTH, ADDRESS, ENCRYPTION)
(Figure 4-39). Depending on the trusted context DEFAULT ROLE attribute a role may also
be assigned.
8. The connection with user WASSRV as connection owner has been established
(Figure 4-40 on page 133).
9. WebSphere issues a switch user request using WASUSER (Figure 4-41). This requires no
application code change. It is all implemented by the application server configuration.
12. The connection exit routine assigns WASUSER as the primary authorization ID and as the
CURRENT SQLID (Figure 4-44). Secondary authorization IDs may also be assigned. DB2
assigns role WASUSERROLE, which is used for checking DB2 object authorization.
13. The connection has been initialized using WASUSER as the primary authorization ID and
with role WASUSERROLE assigned (Figure 4-45 on page 135). From now on, DB2 uses
role WASUSERROLE for checking SQL access authorization.
14.While the application was running we collected the DB2 command output shown in
Figure 4-46 to confirm that the CTXWASSRV trusted context was used by DB2 to
establish a trusted connection and that DB2 performed an authorization ID switch for user
ID WASUSER which resulted in the assignment of DB2 role WASUSERROLE. In that
respect the command output shown in Figure 4-46 confirms the following trusted context
related facts:
a. The application server successfully establishes a trusted connection using the DB2
trusted context that we created in Figure 4-31 on page 129.
b. The trusted context system authid matches the application server JAAS provided
user name of WASSRV.
c. Because WASUSER is identical to the user that was authenticated by the
application server, DB2 performs an authorization ID switch to WASUSER and assigns
DB2 role WASUSERROLE.
After we had collected the information, we issued the command shown in Example 4-11 to
start the UDF, which allowed the application to complete successfully.
For more information about DB2 trusted contexts and the configuration we performed for
running the DayTrader-EE6 workload refer to 4.3.13, “Trusted context” on page 173.
Capture DB2 real time statistics (RTS) before and after DayTrader-EE6 workload stress
testing. Among other things, the captured RTS information is used to:
– Learn about the characteristics of the DayTrader-EE6 application,
– Identify insert, update, delete hot spots,
– Identify redundant indexes,
– Learn about the REORG and RUNSTATS requirements of objects that are accessed by
frequent insert, update, delete DML statements.
– Estimate future data growth of DB2 tables and indexes and identify objects that are
candidates for table partitioning.
For more information about the implementation and usage examples of the RTS snapshot
tables refer to 4.3.24, “DB2 real time statistics” on page 198.
Configure RMF to capture SMF record type 70 to 79. For RMF and SMF monitoring, see
Chapter 8, “Monitoring WebSphere Application Server applications” on page 361.
WebSphere Application Server applications to provide unique DB2 client application
information which we use for creating application level DB2 accounting and RMF workload
activity reports for our WebSphere Application Server applications. The DB2 client
application information used by our sample applications are shown in Table 4-1 on
page 111.
Configure DB2 to monitor the maximum number of concurrent active threads used by the
DayTrader-EE6 application. The configuration steps that we took to implement DB2 profile
monitoring for the DB2 threads used by the DayTrader-EE6 application are explained in
4.3.21, “Using profiles to disable idle thread timeout at application level” on page 194.
WLM subsystem type DDF classification rules for DBATs to perform service classification
based upon DB2 client application information. For information about how we setup WLM
classification rules of DBATs refer to “JDBC type 4 service classification” on page 110.
OMPE performance database processes to load DB2 statistics and accounting
information into DB2 tables. We use these tables to run predefined SQL queries for
application profiling and key performance indicator (KPI) monitoring. For information about
implementing and using the performance database refer to 4.1.9, “WebSphere Application
Server and DB2 security” on page 126.
Because thread allocation can be a significant part of the cost in a short transaction, you
need to set related parameters carefully according to your machine size, your workload, and
other factors.
However, in some cases DB2 cannot pool database access threads. Table 4-3 summarizes
whether a thread can be pooled. When the conditions are true, the thread can be pooled
when a COMMIT is issued; otherwise, the thread remains active.
Table 4-3 Requirements for pooled threads
If the event is... Thread can be pooled?
1 A cursor can be closed with fast implicit close. For more information, see DB2 10 for z/OS
Managing Performance, SC19-2978.
2 For more information of RELEASE(DEALLOCATE), see High Performance DBATs.
Use INACTIVE MODE threads instead of ACTIVE MODE threads whenever possible. Inactive
and indoubt threads are not subject to this time-out parameter. If the CMTSTAT subsystem
parameter is set to ACTIVE, your application must start its next unit of work within the
specified time-out period; otherwise, its thread is terminated.
The default value is 120. Increasing POOLINAC can potentially reduce the overhead for
creating a new DBAT, but the disadvantage would be the virtual storage used by the pooled
DBAT.
Choosing a good number for maximum threads is important to keep applications from
queuing and to provide good response time. Fewer threads than needed underutilize the
processor and cause queuing for threads. More threads than needed do not improve the
response time. They require more real storage for the additional threads and might cause
more paging and, hence, performance degradation.
When a request for a new connection to DB2 is received and MAX REMOTE ACTIVE has
been reached, the result depends on the DDF THREAD setting. If DDF THREAD is ACTIVE
mode, the allocation request is allowed, but any further processing for the connection is
queued, waiting for an active database access thread to terminate. If DDF THREAD is
INACTIVE mode, the allocation request is allowed and is processed when DB2 can assign a
pooled idle database access thread to the connection. A pooled idle thread counts as an
active thread against MAX REMOTE ACTIVE.
If a new connection request to DB2 is received, and MAX REMOTE CONNECTED has been
reached or MAX REMOTE CONNECTED is zero, the connection request is rejected.
OFF means that the depth of the connection queue is limited by the value of the CONDBAT
subsystem parameter. ON means that the depth of the connection queue is limited by the
value of the MAXDBAT subsystem parameter. A numeric value specifies the maximum
number of connections that can be queued waiting for a DBAT to process a request.
When a request is added to the connection request queue and the thresholds specified by
the MAXDBAT and MAXCONQN subsystem parameters are both reached (unless
MAXCONQN is set to OFF), DDF closes the longest-waiting client connection in the queue.
The closed connections give remote clients an opportunity to redirect the work to other
members of the group that have more resources to process it. The function is enabled only
when the DB2 subsystem is a member of a data sharing group.
ON means that connections wait as long as the value specified by the IDHTOIN subsystem
parameter. OFF means that connections wait indefinitely for a DBAT to process requests. A
numeric value specifies a time duration in seconds that a connection waits for a DBAT to
process the request.
Each queued connection request entry is examined to see whether its time waiting in the
queue has exceeded the specified value. If the time is exceeded, the client connection is
closed. After all entries in the queue have been processed, or the last entry whose time in the
queue exceeded the threshold has been processed, a DSNL049I message is issued
indicating how many client connections were closed because of the MAXCONQW value. The
function is enabled only when the DB2 subsystem is a member of a data sharing group.
4.3.2 Enabling DB2 dynamic statement cache
Dynamic statement caching was introduced with DB2 Version 5. Whenever DB2
prepares an SQL statement, it creates a control structure that is used when the statement is
executed. When dynamic statement caching is in effect, DB2 stores the control structure
associated with a prepared dynamic SQL statement in a storage pool. If that same statement
is executed again, DB2 can reuse the cached control structure, avoiding the expense of
re-preparing the statement.
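The following minimal JDBC sketch illustrates this behavior. The ACCOUNT table and its columns are hypothetical and used only for illustration; the connection URL and user ID reflect our environment, and the password is passed as an argument:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class StatementCacheDemo {
    public static void main(String[] args) throws Exception {
        // JDBC type 4 URL of our data sharing group
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", "db2r3", args[0])) {

            String sql = "SELECT BALANCE FROM ACCOUNT WHERE ACCOUNT_ID = ?";

            // First prepare of this statement text: a full prepare (assuming the text is
            // not yet in the cache); with CACHEDYN=YES the skeleton copy is stored in the
            // global dynamic statement cache.
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, 1001);
                try (ResultSet rs = ps.executeQuery()) { rs.next(); }
            }

            // Second prepare of the identical statement text: the skeleton copy is copied
            // from the global cache into the thread's local storage (a short prepare).
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, 1002);
                try (ResultSet rs = ps.executeQuery()) { rs.next(); }
            }
        }
    }
}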
When using statement caching, four different types of prepare operations can take place:
Full prepare
A full prepare occurs when the skeleton copy of the prepared SQL statement does not
exist in the global dynamic SQL cache (or the global cache is not active). It can be
caused explicitly by a PREPARE or an EXECUTE IMMEDIATE statement, or implicitly by
an EXECUTE when using KEEPDYNAMIC(YES).
Short prepare
A short prepare occurs if the skeleton copy of the prepared SQL statement in the global
dynamic SQL cache can be copied into the local storage.
Avoided prepare
A prepare can only be avoided when using full caching. Because in this case the fully
prepared statement is kept across commits, issuing a new EXECUTE statement (without a
prepare after a commit) does not need to prepare anything. The full executable statement
is still in the thread’s local storage (assuming it was not removed from the local thread
storage because MAXKEEPD was exceeded) and can be executed as such.
Implicit prepare
This is the case when an application that uses KEEPDYNAMIC(YES) issues a new
EXECUTE after a commit was performed and a prepare cannot be avoided (the previous
case). DB2 issues the prepare (implicitly) on behalf of the application. (The application
must not explicitly code the prepare after a commit in this case.)
Implicit prepares can result in a full or short prepare:
– In full caching mode, when a statement has been removed from the local cache
because MAXKEEPD was exceeded, but still exists in the global cache, the statement
is copied from the global cache. This is a short prepare. (If MAXKEEPD has not been
exceeded and the statement is still in the local cache the prepare is avoided.)
– In full caching mode, if the statement is no longer in the global cache either, a full
prepare is done.
– In local caching only mode, a full prepare has to be done.
Whether a full or short prepare is needed in full caching mode depends on the size of the
global cache. The bigger the size, the more likely a short prepare can be done.
Statements in plans or packages bound with REOPT(VARS) are not cached in the global
cache. The bind options REOPT(VARS) and KEEPDYNAMIC(YES) are not compatible.
In a data sharing environment, prepared statements cannot be shared among the members
because each member has its own EDM pool. A cached statement of one member is not
available to an application that runs on another DB2 member.
There are different levels of statement caching, which are explained in the following sections:
No caching
Local dynamic SQL cache only
Global dynamic statement cache only
Full caching
No caching
Figure 4-47 on page 143 helps to visualize the behavior when dynamic statement caching is not active.
Program A prepares a dynamic SQL statement S, executes the prepared statement twice,
and terminates.
Program B starts after program A has terminated, prepares exactly the same statement S as
A did, executes the prepared statement, issues a commit, tries to execute S again, receives
an error SQLCODE -514 or -518 (SQLSTATE 26501 or 07003), has to prepare the same
statement S again, executes the prepared statement, and terminates.
Each time programs A and B issued the SQL PREPARE statement, DB2 prepared the
statement from scratch. After the commit of program B, the prepared statement is
invalidated, so program B had to repeat the prepare of statement S.
Figure 4-47 No caching: each PREPARE issued by programs A and B results in a full prepare
Local dynamic SQL cache only
Let us take a look at our two example programs, shown in Figure 4-48 on page 144.
Program A prepares a dynamic SQL statement S, executes the prepared statement twice,
and terminates.
Program B starts after program A has terminated, prepares the same statement S as A did,
executes the prepared statement, issues a commit, executes S again (causing an internal
(implicit) prepare) and terminates.
Be aware that application program B has to be able to handle the fact that the implicit prepare
might fail and an error is returned. Any error that normally occurs at prepare time can now be
returned on the OPEN, EXECUTE, or DESCRIBE statement issued by the application.
The prepared statement and the statement text are held in the thread’s local storage within
the DBM1 address space (outside the EDM pool). However, only the statement text is kept
across commits when you use only local caching.
The local instance of the prepared SQL statement (the prepared statement), is kept in DBM1
storage until one of the following occurs:
The application process ends.
The application commits and there is no open cursor defined WITH HOLD for the
statement. (Because we are using only local caching, just the statement string is kept
across commits.)
A rollback operation occurs.
The application issues an explicit PREPARE statement with the same statement name.
If the application issues a PREPARE for the same SQL statement name which is kept in the
cache, the kept statement is discarded and DB2 prepares the new statement.
In a distributed environment, if the requester does not issue a PREPARE after a COMMIT, the
package at the DB2 for z/OS server must be bound with KEEPDYNAMIC(YES). If both
requester and server are DB2 for z/OS subsystems, the DB2 requester assumes that the
KEEPDYNAMIC value for the package at the server is the same as the value for the plan at
the requester.
The KEEPDYNAMIC option might have performance implications for DRDA clients that
specify WITH HOLD on their cursors:
If KEEPDYNAMIC(NO) is specified, a separate network message is required when the
DRDA client issues the SQL CLOSE for the cursor.
If KEEPDYNAMIC(YES) is specified, the DB2 for z/OS server automatically closes the
cursor when SQLCODE +100 is detected, which means that the client does not have to
send a separate message to close the held cursor. This reduces network traffic for DRDA
applications that use held cursors. It also reduces the duration of locks that are associated
with the held cursor.
When a distributed thread has touched any package which is bound with
KEEPDYNAMIC(YES), the thread cannot become inactive.
This level of caching, used without other caching possibilities, is of minor value, because the
performance improvement is limited. The only advantage is that you can avoid coding a
PREPARE statement after a COMMIT because DB2 keeps the statement string around. This
is of course most beneficial in a distributed environment where you can save a trip across the
wire this way. On the other hand, by using the DEFER(PREPARE) bind option, you can obtain
similar network message savings.
Global dynamic statement cache only
When global dynamic statement caching is active, the skeleton copy of a prepared SQL
statement (SKDS) is held in the global dynamic statement cache inside the EDM pool. Only
one skeleton copy of the same statement (matching text) is held. The skeleton copy can be
used by user threads to create user copies. An LRU algorithm is used for replacement.
If an application issues a PREPARE or an EXECUTE IMMEDIATE (and the statement has not
been executed before in the same commit scope), and the skeleton copy of the statement is
found in the global statement cache, it can be copied from the global cache into the thread’s
storage. This is called a short prepare.
Note: Without local caching (KEEPDYNAMIC(YES)) active, the application cannot issue
EXECUTEs directly after a commit. The statement returns an SQLCODE -514 or -518,
SQLSTATE 26501 or 07003.
Let us take a look at our example. The global cache case is shown in Figure 4-49 on
page 146.
Program A prepares a dynamic SQL statement S, executes the prepared statement twice,
and terminates.
The first time a prepare for statement S is issued by program A, a complete prepare
operation is performed. The SKDS of S is then stored in the global statement cache. When
program B executes the prepare of S for the first time, the SKDS is found in the global
statement cache and is copied to the local storage of B’s thread (short prepare). After the
COMMIT of program B, the prepared statement is invalidated in B’s local storage, but the
SKDS is preserved in the global statement cache in the EDM pool. Because neither the
statement string nor the prepared statement is kept after the commit, program B has to repeat
the prepare of statement S explicitly. This causes another copy operation of the SKDS from
the global cache to the local storage of the thread of application B (short prepare).
Full caching
Full caching is a combination of local caching (KEEPDYNAMIC(YES)), a MAXKEEPD
DSNZPARM value > 0, and global caching (CACHEDYN=YES). It is possible to avoid
prepares, because a commit does not invalidate prepared statements in the local cache.
Let us look again at our example when full caching is active, shown in Figure 4-50 on
page 147.
Program A prepares a dynamic SQL statement S, executes the prepared statement twice,
and terminates.
Program B starts after program A has terminated, prepares the same statement S as A did,
executes the prepared statement, issues a commit, executes S again, and terminates.
The first time a prepare for statement S is issued by program A, a complete prepare is
done (full prepare). The SKDS of S is stored in the global cache. When program B executes
the prepare of S the first time, the SKDS is found in the global statement cache and is copied
to the local statement cache of B’s thread (short prepare). The COMMIT of program B has no
effect on the prepared statement. When full caching is active, both the statement string
(which is also kept with local caching only) and the prepared statement are kept in the
thread’s local storage after a commit. Therefore, program B does not have to repeat the
prepare of statement S explicitly, and it was not necessary to prepare the statement implicitly
because the full executable statement is still kept in the thread’s local storage. This case is
called prepare avoidance.
Figure 4-50 Full caching, CACHEDYN = YES, KEEPDYNAMIC = YES, and MAXKEEPD > 0
When using full caching, the maximum size of the local cache across all user threads is
controlled by the MAXKEEPD DSNZPARM. A FIFO algorithm is used for replacement of
statements in the local cache.
Turn on CACHEDYN for dynamic SQL used by WebSphere applications. Because the local
dynamic statement cache keeps statements in thread storage, Sysplex workload balancing is
not available when KEEPDYNAMIC is exploited. Use the bind option KEEPDYNAMIC(YES)
for applications with a limited number of SQL statements that are executed frequently.
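If you decide to exploit KEEPDYNAMIC(YES), the packages that the connection uses must be bound with that option, and the connection must request the behavior. The following sketch shows one possible way to do this with driver connection properties. JDBCKEEPDYN is a hypothetical collection bound with KEEPDYNAMIC(YES), and the keepDynamic property name and value are assumptions that you should verify against the IBM Data Server Driver for JDBC and SQLJ documentation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class KeepDynamicConnection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "db2r3");
        props.setProperty("password", args[0]);
        // Hypothetical collection whose packages were bound with KEEPDYNAMIC(YES)
        props.setProperty("currentPackagePath", "JDBCKEEPDYN");
        // Assumed property requesting KEEPDYNAMIC behavior on this connection;
        // verify the exact name and value in the driver documentation
        props.setProperty("keepDynamic", "1");

        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", props)) {
            System.out.println("Connection requests KEEPDYNAMIC packages");
        }
    }
}

Remember the restriction noted above: Sysplex workload balancing is not available for connections that exploit KEEPDYNAMIC.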
The WebSphere Application Server prepared statement cache and the DB2 dynamic
statement cache are different concepts. For information about how to make these two
functions work together, refer to 2.6.6, “WebSphere Application Server prepared statement
cache” on page 57.
For more information about cache matching criteria, see 6.7, “Coding practices for a good
DB2 dynamic statement cache hit ratio” on page 329.
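One coding practice matters for both caches: the dynamic statement cache is matched on the statement text, so statements that embed literal values rarely produce cache hits. The following minimal sketch illustrates the idea; the QUOTE table and its columns are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CacheFriendlyQuery {
    // QUOTE, SYMBOL, and PRICE are hypothetical names used for illustration only.
    static double priceFor(Connection con, String symbol) throws Exception {
        // Avoid: embedding the literal would make every statement text unique,
        // so each execution would drive a full prepare:
        //   "SELECT PRICE FROM QUOTE WHERE SYMBOL = '" + symbol + "'"

        // Prefer: a parameter marker keeps the statement text constant, so repeated
        // executions can be satisfied from the DB2 global dynamic statement cache,
        // and the statement object can be reused from the WebSphere prepared
        // statement cache.
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT PRICE FROM QUOTE WHERE SYMBOL = ?")) {
            ps.setString(1, symbol);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getDouble(1) : 0.0;
            }
        }
    }
}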
Note: DB2 10 for z/OS has largely reduced latch class 24 (LC24) contention on the EDM
pool by removing the areas dedicated to cursor tables and skeleton cursor tables.
In a data sharing environment, because the deadlock detection process sends
inter-system XCF messages, the actual wait time is longer:
IRLMRWT + DEADLOCK TIME <= actual wait time <= IRLMRWT + 4 * DEADLOCK TIME
If you can afford a suspended process remaining inactive for 60 seconds, use the default.
Sometimes a timeout is caused by a badly behaving application. You can simulate the
workload in a testing environment and identify it:
1. Start with the default of 60 seconds.
2. Monitor the time-outs.
3. If none occur, reduce the value by a few seconds and cycle back to step 2.
4. If time-outs occur, identify the cause and correct the process if possible. Cycle back to step 2.
You can change the TIMEOUT value using the IRLM modify command.
This value is workload dependent. A high setting or a value of 0 might result in excessive
numbers of locks, which consumes storage and CPU time, whereas a small value can
trigger lock escalation frequently, which might lead to lock contention. Lock escalation is an
expensive process as well.
The default value is NO. Specify YES to improve concurrency if your applications can tolerate
returned data that might falsely exclude data that would be included as the result of undo
processing. This parameter does not influence whether uncommitted data is returned to an
application, because queries with isolation level RS or CS return only committed data.
You can obtain similar results by using the SQL SKIP LOCKED DATA clause.
The default value is NO. If your applications do not need to wait for the insert outcome of
other transactions, specify YES to get greater concurrency.
NO means that DB2 writes an accounting record when a DDF thread becomes inactive or when
sign-on occurs for an RRSAF thread.
A value of n (between 2 and 65535) means that DB2 writes an accounting record every n
accounting intervals for a given user.
Note: For JDBC type 2 connections, you might want to consider setting the accounting
interval data source property so that DB2 accounting information is written at DB2 commit.
For more information, see 8.4.2, “Creating DB2 accounting records at a transaction
boundary” on page 396.
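As a sketch only, the property can also be supplied as a JDBC connection property. The accountingInterval name and the value COMMIT are our assumptions for the property, it applies to JDBC type 2 connectivity on z/OS, and you should verify it against the driver documentation. In WebSphere Application Server you would set it as a data source custom property rather than in application code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class Type2AccountingInterval {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "db2r3");
        props.setProperty("password", args[0]);
        // Assumed property: cut DB2 accounting records at commit instead of at the
        // end of the connection (JDBC type 2 on z/OS only)
        props.setProperty("accountingInterval", "COMMIT");

        // JDBC type 2 URL form: no host and port, only the DB2 location name
        try (Connection con = DriverManager.getConnection("jdbc:db2:DB0Z", props)) {
            System.out.println("Connected with accountingInterval=COMMIT");
        }
    }
}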
DB2 statistics and accounting traces
We configured our DB2 members to collect DB2 statistics and accounting traces through
SMF. For this we performed the following DB2 system parameter (DSNZPARM) settings
through which DB2 accounting and statistics traces are started during DB2 member startup:
SMFACCT=(1,2,3,7,8)
The SMFACCT DSNZPARM controls the collection of DB2 accounting traces through
SMF. Besides plan level accounting information (classes 1, 2, 3), we also collected package
level accounting information (classes 7, 8). For Java workloads, you might want to consider
not collecting package level information, because you cannot use the JDBC package names
to perform application level profiling and reporting.
SMFSTAT=(1,3,4,5,6,7)
The SMFSTAT DSNZPARM controls the collection of DB2 statistics traces through SMF. In
our DB2 environment we collect statistics trace classes 1, 3, 4, 5, 6, and 7.
Miscellany
The following are other DB2 installation parameters that you need to note for Java
applications running in a WebSphere Application Server environment.
If you set this parameter to blanks, DB2 does not start the administrative task scheduler.
To support separation of the object categories we created the buffer pools shown in Table 4-4
in both data sharing members:
Simulate production like buffer pool sizes and catalog statistics
Tuning your DB2 applications or queries under production like conditions in a pre-production
environment is an important requirement that enables you to discover problems with
applications and SQL queries before application or SQL production deployment. For
this, it is recommended to have your tables reflect production like data volumes or, if this is
not an option, to configure your DB2 catalog to reflect production like statistics for the tables
against which you are going to run application workloads or perform query tuning.
For more information about cloning catalog statistics, refer to DB2 10 for z/OS Managing
Performance, SC19-2978, “Modeling your production system statistics in a test subsystem”.
When preparing an SQL statement, the optimizer takes important hardware configuration
values, such as buffer pool sizes, CPU speed, and the number of processors, into account to
make the most suitable cost-based access path decision.
In cases in which a DB2 test system is constrained on CPU and real storage resources, the
optimizer cannot provide you with the access path decision it would have made if the same
access path decision was taken in a DB2 production environment with more and faster CPUs,
more real storage, and bigger buffer pools. To help in such situations, you can use DB2
profiles to model your DB2 test environment based on the configuration of your production
environment. Without having the corresponding hardware resources installed and available
to your DB2 test system, you can use profiles to provide the following parameters to emulate
your production hardware and DB2 pool configuration for DB2 access path selection:
Processor speed
Number of processors
Maximum number of RID blocks
Sort pool size
Buffer pool size
For more information about this topic refer to “Modeling a production environment on a test
subsystem”, DB2 10 for z/OS, Managing Performance, SC19-2978.
We configured the member and group DVIPA in the BSDS using the IPV4 and GRPIPV4
parameters. With this BSDS setting, the TCP/IP port statements shown in “Port definition
without IP address binding” on page 157 must not have any BIND IP address configuration.
When DB2 starts, it automatically binds to the IP addresses given in the BSDS. DB2 accepts
connections not only on the IP addresses specified in the BSDS, but on any IP address that is
active on the TCP/IP stack. Additionally, connections are accepted on both secure and
non-secure SQL ports. In contrast, bind-specific TCP/IP port statements, as discussed
in “Port definition with IP address binding” on page 156, do not support secure DB2 SQL
ports.
Important: With IP addresses in the BSDS a client can connect to DB2 using IP
addresses other than the group or member IP address provided these are active on the
current TCP/IP stack. This can be useful as it provides the flexibility to choose between IP
addresses available on the current IP stack. However, DB2 clients connecting to DB2 for
z/OS using an IP address other than the DB2 group or member specific IP address might
break if a DB2 member has been moved to run in a different LPAR.
* DDF initial record setting
DDF LOCATION=DB0Z,RESPORT=39002,PORT=39000,SECPORT=0
* we are not using a VTAM LUNAME
DDF NOLUNAME
* DB2 to initialize the TCP/IP interface only
DDF IPNAME=IPDB0Z
When we initially set up our DDF configuration, we set up the secure port (SECPORT) to support
SSL encryption. During startup, DB2 issued the error message shown in Figure 4-52, indicating
that the TCP/IP IP address bindings on the PORT statement were not supported with DB2 secure
port configurations. As a consequence, we corrected the TCP/IP port configuration to remove
the IP address bindings and defined the IP addresses in the BSDS.
Important: If you use IP address bindings on the TCP/IP port configuration in your DDF
configuration, you cannot configure an SSL port in DB2. If you do, DB2 issues error
message DSNL512I and DDF initialization fails.
Upon successful command completion we displayed the status of DDF on both DB2
members and obtained the command output shown in Figure 4-53.
1. By specifying the DDF address space names in the port statements we restrict port usage
to the address space given in the port statement. This prevents others from accidentally
using this port number.
2. The BIND parameter causes the specified address space to bind to the IP address given
in the same port statement.
3. SHAREPORT allows for D0Z1DIST and D0Z2DIST to share port 39000 which represents
the well-known SQL port of the data sharing group.
Port definition without IP address binding
We used the port configuration shown in Example 4-18 to define the TCP/IP ports required to
support the BSDS configuration shown in “Configuration with IP address in the BSDS” on
page 154.
The DDF part of the D0Z1MSTR startup messages confirmed our customization:
1. The DB2 member is ready to accept connections on SQL port 39000 and the member
specific IP address
2. The DB2 member is ready to accept connections on SQL port 39000 and the data sharing
group IP address
3. An IBM VTAM® LU name is not required by DRDA workloads. Most DDF connections use
TCP/IP. Configuring DB2 without a VTAM LU name saves resources required for
initializing and maintaining the DB2 VTAM interface.
4. SECPORT was set to 0 to disable DDF SSL processing. We intentionally used that
configuration option as the DB2 DDF address space was placed in a secure network, front
ended by WebSphere Application Server. SSL encryption was therefore not required.
5. We set up DDF to use IPNAME to make sure the DB2 VTAM interface is not initialized
during DB2 startup.
As shown in Figure 4-55 on page 159, you can alternatively issue the DISPLAY DDF DETAIL
command to review the DDF configuration of an active DB2 data sharing member. The
command output additionally shows the following DB2 system parameter settings and
DDF thread management related information that are important for system monitoring and
tuning:
DT=I, DSNZPARM CMTSTAT=INACTIVE
CONDBAT=10000, DSNZPARM CONDBAT=10000
MDBAT=200, DSNZPARM MAXDBAT=200
ADBAT=0, Current number of database access threads
QUEDBAT=0, Cumulative counter that is incremented each time the MDBAT limit
(see note 1 below) has been reached
INADBAT=0, Current number of inactive DBATs. This value only applies if the dt value
specified in the DSNL090I message indicates that DDF INACTIVE support is enabled.
Any database access threads reflected here can also be observed in the DISPLAY
THREAD TYPE(INACTIVE) command report.
CONQUED=0, Current number of connection requests that have been queued and are
waiting to be serviced. This value only applies if the dt value that is specified in the
DSNL090I message indicates that DDF INACTIVE support is enabled.
DSCDBAT=0, Current number of disconnected database access threads. This value only
applies if the dt value specified in the DSNL090I message indicates that DDF INACTIVE
support is enabled.
INACONN=0, Current number of inactive connections. This value only applies if the dt
value specified in the DSNL090I message indicates that DDF INACTIVE support is
enabled.
1
Maximum number of database access threads as determined by the “MAX REMOTE ACTIVE” value in the
DSNTIPE installation panel.
-D0Z1 DIS DDF DETAIL
DSNL080I -D0Z1 DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
DSNL081I STATUS=STARTD
DSNL082I LOCATION LUNAME GENERICLU
DSNL083I DB0Z -NONE -NONE
DSNL084I TCPPORT=39000 SECPORT=0 RESPORT=39002 IPNAME=IPDB0Z
DSNL085I IPADDR=::9.12.4.153
DSNL086I SQL DOMAIN=d0zg.itso.ibm.com
DSNL086I RESYNC DOMAIN=d0z1.itso.ibm.com
DSNL089I MEMBER IPADDR=::9.12.4.138
DSNL090I DT=I CONDBAT= 10000 MDBAT= 200
DSNL092I ADBAT= 0 QUEDBAT= 0 INADBAT= 0 CONQUED= 0
DSNL093I DSCDBAT= 0 INACONN= 0
DSNL100I LOCATION SERVER LIST:
DSNL101I WT IPADDR IPADDR
DSNL102I 30 ::9.12.4.142
DSNL102I 12 ::9.12.4.138
DSNL105I CURRENT DDF OPTIONS ARE:
DSNL106I PKGREL = BNDOPT
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
Figure 4-55 DB2 display DDF command output
To enable remote DB2 clients to connect to DB2 using a group domain name, we then had the
group and member DVIPA addresses registered in our domain name server (DNS). To test the
setup, we configured the DB2 for LUW database directory as shown in Example 4-19.
As shown in Example 4-20 we were then able to use the DB2 command line processor (CLP)
to connect to the database that we cataloged in Example 4-19.
In the example shown in Example 4-20 on page 159, we issued a DB2 PING command to
measure the DB2 for z/OS server turnaround elapsed time. The average network turnaround
time shown is higher than 0.08 seconds, indicating high network latency. Depending on your
throughput requirements, you should expect to see turnaround times well below 0.001
seconds.
For more information about setting up DB2 for z/OS for a distributed load balancing and fault
tolerant configuration refer to 3.3, “High availability configuration options” on page 92 and
DB2 9 for z/OS Data Sharing: Distributed Load Balancing and Fault Tolerant Configuration,
REDP-4449.
A High Performance DBAT is terminated after it has processed 200 units of work (this number
is not user changeable). On the next request by the connection to start a unit of work, a new
DBAT is created or a pooled DBAT is assigned to process the unit of work. Normal idle thread
time-out detection is applied to these DBATs. IDTHTOIN does not apply if the DBAT is waiting
for the next client unit of work.
These are the steps when dealing with High Performance DBATs:
1. BIND or REBIND packages with RELEASE(DEALLOCATE)
We recommend binding the JDBC packages that you want to use with High Performance
DBATs into their own package collection. For information about the procedure we used to
bind the JDBC packages into their own collections, refer to 4.3.9, “Bind JDBC packages”
on page 165.
2. Use -MODIFY DDF PKGREL(COMMIT)
When you want to increase resource concurrency and the likelihood for your SQL DDL,
BIND operations, and utilities to be successfully executed while the application workload is
running, you can deactivate High Performance DBAT by issuing the command -MODIFY
DDF PKGREL(COMMIT).
3. Use the -MODIFY DDF PKGREL(BNDOPT) command
This command honors the RELEASE bind option (COMMIT or DEALLOCATE) that was
specified at bind time for any package that is used for remote client processing.
Example 4-21 shows the results of the MODIFY DDF PKGREL command.
Example 4-22 shows the results of the -DIS DDF command. You can check setting of
PKGREL through message DSNL106I.
Because activating High Performance DBAT for distributed applications avoids pooling of
DBATs, you might have to increase subsystem parameter MAXDBAT to avoid queuing of
distributed requests.
By using these commands, you do not need to perform REBIND to activate or deactivate High
Performance DBAT.
For more information about how we created these package collections refer to 4.3.9, “Bind
JDBC packages” on page 165.
Figure: DB2 for z/OS connectivity options: Java applications using JDBC type 2 or type 4 (JCC), and ODBC, CLI, and .NET clients, connecting directly or through a DB2 Connect server or gateway
1. A Java application running on z/OS uses JDBC type 2 to connect to DB2 for z/OS.
2. A Java application uses JDBC type 4 to directly or indirectly connect to DB2 for z/OS. The
Java application can run on z/OS or non-z/OS platforms (see the connection sketch after
this list).
3. A multiplatform ODBC, .NET, or DB2 call level interface (CLI) client directly or indirectly
connects to DB2 for z/OS using the IBM Data Server Driver for ODBC and CLI.
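The connection URL form determines which of these Java paths the IBM Data Server Driver for JDBC and SQLJ uses. The following minimal sketch uses the location name and group domain name from our environment; the user ID is a placeholder and the password is passed as an argument:

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectivityExamples {
    public static void main(String[] args) throws Exception {
        String user = "db2r3";
        String password = args[0];

        // 1. JDBC type 2 (z/OS only): no host or port, just the DB2 location name.
        //    The driver attaches locally through RRSAF.
        try (Connection t2 = DriverManager.getConnection(
                "jdbc:db2:DB0Z", user, password)) {
            System.out.println("Type 2 connection established");
        }

        // 2. JDBC type 4: host (or group DVIPA domain name), SQL port, and location name.
        //    Works from z/OS and from distributed platforms over DRDA.
        try (Connection t4 = DriverManager.getConnection(
                "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", user, password)) {
            System.out.println("Type 4 connection established");
        }
    }
}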
Driver configuration
As explained in 4.1.7, “UNIX System Services file system configuration” on page 123 the
JDBC driver related files have been installed by SMP/E and made available to our runtime
environment by using symbolic links that we defined to point to the appropriate zFS file
system data sets.
As a prerequisite for binding the JDBC packages using the DB2Binder utility under UNIX
System Services we need to complete the UNIX System Services JDBC configuration to
support High Performance DBAT and the DB2 command line processor.
Based on the JDBC install base we carry out the following configuration tasks:
Set DB2 subsystem parameter DESCSTAT to YES as already discussed in “DESCRIBE
FOR STATIC field (DESCSTAT)” on page 151
STEPLIB libraries
The following load libraries need to be available through STEPLIB data set allocation in
case the application uses JDBC type 2 connections to access DB2:
– DB0ZT.SDSNEXIT
– DB0ZT.SDSNLOAD
– DB0ZT.SDSNLOD2
The SDSNLOD2 library contains the JDBC type 2 DLL load modules which are referred to
by UNIX System Services through external link definitions (see 4.3.8, “JDBC type 2 DLL
and the SDSNLOD2 library” on page 164 for details).
Our WebSphere Application Server environment defines these data sets in application
server STEPLIB concatenation to cater for the JDBC type 2 requirement.
Modify the global UNIX System Services profile (/etc/profile) to customize the environment
variable settings to reflect the JDBC libraries, paths, and files that the IBM Data Server
Driver for JDBC and SQLJ uses. We used the export commands shown in Figure 4-57 to
perform these changes.
export PATH=/usr/lpp/db2/d0zg/jdbc/bin:$PATH
export LIBPATH=/usr/lpp/db2/d0zg/jdbc/lib:$LIBPATH
export CLASSPATH=/usr/lpp/db2/d0zg/jdbc/classes/db2jcc.jar: \
/usr/lpp/db2/d0zg/jdbc/classes/db2jcc_javax.jar: \
/usr/lpp/db2/d0zg/jdbc/classes/sqlj.zip: \
/usr/lpp/db2/d0zg/jdbc/classes/db2jcc_license_cisuz.jar: \
$CLASSPATH
Figure 4-57 JDBC-related /etc/profile changes
For more information about installing and setting up the IBM Data Server Driver for JDBC
and SQLJ, refer to Chapter 8, “Installing the IBM Data Server Driver for JDBC and SQLJ”, in
DB2 10 for z/OS Application Programming Guide and Reference for Java, SC19-2970.
ls -l /usr/lpp/db2/d0zg/jdbc/lib
1 2
erwxrwxrwx 8 Nov 16 2010 libdb2jcct2zos.so -> DSNAQJL2
erwxrwxrwx 8 Nov 16 2010 libdb2jcct2zos4.so -> DSNAJ3L2
erwxrwxrwx 8 Nov 16 2010 libdb2jcct2zos4_64.so -> DSNAJ6L2
erwxrwxrwx 8 Nov 16 2010 libdb2jcct2zos_64.so -> DSNAQ6L2
Figure 4-58 JDBC type 2 DLL external links
1. The first character of the command output (the e character) identifies the file as an
external link.
2. Following the right arrow the output shows the name of the external load module the
external link points to.
When the runtime environment loads a DLL that refers to an external load module it uses the
following search order when locating the DLL:
1. STEPLIB
2. Link Pack Area (LPA)
3. z/OS Linklist
To be able to use the JDBC type 2 driver we included the SDSNLOD2 library in the
WebSphere Application Server STEPLIB library concatenation. When we listed the members
of the SDSNLOD2 library as shown in Figure 4-59 we located the external load module
names referred to in Figure 4-58.
BROWSE DB0ZT.SDSNLOD2
Command ===>
Name Size TTR AC AM RM
_________ DSNAJ3L2 00064F68 000010 00 31 ANY
_________ DSNAJ6L2 00082FB8 00000E 00 64 ANY
_________ DSNAQJL2 00064E40 00000F 00 31 ANY
_________ DSNAQ6L2 00082E48 00000D 00 64 ANY
Figure 4-59 JDBC type 2 DLL in SDSNLOD2
2 dynamic link library
During DB2 installation SMP/E executes the UNIX System Services commands shown in
Figure 4-60 to associate the SDSNLOD2 load modules shown in Figure 4-59 on page 164
with the UNIX System Services path names shown in Figure 4-58 on page 164.
ln -e DSNAQJL2 /usr/lpp/db2a10/jdbc/lib/libdb2jcct2zos.so
ln -e DSNAJ3L2 /usr/lpp/db2a10/jdbc/lib/libdb2jcct2zos4.so
ln -e DSNAJ6L2 /usr/lpp/db2a10/jdbc/lib/libdb2jcct2zos4_64.so
ln -e DSNAQ6L2 /usr/lpp/db2a10/jdbc/lib/libdb2jcct2zos_64.so
Figure 4-60 UNIX System Services SDSNLOD2 external link definition
By using dedicated JDBC collections we deliberately do not change the NULLID collection ID
which is commonly used by the majority of DB2 remote applications. Globally rebinding
packages belonging to the NULLID collection with RELEASE(DEALLOCATE) is not suitable,
because some of your workload better qualifies for using bind options KEEPDYNAMIC(YES)
and RELEASE(COMMIT). See 4.3.6, “High Performance DBATs” on page 160 where we
discuss these bind options.
Important: The DB2Binder utility requires a JDBC type 4 connection. Binding the JDBC
packages therefore requires the DB2 Distributed Data Facility (DDF) address space to be
operating, even if you only plan to use JDBC type 2 connections, which do not require DDF.
Package collection JDBCHDBAT
As recommended in “JDBC bind recommendation” on page 161 we create JDBC package
collections to provide support for High Performance DBAT. JDBC applications potentially
enable themselves for High Performance DBAT processing by including the JDBCHDBAT
collection ID in their setCurrentPackagePath data source custom property setting.
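In a stand-alone Java program, the same effect can be achieved directly on the driver data source, as in the following sketch. Our WebSphere data sources set currentPackagePath as a custom property instead; the connection details reflect our environment and the password is passed as an argument:

import java.sql.Connection;
import javax.sql.DataSource;
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class HighPerfDbatDataSource {
    static DataSource create(String password) {
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setServerName("d0zg.itso.ibm.com");   // group DVIPA domain name
        ds.setPortNumber(39000);                 // SQL port of the data sharing group
        ds.setDatabaseName("DB0Z");              // DB2 location name
        ds.setDriverType(4);                     // High Performance DBAT applies to DDF work
        ds.setUser("db2r3");
        ds.setPassword(password);
        // Point the connection at the collection whose packages were bound with
        // RELEASE(DEALLOCATE) so that its DBATs qualify for High Performance DBAT
        ds.setCurrentPackagePath("JDBCHDBAT");
        return ds;
    }

    public static void main(String[] args) throws Exception {
        try (Connection con = create(args[0]).getConnection()) {
            System.out.println("Connection uses package collection JDBCHDBAT");
        }
    }
}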
To bind the JDBC packages into collection JDBCHDBAT, we executed the DB2Binder command
shown in Example 4-24 under UNIX System Services.
For information about the execute privileges that we granted on these packages, refer to
“Grant execute privileges on JDBC packages” on page 171.
Package collection JDBCNOHDBAT
In z/OS UNIX System Services, we ran the DB2Binder command shown in Example 4-25 to
bind the JDBC packages into collection JDBCNOHDBAT.
For information about the execute privileges that we granted on these packages, refer to
“Grant execute privileges on JDBC packages” on page 171.
Because the CLP is a Java application that connects to DB2 using a JDBC type 4 connection,
it provides an excellent tool to check out your local JDBC configuration. You can invoke the
CLP from a UNIX System Services shell, so it can be invoked from telnet, secure shell, under
TSO from OMVS, from BPXBATCH, or from the JZOS batch launcher. Because the CLP
connects to DB2 through a JDBC type 4 connection, it also triggers zIIP offload for local
database connections.
For more information about implementing and using the DB2 UNIX System Services CLP
refer to:
GC19-2974-07, DB2 10 for z/OS, Installation and Migration Guide, Configuring the DB2
command line processor
SC19-2972-04, DB2 10 for z/OS, Command Reference, Chapter 9. Command line
processor
DB2R3 @ SC64:/u/db2r3>ls -l /usr/lpp/db2/d0zg/jdbc/samples
total 66
drwxr-xr-x 2 HARJANS TTY 320 Nov 16 2010 IBM
-rw-r--r-- 2 HARJANS TTY 13783 Jun 12 15:06 TestJDBC.java
-rw-r--r-- 2 HARJANS TTY 11752 Jun 12 15:06 TestSQLJ.sqlj
Figure 4-63 TestJDBC samples directory
The TestJDBC application exercises basic JDBC functionality (by default through a type 2
z/OS connection) using the DB2 JDBC driver. TestJDBC receives its parameters (a JDBC
connection URL in either type 2 or type 4 format) as input arguments. In Figure 4-64 we use
the TestJDBC application to confirm appropriate driver installation. For more information about
the TestJDBC application and the input parameters it supports, see the inline documentation
of the TestJDBC Java program.
javac TestJDBC.java
java TestJDBC jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z db2r3 <password>
Acquiring DatabaseMetaData
successful... product version: DSN10015
Creating Statement
successful creation of Statement
We did not use DB2 SSL encryption because our DB2 data sharing members run in a secure
network, front ended by WebSphere Application Server.
To allow these users to connect to DB2 through JDBC type 2 or JDBC type 4 we executed the
RACF commands shown in Example 4-26.
JDBC type 4 does not use user-bound application plans; it uses DB2 packages.
With JDBC type 2, you can bind your own application plan with a package list referring to the
JDBC package collection ID that you intend to use. If you intend to use an application plan for
your JDBC type 2 connections, you have to take care of plan authorization. In our workload
scenario, we do not use an application plan for JDBC type 2 connections. Instead, we use
JDBC packages, which we authorized as described in “Grant execute privileges on JDBC
packages” on page 171. Plan authorization for our DayTrader-EE6 JDBC type 2 workload
scenario was therefore not required.
Grant execute privileges on JDBC packages
We bound the JDBC packages into the package collections JDBCHDBAT and
JDBCNOHDBAT. For these collections we ran the SQL data control language (DCL)
statements shown in Example 4-27 to revoke the execute privilege from PUBLIC and to grant
execute authorization to the packages of these collection IDs. The GRANT TO PUBLIC was
implicitly performed by the DB2Binder utility that we explained in 4.3.9, “Bind JDBC
packages” on page 165.
Figure 4-65 WebSphere deployment manager data source test error message
The error message shown in Figure 4-65 refers to node mzdmnode with user MZASRU not
being authorized to execute package NULLID.SYSSTAT, which we did not expect because we
had configured the data source to use package collection JDBCHDBAT and to use the JAAS
alias user name for creating the connection to DB2. Instead, the data source connection
request was performed by the WebSphere Application Server deployment manager address
space user (MZASRU), trying to probe the DB2 connection using package NULLID.SYSSTAT.
After we had granted the deployment manager address space user the privilege to execute
the NULLID.SYSSTAT package as shown in Example 4-28 we successfully completed the
data source connection test using the ISC application.
Upon data source connection test completion we received the ISC message box shown in
Figure 4-67.
4.3.13 Trusted context
A trusted context object is entirely defined in DB2 and is used to establish a trusted
relationship between DB2 and an external entity. An external entity includes the following
types of DB2 for z/OS clients:
A DB2 allied address space that locally connects to DB2 through the RRSAF, TSO, or CAF
attachment facility interface. WebSphere Application Server connecting to DB2 through
JDBC type 2 uses the RRSAF DB2 attachment interface.
Note: APAR PM69429 adds support for Trusted Context calls for a CAF application.
During connection processing DB2 evaluates a set of trust attributes to determine if a specific
context is to be trusted. The trust attributes specify a set of characteristics about a specific
connection. These attributes include the IP address, domain name, or SERVAUTH security
zone name for remote DRDA clients and the job or task name for local clients.
In case the trusted context applies, DB2 performs all authorization checking using the
authorization ID or database role that is assigned by the trusted context.
A trusted context is based on the system authid (in WebSphere Application Server often
referred to as the technical data source user) and a set of trust attributes. We describe the
trust attributes that we used for running the DayTrader-EE6 application in “DayTrader-EE6
JDBC type 2 related trusted context attributes” on page 174 and “DayTrader-EE6 JDBC type 4
related trusted context attributes” on page 174.
Example 4-30 JDBC type 2 trusted context with system authid and job name
CREATE TRUSTED CONTEXT CTXDTRADET2
BASED UPON CONNECTION USING SYSTEM AUTHID MZADMIN 1
ATTRIBUTES (JOBNAME 'MZSR01*') 2
DEFAULT ROLE DTRADEROLE 3
WITHOUT ROLE AS OBJECT OWNER
ENABLE
1. Data source JAAS alias user name. This user name is often referred to as the data source
technical user.
2. Address space names of our WebSphere Application Server servant regions. Our STC
names start with the characters MZSR01.
3. Optional: DB2 role to be assigned when the trusted context is applied.
We use the trusted context shown in Example 4-30 to run the JDBC type 2 DayTrader-EE6
workload. Because database privileges are exercised by role DTRADEROLE the data source
user MZADMIN does not need to hold any privileges in DB2. This solves an important audit
concern as MZADMIN can no longer be used to access data in DB2 within or outside the
trusted context. Granting privileges to a role increases data security further as a role is
unusable outside a trusted context.
Example 4-31 JDBC type 4 trusted context with system authid and address
CREATE TRUSTED CONTEXT CTXDTRADET4
BASED UPON CONNECTION USING SYSTEM AUTHID MZADMIN 1
DEFAULT ROLE DTRADEROLE 3
WITHOUT ROLE AS OBJECT OWNER
ENABLE
NO DEFAULT SECURITY LABEL
ATTRIBUTES (
ENCRYPTION 'NONE',
ADDRESS 'wtsc64.itso.ibm.com', 2
ADDRESS 'd0z1.itso.ibm.com',
ADDRESS 'wtsc63.itso.ibm.com',
ADDRESS 'd0z2.itso.ibm.com'
) ;
1. Data source JAAS alias user name
2. Domain names the application server instance runs on
3. Optional: DB2 role to be assigned if the trusted context is to be applied
We use the trusted context shown in Example 4-31 on page 174 to run the JDBC type 4
DayTrader-EE6 workload. Because database privileges are exercised by role DTRADEROLE
the data source user MZADMIN does not need to hold any privileges in DB2. This solves an
important audit concern as MZADMIN can no longer be used to access data in DB2 within or
outside the trusted context. Granting privileges to a role increases data security further as a
role is unusable outside the trusted context.
4.3.16 Data Web Service servlet with trusted context AUTHID switch
The IBM Data Web Service servlet application that we use requires the application server
data source configuration steps described in 5.9, “Enabling trusted context for applications
that are deployed in WebSphere Application Server” on page 276, because this application
has been configured to use HTTP basic authentication to enable the application server to
pass the ID of the authenticated user for the trusted context AUTHID switch to DB2.
Example 4-32 Data Web Service query to select DB2 special registers
SELECT
CURRENT CLIENT_ACCTNG AS CLIENT_ACCTNG
,CURRENT CLIENT_APPLNAME AS CLIENT_APPLNAME
,CURRENT CLIENT_USERID AS CLIENT_USERID
,CURRENT CLIENT_WRKSTNNAME AS CLIENT_WRKSTNNAME
,CURRENT PATH AS PATH
,CURRENT SCHEMA AS SCHEMA
,CURRENT TIMESTAMP AS TIMESTAMP
,CURRENT TIMEZONE AS TIMEZONE
,CURRENT SERVER AS LOCATION
,GETVARIABLE('SYSIBM.DATA_SHARING_GROUP_NAME') AS GROUPNAME
,GETVARIABLE('SYSIBM.SSID') AS SSID
,GETVARIABLE('SYSIBM.SYSTEM_NAME') AS ATTACH
,GETVARIABLE('SYSIBM.VERSION') AS DB2VERSION
,GETVARIABLE('SYSIBM.PLAN_NAME') AS PLAN
,GETVARIABLE('SYSIBM.PACKAGE_NAME') AS PACKAGE
,GETVARIABLE('SYSIBM.PACKAGE_SCHEMA') AS COLLID
FROM SYSIBM.SYSDUMMY1;
Example 4-33 Data Web Service query to invoke the GRACFGRP external scalar UDF
SELECT T.* FROM XMLTABLE
('$d/GROUPS/GROUP'
PASSING XMLPARSE (DOCUMENT GRACFGRP()) AS "d"
COLUMNS
"RACF User" VARCHAR(08) PATH '../USER/text()',
The external UDF GRACFGRP is an assembler program that extracts the RACF groups that
the current UDF caller is connected to from the RACF ACEE control block, which has been
created by DB2 because of the SECURITY USER UDF attribute. GRACFGRP then returns
an XML document containing the RACF group names as a VARCHAR scalar value. The
listing of the DDL and ASM is provided in
For information about how to convert SQL statements into IBM Data Web Services, refer to
IBM Data Studio V2.1: Getting Started with Web Services on DB2 for z/OS, REDP-4510.
CREATE ROLE WASTESTDEFAULTROLE;
CREATE ROLE WASTESTROLE;
For each IP address shown in Example 4-34 on page 177 we ran the UNIX System
Services command shown in Example 4-35 to determine the domain names that we had
to consider in our trusted context definition.
Create a trusted context that refers to the RACF profile created in Example 4-36 in its
WITH USE FOR clause as shown in Example 4-37.
Example 4-37 Create trusted context using RACF DSNR trusted context profile
CREATE TRUSTED CONTEXT CTXWASTESTT5
BASED UPON CONNECTION USING SYSTEM AUTHID WASSRV
DEFAULT ROLE WASTESTDEFAULTROLE
WITHOUT ROLE AS OBJECT OWNER
ENABLE
NO DEFAULT SECURITY LABEL
ATTRIBUTES (
ENCRYPTION 'NONE',
ADDRESS 'wtsc64.itso.ibm.com',
ADDRESS 'd0z1.itso.ibm.com',
ADDRESS 'wtsc63.itso.ibm.com',
ADDRESS 'd0z2.itso.ibm.com'
)
WITH USE FOR
EXTERNAL SECURITY PROFILE "D0ZG.TRUSTEDCTX.D0ZGWAS"
ROLE WASTESTROLE 1
WITHOUT AUTHENTICATION
1. The trusted context DDL shown in Example 4-37 on page 178 uses the same attributes as
the trusted context DDL shown in Figure 4-68 on page 177, except for the RACF DSNR
profile D0ZG.TRUSTEDCTX.D0ZGWAS, which we created in Example 4-36 on page 178.
During Data Web Service testing we collected the DB2 command output shown in
Figure 4-69 which confirms trusted context usage with exactly the attributes we defined in
Figure 4-68 on page 177.
!-----------------------------------------------------------------------
!CONNECTION TYPE: REUSED STATUS: FAILED SQLCODE: -20361
!SECURITY LABEL : N/P
!
!TRUSTED CONTEXT NAME: CTXWASTESTT4
!SYSTEM AUTHID USED : WASTEST
!REUSE AUTHID : WASUSER
!-----------------------------------------------------------------------
Figure 4-70 Trusted context IFCID 269 record trace with SQLCODE -20361
The application server log provided the corresponding runtime message shown in
Figure 4-71 indicating the auth ID switch failure.
In the workload scenario used in this book, we focus on using profiles to monitor database
access threads and connections. Other use cases for profiles are not discussed. If you
need further information about these additional DB2 profile use cases, refer to “Using
profiles to monitor and optimize performance” in DB2 10 for z/OS Managing Performance,
SC19-2978.
This enhancement allows you to enforce the thresholds (limits) that were previously available
only at the system level using DSNZPARM, such as CONDBAT, MAXDBAT, and IDTHTOIN,
at a more granular level. Setting these limits allows you to control connections using the
following categories:
IP Address (LOCATION)
Product Identifier (PRDID)
Role and Authorization Identifier (ROLE, AUTHID)
Collection ID and Package Name (COLLID, PKGNAME)
DB2 client information (CLIENT_APPLNAME, CLIENT_USERID,
CLIENT_WORKSTNNAME)
This enhancement also provides the option to define the type of action to take after these
thresholds are reached. You can display a warning message or an exception message when
the connection, thread, and idle thread timeout thresholds are exceeded. If you choose to
display a warning message, a DSNT771I or DSNT772I message is issued, depending on
DIAGLEVEL, and processing continues. In the case of exception processing, a message is
displayed on the console and the action (that is, queuing, suspension, or rejection) is taken.
These tables are created by installation job DSNTIJSG. The profile history and profile
attributes history tables have the same columns as their corresponding profile and profile
attributes tables, except for the STATUS column, which is added to keep track of profile
status information, and the REMARKS column, which does not exist in the history tables.
The STATUS column indicates whether a profile was accepted or why it was rejected during
START PROFILE command execution.
Collection ID and/or package name: specify one or both of the following columns:
COLLID
PKGNAME
For connection monitoring you can only filter on IP address or domain name for which you
provide the filter value by populating the profile table LOCATION column.
Besides the PROFILEID, which also is the profile table primary key, there are further columns
that you use to provide information about the monitoring filter criteria identifying the thread,
connection, or SQL statement you want monitoring to be performed for.
For MAXDBAT and IDTHTOIN monitoring you can enter the filter criteria using any of the
combinations shown in Table 4-5.
For CONDBAT monitoring, you can only specify an IP address or a domain name in the
LOCATION column. Other combinations of criteria are not accepted for the CONDBAT
monitoring function.
To provide this information, you insert a row into the profile attributes table
(SYSIBM.DSN_PROFILE_ATTRIBUTES) to store the required threshold and action related
information. For illustration purposes, we provide a list of the profile attributes table columns
in Figure 4-73. The table contains a PROFILEID column that corresponds to a profile table
row with the same PROFILEID column value.
PROFILEID 1 INTEGER 4
KEYWORDS 2 VARCHAR 128
ATTRIBUTE1 3 VARCHAR 1024
ATTRIBUTE2 4 INTEGER 4
ATTRIBUTE3 5 FLOAT 8
ATTRIBUTE_TIMESTAM 6 TIMESTMP 10
REMARKS 7 VARCHAR 762
Figure 4-73 DSN_PROFILE_ATTRIBUTES table
For DBAT or remote connection monitoring, you can enter one of the attribute values shown in
Table 4-6 to provide monitoring thresholds and actions, depending on the kind of thread,
connection, or idle thread monitoring you want to perform. Profile attribute column
ATTRIBUTE3 is not used for thread and connection monitoring.
-START PROFILE
Triggered by the START PROFILE command DB2 starts profile rows with the value Y in the
PROFILE_ENABLED profile table column (SYSIBM.DSN_PROFILE_TABLE column
PROFILE_ENABLED = Y).
In data sharing the START and STOP PROFILE commands have member scope and affect
only the data sharing member they have been issued for. You therefore need to issue these
commands for each data sharing member you want to have profile monitoring started or
stopped.
In our environment, we use the administrative task scheduler to issue the START PROFILE
command at DB2 startup time. In Appendix A, “DB2 administrative task scheduler” on
page 483, we describe the administrative task scheduler (ADMT) setup to trigger batch jobs
and DB2 commands, and for autonomic statistics monitoring.
Stopping profiles
You stop profiles by issuing the STOP PROFILE command:
-STOP PROFILE
DSN_PROFILE_HISTORY table
During profile activation DB2 validates each profile to be started and documents its activation
status by inserting one row into table SYSIBM.DSN_PROFILE_HISTORY. As shown in
Figure 4-74 on page 185 the DSN_PROFILE_HISTORY table consists of column information
of the DSN_PROFILE_TABLE (except for the REMARKS column) plus a STATUS column to
provide information about the profile activation status.
AUTHID 1 VARCHAR 128
PLANNAME 2 VARCHAR 24
COLLID 3 VARCHAR 128
PKGNAME 4 VARCHAR 128
LOCATION 5 VARCHAR 254
PROFILEID 6 INTEGER 4
PROFILE_TIMESTAMP 7 TIMESTMP 10
PROFILE_ENABLED 8 CHAR 1
GROUP_MEMBER 9 VARCHAR 24
STATUS 10 VARCHAR 254
ROLE 11 VARCHAR 128
PRDID 12 CHAR 8
CLIENT_APPLNAME 13 VARCHAR 255
CLIENT_USERID 14 VARCHAR 255
CLIENT_WRKSTNNAME 15 VARCHAR 255
Figure 4-74 DSN_PROFILE_HISTORY table
DSN_PROFILE_ATTRIBUTES_HISTORY table
Profile activation that we describe in “DSN_PROFILE_HISTORY table” on page 184
furthermore triggers profile attribute validation.
During START PROFILE execution DB2 externalizes the attribute status of each profile
attribute involved by inserting corresponding rows into the profile attributes history table
(SYSIBM.DSN_PROFILE_ATTRIBUTES_HISTORY).
PROFILEID 1 INTEGER 4
KEYWORDS 2 VARCHAR 128
ATTRIBUTE1 3 VARCHAR 1024
ATTRIBUTE2 4 INTEGER 4
ATTRIBUTE3 5 FLOAT 8
ATTRIBUTE_TIMESTAM 6 TIMESTMP 10
STATUS 7 VARCHAR 254
Figure 4-75 DSN_PROFILE_ATTRIBUTES_HISTORY table
The STATUS column indicates whether the profile was accepted and, when a profile was
rejected, contains information about the reason for the rejection.
To verify the profile activation status of the attributes that we defined for the profile we ran the
query shown in Example 4-39.
The status returned by the queries shown in Example 4-38 on page 186 and in Example 4-39
on page 186 confirms that the profile with PROFILEID = 1 was successfully activated on
member D0Z2.
We then ran the SQL statement shown in Example 4-41 to insert a corresponding row into
DSN_PROFILE_ATTRIBUTES table.
The attributes shown in Example 4-41 on page 187 define active thread monitoring for
PROFILEID 1, allowing for a maximum of seven active threads, causing DB2 to issue warning
message DSNT772I in case this number of active threads is exceeded. Processing continues
with no thread queuing or suspension.
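The following sketch shows the kind of INSERT that is behind such an attribute definition, issued here through JDBC. It is an illustration only, not the book's Example 4-41: it assumes that the profile row with PROFILEID 1 already exists in SYSIBM.DSN_PROFILE_TABLE, and the REMARKS text is arbitrary. The change only takes effect after the -START PROFILE command has been issued on each data sharing member:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class AddMonitorThreadsAttribute {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", "db2r3", args[0]);
             PreparedStatement ps = con.prepareStatement(
                 "INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES "
               + "(PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2, ATTRIBUTE3, "
               + " ATTRIBUTE_TIMESTAMP, REMARKS) "
               + "VALUES (?, ?, ?, ?, NULL, CURRENT TIMESTAMP, ?)")) {

            ps.setInt(1, 1);                        // matching DSN_PROFILE_TABLE row
            ps.setString(2, "MONITOR THREADS");     // active thread monitoring
            ps.setString(3, "WARNING_DIAGLEVEL2");  // issue DSNT772I, processing continues
            ps.setInt(4, 7);                        // threshold: seven active threads
            ps.setString(5, "DayTrader thread monitoring");
            ps.executeUpdate();                     // auto-commit is on by default
        }
        // Afterwards, issue -START PROFILE on each member to activate the change.
    }
}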
The output of the ADMT initiated DB2 command processing is shown in Figure 4-76.
-START PROFILE
DSNT741I -D0Z1 DSNT1SDV START PROFILE IS COMPLETED.
DSN9022I -D0Z1 DSNT1STR 'START PROFILE' NORMAL COMPLETION
DSN
-DIS PROFILE
DSNT753I -D0Z2 DSNT1DSP DISPLAY PROFILE REPORT FOLLOWS:
STATUS = ON
TIMESTAMP = 2012-10-25-14.18.04.615794
PUSHOUTS = 0 OUT OF 10000
DISPLAY PROFILE REPORT COMPLETE.
DSN9022I -D0Z2 DSNT1DSP 'DISPLAY PROFILE' NORMAL COMPLETION
Figure 4-76 START PROFILE command
We then ran the query shown in Example 4-43 to verify the status of the monitoring attributes.
The status returned by the queries shown in Example 4-42 on page 188 and Example 4-43
confirms that our thread monitoring profile was successfully activated.
This monitoring function can assist you in identifying outdated levels of DB2 client software
used in your environment. After you have identified the clients and remote locations you can
use profiles to issue warnings in case such back level clients are being used and finally
disable the use of such client levels after a planned grace period has expired.
In DB2 10 for z/OS, this and many other constraints are relieved, which enables DB2 for z/OS
to support a number of database access threads that is sufficient to replace existing
DB2 Connect functionality with DB2 clients that directly connect to the DB2 for z/OS
server. An illustration of that architecture is shown in Figure 4-78.
Figure 4-78 DB2 Client configuration to directly access DB2 for z/OS
Figure 4-78 shows Java clients directly connecting to DB2 for z/OS using JDBC type 4
connections while the DB2 Connect infrastructure still is in place. This approach allows for a
staged migration of DB2 clients in which DB2 client access is redirected from using DB2
Connect to DB2 direct access by updating the DB2 client configuration as illustrated in
Figure 4-79 on page 191.
Changing the DB2 client configuration in that situation enables you to make use of new DB2
10 for z/OS configuration options. For instance you can perform online changes to
dynamically activate DB2 location aliases allowing you to direct workloads to the data sharing
group, to a subset of data sharing members, or to a single data sharing member.
Client configuration
Client connectivity information needs to change from pointing to the DB2 Connect server
to pointing to DB2 for z/OS, as the sketch below shows.
If DB2 for z/OS is a data sharing group, the DVIPA plus the location name should be used.
If certain applications should only access some members of the data sharing group, the
DVIPA plus a location alias needs to be used.
DB2 10 supports dynamic start and stop of location aliases.
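In JDBC URL terms, the change looks like the following sketch. DB0ZALIAS is a hypothetical location alias; the group DVIPA domain name and SQL port are the ones from our environment, and the password is passed as an argument:

import java.sql.Connection;
import java.sql.DriverManager;

public class DirectAccessUrls {
    public static void main(String[] args) throws Exception {
        // Whole data sharing group: group DVIPA domain name, group SQL port, location name
        String groupUrl = "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z";

        // Subset of members or a single member: the same DVIPA and port with a
        // location alias (DB0ZALIAS is hypothetical and would be started dynamically)
        String aliasUrl = "jdbc:db2://d0zg.itso.ibm.com:39000/DB0ZALIAS";
        System.out.println("Member-subset URL: " + aliasUrl);

        try (Connection con = DriverManager.getConnection(groupUrl, "db2r3", args[0])) {
            System.out.println("Connected directly to the data sharing group");
        }
    }
}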
Figure 4-79 Client configuration change: OLD x.x.x:446/ALIAS1 (DB2 Connect server) to NEW y.y.y:8000/DB2P (direct DB2 for z/OS access)
-dis location
DSNL200I -D0Z1 DISPLAY LOCATION REPORT FOLLOWS-
LOCATION PRDID T ATT CONNS
::9.12.6.9 JCC04130 S 1
::9.145.139.205 SQL10010 S 1
DISPLAY LOCATION REPORT COMPLETE
Figure 4-80 DISPLAY LOCATION with PRDID information
SELECT
COUNT(*) AS NO
, SUBSTR(REQ_LOCATION ,01,15) AS REQ_LOCATION
, SUBSTR(CLIENT_TRANSACTION ,01,15) AS CLIENT_TRANSACTION
, REMOTE_PRODUCT_ID
FROM DB2PMSACCT_DDF
GROUP BY
REQ_LOCATION
, CLIENT_TRANSACTION
, REMOTE_PRODUCT_ID
---------+---------+---------+---------+---------+---------+---------+-
NO REQ_LOCATION CLIENT_TRANSACTION REMOTE_PRODUCT_ID
---------+---------+---------+---------+---------+---------+---------+-
11 ::9.12.4.142 TraderClientApp JCC03640
2 ::9.12.6.9 db2jcc_applicat JCC03630
2 ::9.12.6.9 db2jcc_applicat JCC03640
15 ::9.12.6.9 TraderClientApp JCC03640
11 ::9.12.6.9 TraderClientApp JCC03630
1 ::9.30.28.118 db2jcc_applicat JCC04130
DSNE610I NUMBER OF ROWS DISPLAYED IS 6
Figure 4-81 Use PDB to query PRDIDs
Activate PRDID-based thread monitoring
In our scenario, we illustrate how to use profiles to monitor DB2 clients that use a certain JDBC driver level. The profile table changes we performed for this kind of monitoring are shown in Example 4-44.
In Example 4-44, we configure DB2 profile monitoring to issue warning message DSNT772I when the number of threads using the DB2 client level indicated by product ID SQL10010 exceeds the threshold (a maximum of 1 in our example, chosen for illustration). The application itself continues processing because we configured the profile attribute to issue a warning when the threshold is exceeded. If we wanted the application to receive a negative SQLCODE, we would have set the profile attribute ATTRIBUTE1 to the value EXCEPTION.
DB2 reason code 00E30505 indicates that a warning occurred because the number of concurrent active threads exceeded the warning setting for the MONITOR THREADS keyword in a monitor profile for the PRDID filtering scope.
You can use profiles to control IDTHTOIN processing at the application level, which gives you the option to disable idle thread timeout processing only for the applications that require it. The subsystem-wide setting for IDTHTOIN still applies to all DBATs that do not qualify for idle thread timeout profile processing.
For instance, to disable idle thread timeout processing for the client application name NonCommittingProgram, you would run the SQL INSERT statements shown in Example 4-45 and subsequently issue the command shown in "Stopping profiles" on page 184 to activate the profile. In this example, the misbehaving application set its clientApplicationInformation to the value NonCommittingProgram.
,'' -- GROUP_MEMBER
,'Disable IDTHTOIN timeout' -- REMARKS
,NULL -- ROLE
,NULL -- PRDID
,'NonCommittingProgram' -- CLIENT_APPLNAME
,NULL -- CLIENT_USERID
,NULL -- CLIENT_WRKSTNNAM
)
;
-- ---------------------------------------------------------------------
-- SYSIBM.DSN_PROFILE_ATTRIBUTES table
-- ---------------------------------------------------------------------
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
( "PROFILEID" , "KEYWORDS" , "ATTRIBUTE1" , "ATTRIBUTE2" ,
"ATTRIBUTE3" , "ATTRIBUTE_TIMESTAMP" , "REMARKS" )
VALUES (
1 -- unique PROFILEID
,'MONITOR IDLE THREADS' -- IDTHTOIN monitoring
,'WARNING_DIAGLEVEL2' -- DB2 issues DSNT772I when threshold exceeded
, 0 -- IDTHTOIN = 0
, NULL -- ATTRIBUTE3
, CURRENT TIMESTAMP -- ATTRIBUTE_TIMESTAMP
,'disable IDTHTOIN' -- REMARKS
);
From the requesting location (in our test scenario, this was a DB2 for Linux, UNIX, and Windows client machine), we used multiple instances of the DB2 command line processor to create the desired number of DB2 connections. After the profile threshold entered in Example 4-46 on page 195 was exceeded, we observed the DB2 message shown in Figure 4-83.
DB2 reason code 00E30503 indicates that a warning occurred because the number of
connections exceeded the warning setting for the MONITOR CONNECTIONS keyword in a
monitor profile for the LOCATION filtering scope.
Additional information
For information about managing and implementing DB2 profile monitoring, refer to Chapter 45, "Using profiles to monitor and optimize performance", in DB2 10 for z/OS, Managing Performance, SC19-2978.
4.3.23 SYSPROC.ADMIN_DS_LIST stored procedure
The SYSPROC.ADMIN_DS_LIST stored procedure invokes the z/OS Catalog Search
Interface (CSI) to obtain information about data sets contained in integrated catalog facility
(ICF) catalogs. Data set entries are selected using a generic data set filter. The data set filter can be a fully qualified name, in which case one entry is returned, or a generic filter key containing wildcards, in which case multiple entries can be returned on a single invocation. The syntax for providing a generic filter key is similar to providing the dsname level information in the ISPF data set list utility.
You can use the SYSPROC.ADMIN_DS_LIST stored procedure to perform regular monitoring of data set extents, DASD usage, and the VSAM high allocated and high used RBA (relative byte address). SYSPROC.ADMIN_DS_LIST returns the data set information through a result set cursor that it opens on the temporary table SYSIBM.DSLIST. A list of columns returned by the result set cursor is shown in Example 4-47.
A sample of how to invoke the stored procedure to retrieve data set information for the table space and index space VSAM LDS data sets of database DBTR8074 is provided in Example 4-48.
The information about DASD usage (DASD_USAGE), the high used RBA (HURBA), and the high allocated RBA (HARBA) is returned as binary character strings, which is not useful when it comes to performing computations on this information. For instance, you might want to subtract the high used RBA from the high allocated RBA to calculate the real DASD usage in bytes or to determine table or index space over- or under-allocation. To cast the binary character string information to a big integer value, we use the DB2 UNIX System Services command line processor to run the SQL shown in Example 4-49.
The SQL shown in Example 4-49 on page 197 performs the following processing steps:
1. SYSPROC.ADMIN_DS_LIST stores its result in the temporary table SYSIBM.DSLIST. The temporary table is dropped at commit. The update command in Example 4-49 on page 197 deactivates auto commit to make the temporary table available for processing across the current commit scope.
2. Next we connect to DB2 using the data sharing group IP address, the SQL port and the
DB2 location name.
3. We then call the SYSPROC.ADMIN_DS_LIST stored procedure. We ignore the result set that the procedure returns.
4. We subsequently query the SYSIBM.DSLIST temporary table that was created and populated by the stored procedure.
In the SQL select list, we use the BIGINT user-defined scalar function (scalar UDF) to cast the binary character value to BIGINT, which enables us to use SQL to calculate the difference between the high allocated RBA and the high used RBA. This calculation determines the amount of table or index space over- or under-allocation. A JDBC version of this query is sketched after the following note.
We provided the program source and DDL for implementing and defining the
DB2R3.BIGINT scalar UDF in Appendix G, “External user-defined functions” on page 563.
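The same processing can be driven from Java through JDBC. The following is a minimal sketch of step 4, assuming that SYSPROC.ADMIN_DS_LIST has already been called on the same connection (as in Example 4-48), that auto commit is switched off so the temporary table survives until the query runs, and that the SYSIBM.DSLIST column names DSNAME, HARBA, and HURBA, the DB2R3.BIGINT scalar UDF from Appendix G, and the connection URL are used for illustration only.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DsListSpaceUsage {
    public static void main(String[] args) throws Exception {
        // Placeholders: supply your own host, port, location, and credentials.
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://<host>:<port>/<location>", "<user>", "<password>")) {
            // Keep auto commit off so the temporary table behind SYSIBM.DSLIST
            // is not dropped before we query it.
            con.setAutoCommit(false);

            // ... CALL SYSPROC.ADMIN_DS_LIST here, as shown in Example 4-48 ...

            // Cast the binary HARBA/HURBA values to BIGINT and compute the
            // difference (over- or under-allocation in bytes).
            String sql = "SELECT DSNAME, "
                       + "       DB2R3.BIGINT(HARBA) - DB2R3.BIGINT(HURBA) AS UNUSED_BYTES "
                       + "FROM SYSIBM.DSLIST";
            try (PreparedStatement ps = con.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("DSNAME") + "  "
                            + rs.getLong("UNUSED_BYTES"));
                }
            }
            con.commit();
        }
    }
}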
You can query DB2 real-time statistics (RTS) to obtain the following information at the table space, index space, and partition level:
SQL DELETE, INSERT and UPDATE frequency since the last LOAD, RUNSTATS or
COPY utility. You can use this information to determine how frequently table spaces and
indexes are accessed for DELETE, INSERT and UPDATE DML operations.
Number of active pages
Number of allocated pages
Number of data set extents
Whether you should run the REORG, RUNSTATS or COPY utility.
Total number of rows stored in the table space
Total number of index entries in the index space
Size of data occupied by rows. You can compare this information with the number of active
pages to review page usage efficiency.
Type of the disk (HDD or SSD) the table or index space VSAM data set resides on
High performance list prefetch facility capability indicator of the disk the VSAM data set
resides on
Number of index levels in the index tree
Number of pages containing pseudo deleted index entries
The date when the index was last used for SELECT, FETCH, searched UPDATE, searched DELETE, or to enforce referential integrity constraints. This information can be useful for identifying unused indexes.
The number of times the index was used for SELECT, FETCH, searched UPDATE, searched DELETE, or to enforce referential integrity constraints since the object was created.
In the SQL sample query shown in Example 4-52, we query the RTS table space snapshot table to determine the number of inserts, updates, and deletes performed on the DayTrader database during the workload execution that we performed between 2012-08-17-22.57.57.673670 and 2012-08-17-22.57.57.673670. A JDBC sketch of a similar query against the RTS catalog table follows.
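As a minimal sketch (not the snapshot-table query of Example 4-52 itself), the following JDBC fragment reads the in-catalog RTS table SYSIBM.SYSTABLESPACESTATS for database DBTR8074. The counter columns used here (REORGINSERTS, REORGUPDATES, and REORGDELETES, which accumulate since the last REORG or LOAD REPLACE) and the open Connection passed in are assumptions for illustration.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RtsCounters {
    // Prints insert/update/delete counters per table space of database DBTR8074.
    static void printCounters(Connection con) throws Exception {
        String sql = "SELECT DBNAME, NAME, PARTITION, "
                   + "       REORGINSERTS, REORGUPDATES, REORGDELETES "
                   + "FROM SYSIBM.SYSTABLESPACESTATS "
                   + "WHERE DBNAME = ? "
                   + "ORDER BY NAME, PARTITION";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, "DBTR8074");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s.%s(%d) ins=%d upd=%d del=%d%n",
                            rs.getString("DBNAME"), rs.getString("NAME"),
                            rs.getInt("PARTITION"),
                            rs.getLong("REORGINSERTS"),
                            rs.getLong("REORGUPDATES"),
                            rs.getLong("REORGDELETES"));
                }
            }
        }
    }
}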
4.3.25 Using RTS to obtain COPY, REORG and RUNSTATS recommendations
Rather than querying the RTS tables yourself, we recommend using the SYSPROC.DSNACCOX stored procedure to obtain COPY, REORG, and RUNSTATS utility recommendations. DSNACCOX combines your input parameters and filters with built-in intelligence and with data from the DB2 catalog and RTS to determine whether table space or index space reorganizations, RUNSTATS, or COPY utilities are due for execution. For instance, DSNACCOX with DB2 10 for z/OS implements specific code to reduce the REORG requirements for table spaces residing on SSD volumes.
For information about DB2 REORG and SSD disks, refer to the developerWorks article Solid-state drives: Changing the data world at:
http://www.ibm.com/developerworks/data/library/dmmag/DMMag_2011_Issue3/Storage/index.html
As for RUNSTATS recommendations, we use the administrative task scheduler to make use of the autonomic statistics maintenance feature. Autonomic statistics maintenance internally calls the DSNACCOX procedure to obtain its RUNSTATS recommendations. See Appendix A, "DB2 administrative task scheduler" on page 483.
Additional information
For additional information about using the DSNACCOX stored procedure, refer to Chapter 34, "Setting up your system for real-time statistics", in DB2 10 for z/OS, Managing Performance, SC19-2978.
[Diagram: SMF data is (1) converted to OMEGAMON format, created by the FPEZCRD batch program, the ISPF interface, or the near-term history sequential data sets, and (2) extracted, transformed, and loaded into the accounting and statistics tables of the performance database.]
Extract, transform, and load SAVE file
[Diagram: OMEGAMON-format data (created by the FPEZCRD batch program, the ISPF interface, or the near-term history sequential data sets) and an SMF KSDS SAVE data set processed by the OMPE DGOPMICO program with PARM=CONVERT are extracted, transformed, and loaded into the accounting SAVE tables of the performance database.]
See Appendix D.4, “Sample query for application profiling” on page 540.
Design and implementation best practice recommendations are extensively discussed in the
DB2 for z/OS documentation and in the DB2 for z/OS Best Practices web site. For further
information, refer to the following documentation:
Achieving the Highest Levels of Parallel Sysplex Availability in a DB2 Environment, IBM
REDP-3960.
DB2 10 for z/OS, Managing Performance, SC19-2978.
– Part 4, Improving concurrency
– Part 6, Programming applications for performance
– Part 7, Maintaining data organization and statistics
– Part 8, Managing query access paths
IBM developerWorks DB2 for z/OS Best Practices papers, available at:
https://www.ibm.com/developerworks/mydeveloperworks/groups/service/html/communityview?communityUuid=f8b4b297-1cd7-49b6-8e7a-8bfdcc4901e7
Database migration projects do not always apply best practice recommendations. This leads to SLA violations because of elongated application response times, which often have a negative impact on application availability and scalability. To bring the most commonly observed issues to your attention, we provide the following list of database and application design pitfalls that can cause such undesired application behavior:
There is a tendency to accept the default configuration values for WebSphere Application Server data source properties, which can be extremely painful. Always review the data source custom properties to make sure that the recommended settings in 5.11, "Configuring data source properties (websphereDefaultIsolationLevel, currentPackagePath, pkList, and keepDynamic)" on page 288 are being used.
– AutoCommit
The default setting switches autocommit to ON. For read-only SQL, this can cause high
CPU on the DB2 server because of connection and DBAT management. This can
happen especially when the application designer believes no unit of work is necessary.
The attitude is “I don't care about the unit of work - all I want is the data. Why does the
database impose a unit of work on me by asking me to choose a commit point?”
– CursorHold
The default, again, is to turn this on. If the application fails to close a cursor, then the connection cannot go inactive. This can inflate the number of threads required. Prior to DB2 10, the major concern is virtual storage; from DB2 10 onwards, it is real storage.
– Default isolation level
The default is TRANSACTION_REPEATABLE_READ (that is, DB2 RS), with obvious consequences for concurrency: locking conflicts, timeouts, and deadlocks.
Where AutoCommit has been turned off, there can be a problem where read-only applications fail to commit. This can cause an increase in the number of threads, and can also make it difficult for utilities to execute concurrently with application workloads.
Some update transactions commit too infrequently. This usually happens where data volumes exceed the application design expectations (or the lack of them) and can have detrimental effects, especially in data sharing, because it affects the global CLSN and makes the lock avoidance mechanism ineffective. This also occurs where the Java object is represented in a hierarchy of tables, meaning a large number of locks might have to be taken.
As well as update transactions, there is the impact of Java Batch, where the mechanism for calculating commit frequency either is not present or results in commits that are too infrequent. The biggest challenge is those Java Batch applications that contain no restart logic and those where the batch process is an all-or-nothing process. The latter often occur where the data volumes exacerbate the duration of the batch window; such processes can linger on into the online day and cause severe problems.
Unrestricted use of KEEPDYNAMIC(YES) can prevent threads from going inactive.
Numerous tables are often stored in the same table space. In DB2 for z/OS, each table should be stored in its own table space, because important tasks such as DB2 utilities, I/O tuning, and table space tuning can be performed only at the table space level. For instance, you cannot back up or restore individual tables within the same table space. Instead, you run the COPY or RECOVER utility at the table space level, which copies or recovers all tables in the table space. The same applies to the other utilities and to the table space tuning parameters. Creating one table per table space enables you to perform such tasks at the table level.
DB2 large object (LOB) auxiliary table spaces are often defined with LOG NO. This setting, while saving log space and improving performance for really large LOBs, might compromise data integrity in case of rollback or point-in-time recovery processing.
The number of indexes can run out of control. For instance, an application might depend
on DB2 for Linux, UNIX, and Windows to detect and eliminate duplicate index
specifications. The DDL, as a result, has a significant number of duplicate index
specifications which are not eliminated by DB2 for z/OS. As well as impacting INSERT and
UPDATE performance, this also increases PREPARE time, and can in some cases make
access path selection less effective because of the number of choices available.
In some cases, the installation DDL allows no customization of the buffer pool assignment. This means that, when installing into an environment that supports multiple applications, the applications can impact each other. The DBA has to find out about this by experience and then has to perform post-installation customization, which is likely to be undone when a fix pack is applied.
There is also a tendency toward almost random buffer pool assignment, meaning that indexes, data pages, and LOBs are all staged in the same buffer pool, with inevitable consequences. As well as separating these out, the application designers should have some understanding of random versus sequential objects and assign them to separate buffer pools where appropriate.
Customers have many questions about how best to configure a WebSphere Application
Server environment to access DB2 for z/OS. Here are some typical challenges
and questions:
What do I need to consider when I configure WebSphere Application Server, which
accesses DB2 for z/OS?
How do I configure JDBC type 2 driver access to DB2 for z/OS by using WebSphere
Application Server on z/OS?
I am a DBA. I do not know which application a SQL statement is coming from. What can I
configure in my WebSphere Application Server to help me track this statement without an
application change?
What are the preferred practices for JDBC type 4 access to DB2 for z/OS to best use
sysplex workload balancing?
Why is there an XA provider for JDBC type 4 access and nothing like that for JDBC type 2
access to DB2 for z/OS?
I do not want to grant a user ID that is used in my data source DBADM or has access to
DB2 tables. I am worried that the user ID might be compromised. How can I avoid this
situation in a WebSphere Application Server environment that is accessing DB2 for z/OS?
There are many levels of JDBC driver properties; which should I use when?
What are the preferred practices for WebSphere Application Server connection pool and
prepared statement cache settings?
What do I need to do in WebSphere Application Server to help me classify JDBC type 4
access to DB2 for z/OS in WLM?
In this chapter, we build an example environment that we use to provide the answers to
these questions.
We chose to use WebSphere Application Server on z/OS for our example. Here are the main reasons that we chose WebSphere Application Server on z/OS:
1. WebSphere Application Server on System z has the same features and functions as WebSphere Application Server on other platforms.
2. We want to show the features of the JDBC type 2 driver, which is the driver that is normally used with WebSphere Application Server on z/OS to access the local DB2 for z/OS.
We used WebSphere Application Server V8.5. We built a WebSphere Application Server Network Deployment topology spread across two LPARs, as shown in Figure 5-1.
The two-node cell was built by following the preferred practices recommendations. These recommendations are found in the WebSphere Application Server Information Center and various documents, such as IBM Redbooks publications and techdocs. Here is the link to the Information Center for WebSphere Application Server V8.5:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/index.jsp
We built the application server cluster MZSR014, which is spread across two LPARs: SC63 and SC64. The Deployment Manager MZDMGR was built to run on SC64.
The application that we used is the Apache DayTrader Sample application. Information
regarding this application can be found at the following website:
https://cwiki.apache.org/GMOxDOC20/daytrader.html
For more information about our configuration, see Appendix B, “Configuration and workload”
on page 511.
2. Double-click JDBC providers and you see the window that is shown in Figure 5-3. This
window shows a list of existing JDBC providers that are defined on your server.
Resources such as Java Database Connectivity (JDBC) providers, namespace bindings,
or shared libraries can be defined at multiple scopes. Resources that are defined at more
specific scopes override duplicate resources that are defined at more general scopes:
– The application scope has precedence over all the other scopes.
– For WebSphere Application Server Network Deployment, the server scope has
precedence over the node, cell, and cluster scopes.
– For WebSphere Application Server Network Deployment, the cluster scope has
precedence over the node and cell scopes.
– The node scope has precedence over the cell scope.
In this example, select a cell scope. Click New. The window that is shown in
Figure 5-4 opens.
4. The purpose of this window is to define the location of the IBM Data Server Driver for
JDBC and SQLJ classes. This is done by using variables. The usage of variables provides
flexibility so that you can define the location at a single point and use that point for many
JDBC providers that can be defined in a WebSphere Application Server. Write down the
following variables from the window that is shown in Figure 5-5:
– DB2UNIVERSAL_JDBC_DRIVER_PATH
– UNIVERSAL_JDBC_DRIVER_PATH
We show how to define these variables and their values later in this book.
Click Next. The summary window that is shown in Figure 5-6 on page 213 opens.
Figure 5-6 Summary window for JDBC provider
5.2.2 Defining environment variables at the location of the IBM Data Server
Driver for JDBC and SQLJ classes for JDBC type 4 connectivity
To define environment variables at the location of the IBM Data Server Driver for JDBC and
SQLJ classes for JDBC type 4 connectivity, complete the following steps:
1. In the navigation window of the administrative console of WebSphere Application Server,
expand Environment, as shown in Figure 5-7.
3. By default, the variables are defined to WebSphere Application Server at all scopes. The
variables do not have specific values defined by default. To see the variables, click the filter
icon, as shown in Figure 5-9.
4. Enter DB2 in the search terms and click Go. A window with the default list of variables opens, as shown in Figure 5-10.
6. Double-click the variable name. The window that is shown in Figure 5-12 opens.
7. Enter the location of the IBM Data Server Driver for JDBC and SQLJ classes in the value
text box. In this example, enter /usr/lpp/db2/d0zg/jdbc/classes, as shown in Figure 5-13.
Figure 5-13 Location of the IBM Data Server Driver for JDBC and SQLJ classes
8. Click Apply and then save your configuration. Repeat the same steps for the
UNIVERSAL_JDBC_DRIVER_PATH variable.
The window that is shown in Figure 5-15 opens. This window shows a list of existing JDBC
data sources that are defined in your environment.
2. In this example, select the cell scope. Click New. The window that is shown in
Figure 5-16 opens.
3. In this window, enter a name for the data source and the JNDI name. For this example,
enter TradeDataSourceXA for the data source name and jdbc/Trade for the JNDI name.
Click Next. The window that is shown in Figure 5-17 opens.
4. In this window, you need a JDBC type 4 XA connection, so select the DB2 Universal JDBC
Driver Provider (XA) that was created earlier.
DSNL106I PKGREL = COMMIT
DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
6. The information in this window tells WebSphere Application Server which user ID to use when it connects to DB2 for z/OS. Here is a brief description of what each ID means:
Authentication alias for XA recovery: This alias is used by WebSphere Application Server when it tries to resolve any in-doubt transactions as part of XA recovery.
Component-managed Authentication Alias: This is the user ID/password that is used to access DB2 with component-managed security. The alias must be defined beforehand.
Container-managed Authentication Alias: This is the user ID/password that is used to access DB2 with container-managed security. The alias must be defined beforehand.
These aliases are called J2C aliases. They can be defined by using the
administration console.
This window is a summary window, which shows all the different values that you set. Click
Finish and save the changes. You have a JDBC type 4 XA data source that is defined.
5.3.1 Defining a DB2 JDBC provider
To define a DB2 JDBC provider, complete the following steps:
1. In the navigation window of the administration console of WebSphere Application Server,
expand Resources. Under Resources, expand JDBC and you see the window that is
shown in Figure 5-21.
3. The JDBC provider must be defined with the appropriate scope. See the scope note in
Figure 5-22 on page 224. In this example, select the cell scope. Click New and the window
that is shown in Figure 5-23 opens.
Figure 5-23 JDBC provider that is defined with the cell scope
Click Next. The window that is shown in Figure 5-25 on page 227 opens.
Figure 5-25 Driver classes location
4. The purpose of this window is to define the location of the IBM Data Server Driver for
JDBC and SQLJ classes. The one difference with a JDBC type 2 connection on z/OS is
the need to define the native library path. All of this is done by using variables.
The usage of variables provides flexibility so that you can define the location in a single
point and use that point for many JDBC providers that can be defined in a WebSphere
Application Server. Write down the following variables from this window.
– DB2UNIVERSAL_JDBC_DRIVER_PATH
– UNIVERSAL_JDBC_DRIVER_PATH
– DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH
We show how to define these variables and their values later in this book.
You have created a DB2 Universal JDBC provider that is compatible with type 2 connectivity
on z/OS.
5.3.2 Defining environment variables to the location of the IBM Data Server
Driver for JDBC and SQLJ classes for JDBC type 2 connectivity
To define environment variables at the location of the IBM Data Server Driver for JDBC and
SQLJ classes for JDBC type 2 connectivity, complete the following steps:
1. In the navigation window of the administrative console of WebSphere Application Server,
which is shown in Figure 5-21 on page 223, expand Environment, as shown
in Figure 5-27.
2. Click WebSphere variables. The window that is shown in Figure 5-28 opens.
4. Enter DB2 in the search terms and click Go. A window that shows the default list of
variables opens, as shown in Figure 5-30.
The variables are defined at all possible scopes in the cell. Pick the appropriate scope. In this example, pick the DB2UNIVERSAL_JDBC_DRIVER_PATH variable at the cell scope, as shown in Figure 5-31.
5. Double-click the variable name. The window that is shown in Figure 5-32 opens, where no
value is set for the Type 2 driver.
7. Click Apply and then save the changes. Repeat the same steps for the
UNIVERSAL_JDBC_DRIVER_PATH variable.
8. For JDBC type 2 connectivity, you must define the path of the native libraries by assigning
a value to DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH variable, which points to the
location of the native libraries.
Double-click the DB2UNIVERSAL_JDBC_DRIVER_NATIVEPATH variable and the
window that is shown in Figure 5-34 on page 233 opens. Enter the location of the native
libraries, which in this example is /usr/lpp/db2/d0zg/jdbc/lib/.
Figure 5-34 Location of the native libraries
In this example, select the cell scope and click New. The window that is shown in
Figure 5-37 opens.
2. In this window, enter a name for the data source and the JNDI name. In this example, enter TradeDatasourceType2 for the data source name and jdbc/TradeDataSourceType2 for the JNDI name, as shown in Figure 5-38.
Click Next and the window that is shown in Figure 5-39 opens.
3. In this window, select the DB2 Universal JDBC Driver Provider that was created earlier
because you need a JDBC type 2 connection.
Click Next and the window that is shown in Figure 5-41 on page 237 opens.
5. In this window, enter the authentication alias that should be used by WebSphere
Application Server when it connects to DB2 for z/OS. This authentication alias must be
defined beforehand. You have two options:
– Component-managed Authentication Alias:
This is the user ID/password that is used to access DB2 with
component-managed security.
– Container-managed Authentication Alias:
This is the user ID/password that is used to access DB2 with
container-managed security.
These aliases are known as J2C aliases. They can be defined by using the
administration console.
By default, the user ID that WebSphere Application Server on z/OS runs under is used. This is possible only for a JDBC type 2 connection, which means that the user ID under which WebSphere Application Server runs must have the appropriate access to the DB2 objects that are used by the application that uses this data source.
We can also override that user ID and provide an authentication alias, which is then used.
In this example, use the TradeDataSourceAuthData authentication alias.
This property can be set to specify the DB2 subsystem identifier (not the DB2 location name) if the DB2 system is not part of a data sharing group. If DB2 is part of a data sharing group, specifying the group attach name as the value is recommended. If customers have multiple members of a data sharing group in the same LPAR, specifying the group attach name as the value for the ssid property allows type 2 connections to fail over to the second DB2 member of the same data sharing group in the same LPAR if one of the DB2 members fails.
The only time that the subsystem ID should be used instead of a group attach name is if there is a requirement that WebSphere Application Server connect to only a specific DB2 subsystem.
JDBC type 2 connections do not perform workload balancing between multiple DB2 members of a data sharing group in a single LPAR. The driver randomly picks one of the DB2 members to use for all connections and then, if that DB2 member fails, fails over to the second DB2 member of the same data sharing group in the same LPAR. This failover happens only if you specify a group attach name as the value of the ssid data source custom property in WebSphere Application Server.
If the ssid property is not provided, the driver uses the ssid that it finds in the DSNHDECP load module. DSNHDECP is loaded by using the search sequence that is specified in the STEPLIB environment variable or the //STEPLIB DD name concatenation. If that DSNHDECP load module does not accurately reflect the correct subsystem, or multiple subsystems are using a generic DSNHDECP, then there might be problems in connecting to DB2.
Another reason to use the ssid property for JDBC type 2 connections to DB2 from
WebSphere Application Server on z/OS is so that a single WebSphere Application Server can
connect to multiple DB2 subsystems. Then, different applications that are deployed in the
same WebSphere Application Server can connect to different DB2 subsystems in the same
LPAR if they use different data sources and the ssid is set on each data source.
To configure the ssid on the data source, complete the following steps:
1. In the navigation window of the administrative console of the WebSphere Application
Server, expand Resources, and click Data sources, as shown in Figure 5-43.
3. The window that is shown in Figure 5-45 opens. Click Custom properties under the
Additional properties section. The window that is shown in Figure 5-46 opens, which lists
all the custom properties that are available to the data source.
4. The property ssid is not defined by default. You can define it. Click New and a new window
opens. For the ssid, enter the group attach name or the subsystem ID. In this example,
enter the group attach name of the DB2 data sharing group D0ZG.
Example 5-3 DISPLAY GROUP command to verify the group attach name
DSN7100I -D0Z2 DSN7GCMD
*** BEGIN DISPLAY OF GROUP(DB0ZG ) CATALOG LEVEL(101) MODE(NFM )
PROTOCOL LEVEL(2) GROUP ATTACH NAME(D0ZG)
--------------------------------------------------------------------
DB2 DB2 SYSTEM IRLM
MEMBER ID SUBSYS CMDPREF STATUS LVL NAME SUBSYS IRLMPROC
-------- --- ---- -------- -------- --- -------- ---- --------
D0Z1 1 D0Z1 -D0Z1 ACTIVE 101 SC63 I0Z1 D0Z1IRLM
D0Z2 2 D0Z2 -D0Z2 ACTIVE 101 SC64 I0Z2 D0Z2IRLM
--------------------------------------------------------------------
SCA STRUCTURE SIZE: 8192 KB, STATUS= AC, SCA IN USE: 4 %
LOCK1 STRUCTURE SIZE: 8192 KB
NUMBER LOCK ENTRIES: 2097152
NUMBER LIST ENTRIES: 9324, LIST ENTRIES IN USE: 7
SPT01 INLINE LENGTH: 32138
*** END DISPLAY OF GROUP(DB0ZG )
DSN9022I -D0Z2 DSN7GCMD 'DISPLAY GROUP ' NORMAL COMPLETION
***
Linking to the DB2 libraries
WebSphere Application Server on z/OS, when it is configured to use a JDBC type 2
connection to DB2 for z/OS, also requires access to three DB2 libraries:
DB2xx.SDSNEXIT
DB2xx.SDSNLOAD
DB2xx.SDSNLOD2
This example uses the STEPLIB approach and adds the libraries to the servant region proclibs of the Deployment Manager and the WebSphere Application Server, as shown in Example 5-4. You must also add them to the Deployment Manager to be able to test the connection.
You have completed all the required steps to configure WebSphere Application Server for
JDBC type 2 access to DB2.
The window that is shown in Figure 5-49 on page 245 opens. This window shows a list of
existing JDBC data sources that are defined in your environment.
Figure 5-49 List of existing JDBC data sources
2. Click TradeDatasourceXA and the window that is shown in Figure 5-50 opens. This
window lists all the custom properties that are available to the data source.
Figure 5-50 List of custom properties that are available to the data source
There are two places in WebSphere Application Server where you can set the client information properties:
Data source custom properties
Resource Reference extended data source properties
These properties can also be set in the application by using the JDBC 4.0 API setClientInfo.
This action requires an application change and is typically required in situations in which
client information settings can be determined and set only at run time. In all other situations,
use data source custom properties or Resource Reference extended data source properties
for easier system administration.
The following sections demonstrate how to set these properties in WebSphere Application
Server. The approach is the same regardless of whether the application uses a JDBC type 2
or 4 connection (XA or non-XA).
2. Click TradeDatasourceXA and the window that is shown in Figure 5-54 opens.
3. Click Custom properties. The panel that opens lists all the custom properties that are
available. By default, the properties that are available are the ones that are shown
in Figure 5-55.
4. By default, these properties do not have any values that are specified. You can set all the
properties or any combination of them. In this example, set values for all of them. For
example, to set a value for clientAccountingInformation, click the
clientAccountingInformation property. The window that is shown in Figure 5-56 opens.
6. Click Apply and then save the changes. Figure 5-58 shows all the values of the
properties, which we set by repeating the steps in this section.
WebSphere Application Server requires your code to reference application server resources (such as data sources or J2C connection factories) through logical names, rather than accessing the resources directly in the Java Naming and Directory Interface (JNDI) name space. These logical names are called resource references.
WebSphere Application Server requires the usage of resource references for the
following reasons:
If application code looks up a data source directly in the JNDI naming space, every
connection that is maintained by that data source inherits the properties that are defined in
the application. Then, you create the potential for numerous exceptions if you configure
the data source to maintain shared connections among multiple applications. For example,
an application that requires a different connection configuration might attempt to access
that particular data source, resulting in application failure.
It relieves the programmer from having to know the name of the actual data source or
connection factory at the target application server.
You can set the default isolation level for a data source through resource references. With no
resource reference, you get the default for the JDBC driver that you use.
The extended properties are described in the WebSphere Application Server Information
Center, which can be found at the following URL:
http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=%2Fcom.ibm.websphere.nd.multiplatform.doc%2Finfo%2Fae%2Fae%2Ftdat_heteropool.html
3. Click the application on which you want to set the properties. In this example, click
D0ZG_WASTestClientInfo. The window that is shown in Figure 5-61 opens and displays
information about the application and all the artifacts that it uses.
5. The example application uses jdbc/Josef, as shown in Figure 5-62. Select the module by
selecting the Select check box, as shown in Figure 5-63.
6. Click Extended Properties. The window that is shown in Figure 5-64 on page 255 opens.
Figure 5-64 Extended properties panel
8. Click Apply and then OK. Save the changes. The application is configured and it is easy
to identify the application in DB2 for z/OS.
If any applications set the DB2 client information fields, the values are not reset when the
connection is returned to the connection pool. Applications must set these values at the
beginning of the transaction to correctly collect and report data based on these fields.
The Rational Application Developer ClientInfo project can be downloaded from the web. For
more information, see Appendix H, “ClientInfo dynamic web project” on page 573.
The servlets that are illustrated in Figure 5-66 use the same program structure. Each servlet
provides a setClientInformationFromJava subroutine to implement the particular code for
setting DB2 client information by using the setClientInfo API, the Java interfaces provided by
the DB2Connection class, or the WLM_SET_CLIENT_INFO stored procedure.
Example 5-5 General servlet structure for setting DB2 client information
package setClientInfoJDBC40API;
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
/**
* Servlet implementation class ClientInfoJDBCAPI
*/
@WebServlet("/JDBC40API")
public class ClientInfoJDBC40API extends HttpServlet {
private static final long serialVersionUID = 1L;
/**
* @see HttpServlet#HttpServlet()
*/
public ClientInfoJDBC40API() {
super();
}
/**
* @see HttpServlet#doGet(HttpServletRequest request, HttpServletResponse
response)
*/
protected void doGet(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
PrintWriter pw = response.getWriter();
response.setContentType("text/html");
pw.println("Hello from ClientInfoJDBC40API Servlet <br/><br/> " );
InitialContext ic = null;
DataSource ds = null;
try {
ic = new InitialContext();
ds = (DataSource) ic.lookup("jdbc/Josef");
pw.println("Successfully looked up jdbc/Josef JNDI entry<br/><br/>");
} catch (NamingException e) {
e.printStackTrace();
}
/**
* @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse
response)
*/
protected void doPost(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
}
private String returnSQL() {
String sql = "SELECT CURRENT CLIENT_ACCTNG, " +
" CURRENT CLIENT_APPLNAME ," +
" CURRENT CLIENT_USERID, " +
" CURRENT CLIENT_WRKSTNNAME " +
"FROM SYSIBM.SYSDUMMY1";
return sql;
}
" COLUMNS " +
" RACFUser VARCHAR(08) PATH '../USER/text()'," +
" RACFGroup VARCHAR(08) PATH './text()' " +
" ) AS T ) " +
" SELECT ROWNUMBER() OVER () AS ROWNO, Q2.* FROM Q2 " ;
return sqlfunc;
}
public void setClientInformationFromJava(Connection conn, PrintWriter pw)
throws Exception
{ ....... Java code specific to the method for setting the DB2 client
information goes here ....
}
conn.setClientInfo("ClientAccountingInformation","JDBC40API_clientaccounting");
pw.println("successfully invoked setClientInfo JDBC 4.0 API for setting DB2
Client Info to the following values <br/><br/>" );
pw.println(" ClientUser=JDBC40API_clientuser<br/>" );
pw.println(" ClientHostname=JDBC40API_clientworkstation<br/>" );
pw.println(" ApplicationName=JDB40CAPI_clientapplication<br/>" );
pw.println("
ClientAccountingInformation=JDBC40API_clientaccounting<br/><br/>" );
}
The ClientInfoJDBC40API servlet returned the processing result that is shown in Figure 5-67.
During servlet execution in our example, we used the display thread output that is shown in
Figure 5-68 on page 261 to confirm the DB2 client information settings.
DSNV401I -D0Z1 DISPLAY THREAD REPORT FOLLOWS -
DSNV402I -D0Z1 ACTIVE THREADS -
NAME ST A REQ ID AUTHID PLAN ASID TOKEN
SERVER SW * 12 db2jcc_appli DB2R3 DISTSERV 0083 45
V437-WORKSTATION=JDBC40API_clientwo, USERID=JDBC40API_client,
APPLICATION NAME=JDBC40API_clientapplication
V429 CALLING FUNCTION=F.GRACFGRP,
Figure 5-68 Servlet ClientInfoJDBC40API display thread output
IBM Data Server Driver for JDBC and SQLJ Java API
The IBM Data Server Driver for JDBC and SQLJ combines type 2 and type 4 JDBC
implementations. The driver is packaged in the following way:
IBM Data Server Driver for JDBC and SQLJ Version 3.5x, JDBC 3.0 compliant. The
db2jcc.jar and sqlj.zip files are available for JDBC 3.0 and earlier support.
IBM Data Server Driver for JDBC and SQLJ Version 4.x, compliant with JDBC 4.0 or later.
The db2jcc4.jar and sqlj4.zip files are available for JDBC 4.0 or later, and JDBC 3.0 or
earlier support.
You control the level of JDBC support that you want by specifying the appropriate JAR files in
the JDBC provider, as shown in Figure 5-25 on page 227. Both JAR files contain the
DB2Connection class to support the following Java APIs for setting DB2 client information:
setDB2ClientUser(String paramString)
setDB2ClientWorkstation(String paramString)
setDB2ClientApplicationInformation(String paramString)
setDB2ClientAccountingInformation(String paramString)
Because these Java APIs are deprecated in JDBC 4.0, you might want to use the setClientInfo Java API if your JDBC provider is configured to use the db2jcc4.jar file. For more information about how to use the setClientInfo API, see "JDBC 4.0 setClientInfo Java API" on page 259.
Example 5-7 Using IBM Data Server Driver for JDBC and SQLJ set client information Java APIs
public void setClientInformationFromJava(Connection conn, PrintWriter pw) throws
Exception
{
setWorkStationName(conn, "JDB30CAPI_clientworkstation");
setApplicationName(conn,"JDBC30API_clientapplication");
setAccounting(conn,"JDBC30API_clientaccounting");
setEndUser(conn,"JDBC30API_clientuser");
pw.println("successfully invoked JDBC 3.0 API for setting DB2 Client Info to
the following values <br/><br/>");
pw.println(" ClientUser=JDBC30API_clientuser<br/>");
pw.println(" ClientHostname=JDB30CAPI_clientworkstation<br/>");
pw.println(" ApplicationName=JDBC30API_clientapplication<br/>");
The ClientInfoJDBC30API servlet returned the processing result that is shown in Figure 5-69
on page 263.
Figure 5-69 Servlet ClientInfoJDBC30API result
During servlet execution, we used the display thread output that is shown in Figure 5-70 to
confirm the DB2 client information settings.
com.ibm.websphere.rsadapter.WSConnection.setClientInformation(Properties arg0)
The ClientInfoWSAPI servlet returned the processing result that is shown in Figure 5-71 on page 265.
Figure 5-71 Servlet ClientInfoWSAPI result
During servlet execution, we used the display thread output that is shown in Figure 5-72 to
confirm the DB2 client information settings.
Choosing the correct option for setting the DB2 client information can be difficult because the Java API you use depends on the JDBC driver level that your application is using. You can avoid this dependency by using the WLM_SET_CLIENT_INFO external stored procedure to set the DB2 client information.
The WLM_SET_CLIENT_INFO external stored procedure load module DSNADMSI uses the
RRS DSNRLI SET_CLIENT_ID function to set the client information that is associated with
the current connection at the DB2 server. Using this method does not depend on the JDBC
driver level, the JDK level, or the type or version of the application server that you are using.
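As a minimal sketch, the stored procedure can be called from Java through a JDBC CallableStatement. The parameter order shown here (client user ID, workstation name, application name, accounting string) and the sample values are assumptions for illustration, so verify them against the WLM_SET_CLIENT_INFO definition installed on your system.

import java.sql.CallableStatement;
import java.sql.Connection;

public class WlmClientInfo {
    // Sets the DB2 client information for the current connection by calling
    // the WLM_SET_CLIENT_INFO external stored procedure.
    static void setClientInfo(Connection con) throws Exception {
        try (CallableStatement cs = con.prepareCall(
                "CALL SYSPROC.WLM_SET_CLIENT_INFO(?, ?, ?, ?)")) {
            cs.setString(1, "WLMSP_clientuser");         // client user ID
            cs.setString(2, "WLMSP_clientworkstation");  // client workstation name
            cs.setString(3, "WLMSP_clientapplication");  // client application name
            cs.setString(4, "WLMSP_clientaccounting");   // client accounting string
            cs.execute();
        }
    }
}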
The ClientInfoWLM servlet returned the processing result that is shown in Figure 5-73.
During servlet execution, we used the display thread output that is shown in Figure 5-74 to
confirm the DB2 client information settings.
The window that is shown in Figure 5-76 on page 269 opens. This window shows a list of
the existing JDBC data sources that are defined in your environment.
Figure 5-76 List of existing JDBC data sources
2. Click TradeDatasourceXA and the window that is shown in Figure 5-77 opens.
The Statement cache size specifies the number of statements that can be cached per connection. The default size is 10, which is what we used in our environment. Configure the value based on the number of distinct SQL statements that the application uses.
Figure 5-79 WebSphere navigation window
2. Click Global security and the window that is shown in Figure 5-80 opens.
4. Click New and the window that is shown in Figure 5-82 opens.
5.8 Configuring connection pool sizes on data sources in
WebSphere Application Server
To configure connection pool sizes on data sources in WebSphere Application Server,
complete the following steps:
1. In the navigation window of the administrative console of the WebSphere Application
Server, expand Resources and click Data sources, as shown in Figure 5-83.
2. Click TradeDatasourceXA and the window that is shown in Figure 5-85 opens.
Figure 5-86 Connection pool properties
2. Click WebSphere enterprise applications. The window that is shown in Figure 5-88
opens and shows all the installed applications in your environment.
4. Click Resource references. The window that is shown in Figure 5-90 on page 279 opens.
The window lists all the different resource references that are used by the applications. In
our example, we use only a data source reference.
Figure 5-90 Resource reference
5. The example application uses jdbc/Josef, as shown in Figure 5-90. Select the module by selecting the Select check box, as shown in Figure 5-91.
7. Select the Use trusted connections radio button. Then, select a JAAS alias in the
drop-down menu, as shown in Figure 5-93. The user ID in the JAAS alias should have only
connect privileges to DB2 for z/OS and should be defined as part of the trusted context
definition in DB2. In our example, we created a JAAS alias named trustedcontext.
Complete the following steps:
1. In the navigation window of the administration console of WebSphere Application Server,
expand Server Types, as shown in Figure 5-95.
2. Click WebSphere Application servers and the window Figure 5-96 opens and displays
the servers that are defined in the environment. In the example environment, we had three
servers. We focus on the MZSR014 server.
4. Expand Java and Process Management. Click Process definition. The window that is
shown in Figure 5-98 opens. This window is specific to WebSphere Application Server on
z/OS.
5. Click Servant and the window that is shown in Figure 5-99 opens.
7. Click Custom properties and the window that is shown in Figure 5-101 opens.
8. Click New and the window that is shown in Figure 5-102 opens. In the name field, enter
db2.jcc.propertiesFile. In the value field, enter the location of the properties file. In our
example, the properties file is named jcc.properties. It is stored in /u/rajesh.
Click Apply, then OK, and then save the changes. The window that is shown in
Figure 5-103 opens.
9. Now enter any required properties in the jcc.properties file and restart the server. You
can validate that the jcc.properties file was acquired by looking at the following server
log:
Trace: 2012/10/04 00:37:31.677 02 t=7E3AE8 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ws390.orb.CommonBridge.printProperties
ExtendedMessage: BBOJ0077I: db2.jcc.propertiesFile = /u/rajesh/jcc.properties
When you see the message, you know that the server acquired the jcc.properties file.
5.11.1 websphereDefaultIsolationLevel
Complete the following steps:
1. In the navigation window of the administrative console of the WebSphere Application
Server, expand Resources and click Data Sources, as shown in Figure 5-104.
The window that is shown in Figure 5-105 opens. This window shows a list of existing
JDBC data sources that are defined in your environment.
3. Click Custom properties. The window that is shown in Figure 5-107 on page 291 opens
and lists all the custom properties that are available.
Figure 5-107 List of the custom properties
5. Enter 2 for the value, which sets cursor stability in DB2. Click Apply, then OK, and
then save the changes.
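The numeric values that this property accepts correspond to the java.sql.Connection transaction isolation constants, which is why a value of 2 maps to cursor stability in DB2. The short Java listing below simply prints those constants; the DB2 equivalents noted in the comments reflect the commonly documented mapping for the IBM Data Server Driver for JDBC and SQLJ and are included here only for orientation.

import java.sql.Connection;

public class IsolationLevelValues {
    public static void main(String[] args) {
        // JDBC isolation constants and their usual DB2 for z/OS equivalents.
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1 -> UR
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2 -> CS
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4 -> RS
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8 -> RR
    }
}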
5.11.2 currentPackagePath
The currentPackagePath custom property is also available by default in WebSphere
Application Server. It does not have any value, as shown in Figure 5-110. This property
should be used under the following conditions:
JDBC type 4 connectivity is used to connect to DB2 for z/OS.
The application has multiple packages that must be accessed and those packages are
bound to different collections.
Click currentPackagePath and the window that is shown in Figure 5-111 opens. Enter a comma-separated list of collection names. In this example, the application used packages that were bound to collections MYCOLL1 and MYCOLL2.
5.11.3 pkList
The pkList custom property is not available by default in WebSphere Application Server. This
property should be used under the following conditions:
JDBC type 2 connectivity is used to connect to DB2 for z/OS.
The application has multiple packages that must be accessed and those packages are
bound to different collections.
5.11.4 keepDynamic
The keepDynamic custom property is not available by default in WebSphere Application
Server. The default behavior in WebSphere Application Server is to not use this property. This
property should be used when you want to use a local cache in DB2, as shown in
Figure 5-113.
For more information about keepDynamic, see “WebSphere Prepared Statement Cache and
DB2 KEEPDYNAMIC option” on page 60.
Click keepDynamic and the window that is shown in Figure 5-114 opens. Enter a value of 1
to use the keepDynamic feature in DB2.
This chapter also provides a short running sample of a stand-alone application that uses DB2 with JPA and explains the differences between stand-alone Java applications and applications in managed environments.
This chapter also demonstrates how to get a good dynamic statement cache hit ratio and describes locking.
JDBC drivers are client-side adapters (although they could be clients in a server) that convert
requests from applications through the usage of the API to a protocol that the
database understands.
The other JDBC types are not important for Java development here. The type of a driver should not be confused with the specification level that it implements: type 4 means a network driver, whereas JDBC 4.0 means specification level 4.0.
IBM Data Server Driver for JDBC and SQLJ is a single driver that includes JDBC type 2 and
JDBC type 4 behavior and that implements JDBC 4.0 and JDBC 3.0. Which type or version is
used depends solely on the configuration options that are made while opening the connection
to the database.
From an application point of view, there is no difference between the two types. The API is the
same. The Java part of both drivers must be available to application clients in the class path.
The application can make type 2 and type 4 connections by using this single driver instance.
Type 2 and type 4 connections can be made concurrently.
To work with DB2 for z/OS, the license file db2jcc_license_cisuz.jar must be in
the class path.
More information about the driver architecture and its configuration options can be found in
Chapter 3, “DB2 configuration options for Java client applications” on page 81.
With the sqlj4.zip file in the class path, the IBM Data Server Driver for JDBC and SQLJ
provides SQLJ functions that include JDBC 4.0 and later functions, and JDBC 3.0 and
earlier functions.
Whether hardcoded or generated, the result is always a string with an SQL statement that is then given to the appropriate JDBC API. Both methods are considered dynamic because, either way, the database does not know the SQL in advance. The generation process can embed the parameter values in the statement as well, or it can include parameter markers (question marks) that are substituted later through an API call.
In the Java community, programming with dynamic SQL is the prevailing method. JDBC
implements the dynamic SQL model. A major advantage is that application development is
faster than with other techniques. All database vendors include a JDBC driver in their
databases, making JDBC a universal technique that is known to almost every programmer.
Although JDBC always uses the same programming principles, it does not allow a fully
portable program. Among other advantages, persistency frameworks such as Hibernate or
JPA address the portability problem. But even in the form of that new persistency layer, the
underlying design schema remains dynamic SQL handled by a JDBC driver.
Although dynamic SQL with raw JDBC API statements is being used less often, there are
some situations where it is the most suitable solution:
The table structure is too complex for JPA entities.
No entities are involved (for example, in mass updates).
You are using maintenance or administrative programs.
The persistency framework is not powerful enough.
A short code snippet shows the principles of JDBC API coding. We do not go into too much
detail because JDBC programming is widely known. Instead, we describe some important
design issues that are relevant to other parts of this book.
As you can see in Example 6-1, the DriverManager.getConnection method with its
parameters connects to the database. We could have used the Datasource interface as well,
if we had used a predefined data source. We then ask the connection object to return a
preparedStatement. Afterward, we present the SQL to the statement, leaving two places
open. We code a “?” (a parameter marker) in each place, and the parameter markers are
filled in afterward with concrete values.
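The principle looks like the following minimal sketch; the connection data and the phone
number and employee number values are only illustrative:

// requires java.sql.Connection, java.sql.DriverManager, and java.sql.PreparedStatement
Connection con = DriverManager.getConnection(
        "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", "db2r1", "password");
con.setAutoCommit(false);
PreparedStatement ps = con.prepareStatement(
        "UPDATE EMPLOYEE SET PHONENO=? WHERE EMPNO=?");
ps.setString(1, "5678");    // concrete value for the first parameter marker
ps.setString(2, "000010");  // concrete value for the second parameter marker
ps.executeUpdate();
con.commit();
ps.close();
con.close();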
In addition, caches are filled. If you run the statement inside WebSphere Application Server,
the JVM’s prepared statement cache is filled or, if the statement was already run, you get the
previously created statement object back. The statement object is built around the statement "UPDATE
EMPLOYEE SET PHONENO=? WHERE EMPNO=?". The statement can have different parameters and
a different case, but it remains the same statement. The cache includes only the dynamically
created Java object for that specific statement. It does not include any DB2
related information.
On the DB2 side, a cache entry is also created if DB2 is configured with a dynamic statement cache. No Java object is
stored, but access strategy-related information is stored. Both caches complement
each other.
As an alternative, you can generate a complete SQL string without placeholders for the
parameters. It would look like the following string:
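UPDATE EMPLOYEE SET PHONENO='5678' WHERE EMPNO='000010'

(The phone number and employee number shown here are only illustrative values.)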
This string results in a new Java statement object and new objects for other employees or
phone numbers because you have a new statement instead of a parameter substitution. You
can give this string to prepareStatement for execution, but using a simple createStatement
is sufficient, as shown in Example 6-2.
Only parameter markers allow DB2 to use the dynamic statement cache. Otherwise, a
dynamic rebind for the mini-plan must be made. As of DB2 10, there are additional caching
capabilities, as described in 6.7.4, “Literal replacement” on page 330.
SQLJ, the name for static SQL in Java, is based on the JDBC APIs and uses embedded SQL to
access the database. The database normally uses static SQL, but can use dynamic SQL in
some cases. Because static SQL is prepared in advance, performance is better compared to
dynamic SQL. By contrast, dynamic SQL is not known by the system at compile time; parsing,
validation, preparation of statements, and determination of the access path in the database are
done only at run time. Errors or poorly performing statements might remain undetected until
problems occur in production.
With SQLJ, the SQL statements are not part of the Java language. They are marked with #sql
in the Java source code, but must be extracted before the Java compiler sees them or they
cause Java compile errors. Therefore, the Java class is edited in a <name>.sqlj file that is
then processed by the SQLJ translator. How an SQLJ statement is embedded into the Java
source code is illustrated in Example 6-3.
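The general shape of such an embedded statement is sketched below; the #sql clause is
extracted by the SQLJ translator before the Java compiler runs, and the schema, table, and
host variable names are only illustrative:

String empno = "000010";   // Java host variables
String phoneno;
#sql { SELECT PHONENO INTO :phoneno
       FROM DSN81010.EMP
       WHERE EMPNO = :empno };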
There are some SQLJ sample programs that come with the DB2 product. They are in
<install root>\SQLLIB\samples\java\sqlj for DB2 on Windows or in
/usr/lpp/db2/samples/java on z/OS.
Despite all these advantages, only a few Java projects use SQLJ. The more complex
application build process might be one reason why this is so. Another reason is that SQLJ
remains basically JDBC and has no support for object-relational mapping (ORM). A
programmer’s productivity and application maintainability seem to be more important for
many projects than advantages in performance and security.
WebSphere Application Server offers support for static SQL for Enterprise Java Beans (EJB)
2.x and later entity beans with the ejbdeploy SQLJ option. In EJB 3.0 and later, container-
managed (CMP) Enterprise beans are replaced by JPA entities. Although EJB 2.x can still be
used in later versions of WebSphere Application Server, it is unlikely to be used for new development.
With JPA, this feature is offered through pureQuery, which gives you the advantages of both
dynamic and static SQL.
All three features can be used or omitted independently of each other.
pureQuery provides an alternative set of APIs. They are similar in concept to JPA and can be
used instead of Java Database Connectivity (JDBC) to access DB2. Even if these APIs are
not used in your application, the pureQuery client optimization feature makes it possible to
take advantage of static SQL for existing JDBC applications without modifying existing
dynamic source code.
Figure 6-1 on page 303 shows the flow between pureQuery and the database.
The general concept is to collect all dynamic SQL statements of your application at
development or deployment time by using pureQuery. The application developer does not
need to be involved in this process. The collected statements are then bound into packages in
the database. At execution time, the pureQuery run time uses the static SQL from the
packages instead of the dynamic SQL to work with DB2. Where dynamic SQL statements
cannot be collected or converted, the run time continues to use dynamic SQL.
Figure 6-1 (pureQuery client optimization flow): at generation time, the static generator utility
(wsdb2gen.bat) uses persistence.xml and the JPA run time (com.ibm.ws.jpa.jar) to produce the
generated SQL file (pu.pdqxml), which the static binder binds into DB2 packages. At execution
time, the JPA application uses persistence.xml, the generated pu.pdqxml file, the pureQuery
run time (pdq.jar), and JDBC (db2jcc.jar) to access DB2.
To understand the functionality, look at the way SQL is collected for JPA. Either a command or
IBM Data Studio can be used.
The wsdb2gen command is in the /bin directory of WebSphere Application Server. To run it,
extend the WebSphere class path by using the pdq.jar, pdqmgmt.jar and db2jcc4.jar files
that come with IBM Data Studio. A sample command is shown in Example 6-4.
The utility uses the persistence unit name as input, along with other parameters, and
generates an output file that contains SQL statements that are required by all entity
operations, including persist, remove, update, and find. It also generates the SQL statements
that are needed in the running of JPA named queries. Other dynamic SQL cannot be found
and is not included in the output.
The output of the command is a file that contains the persistence unit name followed by a
suffix of .pdqxml. The pdqxml file is written to the same directory as your persistence.xml
file. Alternatively, by using IBM Data Studio, pureQuery tools can be added to your
JPA project.
To enable pureQuery support for your project in IBM Data Studio, go to the Java Perspective
and right-click the jpa_db2_web project. Then, select Data Access Management → Add
Data Access Development support. The window that is shown in Figure 6-2 opens.
Select the Add pureQuery support to project check box, which adds the pureQuery
runtime libraries to your build path. The run time has five JAR files with names that start with
pdq. The WebSphere Application Server run time must be in the class path as well.
You must define a database connection to the SAMPLE database in this window. It is used to
check the SQL statements and prefix table names with the provided schema name in the
generated output statements.
The pdqxml file then is generated by right-clicking the persistence.xml file of your project in
the Java Perspective. Then, select Data Access Development → Generate pureQueryXML
File, as shown in Figure 6-3. A file named jpa_db2.pdqxml, which is named after the
persistence-unit name used in that project, is generated.
The pdqxml file must be packaged inside your archive file in the same location as the
persistence.xml configuration file, usually the META-INF directory of the module.
The application can now be deployed to the server. However, it works with dynamic SQL
unless you bind the database packages. To bind the packages, in the WebSphere Application
Server console, click WebSphere enterprise applications → the application name →
SQLJ profiles and pureQuery bind files. Alternatively, you can use the AdminTask
command, as shown in Example 6-5.
Be sure that you grant execution authority on the package to public or to the user that is
defined for the data source in WebSphere Application Server.
The pureQuery integration that is delivered with WebSphere Application Server requires the
addition of the Data Studio pureQuery run time to the JDBC provider, as shown in
Example 6-6. The pureQuery run time must be purchased separately. In the WebSphere
environment, you place the pureQuery JAR files pdq.jar and pdqmgmt.jar into the DB2 JDBC
Driver Provider class path.
Example 6-6 Add the pureQuery run time to the JDBC provider's class path
${DB2_JCC_DRIVER_PATH}/db2jcc4.jar
${UNIVERSAL_JDBC_DRIVER_PATH}/db2jcc_license_cu.jar
${DB2_JCC_DRIVER_PATH}/db2jcc_license_cisuz.jar
${PUREQUERY_PATH}/pdq.jar
${PUREQUERY_PATH}/pdqmgmt.jar
In WebSphere Application Server, you must use the JPA for WebSphere Application Server
persistence provider. Only this provider supports static SQL through the DB2 pureQuery
feature, and it is the default in WebSphere Application Server. The original Apache OpenJPA
provider does not support pureQuery optimization. Be sure not to override this default with a
provider element in your persistence.xml file.
If you run your application in server MZSR015, you can verify that your SQL is static by
activating a trace in the server.
You can find another pureQuery optimization example at the following website:
http://www.ibm.com/developerworks/websphere/techjournal/0812_wang/0812_wang.html#resources
Java stand-alone applications are used often. On z/OS, the traditional batch job often is
developed in Java.
As an example, Java development cannot be done without frequent JUnit tests, which are
Java stand-alone applications. Today, every Java class has a corresponding test class that
checks all the methods of the class. A framework that is called JUnit (http://www.junit.org)
organizes the tests. After development, the program must be built, normally after all its
components are checked out of a source code version control system, such as Concurrent
Versions System (CVS). The build process includes the creation of Java archives in which the
application is packaged. Java archives (JARs), web application archives (WARs), and
enterprise archives (EARs) must be built. Many dependencies on other Java archives must be
resolved during that process. Then, the application is deployed automatically to a
Java Platform, Enterprise Edition server.
During the development cycle, database definitions must be provided at several points. Unit
tests must check data that comes from a database or the packaging or deployment process
must include the preconfigured JDBC driver.
This section shows you some ways of dealing with different configuration options for the
usage of IBM DB2 Driver for JDBC and SQLJ for stand-alone applications.
For example, the currentSchema property is often defined as a JDBC driver property outside
of the Java program. This way, the Java class can be used for multiple database schema
without having to change the code. This situation also applies to the
defaultIsolationLevel property.
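A minimal sketch of this approach passes currentSchema as a driver property in a
Properties object instead of hardcoding it in the Java class; the connection values and the
schema name are illustrative:

java.util.Properties props = new java.util.Properties();
props.setProperty("user", "db2r1");
props.setProperty("password", "password");
props.setProperty("currentSchema", "DSN81010");   // kept outside the application logic
java.sql.Connection con = java.sql.DriverManager.getConnection(
        "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", props);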
When you use subclasses of com.ibm.db2.jcc.DB2BaseDataSource, use setXXX methods,
where XXX is the unqualified property name with the first character capitalized. For example,
to change the defaultIsolationLevel property, you call the method
ds.setDefaultIsolationLevel() before establishing the connection. In this case, the
class is no longer portable because you are using the IBM Data Server Driver for JDBC
and SQLJ interfaces directly.
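For example, a minimal sketch that uses the driver's DB2SimpleDataSource subclass and the
currentSchema property setter looks as follows; the server, port, location, schema, and
credentials are illustrative, and the same pattern applies to the other driver properties:

com.ibm.db2.jcc.DB2SimpleDataSource ds = new com.ibm.db2.jcc.DB2SimpleDataSource();
ds.setDriverType(4);
ds.setServerName("d0zg.itso.ibm.com");
ds.setPortNumber(39000);
ds.setDatabaseName("DB0Z");
ds.setCurrentSchema("DSN81010");   // setXXX method for the currentSchema property
java.sql.Connection con = ds.getConnection("db2r1", "password");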
You can find a full list of the properties in DB2 10 for z/OS Application Programming Guide
and Reference for Java, SC19-2970.
The isolation level constant in the java.sql.Connection class is an integer. For the properties
dictionary, it must be converted to a string.
The IBM Data Server Driver for JDBC and SQLJ supports a number of isolation levels, which
correspond to database server isolation levels. Table 6-1 shows the equivalency of standard
JDBC and DB2 isolation levels.
As of WebSphere Application Server V8.5, the default isolation level is read stability. For a
stand-alone JPA, the default is cursor stability.
In the connection URL string, all text after the last “:” is treated as optional JDBC driver
properties. If you provide JDBC driver properties in the connection string, each property,
including the last one, must be terminated with “;”; if the final “;” is missing, the connection
does not work.
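A sketch of such a URL follows; the host, port, location name, and property values are
illustrative:

// every driver property, including the last one, ends with a semicolon
String url = "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z"
        + ":currentSchema=DSN81010;retrieveMessagesFromServerOnGetMessage=true;";
java.sql.Connection con = java.sql.DriverManager.getConnection(url, "db2r1", "password");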
For our Java stand-alone example, we use the Apache OpenJPA implementation
(http://openjpa.apache.org) because the IBM JPA implementation in WebSphere
Application Server is based on OpenJPA. In Example 6-10, you see a persistence.xml file for
use in a Java SE environment, as indicated by transaction-type="RESOURCE_LOCAL". In
contrast, in a persistence.xml file for use in WebSphere Application Server, it is
transaction-type="JTA". In WebSphere Application Server, almost no property is defined,
but in Java SE, you must specify connection parameters.
<property name="openjpa.Log" value="DefaultLevel=ERROR, SQL=TRACE" />
<property name="openjpa.DataCache" value="false" />
<property name="openjpa.QueryCache" value="false" />
<property name="openjpa.jdbc.DBDictionary" value="db2(batchLimit=100)" />
<property name="openjpa.jdbc.QuerySQLCache" value="false" />
<property name="openjpa.ConnectionFactoryProperties"
value="PrettyPrint=true, PrettyPrintLineLength=72"/>
</properties>
</persistence-unit>
</persistence>
The default isolation level here is part of the connection URL. During our tests, the
openjpaConnectionProperty definition had no effect.
Here are the connection parameters with a short description of each one:
javax.persistence.jdbc.driver: Fully qualified name of the driver class
javax.persistence.jdbc.url: Driver-specific connection URL
javax.persistence.jdbc.user: User name that is used by the connection
javax.persistence.jdbc.password: Password that is used for the connection
But when you plan a batch process with millions of database updates, there are things to
consider. OLTP is triggered by a user with a direct response. To initiate OLTP, users typically
complete an entry form or perform other actions through a user interface application
component. The user interface component then initiates the associated online transaction
with the business logic in the background. When the transaction is complete, the same user
interface or other user interface component presents the result of the transaction to the user.
The response can be data or can be a message regarding the success or failure of the
processing of the input data. The transaction has high priority in the system and normally gets
system resources at once. Data is committed after every transaction.
A checkpoint is one of the key features that distinguishes bulk jobs from OLTP applications, in
that data is committed in chunks, along with other required housekeeping to maintain
recovery information for restarting jobs. An extreme example is doing a checkpoint after every
record, which equates to how OLTP applications typically work. At the other extreme is doing
a checkpoint at the end of the job step, which might not be feasible in most cases because
recoveries can be expensive and too many locks can be held in the database for too much
time. A practical checkpoint interval lies somewhere between these two extremes. It can
vary depending on a number of factors, such as whether jobs are run concurrently with OLTP
applications, how long locks can be held, the amount of hardware resources available, the
SLAs to be met, and the time that is available to meet deadlines. Depending on the
technology that is used, there are static ways of setting these checkpoint intervals, but ideally
checkpoint intervals can be set dynamically as well.
The application logic should take the following items into consideration:
Database commits should, if possible, not occur after a single update but only after a
group of updates.
Plan checkpoints at which an application restart can occur.
If transactions must be made in an OLTP server from a batch program, use WLM service
classes that prevent the normal online transactions from being constrained.
Consider using JDBC batch statements to group updates (see the sketch after this list).
Consider using a WebSphere embeddable EJB container for your batch. It is especially
useful if you can then avoid connecting to the WebSphere Application Server that is used
for online work. The batch can be assigned to a special WLM service class. All database
services, such as persistence with JPA, transactions with EJBs, and bean validation,
are available.
Consider using a WebSphere Extended Deployment Compute Grid. You can process
business transactions cost-effectively by sharing resources and development skills
between batch and online transactions (OLTP).
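The following sketch combines JDBC batching with chunked commits as described above.
The connection values are illustrative, and loadWorkItems() is a hypothetical placeholder for
the real input source of the batch job:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Collections;
import java.util.List;

public class PhoneBatchUpdate {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", "db2r1", "password");
        con.setAutoCommit(false);
        PreparedStatement ps = con.prepareStatement(
                "UPDATE EMPLOYEE SET PHONENO=? WHERE EMPNO=?");
        int inChunk = 0;
        for (String[] item : loadWorkItems()) {     // hypothetical input source
            ps.setString(1, item[0]);               // new phone number
            ps.setString(2, item[1]);               // employee number
            ps.addBatch();
            if (++inChunk == 100) {                 // checkpoint: commit a chunk of updates
                ps.executeBatch();
                con.commit();                       // also save restart information here
                inChunk = 0;
            }
        }
        ps.executeBatch();                          // flush the last, partially filled chunk
        con.commit();
        ps.close();
        con.close();
    }

    private static List<String[]> loadWorkItems() {
        return Collections.emptyList();             // placeholder for the real input
    }
}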
6.5.3 Portability
When you search for a sample JDBC program, you find many of them that start with the
following string:
// Load the driver
Class.forName("com.ibm.db2.jcc.DB2Driver");
This string couples the Java class unnecessarily to a specific implementation and prevents
portability. As of JDBC 4, you do not need to load the driver if the driver
implementation classes are in your class path; for DB2, they are in db2jcc4.jar. The
java.sql.DriverManager methods find the implementation classes by using the service
locator mechanism. If the connection URL starts with jdbc:db2, the IBM Data Server Driver
for JDBC and SQLJ is found.
JDBC 4.0 Drivers must contain the META-INF/services/java.sql.Driver file. This file points
to the correct implementation class; for DB2, it is com.ibm.db2.jcc.DB2Driver.
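For example, a portable connection can be made without any driver loading code; the
connection values are illustrative:

// no Class.forName() call is needed: the driver is located through the
// META-INF/services/java.sql.Driver entry in db2jcc4.jar
java.sql.Connection con = java.sql.DriverManager.getConnection(
        "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", "db2r1", "password");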
Later in the book, you see a more complex application that runs inside a Java Platform,
Enterprise Edition container (see “A short Java Platform, Enterprise Edition example” on
page 346). There you can find more background for programming with JPA.
To run the example yourself, you need a local DB2 or a DB2 on z/OS system with an installed
SAMPLE database, the IBM DB2 Driver for JDBC and SQLJ (db2jcc4.jar), the license JAR
for the specific platform, the OpenJPA implementation openjpa-all-2.2.0.jar, and the
logging framework slf4j-simple-1.6.6.jar. For more information about obtaining these
items, see Appendix I, “Additional material” on page 587. Some of the JAR files can be found
in a WebSphere Application Server installation. You can get one, for example, if you augment
IBM Data Studio with the WebSphere Application Server test environment that is described in
Appendix C, “Setting up a WebSphere Application Server test environment on IBM Data
Studio” on page 523.
Use the valid connection parameters for your system. The data source connection inside
IBM Data Studio is needed so that you can use the Data Studio tools for the generation of
Java JPA entities. If you defined the Java code by typing the class definitions, this step is
not needed.
2. Check whether the sample DB is present by scrolling through the hierarchy that opens
after you establish the connection, as shown in Figure 6-6. You need the DB to create
entities and for the test runs.
Figure 6-6 Check whether you can connect to the sample database
3. In the Java Perspective in the Package Explorer window, create a JPA project named
jpa_db2, as shown in Figure 6-7. You can use the default location, and do not need to
select a target run time. Check whether the configuration shows Minimal JPA 2.0
configuration. The project does not need to be added to an EAR.
In the JPA Perspective window, the Project Explorer provides a special view of the
persistence.xml file that is in the META-INF directory. There are default contents that are
already generated for the still empty persistence-unit that is named after your project
name. In the Java Perspective window, it is shown only as a normal file in META-INF.
Example 6-11 shows the generated JPA definition file.
We are now going to generate a Java JPA entity from an existing database table. In JPA
terms, this is what is known as a bottom-up approach. Complete the following steps:
1. Right-click the project name and select JPA Tools → Generate Entities from Tables, as
shown in Figure 6-9.
2. Select the correct database connection. It is the one that was created in step 1 on page 314.
Next, you must select the schema under which the sample database is defined. In this
example, it is DSN81010. The table names then display.
3. Leave the settings in the Table Associations window at the defaults. If there are
relationships among other tables, you can define their associations here. Because there is
only one table in this sample, you do not need to specify anything, as shown in
Figure 6-11.
4. In the Customize Default Entity Generation window, set Key generator to auto. This
inserts the annotation @GeneratedValue(strategy=GenerationType.AUTO) into your
generated Java class for the key field deptno. Specify com.ibm.itso.entities in the
Packages field, as shown in Figure 6-12.
You do not have to specify a class name because the table name is used as a class name
by default. The default behavior can be changed afterward by using special
Java annotations.
The Dept.java file still has syntax errors because the class path is missing some
important libraries. We are now going to fix this.
a. Switch to the Java perspective, right-click the project name, and select Build Path →
Add External Archives. Add the following archives:
• slf4j-simple-1.6.6.jar
• openjpa-all-2.2.0.jar
• db2jcc4.jar
• db2jcc_license_cu.jar (or db2jcc_license_cisuz.jar if you test with DB2 on z/OS)
Even after the class path contains the correct libraries, the project cannot be built because
one error remains:
Class "com.ibm.itso.entities.Dept" is included in a persistence unit, but is not
mapped.
This seems to be an Eclipse-related error and it can be fixed easily.
b. Click Project → Clean and clean the whole workspace or your project.
c. Click Project and verify that Build Automatically is selected so that the project is
compiled and rebuilt after the cleaning.
The generated Java source for the Dept entity is shown in Example 6-13. The names for
the class and the fields are all taken from the table and column names of the database.
import java.io.Serializable;
import javax.persistence.*;
/**
* The persistent class for the DEPT database table.
*
*/
@Entity
public class Dept implements Serializable {
private static final long serialVersionUID = 1L;
@Id
@GeneratedValue(strategy=GenerationType.AUTO)
private String deptno;
public Dept() {
}
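// the remaining persistent fields and their getter and setter methods
// are generated in the same way and are not shown here
}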
Now you are ready to create the test class. Complete the following steps:
1. Right-click the project and select New Class.
2. For the package name, specify com.ibm.itso.jpa.tests, and for the class name, AllTests.
3. Replace the contents of the Java source for AllTests.java with the contents
of Example 6-14.
import javax.persistence.TypedQuery;
import org.junit.Before;
import org.junit.Test;
import com.ibm.itso.entities.Dept;
@Before
public void initEmfAndEm() {
emf = Persistence.createEntityManagerFactory("jpa_db2");
em = emf.createEntityManager();
}
@Test
public void getDeptResultListSize() {
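// builds a query over all Dept entities and asserts that the result list
// contains the 14 rows of the DEPT table (body not shown in this excerpt)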
@Test
public void getListOfDepartements() {
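// prints the DEPTNAME of each returned Dept object; there is no assert,
// the method only shows that the objects are built from the database
// (body not shown in this excerpt)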
4. Replace the contents for the META-INF/persistence.xml file with the contents
of Example 6-15.
<provider>org.apache.openjpa.persistence.PersistenceProviderImpl</provider>
<class>com.ibm.itso.entities.Dept</class>
<properties>
<property name="openjpa.RuntimeUnenhancedClasses" value="unsupported"
/>
<property name="openjpa.ConnectionDriverName"
value="com.ibm.db2.jcc.DB2Driver" />
This action adds connection-specific information properties to the file. You must use your
own names.
Figure 6-14 Specify the JPA enhancement javaagent for the unit test
4. Right-click AllTests.java in the Package Explorer again. Click Run As → JUnit Test.
The JUnit run time now inspects the AllTests class for methods that are annotated with @Test
and runs them. Because the DEPT table has 14 rows, the assertEquals(results.size(),
14) statement in the getDeptResultListSize() method succeeds. The second test in the
getListOfDepartements() method is not a real test (it has no assert). It prints only the
DEPTNAME column of the result set just to show that the objects are created from
the database.
In addition, you should see a list of Department Names in the Console window, as shown in
Example 6-16.
Java programs that have this information hardcoded into their classes can run in managed
environments because any Java class can use the full capability of the JVM and bypass the
server provided functions. This is not a preferred practice, though.
In a managed environment such as WebSphere Application Server, resources such as data
sources are predefined in the server environment. They are assigned with a JNDI Name. The
database connection in an application program is done by first looking up this JNDI name in
the server. The server then gives back a Datasource object that is used by the application or
the persistency framework to make the connection. This name (a string) must be coded in the
Java program and should be a logical name that is used only inside Java. It should not directly
use the JNDI name that is defined in a specific server for the data source, although it works.
Instead, it should be a reference to this name that must be mapped at deployment time. This
act of association is called binding the resource reference to the
data source.
The real data source to be used by this application is declared only at deployment time or, as
a special case, in an embedded configuration file. This file, for which you can see an example
in Example 6-18, contains the required binding information. Unless it is embedded in the
application package, this file is normally generated at deployment time. Its name is
ibm-web-bnd.xml or ibm-ejb-jar-bnd.xml and contains the binding-name attribute. It comes
from an administrator who defined the server resources.
As of Java EE 5, resources can be injected into your program by using Java annotations. The
annotation that is used is javax.annotation.Resource. The process of binding references to
data sources remains basically the same. Instead of scanning the web.xml file during
deployment in search of unresolved references, the server examines the annotations. The
reference does not need to be declared in web.xml any more.
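A minimal sketch of such an injected reference in a servlet or EJB class might look as
follows; the class name and the logical reference name jdbc/AppDS are illustrative:

import javax.annotation.Resource;
import javax.sql.DataSource;

public class EmployeeService {
    // logical resource reference name; ibm-web-bnd.xml or ibm-ejb-jar-bnd.xml maps
    // it to the JNDI name of the real data source at deployment time
    @Resource(name = "jdbc/AppDS")
    private DataSource dataSource;

    public void doWork() throws java.sql.SQLException {
        java.sql.Connection con = dataSource.getConnection();
        try {
            // use the connection
        } finally {
            con.close();
        }
    }
}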
Although this is a well-known feature, there are some implications to using it with WebSphere
Application Server on z/OS. While in normal operation the respective servant connects to the
database itself, connection tests are sometimes done by other WebSphere Application Server
address spaces, depending on the scope that is defined for the data source. The correlation
of data source and test connection locality is shown in Table 6-2.
Table 6-2 Correlation of data source scope with the test connection JVM
Data source scope JVM where the test connection operation occurs
The application server issues an exception for a test connection operation that is run at the node level.
Therefore, when you create these data sources at the node scope or cluster scope, you might
want to temporarily create the same configurations at a server scope for testing purposes.
Run the test connection operation at the server level to determine whether the data source
settings are valid for your overall configuration.
If JDBC packages are bound with REOPT(ALWAYS), statements cannot be saved in the cache. If
JDBC packages are bound with REOPT(ONCE) or REOPT(AUTO), statements can be saved in the
cache.
Statements that are sent to an accelerator server cannot be saved in the cache.
The following types of SQL statement text with SQL comments can be saved in the dynamic
statement cache:
SQL statement text with SQL bracketed comments within the text.
SQL statement text that begins with SQL bracketed comments that are unnested. No
single SQL bracketed comment that begins the statement can be greater than 258 bytes.
DB2 10 introduces a way for users to get higher cache reuse from dynamic statements that
reference literal constants. You can specify the PREPARE ATTRIBUTES clause CONCENTRATE
STATEMENTS WITH LITERALS, or set the JDBC driver connection property
statementConcentrator=YES to enable it.
If DB2 prepares an SQL statement and CONCENTRATE STATEMENTS WITH LITERALS is enabled, DB2 replaces
certain literal constants in the SQL statement text with the ampersand character ('&'), and
inserts the modified statement into the dynamic statement cache.
When DB2 runs subsequent dynamic SQL statements, if the first search of the cache does
not find an exact match by using the original statement text, DB2 substitutes the ampersand
character ('&') for literal constants in the SQL statement text and searches the cache again to
find a matching cached statement that also has '&' substituted for the literal constants. If that
statement text comparison is successful, DB2 determines whether the literal reusability
criteria between the two statements allows for the new statement to share the
cached statement.
The reusability criteria includes, but is not limited to, the immediate usage context, the literal
data type, and the data type size of both the new literal instance and the cached literal
instance. If DB2 determines that the statement with the new literal instance cannot share the
cached statement because of incompatible literal reusability criteria, DB2 inserts, into the
cache, a new statement that has both '&' substitution and a different set of literal reusability
criteria. This new statement is different from the cached statement, even though both
statements have the same statement text with ampersand characters ('&'). Now, both
statements are in the cache, but each has different literal reusability criteria that makes these
two cached statements unique.
Here is an example:
Assume that DB2 prepares the following SQL where column X is data type decimal:
SELECT X, Y, Z FROM TABLE1 WHERE X < 123 (no cache match)
After the literals are replaced with '&', the cached statement is as follows:
SELECT X, Y, Z FROM TABLE1 WHERE X < & (+ lit 123 reuse info)
Assume that the following new instance of that statement is now being prepared:
SELECT X, Y, Z FROM TABLE1 WHERE X < 1E2
According to the literal reusability criteria, the literal value 1E2 does not match the literal data
type reusability of the cached statement. Therefore, DB2 does a full cache prepare for this
SELECT statement with literal 1E2 and inserts another instance of this '&' SELECT statement into
the cache as follows:
SELECT X, Y, Z FROM TABLE1 WHERE X < & (+ lit 1E2 reuse info)
Now, given the two '&' SELECT statements that are cached, attempt to prepare the same
SELECT statement again but with a different literal value instance from the first two cases:
SELECT X, Y, Z FROM TABLE1 WHERE X < 9
DB2 fails to find an exact match for the new SELECT statement with literal '9', replaces literal '9'
in the SELECT statement with '&', and does a second search. Both cached statements are
reusable with literal value '9'; therefore, simply by order of statement insertion into the cache,
the cached statement for literal 123 is the first cached statement found that satisfies the
literal reusability criteria for the new literal value '9'.
6.8 Locking
Here are the factors that influence locking:
Isolation level
Lock avoidance
Optimistic locking
The isolation level settings are listed below in order from most to least restrictive. In
combination with the executed SQL, these modes determine the lock mode and duration of
the locks that are acquired for the transaction.
TRANSACTION_SERIALIZABLE (Repeatable Read) acquires locks on all rows read by an SQL
statement whether they qualify for the result set or not. The locks are held until the
transaction is ended through a commit or rollback. Other transactions cannot insert,
delete, or update rows that are accessed by an SQL statement executing with RR.
TRANSACTION_REPEATABLE_READ (Read Stability) acquires locks on all stage 1 qualifying
rows and maintains those locks until the application issues a commit or rollback. With RS,
other transactions cannot update or delete rows that qualified (during stage 1 processing)
for the statement because locks are held. If the application attempts to re-reference the
same data later in the transaction, the results will not have been updated or deleted.
However, other applications can insert more rows, which is known as a phantom read
because subsequent selects against the same data within the same transaction might
result in extra rows being returned.
TRANSACTION_READ_COMMITTED (Cursor Stability) ensures that all data that is returned is
committed. When SELECTing from the table, locks are not held for rows or pages for
which a cursor is not positioned. DB2 tries to avoid taking locks on non-qualifying rows. If
an application attempts to re-reference the same data later in the transaction, there is no
guarantee that data has not been updated, inserted, or deleted.
TRANSACTION_READ_UNCOMMITTED (Uncommitted Read) means that locks are not acquired
for queries (SELECT), and the application may return data from another transaction that has
not yet been committed or rolled back.
If a cursor is defined with the clauses FOR FETCH ONLY or FOR READ ONLY, it is a read-only
cursor. If a cursor is defined with the clause FOR UPDATE OF, it is an updatable cursor. A cursor
is considered ambiguous if DB2 cannot tell whether it is used for update or read-only
purposes. For more information about these three types of cursors, see DB2 9 for z/OS:
Resource Serialization and Concurrency Control, SG24-4725.
In a JDBC application, the declaration and processing of a cursor occurs with a different
syntax, but the concept is basically the same. Instead of processing a cursor, a
PreparedStatement is created and a ResultSet is used to process the results. Example 6-19
shows an updatable cursor, which is a cursor that is not eligible for lock avoidance.
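As a sketch of such an updatable result set in JDBC (the department value and the new
phone number are illustrative), the statement and result set are created as follows:

// con is an existing java.sql.Connection
PreparedStatement ps = con.prepareStatement(
        "SELECT EMPNO, PHONENO FROM EMPLOYEE WHERE WORKDEPT = ? FOR UPDATE OF PHONENO",
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_UPDATABLE);   // updatable: not eligible for lock avoidance
ps.setString(1, "A00");
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    rs.updateString("PHONENO", "5678");   // change the current row through the cursor
    rs.updateRow();
}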
If you use a read-only result set with CURRENTDATA(NO), the stability of the qualifying rows is
not protected by the lock. When the row qualifies under the protection of a data page latch,
the row is passed to the application, and the latch is released. Therefore, the content of the
qualified row might have changed immediately after it was passed to the application. To
continue processing further rows in a page, DB2 must latch the page again.
With CURRENTDATA(YES), lock avoidance is not considered for ISOLATION(CS) applications. For
ambiguous cursors, I/O and CP parallelism are not allowed and block fetching does not
apply; for read-only cursors, I/O and CP parallelism are allowed and block fetching applies.
If your business logic allows, use the CONCUR_READ_ONLY result set (this is the JDBC equivalent
for the DB2 'FOR READ ONLY' clause) if there is no update that is intended, along with
ISOLATION(CS) and CURRENTDATA(NO).
To ensure data integrity and reduce locking, you can use optimistic concurrency control.
When an application uses optimistic concurrency control, locks are obtained immediately
before the read operation and released immediately after the read. The update locks are
obtained immediately before an update operation and held until the end of the process. It
minimizes the time for which a resource is unavailable for use by other transactions.
Optimistic concurrency control uses the RID and a row change token to test whether data was
changed by another transaction since the last read operation, so it can ensure data integrity
while limiting the time that locks are held.
In general, optimistic concurrency control is appropriate for application processes that do not
have concurrent updates on the same resource, such as information only (read-only) web
applications, single user applications, or pseudo-conversational OLTP applications, where the
data is read from the tables and presented to the users before performing the updates.
Optimistic concurrency control is also appropriate for applications accessing tables that are
defined with page level locking or higher level lock size when the concurrently running
processes are accessing different sets of data.
After you establish a row change time stamp column, DB2 maintains the contents of this
column. When you want to use this change time stamp as a condition when making an
update, you can specify an appropriate predicate for this column in a WHERE clause, as shown
in Example 6-21.
Example 6-21 Implement optimistic concurrency control by using ROW CHANGE TIMESTAMP
ALTER TABLE DSN81010.ACT ADD COLUMN RCT
NOT NULL GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;
--REORG TABLESPACE
SELECT ACTDESC, ROW CHANGE TIMESTAMP FOR ACT INTO :desc, :rct FROM DSN81010.ACT
WHERE ACTKWD = 'DOC';
-- Other processing
UPDATE DSN81010.ACT SET ACTDESC = 'MAKE DOCUMENT'
WHERE ROW CHANGE TIMESTAMP FOR ACT = :rct AND ACTKWD = 'DOC';
-- Other processing
COMMIT;
Note: You can use ROW CHANGE TOKEN instead of ROW CHANGE TIMESTAMP in SQL. It takes the
last 8 bytes of the DB2 time stamp and returns it as BIGINT.
Chapter 7. Java Platform, Enterprise Edition with WebSphere Application Server and DB2
This chapter gives a database administrator (DBA) enough background information so that
the DBA can assess what is behind the newer Java enterprise concepts. For the Java
programmer, this chapter provides samples that could serve as a starting point to working
with DB2 on z/OS from inside and outside of managed application server environments.
The first generation of Java applications commonly uses driver-specific JDBC statements for
dynamic SQL or, following a more traditional development path, static SQLJ in a similar way
to how you include SQL into COBOL programs. Because the driver is associated with a
particular database and has database-specific statements, your code is tied to that database.
The EJB 2.0 specification was a trial run to hide all the platform-specific details and delegate
the arduous task of mapping database information to Java objects to a standardized
application server. Container-managed persistence (CMP) EJBs help the programmer by
supporting automatic transaction handling and security services.
EJBs were not well received by the Java community for numerous reasons, mostly to do with
the shortcomings of the specifications. Unit tests of EJB entities are nearly impossible
because EJBs need an enterprise container to run in. The mapping of the state of Java
objects to a relational representation is insufficient in the EJB model. It misses important
aspects of object-oriented programming, such as inheritance.
Other approaches were more successful than the EJB 2.0 persistency specification, which is
part of Java 2 Enterprise Edition 1.4. Hibernate, iBATIS, and EclipseLink are examples of
successful persistency frameworks that often are used in enterprise applications instead of
the EJBs that are offered by standard Java Platform, Enterprise Edition application servers.
Things have changed since the advent of Java Platform, Enterprise Edition 5, though. This
Enterprise Java specification now includes the Java Persistence API (JPA). JPA 1.0 is part of
Java Platform, Enterprise Edition 5, and JPA 2.0 is part of Java Platform, Enterprise Edition 6.
The concepts that made Hibernate and the other persistency frameworks successful are now
included in the Java enterprise standard. EJB Container-managed persistence (CMP) beans
are replaced by JPA entity beans. EJBs now provide transaction support only; they do not
provide persistency any more.
JPA was defined within the Java EE specification for Enterprise JavaBeans (EJB) 3.0. With
JPA 2.0, the JPA specification is defined separately in Java Specification Request (JSR) 317:
Java Persistence API, Version 2.0.
WebSphere Application Server V8.5 conforms to Java Platform, Enterprise Edition 6 and
supports JPA 2.0. The JPA implementation inside WebSphere Application Server is based on
the Apache OpenJPA project. Although you can use this implementation directly in
WebSphere Application Server, the WebSphere Application Server default is to use the JPA
for WebSphere Application Server persistence provider. There are some enhancements in
the WebSphere Application Server version of the JPA provider over the original Apache
version. The support of pureQuery client optimization is one example.
7.2 Implementation version of JPA inside WebSphere Application Server
You can see the version of both implementations by running wsjpaversion, as shown in
Example 7-1.
OpenJPA 2.2.1-SNAPSHOT
Version ID: openjpa-2.2.1-SNAPSHOT-r422266:1325904
Apache subversion revision: 422266:1325904
java.version: 1.6.0
java.vendor: IBM Corporation
java.class.path:
C:\Program Files\ibm\WebSphere\AppServer\dev\JavaEE\j2ee.jar
C:\Program Files\ibm\WebSphere\AppServer\plugins\com.ibm.ws.jpa.jar
C:\Program
Files\ibm\WebSphere\AppServer\plugins\com.ibm.ws.prereq.commons-collections.jar
C:\Program Files\IBM\WebSphere Studio Workload
Simulator\jsoap\iwlJSoap.jar
Strictly speaking, JPA is everything that you need for persistence for new projects. JPA is now
considered the standard approach for Object to Relational Mapping (ORM) and can replace
all the preceding ORM frameworks.
In addition, there is the relational model, which is based on tables whose correlation to each
other is mathematically verified and optimized in a normalization process. This task requires
special skills and a deep knowledge of the database and its organization to accomplish
effectively. Table design and definition are normally not tasks that the Java developer wants to
deal with because they do not directly solve the application problem. Both models
must be coordinated with each other and mapped.
To leave the Java programmer free to work with his object model, the task of mapping his
model to the relational model is delegated to the JPA infrastructure. Ideally, the Java
programmer does not have to know which database is used and how the data in this
database is dealt with.
JPA implementations allow simple Java classes or Plain Old Java Objects (POJOs) to be
persisted. A POJO is considered a simple Java class because there is nothing that it depends
on (not even on the code that makes nearly automatic persistence possible). To add the
persistence behavior, Java annotations are added to the Java class. Java annotations do not
change the program logic of the class but only give information to runtime environments that
need this information. Only the JPA run time interprets the javax.persistence.* annotations;
run times that do not need them ignore them. Thus, the Java class remains a POJO.
Conversely, the EJB 2 CMP specification requires classes to implement interfaces or methods
that make the class dependent on other classes or a server run time.
Because JPA has no dependencies on other containers, run times, or servers, it can be used
as a stand-alone POJO persistence layer or it can be integrated into any Java EE compliant
container and many other lightweight frameworks.
This ideal cannot always be reached in more complex situations. In practice, the Java
programmer and the DBA must communicate with each other and adjust their respective
models. In many cases, most parts of the data exist, even for new applications or applications
that are supposed to be migrated to JPA. Most companies have their data organized into
databases. Here, the Java programmer must follow the structure of existing tables because
they are used by other programs as well and cannot easily be changed for the new one. JPA
entities must be designed according to the relational data. This is called a bottom-up ORM
approach. To accomplish this task, JPA gives you a rich set of annotations (or the XML
equivalent) that allows you to customize each part of the mapping.
The JPA solution for WebSphere Application Server provides several tools that help with
developing JPA applications. Combining these tools with IBM Rational Application Developer
or IBM Data Studio provides a solid development environment for either Java EE or Java SE
applications. IBM Rational Application Developer or IBM Data Studio include GUI tools to
insert annotations, a customized persistence.xml file editor, a database explorer, and
other features.
Relationship mapping rules
– Collections of Java types and their representation in tables (through foreign key or
join tables)
– Unidirectional and bidirectional mapping
– One-to-one, one-to-many, many-to-one, many-to-many mapping
Inheritance mapping
– Single-table-per-class hierarchy strategy
– Joined-subclass strategy
– Table-per-concrete-class strategy
You can see from the volume of options that you need considerable experience and a good
knowledge of the theory and background of the mapping patterns to successfully work with ORM.
Most persistence providers, including OpenJPA, allow you to generate the database
automatically from the entities. Automatic table creation by JPA does not need a great deal of
configuration, though. JPA follows a configuration-by-exception mapping strategy. Nearly
everything is taken from the existing definitions in the Java class.
However, sticking to the defaults and relying only on the automatic generation of the database
tables might lead to problems in more complex situations. The generated relational model
should be reviewed because a normalized schema with too many tables might be the result.
Bad performance could be a consequence, and maintenance might become
more difficult.
The Entity Manager can deal with four types of commands:
Dynamic query
A string with JPQL statements is given as an argument to the Entity Manager for
execution. The string can be a simple select or a more complex query by using joins and
other selection criteria. It also can be an update or delete statement that is given to the
createQuery method. Example 7-2 is an example in which an array of Employee objects is
returned. With JPQL, the query selects Java objects, not table rows.
Static query
This query must not be confused with static SQL. It still translates to dynamic SQL in the
JDBC driver. Static here means that the query is already coded at build time and
inspected by the JPA run time before the program actually uses it. It can have variable
parameters. Query templates can be statically declared by using the NamedQuery
annotation, as shown in Example 7-3. They are coded in the same Java source as the
class they deal with. A set of NamedQuery templates forms a sort of catalog of statements that
are prepared for later use by other parts of the program.
Native query
Similar to the JDBC method prepareStatement(), a SQL string is given as a parameter
with optional arguments. In addition, the second parameter says that the result list is
expected to be of the type Magazine, as shown in Example 7-4.
You can use native queries and stored procedure calls in cases where the JPA defaults are
not enough and a generated table model does not fit your demands. The usage of JPA native
queries can help you migrate raw JDBC applications to JPA or help you avoid raw JDBC in
cases where the JPA defaults lead to problems. Generated SQL sometimes cannot use the
full potential that the database normally provides. With native queries, you are able to use the
database power inside JPA.
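As a minimal sketch, the three query styles look as follows when issued against an
EntityManager em; the JPQL and SQL strings are illustrative, the Employee entity and the
DeleteEmpAThiele named query appear later in this chapter, and Magazine is the example
type mentioned for the native query:

// dynamic query: the JPQL string is passed at run time
java.util.List<Employee> emps = em.createQuery(
        "SELECT e FROM Employee e WHERE e.workdept = :dept", Employee.class)
        .setParameter("dept", "A00")
        .getResultList();

// static (named) query: declared with the NamedQuery annotation at build time
em.createNamedQuery("DeleteEmpAThiele").executeUpdate();

// native query: plain SQL, with the expected result type as the second argument
java.util.List<?> mags = em.createNativeQuery(
        "SELECT * FROM MAGAZINE", Magazine.class).getResultList();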
OpenJPA Entity Manager handles all the communication that is needed with the JDBC driver,
for example, when the JDBC driver is requested to provide a connection to the database.
OpenJPA obtains JDBC connections on an as-needed basis and releases them as fast as
possible. A connection is made for each query. The connection is closed and given back to
the pool. The connection is open only during a data store transaction or if a JDBC ResultSet
is still active.
All this is transparent to the programmer and the Java program and normally this is the best
behavior. In rare cases, you can configure OpenJPA's usage of JDBC connections through
the openjpa.ConnectionRetainMode configuration property.
The developer runs unit tests often, for example, once a minute or after even minor changes
of a class. This ensures that the application remains in a consistent state.
The tests run inside a Java stand-alone test-driven environment. They must provide every
service that the test depends on. Often, these services are configured as part of the test
environment itself. For example, many databases and their JDBC driver are written in Java
and can be included in the application class path of the test run. Similar to the concept of
the embeddable EJB container, these databases are embeddable databases. They start in
the same JVM with the application and define their databases and tables at run time. Often,
they are defined as in-memory databases. They and their contents vanish after the test run.
One advantage is that every programmer has his own database; no coordination with other
programmers is necessary. The database is reset to a known state for each test. The Apache
Derby embeddable JDBC driver has such a capability. DB2 does not have an
embeddable database.
The downside of dynamic databases is that you must define the database infrastructure and
the test data for every run on your own. There are tools that help in that situation. JPA can
generate the required tables automatically based on the definition of the Java classes. In
addition, DbUnit (http://www.dbunit.org) is a JUnit extension that puts your database into a
known state between test runs. It can be used for in-memory databases and for normal data
stores. Provided that the Java programmer has sufficient access rights, the Java programmer
can use DbUnit to reset the DB2 test database.
Problems with test runs arise when the Java class under test requires special services that
are only provided in a full-blown Java Platform, Enterprise Edition server. Examples are
security, transaction, or persistency services, which normally cannot be included in the tests.
As a circumvention, these services are delegated to mock-up objects that typically return
hardcoded values from method invocations.
The EJB 3.1 specification now includes a Java SE-friendly embeddable container that is ideally
suited for agile Java development. As of WebSphere Application Server V8.0, this
embeddable container is available. It does have some limitations, but can speed up
development in a Java Platform, Enterprise Edition environment.
The WebSphere Application Server embeddable EJB container is a container for enterprise
beans that does not require a Java Platform, Enterprise Edition server to run. The EJB
programming model and the EJB container services are now available for Java Platform,
Standard Edition (Java SE) servers.
WebSphere Application Server 8.5 adds the following features to that EJB Lite subset:
– Java Database Connectivity (JDBC) data source configuration, usage, and
dependency injection.
– Bean validation: To use bean validation with the embeddable EJB container, the
javax.validation classes must exist in the class path. This can be achieved by including
com.ibm.ws.jpa.thinclient_8.0.0.jar in the class path.
Here are the limitations when you use the embeddable container:
Inbound RMI/IIOP calls are not supported, which means that all EJB clients must exist
within the same Java virtual machine (JVM) as the embeddable container.
Message driven beans (MDB) are not supported.
The embeddable container cannot be clustered for high availability.
In this file, you define data sources, as shown in Example 7-6. The example shows two data
sources that are bound to the JNDI-namespace under the names jdbc/TxDSz and
jdbc/NoTxDSz at container startup.
Example 7-6 DB2 data source definitions for the WebSphere embeddable EJB container
# JPA Transactional data source definition
DataSource.db2_1.name=jdbc/TxDSz
DataSource.db2_1.className=com.ibm.db2.jcc.DB2XADataSource
DataSource.db2_1.driverType=4
DataSource.db2_1.databaseName=DB0Z
DataSource.db2_1.serverName=d0zg.itso.ibm.com
DataSource.db2_1.portNumber=39000
DataSource.db2_1.user=DB2R1
DataSource.db2_1.password=db2r1pw
For a JPA application, it is preferred practice to define both data sources to allow the full JPA
functionality, such as automatic entity identity generation. This is done through the
non-transactional data source.
The number of configuration parameters is limited compared to the number of configuration
options that you have with WebSphere Application Server. The following WebSphere Information
Center page contains a list of all data source definitions:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/topic/com.ibm.websphere.zseries.doc/ae/rejb_emconproperties.html
The Java equivalent of the EMPLOYEE table is the Java class Employee, as shown in
Example 7-7.
The server and the JPA run time know at class load time that this class corresponds to the
database table EMPLOYEE because it is annotated with the @Entity tag. By default, the Java
names for the class and the fields are directly taken by the JPA run time as names for use
with the database. In addition, the source file defines a @NamedQuery for later use by other
parts of the application.
import java.io.Serializable;
import javax.persistence.*;
import java.math.BigDecimal;
import java.util.Date;
@Entity
@NamedQuery(name="DeleteEmpAThiele", query="DELETE FROM Employee e
where e.lastname = 'Thiele'")
@Id
private String empno;
@Temporal( TemporalType.DATE)
private Date birthdate;
private BigDecimal bonus;
private BigDecimal comm;
private short edlevel;
private String firstnme;
@Temporal( TemporalType.DATE)
private Date hiredate;
private String job;
private String lastname;
private String midinit;
private String phoneno;
private BigDecimal salary;
private String sex;
private String workdept;
public Employee() {
}
The application shows how Java Platform, Enterprise Edition components, such as
transactional EJBs and JPA entities, can be included in agile development.
For that reason, the application is called from a JUnit test inside IBM Data Studio. Test 1 does
a SELECT on the EMPLOYEE table and checks whether all 42 Employee objects are returned
in the result list. Test 2 does an INSERT of a new Employee into the table and checks afterward
that the number of table rows has increased to 43. Test 3 deletes the added row and checks
the correct number of rows afterward. The tests that are shown in Example 7-8 do not belong
to the application, and are only for development.
import static org.junit.Assert.assertEquals;
import org.junit.After;
import org.junit.Before;
import org.junit.Ignore;
import org.junit.Test;
import ibm.itso.ejbs.EmpBean;
import java.util.List;
import javax.ejb.embeddable.EJBContainer;
import javax.naming.NamingException;
import com.ibm.itso.entities.Employee;
public class TestEmpBean {
EJBContainer ec = null;
Employee Emp1 = null;
EmpBean EmpBean = null;
@Before
public void initEmbeddableContainerAndTestData() throws NamingException {
// Start the embeddable container (configured through embeddable.properties)
ec = EJBContainer.createEJBContainer();
EmpBean = (EmpBean) ec.getContext().lookup(
"java:global/bin/EmpBean!ibm.itso.ejbs.EmpBean");
// Create some test data
Emp1 = new Employee();
Emp1.setFirstnme("Andreas");
Emp1.setLastname("Thiele");
Emp1.setMidinit("A");
Emp1.setEmpno("999999");
Emp1.setWorkdept("A00");
}
@Test
public void testNumberOfEmployeeRows() {
// All 42 rows of the EMPLOYEE table are expected in the result list
List<Employee> Emps = EmpBean.getEmployeeResultList();
assertEquals(42, Emps.size());
}
@Test
public void insertNewEmployeeBean() {
try {
EmpBean.persistNewEmployee(Emp1);
}
catch (Exception e) {
System.out.println("Exception persisting Employee:\n" +e);
}
// Number of rows has increased to 43
List<Employee> Emps = EmpBean.getEmployeeResultList();
assertEquals(43, Emps.size());
}
//@Ignore
@Test
public void deleteInsertedEmpAgain() {
try {
EmpBean.deleteInsertedEmpAgain();
}
catch (Exception e) {
System.out.println("Exception deleting Employee:\n" +e);
}
// Number of rows should be 42 again
List<Employee> Emps = EmpBean.getEmployeeResultList();
assertEquals(42, Emps.size());
}
@After
public void shutDown() {
ec.close();
}
}
The JUnit tests use a stateless session EJB, EmpBean, that is part of the application.
As you can see in Example 7-9, the EJB consists of three transactional methods for SELECT, INSERT, and DELETE of JPA entity objects that are mapped to the EMPLOYEE table. The work with the database is done by the javax.persistence.EntityManager by using the persistence unit EmpPU. No JDBC statements are used to do the work, no column names are used, and not even a table name is given. All of this is derived by the JPA container at run time from the Java class that the entity manager is asked to deal with, for example, em.persist(employee);.
The EJB uses only resource references, which must be mapped to real names inside the server.
Example 7-9 Sample session EJB for SELECT, INSERT, and DELETE of a JPA entity
package ibm.itso.ejbs;
import java.util.List;
import javax.annotation.Resource;
import javax.annotation.Resources;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.Query;
import javax.persistence.TypedQuery;
import javax.sql.DataSource;
import com.ibm.itso.entities.Employee;
@Stateless
public class EmpBean {
@PersistenceContext(unitName = "EmpPU")
private EntityManager em;
@TransactionAttribute(TransactionAttributeType.SUPPORTS)
public List<Employee> getEmployeeResultList() {
// Return all Employee entities
TypedQuery<Employee> query = em.createQuery("SELECT e FROM Employee e", Employee.class);
return query.getResultList();
}
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void persistNewEmployee(Employee employee) {
em.persist(employee);
}
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void deleteInsertedEmpAgain() {
Query delete1 = em.createNamedQuery("DeleteEmpAThiele");
delete1.executeUpdate();
}
}
The persistence unit is defined in a short persistence.xml file, as shown in Example 7-10. It specifies transaction-type="JTA", declaring that everything is handled within the server. This is different from a persistence unit with transaction-type="RESOURCE_LOCAL", where the application must supply all the database connection definitions itself. Only resource references are used, so the file remains portable.
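As a minimal sketch of such a persistence unit, the file can look like the following listing. The resource reference names jdbc/TxDSref and jdbc/NoTxDSref are assumptions based on the surrounding text, not a verbatim copy of Example 7-10.
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="EmpPU" transaction-type="JTA">
    <jta-data-source>java:comp/env/jdbc/TxDSref</jta-data-source>
    <non-jta-data-source>java:comp/env/jdbc/NoTxDSref</non-jta-data-source>
    <class>com.ibm.itso.entities.Employee</class>
  </persistence-unit>
</persistence>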
The references are resolved in the embeddable container definition file, embeddable.properties. For every EJB that uses database resources, a Bean<bean_name>.ResourceRef.BindingName.jdbc statement must be included. This definition is then assigned to a bean by the container after an EJB is found.
At start, the container looks for enterprise beans in the class path, that is, it looks for Java
classes that are annotated, for example, with the @Stateless annotation. The EJBs found are
then further examined for resource references. They are declared in the EJBs by annotations
like @Resource(name = "jdbc/TxDSref", type = DataSource.class), which are resource
references.
Resource references must be bound to names in the server's namespace at deployment time. The name in @Resource(name = "jdbc/TxDSref") resolves to java:comp/env/jdbc/TxDSref, as any resource reference would be named in Java. This reference is likewise defined in the persistence.xml file for the JPA container.
Because there is no deployment in this case, the relationship between the resource reference and the real JNDI name for the resource in the server must be defined in the embeddable container's definition file. The following example shows how to accomplish this task:
Bean.#bin#EmpBean.ResourceRef.BindingName.jdbc/TxDSref=jdbc/TxDSz
The bean named EmpBean in the /bin directory uses a resource reference named jdbc/TxDSref that resolves to the server's JNDI name jdbc/TxDSz.
The beans must be registered in the namespace as well so that they can be looked up by clients, such as the TestEmpBean.java JUnit test driver. The embeddable container does this task, like any other Java Platform, Enterprise Edition application server, in the java:global namespace. The name under which the bean can be found is the following one:
java:global/bin/EmpBean!ibm.itso.ejbs.EmpBean
The name ibm.itso.ejbs.EmpBean is the fully qualified class name of the class in the class
path and /bin/EmpBean is the location where it can be found. /bin in this case is the output
folder for compiled classes in the current directory (the project directory in Rational
Application Developer). Alternatively, this can be the name of a JAR file containing
@Stateless annotated classes (without the .jar in the name), which then is taken as the EJB
module name.
To run the unit test, the project must have the following JAR files in its class path. Some of the JAR files can be found in a WebSphere Application Server installation. You can obtain one, for example, by augmenting IBM Data Studio with the WebSphere Application Server test environment, as described in Appendix C, “Setting up a WebSphere Application Server test environment on IBM Data Studio” on page 523.
- com.ibm.ws.ejb.embeddableContainer_8.5.0.jar
- com.ibm.ws.jpa.thinclient_8.5.0.jar
- db2jcc_license_cu.jar or db2jcc_license_cisuz.jar for connections to DB2 for z/OS
- db2jcc4.jar
Run TestEmpBean.java as a JUnit test, which creates a run configuration that must be
updated afterward because you must specify a Java agent in your Java system properties to
enhance the JPA entities. For the TestEmpBean JUnit test run, click Run → Run Configurations.
An example of how to do this task for JUnit tests is shown in Figure 6-14 on page 325. For the
run with the embeddable EJB container, use the following statement:
-javaagent:"C:\Programme\ibm\WebSphere\AppServer\runtimes\com.ibm.ws.jpa.thinclient_8.5.0.jar"
Run TestEmpBean.java as a JUnit test a second time. This time you should see the green bar
for a successful test, as shown in Figure 7-1.
Figure 7-1 Insert and delete a table row with embeddable EJB container - successful test
Despite your success, you might see the following error message during the unit test:
NMSV0307E: A Java: URL name was used, but Naming was not configured to handle
Java: URL names. The likely cause is a user in error attempting to specify a Java:
URL name in a non-J2EE client or server environment. Throwing
ConfigurationException.
What is enhancement? If a Java class is annotated as a JPA entity (@Entity), then all its non-transient fields are traced by the JPA run time. Changing a field marks it as dirty, which means it must be persisted. Similar monitoring occurs with fields that are annotated with FetchType.LAZY, where a special access strategy must be prepared. The JPA run time does this work by “enhancing” the setters of the applicable fields with newly generated Java code. This can be done at build time by using the org.apache.openjpa.ant.PCEnhancerTask utility. It is more common to enhance the entity dynamically at class load time through a Java agent.
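For example, a build-time enhancement run can use the command-line counterpart of the Ant task, org.apache.openjpa.enhance.PCEnhancer. The following command is a hedged sketch: the class path entries are placeholders, and the JPA API classes and a META-INF/persistence.xml file that lists the entity classes must also be available on the class path.
java -cp openjpa-2.2.0.jar;build\classes org.apache.openjpa.enhance.PCEnhancer
When it is run without arguments, the enhancer processes the classes that are listed in the persistence unit metadata that it finds on the class path.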
The concept of Java agents was introduced in JDK5 and works by specifying a JAR file with
the agent class in the -javaagent keyword at JRE start time. The META-INF/MANIFEST.MF file
of this JAR file has the Premain-Class keyword, which specifies the agent class.
The agent is invoked before your main method. It can configure the runtime environment before your application runs. The agent can then manipulate the class loaders to add JPA code to your classes.
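The general shape of such an agent class is shown below. This is a generic illustration of the Java agent mechanism, not the actual OpenJPA or WebSphere agent implementation; the class name SampleJpaAgent is hypothetical.
import java.lang.instrument.Instrumentation;

public class SampleJpaAgent {
    /* The JVM calls this method before the application's main method when
       the agent JAR is named in -javaagent and its META-INF/MANIFEST.MF
       contains a Premain-Class entry that points to this class. */
    public static void premain(String agentArgs, Instrumentation inst) {
        // A real JPA agent registers a ClassFileTransformer here so that
        // @Entity classes are rewritten (enhanced) as they are loaded.
        System.out.println("Agent loaded with arguments: " + agentArgs);
    }
}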
Java agents for JPA enhancement are provided by both the openjpa-2.2.0.jar and
com.ibm.ws.jpa.thinclient_8.5.0.jar files, which can be found in the runtimes directory of
WebSphere Application Server.
If you run the application in WebSphere Application Server, you can obtain a small performance benefit if you enhance your entities when you build the application. The application does not attempt to enhance entities that are already enhanced. Enhance the entity classes by using the JPA enhancer tool, wsenhancer, which can be found in the bin directory of WebSphere Application Server.
On a Windows development system where all your entity classes are in the build directory,
the command to enhance all the entities on the class path looks like Example 7-11.
Summary
With the WebSphere Application Server embeddable EJB container, agile Java EE development becomes feasible.
7.2.4 Use of alternative JPA persistence providers
The default persistence provider in WebSphere Application Server is the JPA for the
WebSphere Application Server persistence provider that is implemented in the
com.ibm.websphere.persistence.PersistenceProviderImpl class. Alternatively, the Apache
OpenJPA persistence provider can be used. These two providers are built into the server and
installed automatically during the server installation.
Although it is built from the Apache OpenJPA persistence provider, the JPA for WebSphere Application Server persistence provider contains the following enhancements and differences:
- Static SQL support using the DB2 pureQuery feature.
- Access intent support.
- Enhanced tracing support.
- Version ID generation.
- WebSphere product-specific commands and scripts.
- Translated message files.
- Checking of in-memory caches for lazily loaded many-to-one or one-to-one relationships. Setting the wsjpa.BrokerImpl property to true specifies that the JPA implementation attempts to load lazy fields from memory at run time if the foreign key data for the lazy fields is available.
If no JPA provider is configured in the <provider> element of the persistence.xml file within
an Enterprise JavaBeans (EJB) module, the default JPA provider that is configured for this
server is used. The product is packaged with the JPA for WebSphere Application Server
persistence provider that is defined as the default provider. However, it is possible to override
this default and specify a different default through the administrative console, as shown in
Figure 7-2. To do so, click Application servers, select your server, and click Container Services → Default Java Persistence API settings.
7.2.5 Usage of Non-JTA data sources
Some JPA entity features require that a non-JTA data source be specified. An example of this
is automatic entity identity generation. Ensure that a non-JTA data source is configured to
match your application needs. A non-transactional data source must be defined in
WebSphere Application Server for that purpose. To accomplish this task, click Data sources,
click your data source, click WebSphere Application Server data source properties, and
select the Non-transactional datasource check box, as shown in Figure 7-3.
The application server does not enlist the connections from this data source in global or local
transactions. Non-JPA applications must explicitly call setAutoCommit(false) on the
connection if they want to start a local transaction on the connection, and they must commit or
roll back the transaction that they started.
7.2.7 Definition of the IBM DB2 Driver in WebSphere Application Server V8.5
Liberty Profile
The Liberty profile is a new dynamic profile of WebSphere Application Server V8.5 that
provisions only the features that are required by the applications. For example, if an
application requires a servlet engine, a Liberty profile can be configured to start only the
WebSphere Application Server kernel, the HTTP transport, and the web container. This
improves the server start time and results in a small footprint because it does not use the full Java Enterprise Edition stack. Furthermore, if the application needs additional features, such as database connectivity, the Liberty profile configuration can be dynamically modified to include the JDBC feature without needing a server restart.
The name of the product suggests that the server might be just another profile of the WebSphere Application Server product. This is misleading. The Liberty profile is a new offering that is different from the traditional WebSphere Application Server. For example, you do not need the Profile Management Tool (PMT) to create a new server. The code may be shared in many cases with the normal application server, but the packaging is different. In addition to the binary files, which have a footprint of less than 50 MB, you need just one XML file to configure a server.
This section cannot show all the details of that server. You can find a detailed description of
the Liberty Profile in WebSphere Application Server V8.5 Administration and Configuration
Guide, SG24-8056. Here, we focus on the definition of the IBM Data Server Driver for JDBC
and SQLJ and the way to configure a data source.
To run the sample application db2_jpa_web, you must define the WebSphere Application
Server Liberty Profile server.xml file as shown in Example 7-13.
Example 7-13 Server and data source definitions for Liberty Profile
<server description="ITSO DB2R1">
<httpEndpoint host="localhost"
httpPort="9080"
httpsPort="9443"
id="defaultHttpEndpoint"/>
<dataSource beginTranForResultSetScrollingAPIs="false"
connectionSharing="MatchCurrentState"
id="sample_ds"
isolationLevel="TRANSACTION_READ_COMMITTED"
jdbcDriverRef="DB2T4" jndiName="jdbc/sample"
statementCacheSize="20">
<connectionManager
agedTimeout="30m"
connectionTimeout="10s"
maxPoolSize="20"
minPoolSize="5"/>
<properties.db2.jcc databaseName="DB0Z"
driverType="4"
password="db2r1pw"
portNumber="39000"
serverName="d0zg.itso.ibm.com"
currentSchema="DSN81010"
user="db2r1"/>
</dataSource>
<applicationMonitor updateTrigger="mbean"/>
</server>
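Example 7-13 references a JDBC driver with the id DB2T4 but does not show its definition. The following elements, which also belong inside the <server> element, are a hedged sketch of the kind of featureManager, library, and jdbcDriver definitions that the server.xml file additionally needs; the library id, file system path, and exact feature list are assumptions for illustration.
<featureManager>
    <feature>jsp-2.2</feature>
    <feature>jdbc-4.0</feature>
    <feature>jpa-2.0</feature>
</featureManager>
<library id="DB2JCCLib">
    <fileset dir="/usr/lpp/db2/jdbc/classes" includes="db2jcc4.jar db2jcc_license_cisuz.jar"/>
</library>
<jdbcDriver id="DB2T4" libraryRef="DB2JCCLib"/>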
To run the WebSphere Application Server Liberty Profile, you can either install a single server
run time or you can augment IBM Data Studio with the WebSphere Application Server test
environment, as described in Appendix C, “Setting up a WebSphere Application Server test
environment on IBM Data Studio” on page 523, which describes how to install the Liberty
Profile in IBM Data Studio. For more information about the data source definition, see the
Information Center found at the following website:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/index.jsp?topic=%2Fcom.ibm.websphere.wlp.nd.doc%2Ftopics%2Frwlp_ds_appdefined.html
There is a known issue with LOB data streaming and DB2 for very large streams. You might
have to switch progressive streaming off. For more information, see 7.4, “Known issues with
OpenJPA 2.2 and DB2” on page 359.
With WebSphere Application Server V8.5, column mapping is no longer a server extension
feature, but is provided directly by OpenJPA. Therefore, you can find information regarding
XML mapping in the Apache OpenJPA documentation directly at the following website:
http://openjpa.apache.org/builds/latest/docs/docbook/manual.html#ref_guide_xmlmapping
Here is an example of this feature. As always with JPA, the process is about mapping Java
objects to database columns. In the case of mapping to an XML column, the standard
mapping routine cannot be used. Instead, you must specify a third-party mapping tool, which
is done by annotating the field containing the JAXB object that is persisted as XML with a
strategy handler, as shown in Example 7-15.
The handler knows how to deal with Java Architecture for XML Binding (JAXB) annotations.
With JAXB, a Java object can be marshalled or unmarshalled to an XML structure as defined
by JAXB annotations. This is analogous to what JPA does with database objects.
Example 7-15 Applying a third-party XML mapping tool using JPA annotations
...
@Persistent(fetch=FetchType.LAZY)
@Strategy("org.apache.openjpa.jdbc.meta.strats.XMLValueHandler")
private MyXMLObject xmlObject;
...
A sample Java object that is converted to its XML equivalent and is included in the JPA entity
that is shown in Example 7-15 looks like Example 7-16.
The XML structure is built by JAXB and then persisted by JPA. The JAXB JAR files must be
on the application class path (jaxb-api.jar, jaxb-impl.jar, jsr173_1.0_api.jar, or the
equivalent).
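The MyXMLObject type that is referenced in Example 7-15 is a plain JAXB-annotated class. The following is a minimal hypothetical sketch; the field names and the XML element layout are assumptions for illustration.
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "myXMLObject")
@XmlAccessorType(XmlAccessType.FIELD)
public class MyXMLObject {
    // JAXB marshals these fields into XML elements; JPA then persists the
    // resulting XML document into the XML column through the value handler.
    private String description;
    private int quantity;

    public MyXMLObject() {
    }
}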
For more information about how WebSphere Application Server is involved in this process,
see the following website:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/index.jsp?topic=%2Fcom.ibm.websphere.express.doc%2Finfo%2Fexp%2Fae%2Ftejb_jpaColMap.html
7.3 Preferred practices of Java Platform, Enterprise Edition and
DB2
This section provides samples of preferred practices for Java Platform, Enterprise Edition and DB2.
Coding infrastructure information in the Java code is a breach of the separation of concerns
(SoC) principle and prevents portability. Although this works in many cases, it can lead to
some problems in others.
Section 6.6, “JDBC applications in managed environments” on page 326 provides details
about resource references.
The application server requires the usage of resource references for the following reasons (see the sketch after this list):
- If application code looks up a data source directly in the JNDI naming space, every connection that is maintained by that data source inherits the properties that are defined in the application. Then, you create the potential for numerous exceptions if you configure the data source to maintain shared connections among multiple applications. For example, an application that requires a different connection configuration might attempt to access that particular data source, resulting in application failure.
- It relieves the programmer from having to know the name of the data source or connection factory at the target application server.
- You can set the default isolation level for a data source through resource references. With no resource reference, you get the default for the JDBC driver that you use.
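To make the first reason concrete, the following sketch contrasts an injected resource reference with a direct JNDI lookup of the physical data source name. The class and field names are illustrative only, and the reference and JNDI names reuse the jdbc/TxDSref and jdbc/TxDSz names from the earlier examples.
import javax.annotation.Resource;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class LookupStyles {
    // Preferred: a resource reference inside a managed component (EJB or
    // servlet) that the deployer maps to the real JNDI name, for example
    // jdbc/TxDSz, at deployment time.
    @Resource(name = "jdbc/TxDSref")
    private DataSource byReference;

    // Not preferred: the physical JNDI name is hardcoded in the application.
    public DataSource byDirectLookup() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup("jdbc/TxDSz");
    }
}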
For a large Java Platform, Enterprise Edition project, it is normal that the application is built several times a day from a central repository. Hundreds and even thousands of program artifacts are checked out and combined into several deployable archives, such as .ear, .jar, and .war files. This process is mostly done by specialized build programs, such as Maven.
This process normally must be done for several environments, such as unit tests, integration environments, or quality assurance systems. Some might have predefined database connections, and some might not. Unit tests normally run unmanaged, so they must provide their own database connectivity. In these cases, you need the JDBC driver in your /lib directory, but in production you must not have it there, so wrong packaging can easily occur.
There are problems when these “forgotten” drivers interfere with the installed driver in the
application server. This is especially the case when your application is deployed with the
class loading policy parent last. Parent last means that everything in your application is
loaded before the classes in the application server.
This has the same effect as a STEPLIB in your JCL. Every program in the STEPLIB overrides the one that the system provides, which is not wanted behavior in a production environment.
You should always avoid creating tests that depend on the results of preceding tests. The entire database might not need to be reinitialized, but the parts that you use should be.
You see all the generated dynamic SQL statements. You can see how a change of the JPA
class annotations is reflected in the SQL. If the results are not satisfactory, you might have to
use native queries where you have full control over the SQL.
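One way to surface the generated SQL during development is the OpenJPA logging property in persistence.xml. This is a general OpenJPA facility, shown here as a hedged example rather than a setting that is taken from the sample application.
<properties>
    <!-- Log every SQL statement that OpenJPA generates -->
    <property name="openjpa.Log" value="SQL=TRACE"/>
</properties>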
When you use LOBs with persistent attributes of a streaming data type (for example, java.io.InputStream), in the case of a very large LOB the DB2 Data Server driver automatically uses progressive streaming to retrieve the LOB data. If you get an LobClosedException, you might have to set the following string:
fullyMaterializeLobData=true;progressiveStreaming=NO
Before you dive into the different tools and traces that can be used to collect and analyze
performance data, it is important to establish a performance analysis strategy or framework in
your installation.
Capturing information is the first step; you also must analyze and interpret the data. As
performance data can be captured on both the WebSphere Application Server and the DB2
side, it is also important to be able to correlate the data that is collected on both sides.
Such a performance analysis strategy typically consists of two components:
- Continuous monitoring
- Detailed monitoring
For DB2 for z/OS, this data is typically DB2 statistics and accounting trace records, and for
WebSphere Application Server, the SMF 120 records. When using dynamic SQL, which is
used by JDBC applications, it might be a good idea to capture information from the dynamic
statement cache at regular intervals to track the performance of individual SQL statements
over time. For more information about which DB2 information to capture, see 8.4.1, “Which
information to gather” on page 395.
You can use this information to establish a profile for your applications that you can track over time, to understand how your applications perform on a day-to-day basis, and to determine what has changed if performance deteriorates.
There are many types of traces in all components of the application (WebSphere Application
Server, Data server driver, JVM, and DB2) that you have at your disposal. You must
understand what detailed traces are available to you and in which cases they can be useful.
From a DB2 side, people traditionally use the planname, authorization ID, or transaction
name (correlation name) to identify transactions. In a WebSphere Application Server
environment, those items are not always available, or are often the same for all work coming
from the application server, and are therefore not helpful.
For example, when you use a type 4 connection using JDBC, there is not really a DB2 plan
(other than the generic DISTSERV plan that is used by everybody). Therefore, using the
planname is not useful for discovering how specific applications are performing. The same
situation applies to the usage of the DB2 authorization ID. In many cases, the application
server uses a single authorization ID for all work that is being sent to DB2.
Therefore, the IBM Data Server Driver for JDBC and SQLJ-only methods (using
com.ibm.db2.jcc.DB2Connection) in Table 8-1 are listed for reference only.
Table 8-1 Setting client information through Data Server Driver only methods
Method Information provided
1 This Java public class, Class BrokerClientInfo, is a data structure that is used to describe client information.
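As an illustration of the driver-only methods that Table 8-1 refers to, a connection can be unwrapped to com.ibm.db2.jcc.DB2Connection and the client strings set directly. Treat this as a sketch: the string values are illustrative, and in JDBC 4.0 environments the portable java.sql.Connection.setClientInfo API can be used instead.
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import com.ibm.db2.jcc.DB2Connection;

public class ClientInfoExample {
    // Sets the client information strings on a connection that is obtained
    // from a data source, using the JCC-only DB2Connection methods.
    public static Connection connectWithClientInfo(DataSource ds) throws SQLException {
        Connection con = ds.getConnection();
        DB2Connection db2con = con.unwrap(DB2Connection.class);
        db2con.setDB2ClientUser("rajesh");                          // CURRENT CLIENT_USERID
        db2con.setDB2ClientWorkstation("d0zg.itso.ibm.com");        // CURRENT CLIENT_WRKSTNNAME
        db2con.setDB2ClientApplicationInformation("DayTrader-EE6"); // CURRENT CLIENT_APPLNAME
        db2con.setDB2ClientAccountingInformation("ITSO test");      // CURRENT CLIENT_ACCTNG
        return con;
    }
}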
Table 8-2 lists the client information property values that the IBM Data Server Driver for JDBC
and SQLJ returns for DB2 for z/OS when the connection uses type 4 connectivity.
Table 8-2 Client properties that are set by the driver when using a type 4 connection to DB2 for z/OS

ClientAccountingInformation (MAX_LEN: 200 bytes)
DEFAULT_VALUE: A string that is the concatenation of the following values:
- "JCCnnnnn", where nnnnn is the driver level, such as 04000.
- The value that is set by DB2Connection.setDB2ClientWorkstation. If the value is not set, the default is the host name of the local host.
- The applicationName property value, if set; 20 blanks otherwise.
- The clientUser property value, if set; eight blanks otherwise.
Description: The value of the accounting string from the client information that is specified for the connection. This value is stored in the DB2 special register CURRENT CLIENT_ACCTNG.

ClientHostname (MAX_LEN: 18 bytes)
DEFAULT_VALUE: The value that is set by DB2Connection.setDB2ClientWorkstation. If the value is not set, the default is the host name of the local host.
Description: The host name of the computer on which the application that is using the connection is running. This value is stored in the DB2 special register CURRENT CLIENT_WRKSTNNAME.

ClientUser (MAX_LEN: 16 bytes)
DEFAULT_VALUE: The value that is set by DB2Connection.setDB2ClientUser. If the value is not set, the default is the current user ID that is used to connect to the database.
Description: The name of the user on whose behalf the application that is using the connection is running. This value is stored in the DB2 special register CURRENT CLIENT_USERID.
Table 8-3 lists the client information property values that the IBM Data Server Driver for JDBC
and SQLJ returns for DB2 for z/OS when the connection uses type 2 connectivity.
Table 8-3 Client properties that are set by the driver when using a type 2 connection to DB2 for z/OS

ClientAccountingInformation (MAX_LEN: 200 bytes)
DEFAULT_VALUE: Empty string
Description: The value of the accounting string from the client information that is specified for the connection. This value is stored in the DB2 special register CURRENT CLIENT_ACCTNG.
Specifying the client information inside the application has the advantage that each application can set its own values, which allows a high degree of granularity and makes detailed monitoring of applications and components possible.
The disadvantage of this approach is that you rely on the programmer to provide this information, and monitoring is often not the number one priority. As a result, the information might not be passed, which can cause the program to run with the wrong priority and miss its service levels when client information is used to classify work in WLM.
The client information properties must be entered as custom properties at the data source level. In this example, we use TradeClientUser for the clientUser property. When the application connects to DB2 through this data source, the CURRENT CLIENT_USERID special register is set to TradeClientUser.
Figure 8-2 Specifying client information as data source custom properties
For example, during our tests where we used the type 2 driver, we did not specify any specific
client information strings. In that case, only the WebSphere Application Server application
name (DayTrader-EE6) is passed to DB2 (QWHCEUTX - the user transaction name).
Figure 8-4 on page 369 and Figure 8-5 on page 369 show how to specify this information in
more detail by using the Admin console. We use the D0ZG WASTestClientInfo application to
demonstrate this feature.
Figure 8-4 Application Resource reference window
8.2.2 Using client information strings to classify work in WLM and RMF
reporting
You can use the client information when you classify work on the z/OS system. When work must be run on a z/OS system, the work is classified by the z/OS workload manager (WLM) component. WLM assigns a priority to this piece of work based on the classification rules that you specify in the WLM policy.
There are many other classification types that can be used to qualify work. For more
information, see the following resources:
The “Defining Work Qualifiers” section in z/OS MVS Planning: Workload Management,
SA22-7602-20, found at:
http://publib.boulder.ibm.com/infocenter/zos/v1r13/index.jsp?topic=%2Fcom.ibm.zos.r13.ieaw100%2Fiea2w1c052.htm
The “Classification attributes” section in DB2 10 for z/OS Managing Performance,
SC19-2978, found at:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/topic/com.ibm.db2z10.doc.perf/src/tpc/db2z_classificationattributes.htm
The classification rules for the DDF work that we used during this project are shown in
Figure 8-6 on page 371.
The process name (or application name) is used to qualify the work. Both types of work
(Trade* and dwsClie*) use the same service class (DDFONL), but to distinguish between
them, we use a separate reporting class for each application:
The Trade* application uses RTRADE0Z.
The dwsClie* application uses RDWS0Z.
You define the URL to transaction class assignments in a WebSphere Application Server
classification document, which is a common XML file, as shown in Figure 8-7.
The URI information is obtained from the deployment descriptor of the application. To retrieve
this information, open the administration console, select the application that you want to
classify, and click View Deployment Descriptor, as shown in Figure 8-8.
The context-root that is shown in Figure 8-9 is used to assign the transaction class.
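The classification document itself (Figure 8-7) is a small XML file. The following is a hedged sketch of the kind of content it contains, using the transaction classes DTRADE and DWS that are discussed later in this section; the default transaction class and the context-root values are illustrative assumptions.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE Classification SYSTEM "Classification.dtd">
<Classification schema_version="1.0">
    <InboundClassification type="http" schema_version="1.0"
                           default_transaction_class="DDEF">
        <http_classification_info transaction_class="DTRADE" uri="/daytrader/*"/>
        <http_classification_info transaction_class="DWS" uri="/dwsClientWeb/*"/>
    </InboundClassification>
</Classification>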
372 DB2 for z/OS and WebSphere Integration for Enterprise Java Applications
Now that you have built the XML file, tell the application server to use this file by setting the wlm_classification_file environment variable to the name of your classification file. To do so, navigate to the appropriate WebSphere Application Server console application and click Environment → Manage WebSphere variables, as shown in Figure 8-10.
Note: Make sure that WebSphere Application Server has the necessary permissions to
access the WLM classification file.
During the starting sequence, WebSphere Application Server issues a runtime message to
confirm that the current WLM_CLASSIFICATION_FILE setting is being used, as shown
in Figure 8-11.
You can reload the classification file or display the current classification settings without restarting the server by using MVS modify commands such as the following ones:
F MZSR014,RECLASSIFY,FILE='/u/rajesh/wlm.xml'
F MZSR014,DISPLAY,WORK,CLINFO
For more information about the workload classification file, see the following website:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/topic/com.ibm.websphere.zseries.doc/ae/rrun_wlm_tclass_dtd.html
With the transaction workload classification in place on the WebSphere Application Server
side, you can now use the transaction classes in the WLM classification rules, as illustrated in
Figure 8-13 on page 375.
In this example, use the CB subsystem type to classify the WebSphere Application Server work:
- CN (Collection name): The logical server name that is defined by using the Component Broker System Management Utility. It represents a set of business objects that are grouped and run in a logical server. This is the WebSphere Application Server cluster name.
- TC (Transaction class): The name that results from mapping the URI to a transaction class name.
DTRADE and DWS are the transaction classes that were assigned through the WLM
classification file. When a transaction arrives on the MZSR01 cluster and it is assigned to the
DTRADE transaction class, it runs by using the WASONL service class, and it uses the
RTRADE RMF reporting class. Using a different reporting class allows you to distinguish
between different transactions classes when they use the same service class.
To get around this problem, you can bind the DB2 JDBC packages into separate collections,
one per application or group of applications. To do so, use the DB2Binder utility and specify
-collection collection-name.
You can also use the DB2 BIND PACKAGE command with the
COPY(collection-name.package-name) keyword. For more information about this command,
see 4.3.9, “Bind JDBC packages” on page 165.
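A hypothetical DB2Binder invocation that binds the JDBC packages into a separate DAYTRADER collection might look like the following command; the JAR names, URL, user, and password values are placeholders for your own environment:
java -cp db2jcc4.jar;db2jcc_license_cisuz.jar com.ibm.db2.jcc.DB2Binder -url jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z -user db2r1 -password db2r1pw -collection DAYTRADER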
Applications that use the TradeDataSource data source now use the SYS* packages from the
DAYTRADER collection. When you use the currentPackageSet property, all packages that
are used by the applications that use this data source must be present in the collection you
point to through the currentPackageSet property.
In addition to creating the plan, you must indicate in the data source to use this particular plan
by setting the planName property, as shown in Figure 8-15 on page 377.
376 DB2 for z/OS and WebSphere Integration for Enterprise Java Applications
Figure 8-15 Specifying the planName data source property
Now that you have set up a way to correlate WebSphere Application Server, DB2, and RMF information, you can start monitoring your applications.
WebSphere Application Server for z/OS Version 7 introduced SMF type 120 subtype 9. It bundles most of the data that is also spread across the other subtypes, and adds additional information, such as how much zAAP processing the server uses to process a request. WebSphere Application Server creates one subtype 9 record for every request that the server processes, for both external requests (application requests) and internal requests, such as when the controller “talks to” the servant regions.
The other record 120 subtypes are still available, but as subtype 9 combines the information
from the other subtypes, we use this information to illustrate the type of information that
is available.
In this case, record types 100 - 255 are enabled, which includes the type 120 record.
By default, WebSphere Application Server does not write any records to SMF. You must
activate the writing of these records at the application server level. This can be done in
different ways. In this example, we use the administration console interface to enable the SMF
recording by clicking Servers → Server Types → WebSphere Application Servers, selecting the server, and clicking Java and Process Management → Process definition → Control → Environment entries.
Figure 8-16 shows where to find the Java and Process Management and Process definition
options under the Server Infrastructure heading.
We added the following options by using the administration console, which is shown in
Figure 8-18, to activate the SMF recording.
Figure 8-18 SMF recording properties that are set through the administration console
WebContainer SMF recording (SMF 120 subtype 7 and 8) is activated and deactivated along
with the activation and deactivation of SMF recording for the Java Platform, Enterprise Edition
container (SMF 120 subtype 5 and 6), so there are no specific options to activate subtype 7
and 8.
Here are other properties that you can set (value = 1 to activate) through the
administration console:
server_SMF_request_activity_enabled to enable subtype 9
The following settings add additional information to the subtype 9 record:
– server_SMF_request_activity_CPU_detail
– server_SMF_request_activity_timestamps
– server_SMF_request_activity_security
– server_SMF_request_activity_async
server_SMF_outbound_enabled to enable subtype 10
The subtype 9 record can also be activated by using z/OS console commands, which are
illustrated in Example 8-2. MZSR014 is the WebSphere Application Server name. You can
also display the current settings that are in effect.
Example 8-2 Using MVS commands to activate SMF 120 type 9 recording
F MZSR014,SMF,REQUEST,ON
BBOO0211I MODIFY COMMAND SMF,REQUEST,ON COMPLETED SUCCESSFULLY
F MZSR014,SMF,REQUEST,CPU,ON
BBOO0211I MODIFY COMMAND SMF,REQUEST,CPU,ON COMPLETED SUCCESSFULLY
F MZSR014,SMF,REQUEST,TIMESTAMPS,ON
BBOO0211I MODIFY COMMAND SMF,REQUEST,TIMESTAMPS,ON COMPLETED
SUCCESSFULLY
F MZSR014,SMF,REQUEST,SECURITY,ON
BBOO0211I MODIFY COMMAND SMF,REQUEST,SECURITY,ON COMPLETED
SUCCESSFULLY
F MZSR014,DISPLAY,SMF
BBOO0344I SMF 120-9: FORCED_ON, CPU USAGE: FORCED_ON, TIMESTAMPS:
FORCED_ON, SECURITY INFO: FORCED_ON, ASYNC: OFF
BBOO0345I SMF 120-9: TIME OF LAST WRITE: 2012/08/07 18:25:15.814883,
SUCCESSFUL WRITES: 433, FAILED WRITES: 0
BBOO0346I SMF 120-9: LAST FAILED WRITE TIME: NEVER, RC: 0
BBOO0389I SMF 120-10: OFF
BBOO0387I SMF 120-10: TIME OF LAST WRITE: NEVER, SUCCESSFUL WRITES: 0,
FAILED WRITES: 0
BBOO0388I SMF 120-10: LAST FAILED WRITE TIME: NEVER, RC: 0
BBOO0188I END OF OUTPUT FOR COMMAND DISPLAY,SMF
Note: The changes that you make to the SMF 120 subtype 9 settings through console
commands remain active only until the server is restarted, and changes that are made
through the administration console remain after the server is restarted.
The documentation for the SMF Browser is available in the browser package.
The WebSphere Application Server information Center also has information that is related to
this topic in the “Viewing the output data set” topic. It is available at the following website:
http://www14.software.ibm.com/webapp/wsbroker/redirect?version=phil&product=was-nd
-zos&topic=ttrb_SMFviewdata
Another excellent source of information about SMF 120 subtype 9 records is the white paper
Understanding SMF Record Type 120, Subtype 9. It is available at the following website:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101342
In our example, we used the following commands to generate a summary (SUMPERF) and a
detailed (DEFAULT) report of the SMF 120 records that were collected during one of the runs
that were using the Trader sample application:
java -cp bbomsmfv.jar:batchsmf.jar com.ibm.ws390.sm.smfview.SMF
'INFILE(BART.WAS.TEST1.SMF120)' 'PLUGIN(PERFSUM,/tmp/smf120sum.txt)'
java -cp bbomsmfv.jar:batchsmf.jar com.ibm.ws390.sm.smfview.SMF
'INFILE(BART.WAS.TEST1.SMF120)' 'PLUGIN(DEFAULT,/tmp/smf120detail.txt)'
The second parameter of the PLUGIN option indicates the file to which the output is directed.
The SMF 120 records contain much information. We describe only the subtype 9 record here. Samples of subtypes 1, 3, 7, and 8 for both the summary and detailed output can be found in Appendix E, “SMF 120 records subtypes 1, 3, 7, and 8” on page 545.
Example 8-3 shows the summary (SUMPERF) output by the SMF Browser program for one
of the SMF 120.9 (Request Activity) records. It shows the elapsed and CPU time (in
microseconds) and the CPU time that was used on a zAAP engine, in case that is available.
In this case, the entire request was offloaded to zAAP (CPU and zAAP time are the same).
The record also provides information about the time the request came into the application
server, when it was queued, dispatched, and ended. The output also indicates which
programs ran; in this case, they are all JSPs.
SMF -Record Time Server Bean/WebAppName Bytes Bytes # of El.Time CPU_Time(uSec) Other SMF 120.9
Numbr -Type hh:mm:ss Instance Method/Servlet toSvr frSvr Calls (msec) Tot-CPU zAAP Sections Present
1---+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8----+----9----+ ----------------
694 120.9 19:58:06 MZSR014 STC24171-HTTP / 25 584 584
.9Ts: 2012/08/10 23:58:06.377368 Received
.9Ts: 2012/08/10 23:58:06.377459 Queued
.9Ts: 2012/08/10 23:58:06.386165 Dispatched
.9Ts: 2012/08/10 23:58:06.401038 dispatchComplete
.9Ts: 2012/08/10 23:58:06.402788 Complete
.9N ip addr=9.12.6.9 port=24146 832 6176 .9Cl/daytrader/ap
9CPU:Web DayTrader-EE6#web.wa/TradeAppServlet 1 0 29
9CPU:Web DayTrader-EE6#web.wa//quote.jsp 1 0 61
9CPU:Web DayTrader-EE6#web.wa//displayQuote.jsp 1 2 214
Example 8-4 shows the detailed (DEFAULT) output that is created by the SMF Browser
program for the same SMF 120.9 (Request Activity) record that we analyzed in Example 8-3
on page 382. The detailed output contains much information.
One thing that might be of interest is the transaction class that is used by the transaction.
#Subtype Version: 2;
Index of this record: 1;
Total number of records: 1;
record continuation token * 000000c4 0120a481 -------- -------- *
#Triplets: 11;
Triplet #: 1; offsetDec: 204; offsetHex: cc; lengthDec: 76; lengthHex: 4c; count: 1;
Triplet #: 2; offsetDec: 280; offsetHex: 118; lengthDec: 156; lengthHex: 9c; count: 1;
Triplet #: 3; offsetDec: 436; offsetHex: 1b4; lengthDec: 68; lengthHex: 44; count: 1;
Triplet #: 4; offsetDec: 504; offsetHex: 1f8; lengthDec: 736; lengthHex: 2e0; count: 1;
Triplet #: 5; offsetDec: 1240; offsetHex: 4d8; lengthDec: 132; lengthHex: 84; count: 1;
Triplet #: 6; offsetDec: 1372; offsetHex: 55c; lengthDec: 188; lengthHex: bc; count: 1;
Triplet #: 7; offsetDec: 1560; offsetHex: 618; lengthDec: 420; lengthHex: 1a4; count: 3;
Triplet #: 8; offsetDec: 0; offsetHex: 0; lengthDec: 0; lengthHex: 0; count: 0;
Triplet #: 9; offsetDec: 1980; offsetHex: 7bc; lengthDec: 1644; lengthHex: 66c; count: 3;
Triplet #: 10; offsetDec: 0; offsetHex: 0; lengthDec: 0; lengthHex: 0; count: 0;
Triplet #: 11; offsetDec: 0; offsetHex: 0; lengthDec: 0; lengthHex: 0; count: 0;
Transaction Class : ;
Flags * 84d00000 -------- -------- -------- *
Reserved * 00000000 00000000 00000000 00000000 *
* 00000000 00000000 00000000 00000000 *
Classification attributes: ;
Stalled thread dump action : 3;
CPU time used dump action : 3;
DPM dump action : 3;
Timeout recovery : 2;
Dispatch timeout : 300;
Queue timeout : 297;
Request timeout : 180;
CPU time used limit : 0;
DPM interval : 0;
Message Tag : ;
Obtained affinity : ;
Routing affinity : C9E1E24F897F45A6000002B00000000400000048sn6zGpx_39-MGb4qNtoil8h;
--------------------------------------------------------------------------------
Using PMI data, performance bottlenecks in the application server can be identified and
addressed. For example, one of the PMI statistics in the Java DataBase Connectivity (JDBC)
connection pool is the number of statements that are discarded from the prepared statement
cache, which we use as an example to illustrate the value of PMI data. This statistic can be
used to adjust the prepared statement cache size to minimize the discards and to improve the
database query performance.
PMI data can be monitored and analyzed by IBM Tivoli® Performance Viewer, other Tivoli
tools, your own applications, or third-party tools. As Tivoli Performance Viewer ships with
WebSphere Application Server, we use it to visualize the PMI data in our example.
Java Platform, Enterprise Edition (Java EE) 1.4 includes a Performance Data Framework that
is defined as part of JSR-077 (Java Platform, Enterprise Edition Management Specification).
This framework specifies the performance data that must be available for various Java EE
components. WebSphere Application Server PMI complies with Java EE 1.4 standards by
implementing the Java EE 1.4 Performance Data Framework. In addition to providing
statistics that are defined in Java EE 1.4, PMI provides additional statistics about the Java EE
components, such as servlets and enterprise beans, and WebSphere Application Server-specific components, such as thread pools.
You activate PMI data collection at the Application Server level. To do so, expand Monitoring
and Tuning in the left pane of the administration console and click Performance Monitoring
Infrastructure. Select the application server that you want to collect data for (MZSR014 in
our case) and click Start Monitoring, as shown in Figure 8-19.
Figure 8-20 PMI collection that is activated for the application server
PMI can collect many different types of data at various levels of detail. To do so, click Monitoring and Tuning → Request Metrics. We used the standard settings in our example.
For more information about the different options and levels of granularity at which PMI data
can be collected, see the following website:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/topic/com.ibm.websphere.express.doc/ae/tprf_pmi_encoll.html
Viewing PMI data
To view PMI data, use the Tivoli Performance Viewer tool that is built into WebSphere
Application Server. To use it, expand Monitoring and Tuning in the left pane of the
administration console and click Performance Viewer → Current activity. Then, select the
server that you want to see the PMI data for. A window similar to Figure 8-21 opens.
In this case, the servlet summary report of our DayTrader workload is shown. On the left, you
have many options to display different summary reports and look at the different performance
modules that visualize the PMI data. On the right, you see the (selected) report, which is the
servlet summary report in this example. It shows the name of the servlet, the application it
belongs to, and the average response time.
The AllocateCount continues to go up as more transactions run. Notice that the count does
not go beyond 50 connections. You can use the administration console to verify whether 50 is
the maximum size of the connection pool by clicking Data Sources → TradeDataSourceXA → Connection pools, as shown in Figure 8-23.
To verify this discard rate, go to the performance metrics of the connection pool and look for
the PrepStmtCacheDiscardCount statistic, as shown in Figure 8-26 on page 393.
Figure 8-26 Connection pool - PrepStmtCacheDiscardCount
This statistic confirms the alert from the performance advisor. So, what is the current setting
for the statement cache size? You can verify the setting by going to the administration console
and clicking Data sources → TradeDatasourceXA → WebSphere Application Server data source properties, as shown in Figure 8-27.
As you can see, the discard count is now down to zero from 1400/sec, which is a
great improvement.
This is just one example of how to use PMI and Tivoli Performance Viewer to analyze
WebSphere Application Server performance data. For more information about the usage of
PMI, see the WebSphere Application Server Information Center PMI topics found at:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/topic/com.ibm.websphere.express.doc/ae/cprf_pmidata.html
A good article about WebSphere Application Server performance called “Case study: Tuning
WebSphere Application Server V7 and V8 for performance” can be found at:
http://www.ibm.com/developerworks/websphere/techjournal/0909_blythe/0909_blythe.html#sec3c
8.4 Monitoring from the DB2 side
Even after an application moves to production, it is important to keep monitoring the
application. Over time, the behavior might change, for example, because the workload
increases or the data becomes disorganized. Therefore, it is important to continuously, or at
least periodically, check the performance of your applications. Dealing with all DB2
performance aspects is beyond the scope of this book, but this section provides an overview
of the information that is available and how to analyze DB2 performance.
Tip: STATIME DSNZPARM determines the interval at which DB2 writes out its statistics information for classes 1 and 5. The default value is 5 minutes in Version 9 and 1 minute in Version 10. Use STATIME=1. The cost of gathering this information is negligible and it provides valuable information for analyzing performance problems.
In DB2 10, IFCIDs 0001, 0002, 0202, 0217, 0225, and 0230 are no longer controlled by
STATIME. These trace records are written at fixed, one-minute intervals.
The DB2 statistics and accounting information is normally written to SMF. DB2 statistics
records use SMF type 100 and DB2 accounting records use SMF type 101 records. Before
you send data to SMF, make sure that SMF is enabled so that it can write DB2 trace record
types 100 and 101 (and 102, which is used for performance type records). For more
information about this topic, see “Enabling SMF 120 data collection” on page 378.
Both traces can be started at DB2 start time through the SMFSTAT (statistics) and SMFACCT (for
accounting) DSNZPARMs.
These traces must be started on each of the members of a DB2 data sharing group to be able
to get the complete picture of all the work in the data sharing group.
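Alternatively, the traces can be started dynamically with DB2 commands. The classes that are shown below are commonly used values and should be adjusted to your own monitoring needs:
-START TRACE(STAT) CLASS(1,3,4,5,6) DEST(SMF)
-START TRACE(ACCTG) CLASS(1,2,3,7,8) DEST(SMF)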
To avoid this situation, you can direct the RRS attach to write a DB2 accounting record at
commit time (provided there are no open held cursors). The easiest way to achieve this task
from a WebSphere Application Server application is to set the accountingInterval custom
property on the data source to COMMIT by using the administration console, as shown in
Figure 8-29 on page 397.
Figure 8-29 Specifying the accountingInterval custom property
If the transaction has no open WITH HOLD cursors, each time a commit point is reached (the
application issues SRRCMIT explicitly or implicitly), DB2 cuts an accounting record. If the
accounting interval is COMMIT and an SRRCMIT is issued while a held cursor is open, the
accounting interval spans that commit and ends at the next valid accounting interval end point
(such as the next SRRCMIT that is issued without open held cursors, application termination, or
SIGNON with a new authorization ID).
If these conditions are met, a DB2 accounting record is cut and the WLM enclave is reset.
When you use accounting rollup, you can also specify how DB2 aggregates the accounting
records by using ACCUMUID DSNZPARM. You can look at ACCUMUID as an SQL GROUP BY
specification. There are 18 different settings that you can specify for ACCUMUID. For more
information, see DB2 10 for z/OS Installation and Migration Guide, GC19-2974.
In our example, we use ACCUMUID=2 during some of our tests. “2” indicates that the
aggregation is done by “user application name” or “transaction name”; this is the value of the
clientApplicationInformation property or CURRENT CLIENT_APPLNAME special register value.
So, with ACCUMACC=10 and ACCUMUID=2, if you run 20 transactions named tran_1 and 10
transactions of tran_2, DB2 produces three accounting records; one for all executions of
tran_2 and two for the executions of tran_1. The information in such a rollup accounting
record is the sum of all the work of these 10 transactions.
The advantage of using rollup accounting is obvious. It can reduce the number of (SMF)
accounting records that DB2 produces. The disadvantage of using ACCUMACC is that you lose
transaction granularity information. With rollup accounting, you can no longer see how each
individual transaction performed because all accounting data of x transactions is rolled into a
single accounting record. This situation can make it difficult to analyze performance problems,
especially when the problem occurred only briefly or when using a high
ACCUMACC value.
Tip: DB2 10 introduced an option to compress records that are written to SMF by using
SMFCOMP=YES DSNZPARM, which compresses the SMF trace record before it is written to the
SMF data set. If you use ACCUMACC > 1 to reduce the data volume that is produced by DB2
accounting records, you might want to consider switching to ACCUMACC=NO and SMFCOMP=YES
to achieve this task, and keep the transaction level granularity that you give up by using
ACCUMACC >1.
Another option to reduce the size of the SMF (offloaded) data sets is to use SMS
compression on the dataclass (compaction=YES) for those data sets.
Start by looking at the overall subsystem statistics. This information can be used to check the
overall health of the DB2 system.
Note: If this DB2 system runs other work than just transactions coming from WebSphere
Application Server, these other transactions are also included in the information of the DB2
statistics record.
We use IBM Tivoli OMEGAMON DB2 Performance Expert on z/OS V5.1.1 batch reporting to
look at the DB2 statistics data.
In our example, we want to look at a one-minute interval. (As the DB2 statistics interval is one minute, we could also have used a STATISTICS TRACE report instead.)
Statistics highlights
Example 8-6 shows the header of the statistics report. It indicates the interval that we are
looking at, the DB2 subsystem, and also provides an idea about the number of threads that
were created, and the number of commits that occurred in the interval you are
reporting on.
As the DayTrader application that we used during our testing is a JDBC application using
dynamic SQL, it is important to verify that you have a good hit ratio in the global dynamic
statement cache. You can verify that in the DYNAMIC SQL STMT section in the statistics
report, as shown in Example 8-8.
Almost all prepares result in a short prepare, which means that the statement was found in
the global dynamic statement cache, which results in a high cache hit ratio of 99.99%.
The trade workload uses parameter markers, which increases the chance of finding a
matching statement in the dynamic statement cache.
For more information about this topic, see 4.3.2, “Enabling DB2 dynamic statement cache” on
page 141 and 6.2, “Dynamic SQL” on page 299.
Subsystem services and DDF and DRDA location sections
The SUBSYSTEM SERVICES section (Example 8-9), and the DRDA REMOTE LOCS
section (Example 8-10) can be used to quickly determine whether the bulk of the work is
using a type 2 or a type 4 connection.
When you use a type 2 connection, you expect to see high numbers in the various commit
counters in the SUBSYSTEM SERVICES section, and when you use type 4, the commits
show up in the DRDA REMOTE LOCS section of the DB2 statistics report. It is clear from the
reports that this workload was using a type 4 connection, as almost all commit requests are in
the (SINGLE PHASE) COMMITS bucket in the DRDA REMOTE LOCS section.
When the number of active (allied) threads exceeds CTHREAD DSNZPARM value, new create
thread requests are queued. When this situation occurs, the QUEUED AT CREATE THREAD
counter is incremented. Typically, you want to see a zero value in this field. However, it is
possible that you hit CTHREAD when there is a significant spike in the workload, or when things
slow down for some reason. In those cases, it is better to queue the threads, or even deny
them, than to let them start processing. Using a large CTHREAD value allows much work to
start, but when the system is flooded, adding more work makes things worse. Therefore,
queuing work at create thread time, or even outside DB2 (in the application server), is better
than leaving the gates wide open (using a high CTHREAD value) and adding more work to a
system that is already under stress.
Because this is a workload that is using a type 4 connection, the work enters DB2 through the
DDF address space, in which case it is interesting to have a look at the GLOBAL DDF
ACTIVITY section as well, as shown in Example 8-11 on page 403.
Well-behaved transactions run a number of SQL statements and issue a commit. Then, the
DBAT (that represents the thread in DB2) and the connection that is tied to the transport in
the Data Server driver are disconnected from each other. The connection goes inactive
(waiting for the next request from the application server to arrive) and the DBAT is put into a
pool (so it can be reused by other connections that must run SQL statements). These types of
connections (that can go inactive at commit) are also called type 2 inactive connections, and
the DBATs are called pooled DBATs.
The CMTSTAT subsystem parameter controls whether threads are made active or inactive after
they successfully commit or roll back and hold no cursors. A thread can become inactive only
if it holds no cursors, has no temporary tables that are defined, and runs no statements from
the dynamic statement cache.
Note: Type 2 (inactive) connections have nothing to do with the type of Java driver that is
used by the application. On the contrary, DB2 type 2 (inactive) connections are always
associated with work entering DB2 through DRDA and always use a Java type 4 driver.
For more information about setting active/inactive connections, see DB2 10 for z/OS
Installation and Migration Guide, GC19-2974.
When a connection wants to process an SQL request and enters the DB2 server, the request
is put on a queue to allow a DBAT to be selected from the pool, or created, to process
the request.
The ACC QU INACT CONNS (TYPE 2) counter indicates how many of these inactive
connections were put on this queue during the interval that you are looking at. It is a good
indicator of the amount of DRDA work that goes through the system.
Typically, a connection is only on that queue for a short time. Since Version 10, DB2 provides
information about the MIN/MAX and AVG QUEUE TIME in case you suspect that there is a
problem with connections not being able to obtain a DBAT quickly.
If the maximum number of DBATs is reached (MAXDBAT DSNZPARM), new requests are queued
and the DBAT/CONN QUEUED-MAX ACTIVE counter is incremented. Similar to the
QUEUED AT CREATE THREAD counter, you want to have a zero value in this field under
normal conditions. However, as indicated above, it is often better to queue requests before
they are allowed to start processing (and have a non-zero value in this field) than to let all the
work into DB2 and have it get stuck during DB2 processing.
Well-behaved transactions commit regularly and allow the connection to become inactive and
the DBAT to be pooled so other transactions (connections) can reuse a pooled DBAT. The
number of times a pooled DBAT is reused can be found in the DISCON (POOL) DBATS
REUSED counter.
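As a sketch of what such a well-behaved transaction looks like in application code (the JNDI
name and table name are assumptions, and in a container-managed transaction the commit is
driven by the container rather than by the application), the key points are to commit promptly
and to close the connection so that it can go inactive and the DBAT can be pooled:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class WellBehavedTransaction {
    public void updateHolding(int holdingId, double quantity)
            throws NamingException, SQLException {
        DataSource ds = (DataSource) new InitialContext()
                .lookup("jdbc/TradeDataSource");   // assumed JNDI name
        // try-with-resources closes the connection in all cases, returning it to
        // the WebSphere connection pool so it can become inactive in DB2
        try (Connection con = ds.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE HOLDINGEJB SET QUANTITY = ? WHERE HOLDINGID = ?")) {
                ps.setDouble(1, quantity);
                ps.setInt(2, holdingId);
                ps.executeUpdate();
            }
            // Committing with no open cursors or held resources allows the
            // connection to become a type 2 inactive connection and the DBAT
            // to be returned to the pool for reuse
            con.commit();
        }
    }
}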
There are conditions that do not let a connection go inactive. To optimize resource usage, you
want to make sure that these conditions do not apply to your applications. For more
information, see Chapter 10, “Managing DB2 threads”, in DB2 10 for z/OS Managing
Performance, SC19-2978.
Example 8-11 shows that 110.0K requests came in during this interval, and DB2 was able to
reuse a DBAT 110.0K times, which is optimal reuse.
This information is from a different test run and the time interval is much larger in this case, so
you cannot compare this data with the data from the type 4 run. In addition, because a type 2
connection uses a local RRS attach, all work is directed to a single DB2 member, which is the
same member that runs the WebSphere Application Server. If there is a need to spread the
work across multiple members, which is typically the case in a data sharing environment, you
can run a WebSphere Application Server on the other LPAR and use the HTTP server to
“spray” the work between the two application servers. For this example, the number of
members or the way the workload is distributed is not important.
Note the high number of COMMIT PHASE 2 and READ ONLY COMMIT requests.
Transactions show only COMMIT PHASE 2 because DB2 is the only subsystem that is
involved in the processing and no global transaction is defined.
When you use a type 2 connection, applications coming from WebSphere Application Server
come into DB2 through RRS, and these RRS connections count toward the IDBACK
DSNZPARM, which limits the number of background connections (IDBACK determines the
maximum number of concurrent connections that can be identified to DB2 from batch). If the
high water mark (HWM) gets close to the IDBACK value, you might have to increase the
DSNZPARM value. For information about IDBACK and other DSNZPARMs, see 4.3.1, “DB2
connectivity installation parameters” on page 138.
Example 8-13 shows the GLOBAL DDF ACTIVITY section from the type 2 run. There is
almost no DDF activity (ACC QU INACT CONNS (TYPE 2) is low), which is expected if the
entire workload is using a Java type 2 connection (through RRS).
Locking and data sharing locking sections
As with all information in the DB2 statistics report, the locking sections contain information
about the locking activity for the entire subsystem. In most cases, it is more interesting to look
at the locking information for individual applications, so when you look at the DB2 statistics
information, you typically only want to verify that the overall locking behavior and activity
are fine. Example 8-14 shows the locking and data sharing locking section from one of the
runs we ran during the project that produced this book.
For more information about locking, see DB2 9 for z/OS: Resource Serialization and
Concurrency Control, SG24-4725.
Example 8-14 Statistics report - locking and data sharing locking sections
LOCKING ACTIVITY QUANTITY /SECOND /THREAD /COMMIT
--------------------------- -------- ------- ------- -------
SUSPENSIONS (ALL) 4043.00 67.25 50.54 0.05
SUSPENSIONS (LOCK ONLY) 530.00 8.82 6.63 0.01
SUSPENSIONS (IRLM LATCH) 3448.00 57.35 43.10 0.04
SUSPENSIONS (OTHER) 65.00 1.08 0.81 0.00
TIMEOUTS 0.00 0.00 0.00 0.00
DEADLOCKS 0.00 0.00 0.00 0.00
IBM OMEGAMON XE for DB2 PE on z/OS calculates the GLOBAL CONTENTION RATE for
you. Try to keep it below 3 - 5%. It also calculates the FALSE CONTENTION RATE. False
contention should be less than 1 - 3% of the total number of IRLM requests sent to XES.
Buffer pool section
Another way to assign buffer pools is to dedicate certain pools to specific applications.
Typically, the buffer pool assignments are a mixture of all the above. IBM OMEGAMON XE for
DB2 PE on z/OS reports on each of the DB2 buffer pools that were used in a specific DB2
statistics interval, but it also rolls up all the BP activity into a single TOTAL section, as shown
in Example 8-15. If you want to get an idea about the overall BP activity, this is the place to
start your analysis. (The format of the information for the individual BPs is identical to that of
the TOTAL section.)
SEQUENTIAL PREFETCH REQUEST 0.00 0.00 0.00 0.00
SEQUENTIAL PREFETCH READS 0.00 0.00 0.00 0.00
PAGES READ VIA SEQ.PREFETCH 0.00 0.00 0.00 0.00
S.PRF.PAGES READ/S.PRF.READ N/C
LIST PREFETCH REQUESTS 5.00 0.08 0.06 0.00
LIST PREFETCH READS 0.00 0.00 0.00 0.00
PAGES READ VIA LIST PREFTCH 0.00 0.00 0.00 0.00
L.PRF.PAGES READ/L.PRF.READ N/C
DYNAMIC PREFETCH REQUESTED 1166.00 19.39 14.57 0.01
DYNAMIC PREFETCH READS 23.00 0.38 0.29 0.00
PAGES READ VIA DYN.PREFETCH 91.00 1.51 1.14 0.00
D.PRF.PAGES READ/D.PRF.READ 3.96
PREF.DISABLED-NO BUFFER 0.00 0.00 0.00 0.00
PREF.DISABLED-NO READ ENG 0.00 0.00 0.00 0.00
PAGE-INS REQUIRED FOR READ 224.00 3.73 2.80 0.00
The BP information is made up of three sections: read activity, sort/merge activity, and
write activity. In the read activity section, you want to check the following items:
GETPAGE REQUEST: This is the number of times an SQL statement had to request a
page from the DB2 buffer manager component. When an SQL statement must retrieve a
row, that row lives on a page, and that page lives in the DB2 buffer pool (or on disk, or in
the group buffer pool (GBP) in a data sharing system). So to obtain the row, there is a
request to the DB2 buffer manager component to get the page (getpage). The amount of
getpage activity is a good measure for the amount of work that DB2 is performing in a
certain interval.
If you want to look at a single number for BP performance, the BP hit ratio is often used. The
BP hit ratio gives you some idea about whether the pages being requested by the
applications are found in the BP. In our example, the hit ratio is 99.94%, which is high and not
a typical number, especially considering that this is the total of all buffer pools in the system.
Another measure of how your buffer pools are doing is to calculate the number of getpages
per sync. I/O (calculated by IBM OMEGAMON XE for DB2 PE as GETPAGE PER
SYN.READ-RANDOM). As the application is waiting for sync. I/Os to complete, the higher
this ratio, the better. In our case, DB2 must perform a sync I/O only every 2471.29
getpage requests.
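As a minimal sketch of this arithmetic (using the counter names from the report; the hit ratio
formula shown here is the commonly used approximation and may differ slightly from the exact
OMEGAMON calculation), the two indicators can be derived from the buffer pool counters
as follows:

public class BufferPoolRatios {
    // Approximate BP hit ratio (%): the share of getpage requests that did not
    // require a page to be read from disk, either synchronously or by prefetch.
    static double hitRatio(long getpages, long syncReads, long pagesReadByPrefetch) {
        long pagesReadFromDisk = syncReads + pagesReadByPrefetch;
        return 100.0 * (getpages - pagesReadFromDisk) / getpages;
    }

    // Getpage requests per synchronous random read (GETPAGE PER SYN.READ-RANDOM);
    // the higher this value, the less often the application waits for sync I/O.
    static double getpagesPerSyncRead(long getpages, long syncRandomReads) {
        return (double) getpages / syncRandomReads;
    }
}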
The prefetch-disabled counters and the page-in counter in the report (PREF.DISABLED-NO
BUFFER, PREF.DISABLED-NO READ ENG, and PAGE-INS REQUIRED FOR READ) should
show a low or zero value. If that is not the case, you must investigate and at least understand
why they are non-zero if you cannot remedy the problem.
If there is DB2 sort activity that is triggered by SQL activity, such as by ORDER BY or GROUP BY,
the SORT/MERGE section shows non-zero values. It is important to make sure that the
number of workfile requests that were degraded or rejected is kept to a minimum.
The last section of the buffer pool report contains information about DB2 write operations.
This is where DB2 must write the updated pages in the buffer pool or group buffer pool back
to disk. Most write activity is asynchronous to the application, meaning applications typically do
not have to wait for write I/Os to complete. However, it is a preferred practice to verify this
section in the DB2 statistics report to make sure that there are no issues with the write
performance.
There are a number of buffer pool thresholds that indicate to DB2 that it is time to write
updated pages back from the virtual pool to disk. Here are two of these thresholds:
HORIZ.DEF.WRITE THRESHOLD: When the number of unavailable pages reaches the
DWQT buffer pool threshold (set to 30% by default, but you can change the value by
running the -ALTER BUFFERPOOL command), deferred (asynchronous) writes are triggered.
VERTI.DEF.WRITE THRESHOLD: This is the same as the deferred write threshold, but at
the page set level. When the number of updated pages for a data set reaches the VDWQT,
deferred (asynchronous) writes begin for that data set. The default is 5%, but you can
change the value by running the -ALTER BUFFERPOOL command. The VDWQT value can
be specified either as a percentage of the BP size or as an absolute number of changed
pages after which the asynchronous writes are triggered.
For the workload that produced the BP numbers above, these values are zero. This is
because all objects are GBP-dependent and the updated pages (few per transaction) are
written to the GBP at commit time, and are written back to disk through castout I/O
operations. As such, the DWQT and VDWQT thresholds are not reached.
As with the BP read operation section, there are a few counters that should have a zero or low
value:
DM THRESHOLD: This counter indicates that the number of unavailable pages in the BP
reached 95% or more, which is the data manager threshold. Normally, DB2 accesses the
page in the virtual buffer pool once for each page, no matter how many rows are retrieved
or updated on that page. If the threshold is exceeded, a getpage request is done for each
row instead of each page. If more than one row is retrieved or updated in a page, more
than one getpage and release page request is performed on that page. This is a bad thing,
but it is an autonomic mechanism that is built into DB2 to slow things down, giving the
write engines time to write some of the unavailable pages back to disk.
WRITE ENGINE NOT AVAILABLE: This counter is the number of times that a write engine
was not available. This field is no longer populated by DB2, as it was too common to hit the
maximum of 300 engines running at the same time. Under normal circumstances, it is not
a problem when you hit this limit, as applications normally do not wait for write I/Os to
complete.
PAGE-INS REQUIRED FOR WRITE: This counter is similar to the page-in counter for
read operations. Before doing a write I/O from a buffer, the frame is fixed in real storage.
When DB2 detects that the frame is in auxiliary storage before the write I/O starts, it
increments this counter.
For more information about buffer pool tuning, see DB2 9 for z/OS: Buffer Pool Monitoring and
Tuning, REDP-4604.
Group buffer pool section
The main purpose of the group buffer pool in a DB2 data sharing environment is to ensure
buffer coherency, that is, all members of the data sharing group can obtain the correct (most
recent) version of the page that they want to process. To achieve this task, updated pages are
written to the group buffer pool at commit time, and if these pages are present in the local
buffer pool of other members, these pages are cross-invalidated (XI). The next time such a
member requests that page, it sees the XI flag and retrieves a new (latest) copy of the page,
either from the group buffer pool or from disk.
To reduce this type of data sharing impact (refreshing XI-pages) to a minimum, make sure
that, if such a refresh operation must occur, the requested page is available in the group
buffer pool. A page can be retrieved from the group buffer pool at microsecond speed, and
reading from disk typically takes milliseconds.
To see whether DB2 can accomplish this task, check the group buffer pool section:
SYN.READ(XI)-DATA RETURNED: This is the number of times that this DB2 member
found an XI page in its local buffer pool, went to the GBP, and found the page in the GBP.
SYN.READ(XI)-NO DATA RETURN: This is the number of times that this DB2 member
found an XI page in its local buffer pool, went to the GBP, and did not find the page in
the GBP.
In general, keep the Sync.Read(XI) miss ratio below 10% by using the following formulas:
TOTAL SYN.READ(XI) = SYN.READ(XI)-DATA RETURNED + SYN.READ(XI)-NO DATA
RETURN
Sync.Read(XI) miss ratio = SYN.READ(XI)-NO DATA RETURN / TOTAL SYN.READ(XI)
Typically, only changed pages are written to the GBP (GBPCACHE(CHANGED) is the default).
However, you can use the group buffer pool as an auxiliary storage level for unchanged pages
as well if you specify GBPCACHE(ALL). This way, unchanged pages are also written to the GBP.
When DB2 does not find a page in the local BP, it checks the GBP first before going to disk
(GBP retrieval is much faster than I/O from disk). If the page is in the GBP, it is reused from
there instead of reading from disk.
To verify the efficiency of this extra caching level, you can use the SYN.READ(NF)-DATA
RETURNED and SYN.READ(NF)-NO DATA RETURN GBP statistics. They are similar to the
XI information, but they represent the times that DB2 was not able to find the page in the local
BP, went to the GBP, and was either successful or unsuccessful in finding the page in the
GBP. Our tests used the default GBPCACHE(CHANGED) setting, and the local BP hit ratio that we
described in the (local) BP section above is high, so there is little benefit in using
GBPCACHE(ALL) for this application.
The process to write changed pages from the GBP back to disk is called castout processing.
As for local BP, castout processing is also triggered by a number of thresholds:
CASTOUT CLASS THRESHOLD: This is the number of times the group buffer pool
castout was initiated because the group buffer pool class castout threshold was exceeded.
This is similar to the VDWQT threshold at the page set level for local buffer pools. Queues
inside the GBP are not organized by page set, as they are in a local BP, but by castout class.
When the number of changed pages in a class exceeds the percentage that is specified by
CLASST, castout for that class is triggered. The default is 5%.
GROUP BP CASTOUT THRESHOLD: This is the number of times group buffer pool
castout was initiated because the group buffer pool castout threshold was exceeded. This
threshold is similar to the DWQT threshold for local buffer pools. When the number of
changed pages exceeds the threshold, castout is triggered. The default GBPOOL
threshold is 30%.
GBP CHECKPOINTS TRIGGERED: This is the number of times group buffer pool castout
was initiated because a group buffer pool checkpoint was initiated (the default is every four
minutes). This is also similar to DB2 system checkpoint, which triggers asynchronous
writes for all updated pages in the local BP. You see only a non-zero value on the member
if that member is the GBP structure owner. It is the structure owner that is responsible for
GBP checkpoint processing.
In our test case, the castout is driven by the CLASST threshold, which is expected behavior.
There are not enough different changed pages in the GBP to trigger the GBPOOL threshold.
Tip: You can also use the ALLOWAUTOALT(YES) option to allow XES to dynamically adjust
your DB2 GBP. This feature is not intended to adjust GBP settings to deal with sudden
spikes of activity, but is designed to adjust the settings when the workload changes
gradually over time.
For more information, see “Auto Alter capabilities” in DB2 10 for z/OS Data Sharing:
Planning and Administration, SC19-2973, and “Identifying the coupling facility
structures”, found at:
http://publib.boulder.ibm.com/infocenter/zos/v1r13/index.jsp?topic=%2Fcom.ibm.zos.r13.ieaf100%2Ficfstr.htm
WRITE TO SEC-GBP FAILED: This is similar to the previous counter, but applies to writes
to the secondary group buffer pool.
As page P-lock processing is handled by the DB2 buffer manager component, the DB2 group
buffer pool statistics also contain valuable information about the page P-lock activity at the BP
level. (The example above shows only the total for all GBPs, but the DB2 statistics report
contains this type of information for each group buffer pool.)
The DB2 statistics record contains information about the page P-lock activity in terms of the
number of requests, the number of suspensions, and the number of negotiations. It also
distinguishes between page P-locks for space map pages, data pages (when using row level
locking), and page P-locks for index leaf pages. A higher number of page P-lock requests
means that DB2 must do some additional processing of these transactions, which typically
translates into more processor usage and increased elapsed time. Also, acquiring a page
P-lock is less expensive than a suspension for a page P-lock, which in turn is less expensive
than negotiating a page P-lock. (Unlike L-locks or transaction locks, P-locks can be negotiated
between members, but it is an expensive process, typically requiring forced writes to the
active log data set and synchronous writes to the group buffer pool.)
CPU information
The CPU Times section of the DB2 statistics report, which is shown in Example 8-17 on
page 417, provides information about the amount of processing that is used by the different
DB2 system address spaces:
SYSTEM SERVICES ADDRESS SPACE (ssidMSTR)
DATABASE SERVICES ADDRESS SPACE (ssidDBM1)
IRLM
DDF ADDRESS SPACE (ssidDIST)
Example 8-17 Statistics report - CPU Times section
CPU TIMES TCB TIME PREEMPT SRB NONPREEMPT SRB TOTAL TIME PREEMPT IIP SRB /COMMIT
------------------------------- --------------- --------------- --------------- --------------- --------------- --------------
SYSTEM SERVICES ADDRESS SPACE 0.032263 0.468393 0.011236 0.511892 N/A 0.000006
DATABASE SERVICES ADDRESS SPACE 0.011671 0.270898 0.075548 0.358117 0.011764 0.000004
IRLM 0.000014 0.000000 1.351361 1.351376 N/A 0.000016
DDF ADDRESS SPACE 3.492625 36.759432 0.721285 40.973342 22.948061 0.000481
The CPU time that is reported here is the amount of processing that is used by DB2 to
perform system-related activity, on behalf of the applications that are running SQL requests.
For example, when DB2 must access a table space the first time, the data set is not open yet.
Therefore, the application is suspended, and DB2 switches to a task control block (TCB) in
the DBM1 address space. This TCB performs the allocation and physical open of the data
set. The processing to perform the allocation and physical open is charged to the TCB
processing time of the DBM1 address space. After the data set open is done, the application
is resumed and continues processing.
The WLM-managed SP address spaces are not considered system address spaces. The
processing that is used by those spaces is reported in the DB2 nested activity section of the
DB2 accounting reports for the different applications.
Note: The processing time that is reported for the DDF (ssidDIST) address space includes
both the processing time that is used by system tasks running in this address space and
the processing time that is used by all the database access threads (DBATs)
in the system. DBATs run as pre-emptible Service Request Blocks (SRBs) in the DIST
address space, so all the work that is done by remote connections running SQL
statements against a DB2 for z/OS system shows up in the DB2 accounting records, and in
the CPU Times section in the DB2 statistics report, under PREEMPT SRB time.
This section separates the CPU Time into the following types:
TCB Time: This is the amount of processing time that is used by work that runs using a
task control block as a dispatchable unit of work.
PREEMPT SRB: This is the amount of processing time that is used by work that runs by
using a pre-emptible service request block as a dispatchable unit of work.
NONPREEMPT SRB: This is similar to PREEMPT SRB, but this type of dispatchable unit
must voluntarily relinquish control, whereas the other types can be interrupted at any time
by the MVS dispatcher. DB2 has few of these types of SRB. IRLM still uses this type of
dispatchable unit, but IRLM requests are short, must run to completion without being
interrupted, and must be serviced with a high priority.
TOTAL TIME: This is the sum of TCB Time, PREEMPT SRB, and NONPREEMPT SRB.
PREEMPT IIP SRB: This is the amount of processing that DB2 ran on a specialty engine,
such as zIIP or zAAP. It is not included in the other CPU Time fields, as users are not
charged for the use of the zIIP or zAAP engine.
The workload in Example 8-17 is a distributed workload, and the majority of the processing
time is PREEMPT SRB time in the DIST address space. Note the considerable amount of
zIIP offload for the DDF work: 22.959825 / (22.959825 + 37.498723) = 38.4%.
There are many more sections in the DB2 statistics record with information about how the
subsystem is doing. Describing all of them is beyond the scope of this book. For more
information, see DB2 10 for z/OS Managing Performance, SC19-2978 and Tivoli
OMEGAMON XE for DB2 on z/OS Report Reference, SH12-6963.
Example 8-19 on page 419 shows a sample SYSIN that creates a DB2 accounting report
using the IBM OMEGAMON XE for DB2 PE on z/OS batch reporting facility.
Example 8-19 Create a DB2 accounting report
//SYSIN DD *
DB2PM
* *********************************
* GLOBAL PARMS
* *********************************
GLOBAL
* Adjust for US East Coast DST
TIMEZONE (+4)
FROM(,21:29:00)
TO(,21:30:01)
* Include the entire group
INCLUDE( GROUP(DB0ZG))
* ************************************
* ACCOUNTING REPORTS
* ************************************
ACCOUNTING
REPORT
DDNAME(ACTRCDD1)
LAYOUT(LONG)
ORDER(TRANSACT)
EXEC
The statements in Example 8-19 generate an IBM OMEGAMON XE for DB2 PE on z/OS
accounting report for all members of data sharing group DB0ZG (which includes two
members named D0Z1 and D0Z2). The information is grouped by DB2 transaction name
(ORDER(TRANSACT)). The report covers only a single minute. This report is congruent with
Example 8-18 on page 418.
As this is an accounting report, the averages are based on the number of occurrences (the
number of transactions that qualify for the filter criteria in that interval). This is different from
an accounting trace report, which shows individual transactions (so there are no averages).
As with the statistics report, the accounting report consists of many sections. As an example,
we use the sections of the transaction that is called ‘TraderClientApplication’. For more
information about how we set the client information strings for this test, see “Specifying client
information at the data source level” on page 366.
Example 8-20 Accounting report - identification elapsed time and class 2 time distribution
LOCATION: DB0Z OMEGAMON XE FOR DB2 PERFORMANCE EXPERT (V5R1M1) PAGE: 1-4
GROUP: DB0ZG ACCOUNTING REPORT - LONG REQUESTED FROM: ALL 21:29:00.00
MEMBER: D0Z1 TO: DATES 21:30:01.00
SUBSYSTEM: D0Z1 ORDER: TRANSACT INTERVAL FROM: 08/13/12 21:29:00.37
DB2 VERSION: V10 SCOPE: MEMBER TO: 08/13/12 21:30:00.99
TRANSACT: TraderClientApplication
The example indicates that #DBATS is 86521, so we are clearly dealing with a type 4
connection. We are not using accounting rollup (#DDFRRSAF ROLLUP is zero). It is a
preferred practice to check #COMMITS versus #ROLLBACKS to make sure that the vast
majority of the work is succeeding (and committing), and not constantly failing (rolling back its
unit of work).
The highlights section calculates the SYNCH I/O AVG. seen by DB2, which provides you with
an easy way to do a smoke test and see whether the average synchronous I/O times are
reasonable. In this case, they are 680 microseconds.
The normal termination section, which is shown in Example 8-22 on page 421, gives you a
quick idea about what triggered the accounting record. This is from the same accounting
report in Example 8-21 that uses type 4 connectivity, so expect to see a high number of
TYPE2 INACTIVE threads (where DB2 separates the DBAT and the connection at commit
time, by pooling the DBAT and making the connection inactive).
Example 8-22 Accounting report - normal termination
NORMAL TERM. AVERAGE TOTAL
--------------- -------- --------
NEW USER 0.00 1
DEALLOCATION 0.00 1
APPL.PROGR. END 0.00 0
RESIGNON 0.00 0
DBAT INACTIVE 0.00 0
TYPE2 INACTIVE 1.00 86519
RRS COMMIT 0.00 0
In our case, we have a good cache hit ratio. The Trader application uses a limited number of
SQL statements and they typically use parameter markers. When you use parameter
markers, DB2 uses a ‘?’ at prepare time, and provides the actual value of the parameter
marker at execution time. So, the only difference between the SQL statements is the value
that is provided at run time. The actual SQL statement text (using the ‘?’) is used by DB2 to
determine whether the statement is in the cache. Using a parameter marker instead of a
literal value increases the chance of finding a statement cache match, allowing DB2 to reuse
the cached statement.
Locking information
Another important part of the transaction profile is the locking behavior of the transaction. The
locking section and data sharing locking section are shown in Example 8-25. We already
looked at some of the fields when we described the DB2 statistics report in “Locking and data
sharing locking sections” on page 406. For more information about local and global
suspensions, timeouts, deadlocks, and lock escalations, see that section. The only difference
between the accounting and statistics fields is that the accounting information applies to a
specific transaction/application, but the statistics data applies to all the transactions in the
DB2 subsystem that ran during the reporting time frame.
LOCK SUSPENSIONS 0.01 530
IRLM LATCH SUSPENS. 0.04 3531
OTHER SUSPENS. 0.00 2
LOCK REQUESTS
This counter represents the number of (L-)LOCK requests that were sent to IRLM. It is
important to reduce the number of locks as much as possible, as acquiring locks uses
processing time, and locks might also prevent other transactions from accessing the same
resource if they must access the page/row in a way that is not compatible with the lock that
your transaction is holding. This can be achieved by application design and the usage of DB2 lock
avoidance techniques.
UNLOCK REQUESTS
This counter represents the number of (L-)UNLOCK requests that were sent to IRLM. A single
UNLOCK request can release many locks in a single operation. For example, at commit time,
DB2 issues an UNLOCK ANY request to release all locks that are no longer required. If a
program that is using isolation level CS is fetching from a read only cursor, DB2 tries to avoid
taking locks as much as possible (lock avoidance). However, DB2 might have to acquire locks
as it fetches rows from the cursor. If a lock was acquired and the cursor moves off the
row/page, that lock is released by an UNLOCK request.
So, DB2 issues UNLOCK requests mainly for this type of cursor fetching (unlocking a row/page
at a time) or at commit (unlocking them all).
Tip: To assess the effectiveness of the DB2 data lock avoidance techniques at a high level,
you can calculate #UNLOCK/COMMIT. If that value is greater than 5, DB2 lock avoidance is not
effective. In that case, you might want to ensure that the application uses the
ISOLATION(CS) and CURRENTDATA(NO) BIND options.
In Example 8-25 on page 422, UNLOCK/COMMIT is 1.21 (avg #unlocks, as the #commits is
almost identical to the #occurrences here), which is a good value. This is a high-level check.
If, for example, the application is light and is doing only a few fetches, even if lock avoidance is
not working at all, the ratio is still low. Therefore, it is always important to check the overall
transaction profile and not blindly apply any rules of thumb.
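For a JDBC application such as DayTrader, isolation level CS corresponds to the JDBC
read-committed isolation level, which can be set on the data source or, as in this minimal
sketch, on the connection; CURRENTDATA(NO) is a bind option of the JDBC driver packages
and is not set through the JDBC API.

import java.sql.Connection;
import java.sql.SQLException;

public class IsolationSketch {
    // Cursor stability (CS) maps to the JDBC read-committed isolation level,
    // which gives DB2 the best opportunity to apply lock avoidance.
    static void useCursorStability(Connection con) throws SQLException {
        con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
}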
Tip: In general, try to issue a COMMIT frequently enough to keep the average MAX PG/ROW
LOCKS HELD below 100.
The AVERAGE value that is shown in the accounting REPORT is the average of MAX
PG/ROW LOCKS HELD of all the accounting records that qualify for the report. The TOTAL is
for the maximum of all MAX PG/ROWS LOCKS HELD, that is, the “high water mark” of all
accounting records that qualify for the report. For example, if transaction A has a MAX
PG/ROWS LOCKS HELD value of 10, and transaction B has a MAX PG/ROWS LOCKS
HELD value of 20, then an accounting report that includes these two transactions has
AVERAGE (average of maximum) of 15, and TOTAL (high water mark) of 20.
The SYNCH.XES - LOCK REQUEST counter represents the total number of lock requests
that have been synchronously sent to XES. This number includes both P-locks (page set and
page) and L-lock requests that are propagated to z/OS XES synchronously. This number is
not incremented if the request is suspended during processing (either because of some type
of global contention, or because the XES heuristic algorithm decided to convert the request
from synchronous to asynchronous). The latter requests are included in the
CONVERSIONS-XES counter.
Buffer pool and group buffer pool information
As with the locking information, we already looked at most of the information when we
described these sections in the DB2 statistics report in “Buffer pool section” on page 408 and
“Group buffer pool section” on page 413. The only difference between the accounting and
statistics fields is that the accounting information applies to a specific transaction/application,
but the statistics data applies to all the transactions in the DB2 subsystem that ran during the
reporting time frame.
Example 8-26 shows only the totals for all buffer pools that are used by the transaction
(TOTAL BPOOL). The accounting report contains the same information for each of the buffer
pools that were accessed by the transaction.
Example 8-26 Accounting report - buffer pool and group buffer pool
TOTAL BPOOL ACTIVITY AVERAGE TOTAL
--------------------- -------- --------
BPOOL HIT RATIO (%) 99.95 N/A
GETPAGES 6.08 526390
GETPAGES-FAILED 0.00 0
BUFFER UPDATES 0.54 46784
SYNCHRONOUS WRITE 0.00 0
SYNCHRONOUS READ 0.00 221
SEQ. PREFETCH REQS 0.00 0
LIST PREFETCH REQS 0.00 5
DYN. PREFETCH REQS 0.01 1193
PAGES READ ASYNCHR. 0.00 56
From an application profile point of view, the following information is typically used.
GETPAGE REQUEST
This is the number of times an SQL statement must request a page from the DB2 buffer
manager component. When an SQL statement must retrieve a row, that row lives on a page,
and that page lives in the DB2 buffer pool (or on disk, or in the group buffer pool (GBP) in a
data sharing system). So, to obtain the row, there is a request to the DB2 buffer manager
component to get the page (getpage). The amount of getpage activity is a good measure for
the amount of work that DB2 is performing to satisfy the SQL requests from the application.
BUFFER UPDATES
This is the number of times a buffer update occurs. This field is incremented every time that a
page is updated and is ready to be written to DASD/GBP. DB2 typically increments the
counter for each row that is changed (inserted, updated, or deleted). For example, if an
application updates two rows on the same page, you are likely to see GETPAGE REQUEST
1 and BUFFER UPDATES 2 (provided that no additional getpages were required to retrieve
the page and no index updates were needed).
SYNCHRONOUS READS
When the DB2 buffer manager finds that the page is not in the buffer pool (or GBP), it has to
read the page from disk. When DB2 reads a single page from disk, and the application waits
for this page to be brought into the BP, this is called a synchronous read operation. Even
though disk I/O has improved dramatically over the last couple of years, it is still orders of
magnitude slower than retrieving a page that is already in the BP. Therefore, reducing the
number of SYNC I/Os has a positive effect on the transaction response time. So, it is
important to verify the amount of I/O activity that an application must perform. A high number
of sync I/O requests can be a sign of a poor access path (when combined with a high number
of getpage requests) or a sign of a buffer pool that is performing poorly.
The thread activity time is the entire time that the thread (application/transaction) was active.
This time is identical to the DB2 class 1 elapsed time. It covers the time from the first
SQL statement (which triggers DB2 thread creation or reuse) until the thread ends or
commits, depending on the type of attachment that you use. This is described in more detail in
8.4.2, “Creating DB2 accounting records at a transaction boundary” on page 396.
Thread activity time = Class 1 elapsed
The accounting class 2 time is the time that the transaction spends inside the DB2 database
engine. When the thread is running an SQL statement, it is either using CPU time, recorded
as CLASS 2 CPU time (when running on a general-purpose engine) or SE CPU time (when
running on a specialty engine), or it is waiting for something. This waiting time is divided into
class 3 wait time, which is a wait that DB2 is aware of and can account for (such as waiting for
a lock that is held by another transaction in an incompatible state), and time where we know
the thread was in DB2 but that is not one of the class 3 wait times that DB2 can report on.
This latter time is also known as not accounted time, and it is typically a small portion of the
class 2 elapsed time.
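Expressed as a formula (using the field names from the accounting report), the not accounted
time can be derived as follows:
Class 2 not accounted time = Class 2 elapsed time - (Class 2 CP CPU time + Class 2 SE CPU
time + Total Class 3 suspension time)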
Figure 8-31 shows the life of a transaction and how the work inside and outside of DB2 is
reported on, as accounting class 1, class 2, and class 3 time.
Figure 8-31 Accounting class 1, class 2, and class 3 times during the life of a transaction
LOG WRITE I/O 0.000081 0.12
OTHER READ I/O 0.000012 0.01
OTHER WRTE I/O 0.000001 0.00
SER.TASK SWTCH 0.000001 0.00
UPDATE COMMIT 0.000000 0.00
OPEN/CLOSE 0.000001 0.00
SYSLGRNG REC 0.000000 0.00
EXT/DEL/DEF 0.000000 0.00
OTHER SERVICE 0.000000 0.00
ARC.LOG(QUIES) 0.000000 0.00
LOG READ 0.000000 0.00
DRAIN LOCK 0.000000 0.00
CLAIM RELEASE 0.000000 0.00
PAGE LATCH 0.000060 0.02
NOTIFY MSGS 0.000000 0.00
GLOBAL CONTENTION 0.000164 0.31
COMMIT PH1 WRITE I/O 0.000000 0.00
ASYNCH CF REQUESTS 0.000044 0.19
TCP/IP LOB XML 0.000000 0.00
TOTAL CLASS 3 0.000399 0.71
When the transaction is using a type 4 connection, as is the case here, the work comes in
over the network into the DB2 DIST address space. As the class 1 time is the total time that
the thread is active, it also includes the time that the transaction spends in the application
server and in the network. The class 2 time records the time spent doing SQL-related activity;
the time that is spent in the DDF address space not performing SQL activity is also included
in the class 1 time, but that is a small amount of time and processing.
Both class 1 and class 2 record both the elapsed and CPU time. For the CPU time, DB2
distinguishes between CPU time that is used on a general-purpose engine (CP CPU time)
and time on a specialty engine (zIIP or zAAP, that is, SE CPU time). SE CPU time is not
included in the CP CPU time.
The class 2 suspend time is the sum of all the CLASS 3 wait counters that DB2 tracks.
The class 3 suspensions section records the time and the number of events that the
transaction (on average, as this is an accounting report, not an accounting trace) was
suspended for each of the suspension types that DB2 tracks.
When you use a type 4 connection, you can calculate the time in DB2:
Class 2 non-nested ET + (SP + UDF + trigger Class 1 ET) + non-nested (Class 1 CP CPU +
Class 1 SE CPU - Class 2 CP CPU - Class 2 SE CPU)
When you use a type 4 connection, the accounting record also includes a distributed activity
section, as shown in Example 8-28, from a DRDA requester with IP address 9.12.6.9 (our
WebSphere Application Server).
#COMMIT(2) RECEIVED: N/A TRANSACTIONS RECV. : N/A #PREPARE RECEIVED: N/A MSG.IN BUFFER: N/A
#BCKOUT(2) RECEIVED: N/A #COMMIT(2) RES.SENT: N/A #LAST AGENT RECV.: N/A #FORGET SENT : N/A
#COMMIT(2) PERFORM.: N/A #BACKOUT(2)RES.SENT: N/A
#BACKOUT(2)PERFORM.: N/A
In an OMPE accounting report, most of the fields are averages (based on the number of
occurrences), but some of the fields contain the total for all occurrences that are included in
the report (the fields in bold). This section indicates how much DRDA traffic occurred in terms
of the number of SQL statements, bytes, blocks, and messages. When blocking is used, rows
are put into blocks, which are then sent out in messages. As these are short running
transactions with only a small amount of data being passed, there is little blocking activity.
When DB2 accounting trace class 7, 8, or 10 is active, DB2 also produces accounting
information at the program or package level, as shown in Example 8-29.
TRANSACT: TraderClientApplication
As the Trader workload is using JDBC, all the work that is performed by the application runs
under the standard JDBC packages, in this case, SYSLN300. (We bound the package into a
special collection called JDBCNOHDBAT - JDBC_No_High_performance_DBAT, so the
regular packages do not use RELEASE(DEALLOCATE)).
At the package level, DB2 also records the ET and CPU time (GCP and SE) that was spent in
the package (Class 7 time), and a large subset of the Class 3 suspension counters is also
available at the package level (Class 8 time). When Class 10 is active, there is additional
information about the SQL, locking, and buffer pool activity, but we did not record this
information during this test.
Package level information is not that useful when you use JDBC, as all work runs under the
same set of packages. In this case, you must rely on the client information strings to correctly
segregate the JDBC work.
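If the client information cannot be set at the data source level, the application can set it on the
connection itself. The following minimal sketch uses the standard JDBC 4.0 client information
properties (the values shown are examples); the IBM Data Server Driver for JDBC and SQLJ
flows them to DB2, where they appear in the accounting records and can be used for WLM
classification.

import java.sql.Connection;
import java.sql.SQLClientInfoException;

public class ClientInfoSketch {
    // Tag the connection so that the work can be identified in DB2 accounting
    // reports (for example, with ORDER(TRANSACT)) and classified in WLM.
    static void tagConnection(Connection con) throws SQLClientInfoException {
        con.setClientInfo("ApplicationName", "TraderClientApplication");
        con.setClientInfo("ClientUser", "trader01");                     // example value
        con.setClientInfo("ClientHostname", "appserver01.example.com");  // example value
    }
}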
When you use SQLJ, package level information can be helpful. In the SQLJ case, the SQL
statements run as static SQL and each application binds its own set of packages, which
allows for a more granular view of the application's database access pattern at the
package level.
Example 8-30 shows the accounting class 1, 2, and 3 information when using a type 2 (RRS)
connection to access DB2 for z/OS. The application is identical to what we used before
(TraderClientApplication); we only changed from a type 4 to type 2 connection in the data
source. However, the number of users being simulated was different, so you should not be
comparing the type 2 and the type 4 run, as the application profile is different.
TRANSACT: TraderClientApplication
AVERAGE APPL(CL.1) DB2 (CL.2) IFI (CL.5) CLASS 3 SUSPENSIONS AVERAGE TIME AV.EVENT HIGHLIGHTS
------------ ---------- ---------- ---------- -------------------- ------------ -------- --------------------------
ELAPSED TIME 0.018236 0.010092 N/P LOCK/LATCH(DB2+IRLM) 0.005214 0.70 #OCCURRENCES : 1304797
NONNESTED 0.018236 0.010092 N/A IRLM LOCK+LATCH 0.001948 0.26 #ALLIEDS : 1304797
STORED PROC 0.000000 0.000000 N/A DB2 LATCH 0.003266 0.44 #ALLIEDS DISTRIB: 0
UDF 0.000000 0.000000 N/A SYNCHRON. I/O 0.000001 0.00 #DBATS : 0
TRIGGER 0.000000 0.000000 N/A DATABASE I/O 0.000000 0.00 #DBATS DISTRIB. : 0
LOG WRITE I/O 0.000001 0.00 #NO PROGRAM DATA: 57
CP CPU TIME 0.000243 0.000197 N/P OTHER READ I/O 0.000000 0.00 #NORMAL TERMINAT: 0
AGENT 0.000243 0.000197 N/A OTHER WRTE I/O 0.000000 0.00 #DDFRRSAF ROLLUP: 130474
NONNESTED 0.000243 0.000197 N/P SER.TASK SWTCH 0.000817 0.11 #ABNORMAL TERMIN: 57
STORED PRC 0.000000 0.000000 N/A UPDATE COMMIT 0.000817 0.11 #CP/X PARALLEL. : 0
UDF 0.000000 0.000000 N/A OPEN/CLOSE 0.000000 0.00 #IO PARALLELISM : 0
TRIGGER 0.000000 0.000000 N/A SYSLGRNG REC 0.000000 0.00 #INCREMENT. BIND: 0
PAR.TASKS 0.000000 0.000000 N/A EXT/DEL/DEF 0.000000 0.00 #COMMITS : 1304734
OTHER SERVICE 0.000000 0.00 #ROLLBACKS : 0
SECP CPU 0.000000 N/A N/A ARC.LOG(QUIES) 0.000000 0.00 #SVPT REQUESTS : 0
LOG READ 0.000000 0.00 #SVPT RELEASE : 0
SE CPU TIME 0.000050 0.000041 N/A DRAIN LOCK 0.000000 0.00 #SVPT ROLLBACK : 0
NONNESTED 0.000050 0.000041 N/A CLAIM RELEASE 0.000000 0.00 MAX SQL CASC LVL: 0
STORED PROC 0.000000 0.000000 N/A PAGE LATCH 0.001033 0.08 UPDATE/COMMIT : 0.21
UDF 0.000000 0.000000 N/A NOTIFY MSGS 0.000000 0.00 SYNCH I/O AVG. : 0.000613
TRIGGER 0.000000 0.000000 N/A GLOBAL CONTENTION 0.000292 0.05
COMMIT PH1 WRITE I/O 0.000000 0.00
PAR.TASKS 0.000000 0.000000 N/A ASYNCH CF REQUESTS 0.000000 0.00
TCP/IP LOB XML 0.000000 0.00
SUSPEND TIME 0.000000 0.007358 N/A TOTAL CLASS 3 0.007358 0.95
AGENT N/A 0.007358 N/A
PAR.TASKS N/A 0.000000 N/A
STORED PROC 0.000000 N/A N/A
Now we are looking at an RRS (T2) connection, as indicated by a non-zero number in the
#ALLIEDS field (in the highlights section). #DDFRRSAF ROLLUP is non-zero, which
indicates that during this run we used rollup accounting (ACCUMACC=10).
This is also confirmed by the non-zero value in the END USER THRESH field (accounting
record that is written because the ACCUMACC value was reached for the ACCUMUID
aggregation field) in the Normal Term section, as shown in Example 8-31.
To calculate the time in DB2 for local applications, use the following formula:
Class 2 non-nested ET + (SP + UDF + trigger Class 1 ET)
When using a local attach such as RRS, the CPU time that is spent in the application can
be calculated as follows:
Non-nested (Class 1 CP CPU + Class 1 SE CPU - Class 2 CP CPU - Class 2 SE CPU)
In our case: (0.000243 + 0.000050) - (0.000197 + 0.000041) = 0.000055 seconds of CPU per
transaction spent in the application.
Example 8-32 shows the accounting package level information (when accounting class 7 and
8 are active). Type 2 JDBC and type 4 JDBC use the same packages (SYSLN300, in
our case).
Where:
1. We scheduled the workload to run on both data sharing members, so we observed
DSNT772I messages that are issued by both data sharing members.
2. nnnnn TIME(S) shows the number of times that the warning threshold was exceeded since
the last DSNT772I message was issued. As observed in the syslog, DB2 issued the
DSNT772I message every 5 minutes. Using the interval duration of 300 seconds, you can
use these values to calculate the number of active threads that are needed per second:
– D0Z1 = 31802 times
31802 / 300 = 106.01 + 7 = 113.01 active threads per second
– D0Z2 = 12306 times
12306 / 300 = 41.02 + 7 = 48.02 active threads per second
Our DB2 setup collects statistics trace class 4 to include IFCID 402 information about DB2
profile warning and exception conditions. Using the IFCID 402 information, we create the
OMPE record trace report that is shown in Figure 8-33 to obtain more information.
|-----------------------------------------------------------------------------------------------------------------
|PROFILE ID : 1 (THR = THREAD, EXC = EXCEPTION, TSH = THRESHOLD)
|ACCUMULATED COUNTER OF ...
|THR EXC TSH EXCEEDED : 0 THR QUEUED/SUSP WHEN EXC TSH WAS EXCEEDED : 0
|REQUEST FAILED WHEN THR EXC TSH WAS EXCEEDED : 0 THR WARNING TSH BEING EXCEEDED :1019250
|CONNECTION EXC TSH BEING EXCEEDED : 0 CONNECTION WARN TSH BEING EXCEEDED : 0
|IDLE THR EXC TSH BEING EXCEEDED : 0 IDLE THR WARN TSH BEING EXCEEDED : 0
|-----------------------------------------------------------------------------------------------------------------
Additional information
For more information about the topics that are covered in this section, see the setup in 4.3.19,
“Configure thread monitoring for the DayTrader-EE6 application” on page 187. For more
information about managing and implementing DB2 profile monitoring, see Chapter 45,
“Using profiles to monitor and optimize performance”, in DB2 10 for z/OS Managing
Performance, SC19-2978.
Connection type
– RRS for JDBC type 2 connections
– DRDA for JDBC type 4 connections
Using the UDF is simple and straightforward. For example, to query the aggregated PDB
accounting tables for JDBC type 2 connections that were collected on 14 August 2012, run
the query that is shown in Example 8-33.
Example 8-33 Using the SQL table UDF to query JDBC type 2 accounting information
select * from
table(accounting('TraderClientApplication','RRS')) a
where substr("DateTime",1,10) = '2012-08-14'
order by "DateTime" ;
-+---------+---------+---------+---------+---------
92588 22366 6971 9945 0 171534
814550 62241 4646 22235 1152 757692
1223042 41686 7165 30923 1877 1149004
1248040 45798 7359 31095 1801 1184893
1261375 48539 7371 31672 1887 1203983
1258972 50600 7364 31069 1794 1197565
1282236 56418 7477 31693 1839 1221505
1281040 56666 7425 31869 1925 1218127
97848 19435 562 2391 136 92090
DSNE610I NUMBER OF ROWS DISPLAYED IS 9
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 100
Figure 8-34 PDB query JDBC type 2 aggregated accounting data
8.5.2 Querying aggregated JDBC type 4 accounting information
Using the UDF, we ran the query that is shown in Figure 8-35 to obtain interval aggregated
information about our JDBC type 4 workload execution. For each interval, the query returns
aggregations of elapsed time, total and DB2 related processor and zIIP usage, number of
commits, SQL DML, locks, get page requests, and row statistics on insert, update, and
delete activities.
select
"DateTime" , "Elapsed" , "TotCPU" , "TotzIIP" , DB2CPU , "DB2zIIP" , "Commit"
, SQL , "Locks" , "RowsFetched" , "RowsInserted" , "RowsUpdated" ,
"RowsDeleted" , "GetPage"
from table(accounting('TraderClientApplication','DRDA')) a
where substr("DateTime",1,10) = '2012-08-17' order by "DateTime" ;
---------+---------+---------+---------+---------+---------+-------
DateTime Elapsed TotCPU TotzIIP DB2CPU DB2zIIP Commit SQL
---------+---------+---------+---------+---------+---------+-------
2012-08-17-22.59 288.46 23.56 12.14 20.75 10.59 55235 13185
2012-08-17-22.59 291.66 27.38 14.90 23.69 12.76 64120 26333
2012-08-17-23.00 433.34 49.51 26.76 43.21 23.11 123548 27323
2012-08-17-23.00 423.71 48.04 24.41 42.37 21.32 114517 24805
2012-08-17-23.01 309.15 47.00 23.71 41.51 20.69 112304 24123
2012-08-17-23.01 375.32 51.78 28.08 45.12 24.23 131529 28668
2012-08-17-23.02 796.84 94.92 52.00 81.67 44.18 265297 58104
2012-08-17-23.02 141.11 20.66 10.31 18.35 9.06 46233 10033
2012-08-17-23.03 1101.42 124.68 68.45 106.66 57.73 360722 79375
2012-08-17-23.04 660.42 79.00 43.25 67.71 36.55 226388 50443
2012-08-17-23.04 125.28 27.55 13.98 23.90 11.95 76366 17038
2012-08-17-23.05 637.05 78.89 43.05 67.65 36.41 228661 49864
2012-08-17-23.05 161.45 28.07 14.36 24.35 12.28 78467 16906
--+---------+---------+---------+---------+-------
Locks Rows Rows Rows Rows GetPage
Fetched Inserted Updated Deleted
--+---------+---------+---------+---------+-------
367481 72629 2067 10083 541 818516
452411 91083 6059 17562 579 818516
813130 159809 4691 20264 1185 1469024
746300 148772 4415 18197 1061 1469024
730570 145980 4188 17805 1035 1523288
867468 170043 4969 21239 1256 1523288
1737859 343606 10085 42965 2515 1957605
301810 60545 1729 7414 443 1957605
2361915 467835 13735 58712 3479 2269742
1482703 290534 8826 37282 2172 1910393
497295 99019 2956 12587 740 1910393
1495054 295893 8654 36847 2182 1926461
508469 100866 2983 12480 707 1926461
8.5.3 Using RTS to identify DB2 tables that are involved in DML operations
The query result in Figure 8-35 provides information about the number of rows fetched,
inserted, updated, and deleted without telling you which tables these activities were
performed on. All that we know is that the DayTrader-EE6 application accesses tables
belonging to the DBTR8074 database.
Answering this question (which tables are affected by these changes, and by how much) is
important to identify tables that need extra care when it comes to planning disk capacity,
identifying candidates for partitioning, and planning RUNSTATS, REORG, and backup
activities. For example, you might use the RTS information to identify tables that are volatile
or in need of extra REORG utility executions. If you identify tables that start small but are
going to grow huge, you might want to provide for table partitioning, and you might want to
talk with the application developers to determine whether data-partitioned secondary indexes
(DPSI) are a good choice.
Let us focus on the query output that is shown in Figure 8-35 on page 437, in which we obtain
information about the rows that are inserted, updated, and deleted during the workload
execution that is performed on 17 August. Before and right after workload execution, we
saved the RTS tables using the process that is described in 4.3.24, “DB2 real time statistics”
on page 198. We then ran the query that is shown in Figure 8-36 on page 439 to determine
the number of rows that were inserted, updated, and deleted for each of the tables that are
accessed during workload execution.
WITH
Q1 AS
( SELECT
DBNAME,NAME,PARTITION,
NACTIVE,NPAGES,REORGINSERTS,REORGDELETES,REORGUPDATES,
REORGMASSDELETE,TOTALROWS,SNAPSHOTTS
FROM TABLESPACESTATS
WHERE SNAPSHOTTS = '2012-08-17-22.57.57.673670'
AND DBNAME = 'DBTR8074'
ORDER BY DBNAME, NAME, SNAPSHOTTS),
Q2 AS
( SELECT
DBNAME,NAME,PARTITION,
NACTIVE,NPAGES,REORGINSERTS,REORGDELETES,REORGUPDATES,
REORGMASSDELETE,TOTALROWS,SNAPSHOTTS
FROM TABLESPACESTATS
WHERE SNAPSHOTTS = '2012-08-17-23.08.49.191718'
AND DBNAME = 'DBTR8074'
ORDER BY DBNAME, NAME, SNAPSHOTTS)
SELECT
SUBSTR(Q1.DBNAME,1,8) AS DBNAME,
SUBSTR(Q1.NAME ,1,8) AS NAME,
Q1.PARTITION,
Q2.TOTALROWS - Q1.TOTALROWS AS #ROWS,
Q2.REORGINSERTS - Q1.REORGINSERTS AS INSERTS ,
Q2.REORGDELETES - Q1.REORGDELETES AS DELETES ,
Q2.REORGUPDATES - Q1.REORGUPDATES AS UPDATES ,
Q2.REORGMASSDELETE - Q1.REORGMASSDELETE AS MASSDELETE
FROM Q1,Q2
WHERE
(Q1.DBNAME,Q1.NAME,Q1.PARTITION) = (Q2.DBNAME,Q2.NAME,Q2.PARTITION)
---------+---------+---------+---------+---------+---------+---------+---
DBNAME NAME PARTITION #ROWS INSERTS DELETES UPDATES MASSDELETE
---------+---------+---------+---------+---------+---------+---------+---
DBTR8074 TSACCEJB 0 11489 11489 0 133129 1
DBTR8074 TSACPREJ 0 11489 11489 0 21603 1
DBTR8074 TSHLDEJB 0 2476 24037 21561 21561 1
DBTR8074 TSKEYGEN 0 3 3 0 83 0
DBTR8074 TSORDEJB 0 45598 45598 0 158305 1
DBTR8074 TSQUOEJB 0 1000 1000 0 45580 1
DSNE610I NUMBER OF ROWS DISPLAYED IS 6
Figure 8-36 Using RTS to determine workload-related table changes
The query output that is shown in Figure 8-36 shows table spaces, their SQL insert, update,
and delete activities, and the number of rows that are stored in each table space upon
workload completion. Because we store only one table per table space, we can relate the
name of the table that was involved in the SQL DML operation to the table space that the
table belongs to.
JDBC type 2
The performance indicators of the DayTrader-EE6 JDBC type 2 run are shown in Figure 8-37.
select
"DateTime"
,"AVG-Time"
,"AVG-CPU"
,"Time/SQL"
,"CPU/SQL"
,"AVG-SQL"
,"LOCK/Tran"
,"LOCK/SQL"
,"GETP/Tran"
,"GETP/SQL"
from table(accounting('TraderClientApplication','RRS')) a
where substr("DateTime",1,10) = '2012-08-14'
order by "DateTime" ;
---------+---------+---------+---------+---------+---------+-
DateTime AVG-Time AVG-CPU Time/SQL CPU/SQL AVG-SQL
---------+---------+---------+---------+---------+---------+-
2012-08-14-22.39 .001333 .000555 .000304 .000126 4.380156
2012-08-14-22.41 .019606 .000308 .014670 .000231 1.336417
2012-08-14-22.42 .018041 .000290 .013600 .000219 1.326507
2012-08-14-22.43 .016868 .000290 .012775 .000220 1.320434
2012-08-14-22.44 .018514 .000291 .013983 .000219 1.324069
2012-08-14-22.45 .017083 .000291 .012967 .000221 1.317430
2012-08-14-22.46 .016689 .000288 .012657 .000219 1.318597
2012-08-14-22.47 .016517 .000288 .012505 .000218 1.320781
2012-08-14-22.48 .016578 .000291 .012692 .000223 1.306092
--------+---------+---------+---------+
LOCK/Tran LOCK/SQL GETP/Tran GETP/SQL
--------+---------+---------+---------+
10.321962 2.356528 19.123076 4.365843
6.606298 4.943287 6.145159 4.598231
6.536137 4.927329 6.140466 4.629048
6.523686 4.940560 6.193608 4.690583
6.540060 4.939362 6.242491 4.714624
6.518813 4.948128 6.200855 4.706780
6.522088 4.946230 6.213180 4.711960
6.524633 4.939977 6.204202 4.697371
6.480000 4.961362 6.098675 4.669404
Figure 8-37 Performance indicators JDBC type 2
JDBC type 4
The performance indicators of the DayTrader-EE6 JDBC type 4 run were obtained with the
following query:
select
"DateTime"
,"AVG-Time"
,"AVG-CPU"
,"Time/SQL"
,"CPU/SQL"
,"AVG-SQL"
,"LOCK/Tran"
,"LOCK/SQL"
,"GETP/Tran"
,"GETP/SQL"
from
table(accounting('TraderClientApplication','DRDA')) a
where substr("DateTime",1,10) = '2012-08-17'
order by "DateTime" ;
---------+---------+---------+---------+---------+---------+
DateTime AVG-Time AVG-CPU Time/SQL CPU/SQL AVG-SQL
---------+---------+---------+---------+---------+---------+
2012-08-17-22.58 .002664 .000604 .001339 .000303 1.989921
2012-08-17-22.59 .005222 .000426 .021877 .001786 .238707
2012-08-17-22.59 .004548 .000427 .011075 .001039 .410683
2012-08-17-23.00 .003507 .000400 .015859 .001812 .221152
2012-08-17-23.00 .003699 .000419 .017081 .001936 .216605
2012-08-17-23.01 .002752 .000418 .012815 .001948 .214800
2012-08-17-23.01 .002853 .000393 .013091 .001806 .217959
2012-08-17-23.02 .003003 .000357 .013714 .001633 .219014
2012-08-17-23.02 .003052 .000446 .014064 .002059 .217009
2012-08-17-23.03 .003053 .000345 .013876 .001570 .220044
2012-08-17-23.04 .002917 .000348 .013092 .001566 .222816
2012-08-17-23.04 .001640 .000360 .007352 .001616 .223109
2012-08-17-23.05 .002786 .000345 .012775 .001582 .218069
2012-08-17-23.05 .002057 .000357 .009549 .001660 .215453
2012-08-17-23.06 .003343 .000345 .015042 .001555 .222288
---------+---------+---------+---------+
LOCK/Tran LOCK/SQL GETP/Tran GETP/SQL
---------+---------+---------+---------+
9.476892 4.762445 17.073746 8.580111
6.653046 27.871141 14.818792 62.079332
7.055692 17.180382 12.765377 31.083279
6.581490 29.759909 11.890309 53.765106
6.516936 30.086676 12.827999 59.222898
6.505289 30.285204 13.563969 63.146706
6.595260 30.259104 11.581385 53.135482
6.550616 29.909455 7.378918 33.691398
6.528021 30.081730 42.342158 195.116615
6.547743 29.756409 6.292219 28.595174
6.549388 29.393632 8.438578 37.872311
6.511994 29.187404 25.016276 112.125425
6.538299 29.982632 8.424965 38.634305
6.480036 30.076245 24.551225 113.951319
6.552779 29.478718 6.300677 28.344596
Conclusion
When we compare the JDBC type 2 and JDBC type 4 performance indicators, we notice a ratio of less than 1 SQL statement per commit for the JDBC type 4 workload. This is caused by the data source custom property AUTOCOMMIT=ON, which causes the JDBC driver to issue a commit for each SQL statement. We also notice higher resource usage per SQL statement for the JDBC type 4 workload: more CPU, more locks, and more getpage requests.
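To make the autocommit effect concrete, the following plain JDBC sketch (the connection con and the DayTrader-style table and column names are placeholders) contrasts the two behaviors: with autocommit left on, every executeUpdate() is its own commit scope; with autocommit turned off, several statements share a single commit. In a WebSphere Application Server environment the commit scope is normally controlled by the container-managed transaction rather than by the application, so treat this only as an illustration of the effect.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CommitScope {
    // con is assumed to be an existing connection to the DayTrader database
    static void updateTwoHoldings(Connection con) throws SQLException {
        con.setAutoCommit(false);          // with autocommit on, each executeUpdate() would commit
        try (PreparedStatement pst =
                con.prepareStatement("UPDATE HOLDINGEJB SET QUANTITY = ? WHERE HOLDINGID = ?")) {
            pst.setDouble(1, 100.0);
            pst.setInt(2, 1);
            pst.executeUpdate();
            pst.setDouble(1, 200.0);
            pst.setInt(2, 2);
            pst.executeUpdate();
            con.commit();                  // one commit for the whole unit of work
        } catch (SQLException e) {
            con.rollback();
            throw e;
        }
    }
}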
In 8.2.2, “Using client information strings to classify work in WLM and RMF reporting” on
page 369, we set up our WLM service classes and, for monitoring purposes,
reporting classes:
RTRADE as the reporting class for the Trader application inside the WebSphere
Application Server
RTRADE0Z as the reporting class for the DDF enclaves that run the DB2 work when the
Trader application is using a type 4 connection.
Example 8-34 shows a sample JCL that you can use to run the RMF post processor. The first step sorts the data, which is especially important when you use multiple input data sets, for example, when combining data from multiple systems. The second step generates the reports. In this case, we use the following reporting option:
SYSRPTS(WLMGL(SCLASS,RCLASS,SCPER,RCPER,SYSNAM(SC64)))
This option creates a sysplex-wide workload activity report. We look at only one of the systems, SC64 (SYSNAM(SC64)). The report contains information about the service classes (SCLASS), the report classes (RCLASS), the individual service class periods (SCPER) within each service class, and the individual report class periods (RCPER) within each report class.
For more information about the different post-processor reporting options, see Chapter 17,
“Long-term reporting with the Postprocessor z/OS RMF”, in z/OS V1R13 Resource
Measurement Facility (RMF) User's Guide, SC33-7990.
Example 8-34 JCL that is used to create the postprocessor workload activity report
//BAT4RMF JOB (999,POK),'BART JOB',CLASS=A,MSGCLASS=T,
// NOTIFY=&SYSUID,TIME=1440,REGION=0M
/*JOBPARM SYSAFF=SC63
//RMFSORT EXEC PGM=SORT
//SORTIN DD DISP=SHR,DSN=DB2SMF.WASRB.SC64.T4.SMFRMF
//SORTOUT DD DISP=(NEW,PASS),UNIT=(SYSDA,5),SPACE=(CYL,(800,800))
//SORTWK01 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(100,200))
//SORTWK02 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(100,200))
//SORTWK03 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(100,200))
//SORTWK04 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(100,200))
//SORTWK05 DD DISP=(NEW,DELETE),UNIT=SYSDA,SPACE=(CYL,(100,200))
//SYSPRINT DD SYSOUT=*
Workload activity that is reported for the DB2 work in the RTRADE0Z
reporting class
Example 8-35 shows the workload activity report class period report for the RTRADE0Z
reporting class for period 1. The DDFONL service class that is used by this reporting class
uses two periods. The period 2 part is shown in Example 8-36 on page 445. As an example,
we picked a one-minute interval that started at 21.29.01. (We reduced the RMF interval to 1
minute to have more granularity in the reports.)
In this one minute interval, DB2 completed 43968 transactions in period 1, or 732.8
transactions per second by running, on average, 6.91 threads (enclaves) in parallel.
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %--- --PROMOTED-- ----STORAGE----
AVG 6.91 ACTUAL 9 SSCHRT 3.9 IOC 0 CPU 24.780 CP 20.60 BLK 0.000 AVG 0.00
MPL 6.91 EXECUTION 9 RESP 0.0 CPU 1447K SRB 0.000 AAPCP 0.00 ENQ 0.000 TOTAL 0.00
ENDED 43968 QUEUED 0 CONN 0.0 MSO 0 RCT 0.000 IIPCP 0.07 CRM 0.000 SHARED 0.00
END/S 732.80 R/S AFFIN 0 DISC 0.0 SRB 0 IIT 0.000 LCK 0.020
#SWAPS 0 INELIGIBLE 0 Q+PEND 0.0 TOT 1447K HST 0.000 AAP 0.00 SUP 0.000 -PAGE-IN RATES-
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 24117 AAP 0.000 IIP 20.69 SINGLE 0.0
AVG ENC 6.91 STD DEV 29 IIP 12.417 BLOCK 0.0
REM ENC 0.00 ABSRPTN 3491 SHARED 0.0
MS ENC 0.00 TRX SERV 3491 HSP 0.0
RESPONSE TIME EX PERF AVG --EXEC USING%-- -------------- EXEC DELAYS % ----------- -USING%- --- DELAY % --- %
SYSTEM ACTUAL% VEL% INDX ADRSP CPU AAP IIP I/O TOT CPU IIP CRY CNT UNK IDL CRY CNT QUI
SC64 100 4.9 0.5 6.4 1.5 0.0 1.2 0.0 52 52 0.7 0.0 0.0 45 0.0 0.0 0.0 0.0
The number of CPU seconds that were needed to complete these 43968 transactions is 24.78. The SERVICE TIME CPU value includes the CPU time on zAAP and zIIP (12.417). So, we used 12.363 seconds on a general CP in this case.
The workload activity report also indicates, as a percentage of an engine, how much capacity this reporting or service class (period) used. This information is in the APPL% column. In this case, it is 20.6% of a general engine, or one-fifth of an engine. The work also used 20.69% of a zIIP engine. In this example, the zIIP time is not included in the CP%.
Tip: The service time CPU includes the CPU seconds that are used on a zIIP or zAAP
engine. The CP percentage in the APPL% column does not include the zIIP and zAAP
processing.
A workload activity reporting or service class period report also indicates whether the WLM
goal for the service class period is met. If the performance index (PI) is less than one, which is
the case here, the goal is exceeded. When the PI =1, you meet the goal, and when the PI > 1,
WLM could not achieve the goal that is specified in the policy.
After the performance index in a period report, there is also a response time distribution
section. In this run, 43939 transactions out of 43968 completed in less than or equal to 0.5
seconds, so you are exceeding the response time goal of 000.00.01.000 second for 90% of
the transactions.
Example 8-36 shows the period 2 information for the RTRADE0Z reporting class in this one
minute interval. Only 24 transactions finished in period two.
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %--- --PROMOTED-- ----STORAGE----
AVG 0.03 ACTUAL 156 SSCHRT 0.1 IOC 0 CPU 0.014 CP 0.02 BLK 0.000 AVG 0.00
MPL 0.03 EXECUTION 156 RESP 0.0 CPU 804 SRB 0.000 AAPCP 0.00 ENQ 0.000 TOTAL 0.00
ENDED 24 QUEUED 0 CONN 0.0 MSO 0 RCT 0.000 IIPCP 0.00 CRM 0.000 SHARED 0.00
GOAL: EXECUTION VELOCITY 40.0% VELOCITY MIGRATION: I/O MGMT 0.0% INIT MGMT 0.0%
RESPONSE TIME EX PERF AVG --EXEC USING%-- -------------- EXEC DELAYS % ----------- -USING%- --- DELAY % --- %
SYSTEM VEL% INDX ADRSP CPU AAP IIP I/O TOT CPU CRY CNT UNK IDL CRY CNT QUI
SC64 --N/A-- 0.0 0.0 0.0 0.0 0.0 0.0 0.0 50 50 0.0 0.0 50 0.0 0.0 0.0 0.0
After looking at the different periods, look at all the transactions in the reporting class, with both periods combined, as shown in Example 8-37. The total number of transactions is 43992 (43968 in period 1 + 24 in period 2). Because almost all the transactions completed in period 1, the report for the entire report class is similar to the report for period 1. However, the report class report does not have performance index or response time distribution information. That information is available only at the period report level.
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %--- --PROMOTED-- ----STORAGE----
AVG 6.93 ACTUAL 9 SSCHRT 4.0 IOC 0 CPU 24.794 CP 20.62 BLK 0.000 AVG 0.00
MPL 6.93 EXECUTION 9 RESP 0.0 CPU 1448K SRB 0.000 AAPCP 0.00 ENQ 0.000 TOTAL 0.00
ENDED 43992 QUEUED 0 CONN 0.0 MSO 0 RCT 0.000 IIPCP 0.07 CRM 0.000 SHARED 0.00
END/S 733.20 R/S AFFIN 0 DISC 0.0 SRB 0 IIT 0.000 LCK 0.020
#SWAPS 0 INELIGIBLE 0 Q+PEND 0.0 TOT 1448K HST 0.000 AAP 0.00 SUP 0.000 -PAGE-IN RATES-
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 24130 AAP 0.000 IIP 20.70 SINGLE 0.0
AVG ENC 6.93 STD DEV 29 IIP 12.420 BLOCK 0.0
REM ENC 0.00 ABSRPTN 3480 SHARED 0.0
MS ENC 0.00 TRX SERV 3480 HSP 0.0
When you use a type 4 connection, the part of the transaction that is running inside the
WebSphere Application Server is represented by a different enclave. It is classified by the
Subsystem Type CB in the WLM classification panels. (WebSphere Application Server uses
Subsystem Type CB for enclave work.)
In Figure 8-13 on page 375, we use the Transaction Class to select the service class
(WASONL) and reporting class (RTRADE) that applies to the trader application. The
workload activity report for SC64 (that runs our WebSphere Application Server) for period 1 of
the RTRADE reporting class is shown in Example 8-38.
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %--- --PROMOTED-- ----STORAGE----
AVG 33.02 ACTUAL 50 SSCHRT 0.0 IOC 0 CPU 60.113 CP 41.19 BLK 0.000 AVG 0.00
MPL 33.02 EXECUTION 49 RESP 0.0 CPU 3510K SRB 0.000 AAPCP 32.28 ENQ 0.000 TOTAL 0.00
ENDED 39697 QUEUED 0 CONN 0.0 MSO 0 RCT 0.000 IIPCP 0.00 CRM 0.000 SHARED 0.00
END/S 661.62 R/S AFFIN 0 DISC 0.0 SRB 0 IIT 0.000 LCK 0.000
#SWAPS 0 INELIGIBLE 0 Q+PEND 0.0 TOT 3510K HST 0.000 AAP 58.99 SUP 0.000 -PAGE-IN RATES-
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 58504 AAP 35.396 IIP 0.00 SINGLE 0.0
AVG ENC 33.02 STD DEV 68 IIP 0.000 BLOCK 0.0
REM ENC 0.00 ABSRPTN 1772 SHARED 0.0
MS ENC 0.00 TRX SERV 1772 HSP 0.0
RESPONSE TIME EX PERF AVG --EXEC USING%-- -------------- EXEC DELAYS % ----------- -USING%- --- DELAY % --- %
SYSTEM ACTUAL% VEL% INDX ADRSP CPU AAP IIP I/O TOT AAP CPU Q CRY CNT UNK IDL CRY CNT QUI
MPL
SC64 100 13.5 0.5 31.0 0.3 0.9 0.0 0.0 7.8 7.5 0.2 0.1 0.0 0.0 91 0.0 0.0 0.0 0.0
The performance is good; the sampled state did not show any delays where the work
is waiting.
For more information about WLM Delay Monitoring, go to the following website:
http://pic.dhe.ibm.com/infocenter/wasinfo/v8r5/index.jsp?topic=%2Fcom.ibm.webspher
e.zseries.doc%2Fae%2Frprf_wlmdm.html
Note: There are some caveats when using duration reports. For more information, go to:
http://publib.boulder.ibm.com/infocenter/zos/v1r13/topic/com.ibm.zos.r13.erbb20
0/dintv.htm
Example 8-39 shows the SYSIN DD statements that are used to create a workload activity
report for the 9 minute time frame 22:39 - 22:48.
Example 8-40 shows the workload activity report class period report for the RTRADE
reporting class for period 1. The report looks similar to the ones we looked at before. When
using a type 2 connection, all the work that is done by the WebSphere Application Server
application, including the SQL activity, is done under the enclave that is created by the
WebSphere Application Server control region. As a result, the appl % CP is much higher than
in the type 4 case, which is expected, as the DB2 work is also included now. The state
samples breakdown shows a small percentage of delay for TYP8, which is the J2C Resource
manager delay when you call a J2C connector to resource managers, such as DB2.
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %--- --PROMOTED-- ----STORAGE----
AVG 27.23 ACTUAL 39 SSCHRT 0.2 IOC 0 CPU 554.785 CP 80.93 BLK 0.000 AVG 0.00
MPL 27.23 EXECUTION 37 RESP 0.2 CPU 32396K SRB 0.000 AAPCP 0.62 ENQ 0.000 TOTAL 0.00
ENDED 392496 QUEUED 1 CONN 0.1 MSO 0 RCT 0.000 IIPCP 0.00 CRM 0.000 SHARED 0.00
END/S 726.85 R/S AFFIN 0 DISC 0.0 SRB 0 IIT 0.000 LCK 0.008
#SWAPS 0 INELIGIBLE 0 Q+PEND 0.1 TOT 32396K HST 0.000 AAP 21.81 SUP 0.000 -PAGE-IN RATES-
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 59994 AAP 117.756 IIP 0.00 SINGLE 0.0
AVG ENC 27.23 STD DEV 37 IIP 0.000 BLOCK 0.0
REM ENC 0.00 ABSRPTN 2203 SHARED 0.0
MS ENC 0.00 TRX SERV 2203 HSP 0.0
CB : TYP8 - CONNECTOR
RESPONSE TIME EX PERF AVG --EXEC USING%-- -------------- EXEC DELAYS % ----------- -USING%- --- DELAY % --- %
SYSTEM ACTUAL% VEL% INDX ADRSP CPU AAP IIP I/O TOT CPU AAP Q CRY CNT UNK IDL CRY CNT QUI
MPL
SC64 100 2.2 0.5 28.4 1.5 0.1 0.0 0.0 70 69 0.6 0.5 0.0 0.0 28 0.0 0.0 0.0 0.0
Example 8-41 shows a similar workload activity report class period report for the RTRADE
reporting class for period 2. Only 48 transactions ended in this period.
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %--- --PROMOTED-- ----STORAGE----
AVG 0.05 ACTUAL 633 SSCHRT 0.1 IOC 0 CPU 5.884 CP 0.35 BLK 0.000 AVG 0.00
MPL 0.05 EXECUTION 631 RESP 0.2 CPU 343611 SRB 0.000 AAPCP 0.05 ENQ 0.000 TOTAL 0.00
ENDED 48 QUEUED 2 CONN 0.2 MSO 0 RCT 0.000 IIPCP 0.00 CRM 0.000 SHARED 0.00
END/S 0.09 R/S AFFIN 0 DISC 0.0 SRB 0 IIT 0.000 LCK 0.012
#SWAPS 0 INELIGIBLE 0 Q+PEND 0.1 TOT 343611 HST 0.000 AAP 0.74 SUP 0.000 -PAGE-IN RATES-
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 636 AAP 3.982 IIP 0.00 SINGLE 0.0
AVG ENC 0.05 STD DEV 2.233 IIP 0.000 BLOCK 0.0
REM ENC 0.00 ABSRPTN 13K SHARED 0.0
MS ENC 0.00 TRX SERV 13K HSP 0.0
GOAL: EXECUTION VELOCITY 40.0% VELOCITY MIGRATION: I/O MGMT 31.2% INIT MGMT 31.2%
RESPONSE TIME EX PERF AVG --EXEC USING%-- -------------- EXEC DELAYS % ----------- -USING%- --- DELAY % --- %
SYSTEM VEL% INDX ADRSP CPU AAP IIP I/O TOT AAP CPU CRY CNT UNK IDL CRY CNT QUI
SC64 --N/A-- 31.2 1.3 0.0 7.3 15 0.0 0.0 49 46 2.8 0.0 0.0 29 0.0 0.0 0.0 0.0
Example 8-42 shows the workload activity report for the RTRADE reporting class (looking at the overall activity of 392544 transactions, with both periods combined).
-TRANSACTIONS- TRANS-TIME HHH.MM.SS.TTT --DASD I/O-- ---SERVICE--- SERVICE TIME ---APPL %--- --PROMOTED-- ----STORAGE----
AVG 27.28 ACTUAL 39 SSCHRT 0.2 IOC 0 CPU 560.670 CP 81.28 BLK 0.000 AVG 0.00
MPL 27.28 EXECUTION 37 RESP 0.2 CPU 32740K SRB 0.000 AAPCP 0.67 ENQ 0.000 TOTAL 0.00
ENDED 392544 QUEUED 1 CONN 0.1 MSO 0 RCT 0.000 IIPCP 0.00 CRM 0.000 SHARED 0.00
END/S 726.94 R/S AFFIN 0 DISC 0.0 SRB 0 IIT 0.000 LCK 0.020
#SWAPS 0 INELIGIBLE 0 Q+PEND 0.1 TOT 32740K HST 0.000 AAP 22.54 SUP 0.000 -PAGE-IN RATES-
EXCTD 0 CONVERSION 0 IOSQ 0.0 /SEC 60630 AAP 121.738 IIP 0.00 SINGLE 0.0
AVG ENC 27.28 STD DEV 45 IIP 0.000 BLOCK 0.0
REM ENC 0.00 ABSRPTN 2223 SHARED 0.0
MS ENC 0.00 TRX SERV 2223 HSP 0.0
The IBM Data Server Driver for JDBC and SQLJ does not throw an exception for warning messages, but it accumulates warnings when SQL statements return positive SQLCODEs, and when SQL statements return a zero SQLCODE with a non-zero SQLSTATE. You retrieve the accumulated warnings by calling getWarnings() on the Connection, Statement, or ResultSet object, and you can then use the following methods of the java.sql.SQLWarning class to handle them:
getErrorCode(): Returns the SQLCODE.
getNextWarning(): Returns the next SQLWarning in the chain.
getSQLState(): Returns the SQLSTATE.
An SQLException or SQLWarning object can also use the following methods, which are inherited from the java.lang.Throwable class, to provide additional information:
getMessage(): Returns the description of the error or warning.
printStackTrace(): Prints the current exception or throwable and its backtrace to the standard error stream.
Example 9-1 shows how to print a warning, SQLCODE, error message, and stack trace.
System.out.println ("Warning code: " + sqlwarn.getErrorCode());
con.commit();
con.close();
}
catch(SQLException qex)
{
System.err.println ("SQLException information");
System.err.println ("Error msg: " + qex.getMessage());
System.err.println ("SQLSTATE: " + qex.getSQLState());
System.err.println ("Error code: " + qex.getErrorCode());
qex.printStackTrace();
}
Example 9-2 lists the output with the warning messages, error messages, and stack trace.
The meaning of each field in SQLCA depends on the specific error. The most interesting part
of the SQLCA is a string that is called SQLERRM, which contains several error tokens, which are
separated by the character 0xFF. The DB2Sqlca.getSqlErrmcTokens() method tokenizes this
string for you.
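As a minimal sketch of this technique (the table name MYTABLE and column C1 are placeholders; the DB2Diagnosable and DB2Sqlca class names and methods are those that the IBM Data Server Driver for JDBC and SQLJ provides for SQLCA access), an SQLException can be unwrapped and the SQLCA fields read as follows:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import com.ibm.db2.jcc.DB2Diagnosable;
import com.ibm.db2.jcc.DB2Sqlca;

public class SqlcaSample {
    static void showSqlca(Connection con) {
        try (Statement stmt = con.createStatement()) {
            // An invalid numeric constant such as 4321A provokes SQLCODE -103
            stmt.executeUpdate("INSERT INTO MYTABLE (C1) VALUES (4321A)");
        } catch (SQLException sqle) {
            if (sqle instanceof DB2Diagnosable) {
                DB2Sqlca sqlca = ((DB2Diagnosable) sqle).getSqlca();
                if (sqlca != null) {
                    System.err.println("SQLCODE : " + sqlca.getSqlCode());
                    System.err.println("SQLSTATE: " + sqlca.getSqlState());
                    System.err.println("SQLERRP : " + sqlca.getSqlErrp());
                    String[] tokens = sqlca.getSqlErrmcTokens();   // SQLERRM split at 0xFF
                    if (tokens != null) {
                        for (int i = 0; i < tokens.length; i++) {
                            System.err.println("Token " + (i + 1) + ": " + tokens[i]);
                        }
                    }
                    int[] sqlerrd = sqlca.getSqlErrd();            // SQLERRD(5) is sqlerrd[4]
                    System.err.println("SQLERRD(5): " + sqlerrd[4]);
                }
            }
        }
    }
}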
For more information about what the individual error tokens mean for a given SQLCODE, see
DB2 10 for z/OS, DB2 codes, found at:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.
db2z10.doc.codes%2Fsrc%2Fcodes%2Fdb2z_codes.htm
Look up the error text in the description of the SQLCODE. The tokens appear sequentially in the
SQLERRM in the order that they appear in the message text.
The code in Example 9-3 demonstrates how to obtain SQLCA from an SQLException.
The output of the program is shown in Example 9-4. SQLCODE -103 is returned because of an
invalid numeric constant “4321A” (as the number contains the letter “A”). SQLERRP contains the
DB2 module name that issues the SQLCODE. SQLERRD(5) indicates the starting position of the
invalid constant in the SQL statement, byte 48 in this case. This can be helpful for syntax
checking of complicated SQL statements.
For example, deferPrepares, a property of the IBM Data Server Driver for JDBC and SQLJ,
allows the PREPARE and EXECUTE statements to be sent across the network as a single
message, to reduce network processing:
....
PreparedStatement pst = con.prepareStatement("SELECT C1 FROM T1");
pst.executeQuery();
....
If table T1 does not exist, SQLCODEs -204, -516, and -514 are returned by the server in
succession. You must use getNextException to handle each SQLCODE accordingly.
Example 9-5 provides an example of how to code this type of error handling.
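As a minimal sketch of this pattern (the connection con is assumed to exist and the statement is the same one that is used above), the chained SQLCODEs can be processed by walking the SQLException chain:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ChainedErrors {
    static void runWithChainHandling(Connection con) {
        try (PreparedStatement pst = con.prepareStatement("SELECT C1 FROM T1")) {
            pst.executeQuery();
        } catch (SQLException sqle) {
            // With deferPrepares, errors for the PREPARE can surface at execute time,
            // so walk the whole chain (-204, -516, and -514 in the scenario above)
            SQLException current = sqle;
            while (current != null) {
                System.err.println("SQLCODE : " + current.getErrorCode());
                System.err.println("SQLSTATE: " + current.getSQLState());
                System.err.println("Message : " + current.getMessage());
                current = current.getNextException();
            }
        }
    }
}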
As it is not always possible to go in and make changes to the application, we focus more on
the trace capabilities that do not require any changes to the applications themselves.
There are many different ways to accomplish tracing. The option that you choose depends on
whether you want to activate the tracing outside your application code or within the
application, and how granular the trace must be (one application, all applications using a data
source, or all applications within the application server).
This section describes a few options, but the focus is on activating tracing outside the
application code in a WebSphere Application Server environment.
Another option that you can use with the DataSource interface is to run the javax.sql.DataSource.setLogWriter method to turn on the trace. However, when you use this method, TRACE_ALL is the only available trace level.
After a connection is established, you can turn the trace on or off, change the trace destination, or change the trace level by running the DB2Connection.setJccLogWriter method (DB2Connection.setJccLogWriter(java.io.PrintWriter logWriter, int traceLevel)).
To turn off the trace, set the logWriter value to null. The logWriter property is an object of
type java.io.PrintWriter. If your application cannot handle java.io.PrintWriter objects,
you can use the traceFile property to specify the destination of the trace output. To use the
traceFile property, set the logWriter property to null, and set the traceFile property to the
name of the file to which the driver writes the trace data. This file and the directory in which it
is must be writable. If the file exists, the driver overwrites it.
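As a minimal sketch of these programmatic options, assume a stand-alone com.ibm.db2.jcc.DB2SimpleDataSource that is configured elsewhere (outside the application server); the trace destination /tmp/jcctrace.log is only an illustration:

import java.io.FileWriter;
import java.io.PrintWriter;
import java.sql.Connection;
import com.ibm.db2.jcc.DB2BaseDataSource;
import com.ibm.db2.jcc.DB2Connection;
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class JccTraceToggle {
    static void traceOneUnitOfWork(DB2SimpleDataSource ds) throws Exception {
        PrintWriter logWriter = new PrintWriter(new FileWriter("/tmp/jcctrace.log"), true);
        ds.setLogWriter(logWriter);                 // DataSource interface: trace level is TRACE_ALL
        try (Connection con = ds.getConnection()) {
            DB2Connection db2con = (DB2Connection) con;
            // After the connection is established, the level and destination can be changed ...
            db2con.setJccLogWriter(logWriter, DB2BaseDataSource.TRACE_DRDA_FLOWS);
            // ... run the SQL statements that you want to trace here ...
            db2con.setJccLogWriter((PrintWriter) null); // setting the logWriter to null turns the trace off
        }
    }
}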
Another option when you use the DriverManager interface is to specify the traceFile and traceLevel properties as part of the URL when you load the driver. For example:
String url = "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z" +
":traceFile=/u/jcctrace;" +
"traceLevel=" +
com.ibm.db2.jcc.DB2BaseDataSource.TRACE_DRDA_FLOWS + ";";
All of the methods that are mentioned above require you to change the application code to activate the JCC trace, or at least to add code in advance so that these events can be traced. This is often not an option: it requires program changes that are typically subject to change management control and testing procedures, and you might be dealing with a packaged application for which you do not have the source code to add the trace points. Therefore, it is a preferred practice to activate the JCC trace outside of the application code itself, either at the application server level or the data source level.
Figure 9-1 Specifying JCC trace parameters at the data source level
The advantage of using the data source custom properties to activate the JCC trace is that
the application does not have to be changed. However, the disadvantage of this method is
that the application server must be stopped and started to activate these settings. This action
might not be practical in a production environment.
Perform the actions that are described in the following sections to take a combined WebSphere Application Server and IBM Data Server Driver for JDBC and SQLJ (JCC) trace. You can use the WebSphere Application Server administration console if the data source is managed by WebSphere Application Server.
Using the administration console, click JDBC Data sources, select your data source, click
Custom properties, and specify the traceLevel. In our example, we use 131072, which is the
TRACE_SYSTEM_MONITOR, as shown in Figure 9-2. You can specify any valid trace level
that you want. For more information about the different trace levels, see “TraceLevels” on
page 465.
We do not specify the traceFile and traceDirectory properties, which allows the JCC trace
to be automatically embedded in the WebSphere Application Server trace (the SYSOUT
DD-card of the servant region when you use WebSphere Application Server on z/OS).
If you want to make this trace permanent, select the Save runtime changes to
configuration as well check box, but you typically want to activate the trace only for a short
time, so we did not select the check box.
Figure 9-3 Set the log detail level in WebSphere Application Server
When the changes are saved, the trace is activated dynamically. There is no need to stop and
start the application server.
You can verify whether the changed trace options were picked up by checking the servant’s
SYSOUT information. There should be a message similar to the following one:
Trace: 2012/11/20 22:41:31.769 02 t=7B74F8 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ejs.ras.ManagerAdmin
ExtendedMessage: BBOO0222I: TRAS0018I: The trace state has changed. The new
trace state is *=info:WAS.j2c=all:RRA=all:WAS.database=all:Transaction=all.
From then on, the WebSphere Application Server and JCC trace are active. The trace log (in
SYSOUT) combines the output of the WebSphere Application Server trace with the JCC
trace, as shown in Example 9-6.
ExtendedMessage: Exit
Trace: 2012/11/20 22:42:34.222 02 t=7B7718 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ws.rsadapter.spi.WSRdbManagedConnectionImpl
ExtendedMessage: No Matching Prepared Statement found in cache
Trace: 2012/11/20 22:42:34.223 02 t=7B7718 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ws.db2.logwriter
ExtendedMessage: [jcc][SystemMonitor:start]
Trace: 2012/11/20 22:42:34.223 02 t=7B7718 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ws.db2.logwriter
ExtendedMessage: [jcc][Time:2012-11-20-22:42:34.223][Thread:WebSphere WLM Dispatch Thread t=007b7718]
[Connection@370f35ff]prepareStatement (select * from orderejb o where o.orderstatus = 'closed' AND
o.account_accountid = (select a.accountid from accountejb a where a.profile_userid = ?), 1003, 1007)
called
Trace: 2012/11/20 22:42:34.234 02 t=7B7718 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ws.db2.logwriter
ExtendedMessage: [jcc][Time:2012-11-20-22:42:34.234][Thread:WebSphere WLM Dispatch Thread t=007b7718]
[Connection@370f35ff]prepareStatement () returned com.ibm.db2.jcc.t4.k@ec3e3073
Trace: 2012/11/20 22:42:34.234 02 t=7B7718 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ws.db2.logwriter
ExtendedMessage: [jcc][Thread:WebSphere WLM Dispatch Thread t=007b7718][SystemMonitor:stop] core:
10.774125ms | network: 0.0ms | server: 0.0ms
Trace: 2012/11/20 22:42:34.240 02 t=7B7718 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ws.rsadapter.jdbc.WSJccPreparedStatement.<init>
ExtendedMessage: Entry; com.ibm.db2.jcc.t4.k@ec3e3073,
com.ibm.ws.rsadapter.jdbc.WSJccSQLJPDQConnection@e7857dfb, DEFAULT CURSOR HOLDABILITY VALUE (0), PSTMT:
select * from orderejb o where o.orderstatus = 'closed' AND o.account_accountid = (select a.accountid
from accountejb a where a.profile_userid = ?) 1003 1007 0 0 4
Trace: 2012/11/20 22:42:34.240 02 t=7B7718 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ws.rsadapter.jdbc.WSJccPreparedStatement
ExtendedMessage: current fetchSize is 0
Trace: 2012/11/20 22:42:34.240 02 t=7B7718 c=UNK key=P8 tag= (13007004)
SourceId: com.ibm.ws.rsadapter.jdbc.WSJccPreparedStatement.<init>
ExtendedMessage: Exit; com.ibm.ws.rsadapter.jdbc.WSJccPreparedStatement@aaca8816
The entries that are marked by [jcc] are from the JCC trace. The others are written as part of the WebSphere Application Server traces that were activated as well.
WebSphere Application Server traces can be verbose, so try to limit the type of tracing to a
minimum and trace only the events in which you are interested.
The major advantage of activating the JCC trace in the configuration properties file is that
changes to the settings are automatically picked up without stopping and starting the
application server.
For a complete list of driver properties settings, see the IBM Data Server Driver for JDBC and
SQLJ configuration properties topic in the Information Center at the following website:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/topic/com.ibm.imjccz10.doc.
updates/src/tpc/imjcc_r0052075.htm
The IBM Data Server Driver for JDBC and SQLJ configuration properties have a driver-wide
scope. If there is a corresponding Connection or DataSource property that is specified, those
properties typically override the setting in the properties file. For example,
db2.jcc.traceLevel is a configuration file property, and traceLevel is the equivalent
Connection or DataSource property setting and it overrides the configuration file property
setting. So in this case, the configuration property provides a default value for the Connection
or DataSource property.
Note the db2.jcc.tracePolling=true setting. It indicates to the driver that it must check for possible changes in the properties file, and db2.jcc.tracePollingInterval=10 directs the driver to perform this check every 10 seconds.
The use of the override feature together with regular polling allows you to dynamically activate and deactivate the JCC trace for the JVM.
Tip: When you direct the trace to a directory, make sure that you have write authority for that directory. Otherwise, the driver cannot write the trace data to the specified directory, and you might incorrectly conclude that the trace is not active.
You can also set up circular logging when you use a type 4 connection by using the
db2.jcc.traceOption=1 setting. Combined with the db2.jcc.traceFileSize and
db2.jcc.traceFileCount properties, you dedicate a number of trace files, each of a certain
size. When all the trace files reach the maximum size, the first file is reused and the existing
data is overwritten, which can be useful when you must trace a situation where you are not
sure when the problem will occur. So, you set up circular tracing and activate the trace, and
when the problem occurs, you turn off the trace immediately, which gives you a good chance
to capture the problem in the trace without using large trace files (some of the JCC trace
options are verbose).
TraceLevels
Table 9-1 shows the different trace levels that are available with the IBM Data Server Driver
for JDBC and SQLJ.
Table 9-1 IBM Data Server Driver for JDBC and SQLJ trace levels
TraceLevel Trace value in hex Trace value in decimal
If you want to combine multiple trace levels, you can use OR to combine the values.
The traceLevel should be set to the sum of the integer values of these constants:
1 + 2 + 4 + 16 + 32 + 512 = 567
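For example, a minimal sketch that sets this combination on a data source follows; the constants are the com.ibm.db2.jcc.DB2BaseDataSource trace constants whose integer values appear in the preceding sum, and the data source type and setter are assumptions based on the driver's documented interface:

import com.ibm.db2.jcc.DB2BaseDataSource;
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class CombinedTraceLevel {
    static void configureTrace(DB2SimpleDataSource ds) {
        // OR the individual levels together; the result equals the sum of their integer values
        int traceLevel = DB2BaseDataSource.TRACE_CONNECTION_CALLS      // 1
                       | DB2BaseDataSource.TRACE_STATEMENT_CALLS       // 2
                       | DB2BaseDataSource.TRACE_RESULT_SET_CALLS      // 4
                       | DB2BaseDataSource.TRACE_DRIVER_CONFIGURATION  // 16
                       | DB2BaseDataSource.TRACE_CONNECTS              // 32
                       | DB2BaseDataSource.TRACE_DIAGNOSTICS;          // 512
        ds.setTraceLevel(traceLevel);                                  // 567
    }
}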
traceFileSize specifies the maximum size of each trace file for circular tracing.
traceOption specifies the way in which trace data is collected. The data type of this
property is int. Here are possible values:
– DB2BaseDataSource.NOT_SET (0): Specifies that a single trace file is generated, and that there is no limit to the size of the file. This is the default. If the value of traceOption is NOT_SET, the traceFileSize and traceFileCount properties are ignored.
– DB2BaseDataSource.TRACE_OPTION_CIRCULAR (1): Specifies that the IBM Data Server Driver for JDBC and SQLJ does circular tracing.
Each line is prefixed with a string [jcc]. After the prefix are one or more tokens in [ ]:
[t2] or [t4] when the trace entry is specific for the driver type (N/A here)
Timestamp (in GMT) ([Time:2012-11-16-21:49:08.222]).
Thread name ([Thread:WebSphere WLM Dispatch Thread t=007bd580])
Object name that is associated with the trace entry (Connection, Statement, ResultSet, ...)
([PreparedStatement@41d88590]).
Tracepoint number when applicable (N/A here).
The rest of the line is the method name that is called or returned with arguments or the
return value (executeQuery () called).
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: QRYDSC (ASCII) (EBCDIC)
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: QRYDTA (ASCII) (EBCDIC)
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: ENDQRYRM (ASCII) (EBCDIC)
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: SQLCARD (ASCII) (EBCDIC)
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: MONITORRD (ASCII) (EBCDIC)
[jcc][t4]
In addition, in a number of places, the JCC trace also provides the instance number and the
commit sequence number. They are part of the Logical Unit of Work ID (LUWID) that should
uniquely define a transaction. In versions before DB2 10, you often see multiple transactions
using the same LUWID (when they had not been making changes to the database). However,
starting with DB2 10, the LUWID’s commit sequence number should be incremented
each time.
In our case, the DB2 correlator is CA7B405C24DB.0007. To verify this transaction, go back to the
DB2 accounting data and find the matching accounting record for this transaction.
To find the matching trace data (accounting, performance traces) on the DB2 for z/OS side,
consider the following items:
If your system is using a clock that is taking leap seconds into account, you might see a 25
second discrepancy between the times in the JCC trace and the times in the DB2 trace
records. (At the time of writing, the number of leap seconds in effect is 25.) At the time of
the commit of this transaction, the JCC trace shows the following information:
[jcc][t4] [time:2012-11-16-21:49:08.233]
The time stamp in the DB2 accounting record shows the following information:
ACCT TSTAMP: 11/16/12 21:49:33.23
When you are matching the LUWID, you can use the commit sequence number from the
JCC trace to match with DB2 performance trace records. However, when you look for the
corresponding accounting record, you must use the commit sequence number from the
JCC trace and add one to it, so in our example, 0007 + 1 = 8.
Example 9-12 DB2 accounting record that matches the JCC trace
LOCATION: DB0Z OMEGAMON XE FOR DB2 PERFORMANCE EXPERT (V5R1M1) PAGE: 1-55
GROUP: DB0ZG ACCOUNTING TRACE - LONG REQUESTED FROM: ALL 21:49:00.00
MEMBER: D0Z1 TO: DATES 23:59:59.99
SUBSYSTEM: D0Z1 ACTUAL FROM: 11/16/12 21:49:12.97
DB2 VERSION: V10
TIMES/EVENTS APPL(CL.1) DB2 (CL.2) IFI (CL.5) CLASS 3 SUSPENSIONS ELAPSED TIME EVENTS HIGHLIGHTS
------------ ---------- ---------- ---------- -------------------- ------------ -------- --------------------------
ELAPSED TIME 0.008414 0.005296 N/P LOCK/LATCH(DB2+IRLM) 0.000000 0 THREAD TYPE : DBAT
NONNESTED 0.008414 0.005296 N/A IRLM LOCK+LATCH 0.000000 0 TERM.CONDITION: NORMAL
STORED PROC 0.000000 0.000000 N/A DB2 LATCH 0.000000 0 INVOKE REASON : TYP2 INACT
UDF 0.000000 0.000000 N/A SYNCHRON. I/O 0.000416 1 PARALLELISM : NO
TRIGGER 0.000000 0.000000 N/A DATABASE I/O 0.000000 0 QUANTITY : 0
LOG WRITE I/O 0.000416 1 COMMITS : 1
CP CPU TIME 0.000811 0.000694 N/P OTHER READ I/O 0.000000 0 ROLLBACK : 0
AGENT 0.000811 0.000694 N/A OTHER WRTE I/O 0.000000 0 SVPT REQUESTS : 0
NONNESTED 0.000811 0.000694 N/P SER.TASK SWTCH 0.000000 0 SVPT RELEASE : 0
STORED PRC 0.000000 0.000000 N/A UPDATE COMMIT 0.000000 0 SVPT ROLLBACK : 0
UDF 0.000000 0.000000 N/A OPEN/CLOSE 0.000000 0 INCREM.BINDS : 0
TRIGGER 0.000000 0.000000 N/A SYSLGRNG REC 0.000000 0 UPDATE/COMMIT : 1.00
PAR.TASKS 0.000000 0.000000 N/A EXT/DEL/DEF 0.000000 0 SYNCH I/O AVG.: 0.000416
OTHER SERVICE 0.000000 0 PROGRAMS : 1
SECP CPU 0.000000 N/A N/A ARC.LOG(QUIES) 0.000000 0 MAX CASCADE : 0
LOG READ 0.000000 0
SE CPU TIME 0.000000 0.000000 N/A DRAIN LOCK 0.000000 0
NONNESTED 0.000000 0.000000 N/A CLAIM RELEASE 0.000000 0
STORED PROC 0.000000 0.000000 N/A PAGE LATCH 0.000000 0
UDF 0.000000 0.000000 N/A NOTIFY MSGS 0.000000 0
TRIGGER 0.000000 0.000000 N/A GLOBAL CONTENTION 0.003075 4
COMMIT PH1 WRITE I/O 0.000000 0
PAR.TASKS 0.000000 0.000000 N/A ASYNCH CF REQUESTS 0.001105 2
TCP/IP LOB XML 0.000000 0
SUSPEND TIME 0.000000 0.004596 N/A TOTAL CLASS 3 0.004596 7
AGENT N/A 0.004596 N/A
PAR.TASKS N/A 0.000000 N/A
STORED PROC 0.000000 N/A N/A
UDF 0.000000 N/A N/A
DB2SystemMonitor
To help you isolate performance problems with your Java-DB2 applications, the IBM Data
Server Driver for JDBC and SQLJ provides a proprietary API (DB2SystemMonitor class) to
enable application monitoring.
The driver collects the timing information that is shown in Figure 9-5.
Figure 9-5 Timing information that is collected by DB2SystemMonitor: application time, core driver time, network I/O time, and server time around calls such as prepareStatement() and executeUpdate(), up to monitor.stop()
To collect system monitoring data by using the DB2SystemMonitor interface, complete the
following steps:
1. Run the DB2Connection.getDB2SystemMonitor method to create a
DB2SystemMonitor object.
2. Run the DB2SystemMonitor.enable method to enable the DB2SystemMonitor object for
the connection.
3. Run the DB2SystemMonitor.start method to start system monitoring.
4. When the activity that is to be monitored is complete, run DB2SystemMonitor.stop to stop
system monitoring.
5. Lastly, run the following methods to retrieve the elapsed time data:
– DB2SystemMonitor.getCoreDriverTimeMicros
– DB2SystemMonitor.getNetworkIOTimeMicros
– DB2SystemMonitor.getServerTimeMicros
– DB2SystemMonitor.getApplicationTimeMillis
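The following sketch puts these steps together around a single query; the URL reuses the connection information that is shown earlier in this chapter, and the user ID, password, and SQL statement are placeholders:

import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import com.ibm.db2.jcc.DB2Connection;
import com.ibm.db2.jcc.DB2SystemMonitor;

public class MonitorSample {
    public static void main(String[] args) throws Exception {
        DB2Connection con = (DB2Connection) DriverManager.getConnection(
                "jdbc:db2://d0zg.itso.ibm.com:39000/DB0Z", "user", "password");
        DB2SystemMonitor monitor = con.getDB2SystemMonitor();   // step 1
        monitor.enable(true);                                   // step 2
        monitor.start(DB2SystemMonitor.RESET_TIMES);            // step 3
        try (PreparedStatement pst = con.prepareStatement("SELECT C1 FROM T1");
             ResultSet rs = pst.executeQuery()) {
            while (rs.next()) {
                // process the result set
            }
        }
        monitor.stop();                                         // step 4
        System.out.println("Application time (ms): " + monitor.getApplicationTimeMillis());   // step 5
        System.out.println("Core driver time (us): " + monitor.getCoreDriverTimeMicros());
        System.out.println("Network I/O time (us): " + monitor.getNetworkIOTimeMicros());
        System.out.println("Server time (us)     : " + monitor.getServerTimeMicros());
        con.close();
    }
}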
Note: Starting with Version 3.63 or Version 4.13 of the driver, the server time that is returned by DB2SystemMonitor.getServerTimeMicros includes commit and rollback time. (This was not the case before these driver levels.)
Using the DB2SystemMonitor interface allows you to trace the specific areas of your application that you are interested in, but it also requires that you change your application code to incorporate these calls.
An easier, yet less specific, way is to set the TRACE_SYSTEM_MONITOR trace level at the connection, data source, or JVM level. This method allows you to obtain this information without making any changes to the application; simply starting this trace level is enough.
To show the information that can be obtained this way, we used the following settings in the
jcc.properties file.
db2.jcc.override.traceLevel=131072
db2.jcc.override.traceDirectory=/tmp
db2.jcc.override.traceFile=jcc6
db2.jcc.tracePollingInterval=10
db2.jcc.tracePolling=true
We ran a few simple servlets from the DayTrader workload and captured the trace file
(jcc6_global_9). An (edited) excerpt is shown in Example 9-13.
This entry did not result in a call to the database, because the server time and the network time (this example uses type 4 connectivity) are zero. It indicates which method was called (setAutoCommit(false), in this case).
You can get a good idea about what the application is requesting from the database and how
long it took to perform actions in each of the components.
For the executeQuery() invocation in Example 9-13 on page 473, the request was sent to the
database, and the call spent core: 2.292ms | network: 1.211875ms | server: 0.867ms, or
1.080125 (2.292 - 1.211875) ms in the driver, 0.344875(1.211875-0.867) ms in the network,
and 0.867 ms in the database engine. In this case, these are reasonable numbers. However,
when you run into a problem situation, this is an easy way to find calls that took a long time to
complete and immediately see whether the time was spent in the driver, the network, or the
database engine.
This section introduces the general usage of these commands. For more information, see
DB2 10 for z/OS Command Reference, SC19-2972.
DISPLAY DATABASE command
The -DISPLAY DATABASE command can show information about the status and usage of the
DB2 database objects (table spaces, partitions, indexes, and index partitions) in that
database. You cannot display the status of a particular table, only the table space it is in.
The -DISPLAY DATABASE command has a number of options that allow you to display different
types of information about the database. Here are the keywords that are most often used
when dealing with concurrency issues:
USE: You can use this option to quickly check whether a certain transaction, job, and so on
is accessing (holding locks or waiting for them on) the displayed object. The command
output shows information, such as the connection-IDs, correlation-IDs, authorization IDs,
LUW-ID, and location of any threads accessing the local database.
RESTRICT: This option lists the objects that are in a restricted status, which typically
prevents an application from accessing the object. When the system is not performing well
and is generating many DSNT500I (resource unavailable) messages, make sure that no
objects are in a restricted state that can prevent transactions, batch jobs, or utilities from
accessing the table or index spaces.
CLAIMERS: This option lists the claims on objects whose status is displayed, and
information that allows you to identify who acquired the claim, such as the LUW-ID and
location of any remote threads accessing the local database, and the connection-IDs,
correlation-IDs, and authorization IDs, as well as the agent token number for the claim, if
the claim is local. You can then match the token with the output of the -DIS THREAD
command to obtain more information about the thread.
LOCKS: This option provides you with information about the parent transaction locks
(L-locks) for objects whose statuses are displayed, the drain locks for a resource that is
held by running jobs, and the page set or partition physical locks (P-locks) for a resource.
It also presents thread identification information, so you can match it with the output of the
-DIS THREAD command.
In Example 9-14, we can see both local and distributed threads in the whole data
sharing group.
The command is issued on D0Z2, so those threads are displayed first. (The thread token is
also displayed). Next are the threads on the other members, D0Z1 in this case.
For example, when a thread is accessing an object, you cannot perform an ALTER or DROP
operation against the object. If the SQL accessing the object is a dynamic SQL statement, an
SQLCODE -904 with a reason code 00E70081 is issued by DB2, as shown in Example 9-15.
This is a common issue when an application does not COMMIT in a timely manner.
00E70081 Explanation:
A DROP or ALTER statement was issued but the object cannot be dropped or altered.
The object is referenced by a prepared dynamic SQL statement that is currently
stored in the prepared statement cache and is in use by an application.
You can use the resource name from the message to check who is accessing table
DB2R6.ACT by running -DISPLAY DATABASE(DSN00023) SPACENAM(ACT) USE/CLAIMERS. The
output of the command is shown in Example 9-16.
We can see one thread is accessing the table. Its LUWID is G97B8F7B.FEF9.CA4D6EA21017
and the token is 140295, which can be used to narrow down the scope of the display thread
command, as shown in Example 9-17.
In this case, we must go and talk to user DB2R6 who is using workstation
IBM-M0666QA2AQE to see what the db2jcc_application is doing that results in the resource
unavailable message.
This section describes how to analyze a DB2 deadlock problem. It is also applicable to other
types of concurrency problems, such as timeouts or long suspension times.
When this condition occurs, information is also written to the DB2 MSTR STC's JOBLOG and to the SYSLOG.
Tip: The WebSphere Application Server Servant log uses GMT time, while the DB2 MSTR
uses local time.
DSNT375I and DSNT376I messages contain information about the threads that are involved
in the deadlock or timeout. MEMBER is the name of the DB2 member where the thread is
executing. THREAD-INFO is presented in a colon-delimited list that contains the
following segments.
The primary authorization ID that is associated with the thread.
The name of the user's workstation.
The ID of the user.
The name of the application.
The statement type for the previously run statement: dynamic or static.
The statement identifier for the currently executing statement, if available. The statement
identifier can be used to identify the particular SQL statement.
The name of the role that is associated with the thread.
The correlation token that can be used to correlate work at the remote system with work
that is performed at the DB2 subsystem.
A DSNT501I message indicates the resource name, type, and reason code.
For more information about the message, see DB2 10 for z/OS Messages, GC19-2979.
The deadlock record provides more detailed information than is shown in the DB2 MSTR log
or SYSLOG.
In this particular case, the following transactions and resources are involved.
On member D0Z1, thread A (LUW=G90C0609.C36D.CA4C155D1D64) of application "TraderClientApplicationInformati" holds an S-LOCK on page x'03' of table space DBTR8074.TSACPREJ, and it is waiting for an X-LOCK on page x'03' of table space DBTR8074.TSACCEJB.
On member D0Z2, thread B (LUW=G90C048E.C25C.CA4C1542DE25) from application
"TraderClientApplicationInformati" holds an S-LOCK on page x'03' of table space
DBTR8074.TSACCEJB, and is waiting for an X-LOCK on page x'03' of table space
DBTR8074.TSACPREJ.
Because S-LOCK and X-LOCK are not compatible, these two threads get into a deadlock.
The victim of the deadlock is thread B (LUW=G90C048E.C25C.CA4C1542DE25) on D0Z2.
The survivor of the deadlock, on D0Z1, is thread A (G90C0609.C36D.CA4C155D1D64).
Note: The time stamp of the deadlock record is **:26:58, while the time stamp of the
DSNT375I and SQLCODE -913 is **:26:33, which is a 25 second difference.
This machine is connected to an external timer facility that is configured to use leap
seconds (the delta between UTC (Coordinated Universal Time) and UT1 (mean solar time
- observed Earth rotation). The DB2 trace records use an STCK time stamp, which is not
adjusted for leap seconds. The job log messages use the local time (including the 25 leap
seconds).
Unfortunately, OMPE does not allow you to use a TIMEZONE that includes seconds (only
hours and minutes).
In addition, the STMTID information that was introduced in DB2 10 indicates the SQL
statements that resulted in the deadlock condition.
According to deadlock trace record information from Example 9-20 on page 479, you can
conclude the following information:
On member D0Z1:
– STMTID x’0DC1' (3521) is holding an S-LOCK on page x'03' of table space
DBTR8074.TSACPREJ.
– STMTID x’0DC3' (3523) is waiting for an X-LOCK on page x'03' of table space
DBTR8074.TSACCEJB.
On member D0Z2:
– STMTID x’0247' (583) is holding an S-LOCK on page x'03' of table space
DBTR8074.TSACCEJB.
– This statement is waiting for an X-LOCK on page x'03' of table space
DBTR8074.TSACPREJ.
The SQL statements are dynamic SQL in our case. Because the dynamic statement cache is
enabled, you can use the EXPLAIN STMTCACHE statement on all members of the data sharing
system to extract information from the dynamic statement cache and insert the information
into the DSN_STATEMENT_CACHE_TABLE table.
Example 9-21 shows how to identify the SQL statements that are involved in our deadlock by using STMT_ID and GROUP_MEMBER.
The objects that are involved in the deadlock are using page level locking. With the high
transaction rate and the rather small number of pages that are involved, page level locking is
locking too much data, resulting in this deadlock condition. Changing to row level locking
solves the problem.
In this report, you can find the deadlock record and work your way back to the start of the
SQL statement of each of the transactions and SQL statements that are involved, and then
move forward in the trace again to determine in which sequence the locks were acquired, and
how that led to the locking problem that you are trying to resolve.
For more information about tracing, see DB2 9 for z/OS: Resource Serialization and
Concurrency Control, SG24-4725.
This appendix provides information about the implementation tasks that we performed to put
the initial infrastructure in place and then describes the steps that we took to add batch jobs
and regular autonomic statistics monitoring tasks to the ADMT task list.
This appendix describes the installation and use of the DB2 administrative task scheduler by
detailing these activities in the following sections:
Implementation
Administrative scheduler operation
Using ADMT for DB2STOP, DB2START, and statistics monitoring
Additional information
Figure A-1 Administrative task scheduler architecture: one scheduler per DB2 member (associated with D0Z1MSTR and D0Z2MSTR), an SQL interface through DB2 for z/OS stored procedures and user-defined functions, subsystem parameters and security in each scheduler (DFLTUID = D0ZGADMT), and external task lists (ADMTDD1 = prefix.TASKLIST) that reference a shared VSAM task list
The illustration that is shown in Figure A-2 on page 485 provides an overview of the
administration scheduler installation jobs and outlines the implementation tasks.
Figure A-2 Overview of the administrative scheduler installation jobs (DSNTIJMV, DSNTIJRA, DSNTIJRT, and DSNTIJIN): create the DB2 objects (WLM environment DSNWLM_GENERAL by default), bind packages, grant privileges, and create the VSAM task list (prefix.TASKLIST)
We configured the STC JCL for the D0Z2ADMT started task by using the JCL template that is
shown in Example A-1 on page 485, which has a procedure name of D0Z2ADMT and has the
DB2SSID JCL parameter set to D0Z2.
A.1.3 Installing the DSNTIJRA job
DSNTIJRA performs the following security-related tasks in RACF.
PERMIT IRRPTAUTH.D0Z2ADMT.* CL(PTKTDATA) +
ID(D0Z2ADMT) ACCESS(UPDATE)
PERMIT D0Z1ADMT CL(PTKTDATA) +
ID(D0Z1ADMT) ACCESS(UPDATE)
PERMIT D0Z2ADMT CL(PTKTDATA) +
ID(D0Z2ADMT) ACCESS(UPDATE)
/* refresh RACF changes */
SETROPTS RACLIST (PTKTDATA) REFRESH
SETROPTS RACLIST (FACILITY) REFRESH
SETROPTS REFRESH GENERIC(*) RACLIST(PTKTDATA)
– DSNADM.DSNADMSS
– DSNADM.DSNADMTA
– DSNADM.DSNADMTC
– DSNADM.DSNADMTD
– DSNADM.DSNADMTH
– DSNADM.DSNADMTL
– DSNADM.DSNADMTO
– DSNADM.DSNADMTR
– DSNADM.DSNADMTS
– DSNADM.DSNADMTU
– DSNADM.DSNADMUM
– DSNADM.DSNADMUS
– DSNADMSI.DSNADMSI
In data sharing, ADMT provides one administrative scheduler STC per DB2 member, with
each ADMT instance running in the same LPAR as its corresponding DB2 member. The
ADMT STC names are unique across the data sharing group. In our environment, each
ADMT STC uses its own STC user, which must be different from the user that is specified in
the DFLTUID parameter of the STC JCL. The ADMT STCs share one VSAM tasklist data set
and the ADMT DB2 tables.
When you stop DB2, ADMT loses its connection to DB2 and writes out the message that is
shown in Figure A-4.
DSNA679I DSNA6BUF THE ADMIN SCHEDULER D0Z2ADMT CANNOT ACCESS TASK LIST
SYSIBM.ADMIN_TASK
DB2 CODE X'00F30002' IN IFI IDENTIFY
Figure A-4 ADMT DB2 unavailable message
4. Call the ADMIN_TASK_ADD stored procedure to add DB2START and DB2STOP job
submission tasks for both members of the data sharing group.
5. Call the ADMIN_TASK_ADD stored procedure to add calls to the ADMIN_UTL_MONITOR
stored procedure to monitor and resolve outdated statistics on user objects and on the
DSNDB06.SYSTSKEYS table space.
SELECT
substr(TRIGGER_TASK_NAME,1,8) as TASKNAME
, DB2_SSID
, SUBSTR(JCL_LIBRARY,1,18) AS JCL_LIBRARY
, JCL_MEMBER
, JOB_WAIT
, TASK_NAME
, DESCRIPTION
, CREATOR
, LAST_MODIFIED
FROM table(DSNADM.ADMIN_TASK_LIST()) as tasklist
WHERE TRIGGER_TASK_NAME = 'DB2START';
---------+---------+---------+---------+---------+---------+---------+---
TASKNAME DB2_SSID JCL_LIBRARY JCL_MEMBER JOB_WAIT TASK_NAME
---------+---------+---------+---------+---------+---------+---------+---
DB2START D0Z1 DB0ZM.D0ZGADMT.JCL D0Z1STRT YES D0Z1STRT
DB2START D0Z2 DB0ZM.D0ZGADMT.JCL D0Z2STRT YES D0Z2STRT
DSNE610I NUMBER OF ROWS DISPLAYED IS 2
DSNE616I STATEMENT EXECUTION WAS SUCCESSFUL, SQLCODE IS 100
Figure A-5 Query DB2START events
We then reference these data sets by defining a common data set alias name that uses the
&SYSNAME symbolic variable in the data set alias definition. The alias name is identical to the
JCL library name that we used in the ADMT task definition in Example A-9 on page 493. With
this technique, the alias name references the appropriate system-related JCL data set,
depending on the system (SC63 or SC64) from which the reference is made. The define alias
control statement that we used is shown in Example A-10.
The D0Z1STRT JCL that we created for D0Z1 ADMT DB2START processing is shown in
Example A-11.
1. We coded a JOBPARM JES control statement to define system affinity. The job that is
shown in Example A-11 runs on system SC63.
2. The JCLLIB statement refers to a library that is used to include JCL templates that are
used in DB2 data sharing across administrative scheduler instances for job submission.
3. We set the SSID variable to the name of the DB2 subsystem ID. The variable is then used
for resolving include member names and to pass the DB2 subsystem IDs parameter for
JCL and program parameters processing.
4. The JCL shown in Example A-11 on page 494 uses the SSID variable to include the JCL
template D0Z1STRT from JCLLIB data set DB0ZM.D0ZGADMT.INCLUDE.
1. The SSID variable is passed in by the D0Z1STRT job described in “JCL member
D0Z1STRT” on page 494.
2. This part of the JCL shows the DB2 commands that are required to complete the tasks
that are described in “JCL member D0Z1STRT” on page 494.
SDSF OUTPUT DISPLAY D0Z1ADMT STC24540 DSID 2 LINE 34 COLS 21- 100
COMMAND INPUT ===> SCROLL ===> CSR
$HASP100 D0Z1STRT ON INTRDR FROM STC24540 D0Z1ADMT
IRR010I USERID D0ZGADMT IS ASSIGNED TO THIS JOB.
Figure A-6 Administrative scheduler DB2START messages
SELECT
SUBSTR(TASK_NAME,1,8) AS TASKNAME
, SUBSTR(STATUS,1,10) AS STATE
, NUM_INVOCATIONS AS #INV
, SUBSTR(CHAR(START_TIMESTAMP),1,19) AS BETS
, SUBSTR(CHAR(END_TIMESTAMP),1,19) AS ENTS
, JOB_ID
, DB2_SSID AS SSID
FROM table(DSNADM.ADMIN_TASK_STATUS()) as taskstatus
where task_name = 'D0Z1STRT'
---------+---------+---------+---------+---------+---------+---------+---------
TASKNAME STATE #INV BETS ENTS JOB_ID SSID
---------+---------+---------+---------+---------+---------+---------+---------
D0Z1STRT COMPLETED 6 2012-08-12-14.22.47 2012-08-12-14.22.48 JOB24643 D0Z1
Figure A-8 Query DB2START processing status
The query that is shown in Figure A-8 on page 496 provides information about the most
recent DB2START event run. You can use the UDF to obtain a history of recent runs. You can
limit the number of rows to be returned by passing a numeric input parameter in the UDF
interface. An example of such a query and its processing result is illustrated in Figure A-9.
SELECT
SUBSTR(TASK_NAME,1,8) AS TASKNAME
, SUBSTR(STATUS,1,10) AS STATE
, NUM_INVOCATIONS AS #INV
, SUBSTR(CHAR(START_TIMESTAMP),1,19) AS BETS
, SUBSTR(CHAR(END_TIMESTAMP),1,19) AS ENTS
, JOB_ID
, DB2_SSID AS SSID
FROM table(DSNADM.ADMIN_TASK_STATUS(10)) as taskstatus
where task_name = 'D0Z1STRT'
---------+---------+---------+---------+---------+---------+---------+--------
TASKNAME STATE #INV BETS ENTS JOB_ID SSID
---------+---------+---------+---------+---------+---------+---------+--------
D0Z1STRT COMPLETED 1 2012-08-12-02.03.46 2012-08-12-02.03.48 JOB24557 D0Z1
D0Z1STRT COMPLETED 2 2012-08-12-02.14.27 2012-08-12-02.14.30 JOB24565 D0Z1
D0Z1STRT COMPLETED 3 2012-08-12-03.36.19 2012-08-12-03.36.20 JOB24584 D0Z1
D0Z1STRT COMPLETED 4 2012-08-12-03.41.18 2012-08-12-03.41.19 JOB24594 D0Z1
D0Z1STRT COMPLETED 5 2012-08-12-14.11.31 2012-08-12-14.11.32 JOB24632 D0Z1
D0Z1STRT COMPLETED 6 2012-08-12-14.22.47 2012-08-12-14.22.48 JOB24643 D0Z1
DSNE610I NUMBER OF ROWS DISPLAYED IS 6
Figure A-9 Query DB2START history
SELECT
substr(TRIGGER_TASK_NAME,1,8) as TASKNAME
, DB2_SSID
, SUBSTR(JCL_LIBRARY,1,18) AS JCL_LIBRARY
, JCL_MEMBER
, JOB_WAIT
, TASK_NAME
, DESCRIPTION
, CREATOR
, LAST_MODIFIED
FROM table(DSNADM.ADMIN_TASK_LIST()) as tasklist
WHERE TRIGGER_TASK_NAME = 'DB2STOP';
---------+---------+---------+---------+---------+---------+---------+-
TASKNAME DB2_SSID JCL_LIBRARY JCL_MEMBER JOB_WAIT TASK_NAME
---------+---------+---------+---------+---------+---------+---------+-
DB2STOP D0Z1 DB0ZM.D0ZGADMT.JCL D0Z1STOP YES D0Z1STOP
DB2STOP D0Z2 DB0ZM.D0ZGADMT.JCL D0Z2STOP YES D0Z2STOP
DSNE610I NUMBER OF ROWS DISPLAYED IS 2
Figure A-10 Query DB2STOP events
The D0Z1STOP JCL that we created for D0Z1 ADMT DB2STOP processing is shown in
Example A-14.
The SSID and JOBPARM settings and the JCLLIB statement that are used in Example A-14 are similar to the ones in Example A-11 on page 494; see that example for more information about these settings.
JCL include member D0Z1STOP
Example A-14 on page 498 references include template D0Z1STOP. D0Z1STOP contains a
JCL template that consists of a JCL job step that runs a series of DB2 commands against the
DB2 system that is referred to by the SSID variable. The D0Z1STOP JCL template that we
use is shown in Example A-15.
1. In DB2STOP processing, you cannot use the TSO batch DSN processor to process DB2
commands because DB2 has been stopped and does not allow for any further work to be
submitted through the traditional DB2 interfaces. Thus, we use SDSF REXX to perform
DB2 command processing through an operating system console. The logic of the SDSF
REXX program is illustrated in Example A-17.
2. The @OSCMD REXX program is stored in the PO data set that is referenced by the
SYSEXEC DD statement.
3. JCL DD statement CMDIN refers to the data set that contains the DB2 commands to be
run through SDSF REXX in case of DB2STOP processing.
/* issue commands */
runcmds:
address SDSF "isfexec /"||CMDIN.xi
if RC <> 0 then do
Say "RC" RC "returned from ..."
call DisplayMessages
exit 12
end
return
Verifying the status of DB2STOP processing
You can verify the status of DB2STOP processing by using the ADMIN_TASK_STATUS table
UDF for querying the ADMT status. The result of the query that we ran is provided in
Figure A-13.
SELECT
SUBSTR(TASK_NAME,1,8) AS TASKNAME
,SUBSTR(STATUS,1,10) AS STATE
,NUM_INVOCATIONS AS #INV
,SUBSTR(CHAR(START_TIMESTAMP),1,19) AS BETS
,SUBSTR(CHAR(END_TIMESTAMP),1,19) AS ENTS
,JOB_ID
,DB2_SSID AS SSID
FROM table(DSNADM.ADMIN_TASK_STATUS()) as taskstatus
where task_name = 'D0Z1STOP'
---------+---------+---------+---------+---------+---------+---------+------
TASKNAME STATE #INV BETS ENTS JOB_ID SSID
---------+---------+---------+---------+---------+---------+---------+------
D0Z1STOP COMPLETE 6 2012-08-12-14.20.3 2012-08-12-14.20.4 JOB24638 D0Z1
Figure A-13 Query the DB2STOP processing status
The query that is shown in Figure A-13 provides information about the most recent DB2STOP
event run. You can also use the UDF to obtain a history of recent runs. You can limit the
number of rows that are returned by passing a numeric input parameter in the UDF interface.
An example of such a query and its processing result is illustrated in Figure A-14.
SELECT
SUBSTR(TASK_NAME,1,8) AS TASKNAME
,SUBSTR(STATUS,1,10) AS STATE
,NUM_INVOCATIONS AS #INV
,SUBSTR(CHAR(START_TIMESTAMP),1,19) AS BETS
,SUBSTR(CHAR(END_TIMESTAMP),1,19) AS ENTS
,JOB_ID
,DB2_SSID AS SSID
FROM table(DSNADM.ADMIN_TASK_STATUS(10)) as taskstatus
where task_name = 'D0Z1STOP'
---------+---------+---------+---------+---------+---------+---------+--------
TASKNAME STATE #INV BETS ENTS JOB_ID SSID
---------+---------+---------+---------+---------+---------+---------+--------
D0Z1STOP COMPLETED 2 2012-08-12-02.13.25 2012-08-12-02.13.2 JOB24560 D0Z1
D0Z1STOP COMPLETED 3 2012-08-12-03.34.05 2012-08-12-03.34.0 JOB24579 D0Z1
D0Z1STOP COMPLETED 4 2012-08-12-03.38.50 2012-08-12-03.38.5 JOB24588 D0Z1
D0Z1STOP COMPLETED 5 2012-08-12-03.42.36 2012-08-12-03.42.3 JOB24596 D0Z1
D0Z1STOP COMPLETED 6 2012-08-12-14.20.34 2012-08-12-14.20.4 JOB24638 D0Z1
DSNE610I NUMBER OF ROWS DISPLAYED IS 5
Figure A-14 Query the DB2STOP history
Figure A-15 illustrates the relationships between the various objects that DB2 uses for
autonomic statistics maintenance.
Figure A-15 Relationships between the objects that DB2 uses for autonomic statistics maintenance
(administrative scheduler, ADMIN_UTL_MONITOR, ADMIN_UTL_EXECUTE, RUNSTATS profiles, alerts,
time windows, statistics history, catalog statistics, and table spaces)
Scheduling autonomic statistics monitoring
We scheduled autonomic statistics monitoring for the following groups of DB2 objects:
User table and index spaces
DB2 catalog table space DSNDB06.SYSTSKEYS on the first day of each month at
1:00 a.m.
SELECT
substr(TASK_NAME,1,10) as TASKNAME
, substr(POINT_IN_TIME,1,10 ) as PIT
, substr(PROCEDURE_SCHEMA,1,10) as STPSCHEMA
, substr(PROCEDURE_NAME ,1,20) as STPNAME
, substr(DESCRIPTION ,1,40) as DESCRIPTION
, CREATOR
FROM table(DSNADM.ADMIN_TASK_LIST()) as tasklist
WHERE PROCEDURE_NAME = 'ADMIN_UTL_MONITOR';
---------+---------+---------+---------+---------+---------+---------+---------
TASKNAME PIT STPSCHEMA STPNAME DESCRIPTION
---------+---------+---------+---------+---------+---------+---------+---------
STATSMON1 0 1 * * * SYSPROC ADMIN_UTL_MONITOR statistics monitoring
STATSMON2 0 1 1 * * SYSPROC ADMIN_UTL_MONITOR statistics monitoring
Figure A-16 Query statistics monitoring tasks
(TTHD000) Signal received with command = 1
(TTHD000) Execution begins for task = 6
(TTHD000) Execution begins at time 2012-09-21-09.38.00.000000
(TTHD000) PassTicket generated for user = "D0ZGADMT"
(TTHD000[P]) starting
(TTHD000[P]) connected]
(TTHD000[P]) stored procedure schema = "SYSPROC"
(TTHD000[P]) stored procedure name = "ADMIN_UTL_MONITOR"
(TTHD000[P]) P parm[0] type=449-0 length=30000 name = "MONITOR_OPTIONS"
(TTHD000[P]) O parm[1] type=493-0 length=8 name = "HISTORY_ENTRY_ID"
(TTHD000[P]) O parm[2] type=497-0 length=4 name = "RETURN_CODE"
(TTHD000[P]) O parm[3] type=449-0 length=1331 name = "MESSAGE"
(TTHD000[P]) num(columns) = 4
(TTHD000[P]) num(variables) = 128
(TTHD000[P]) column[0] type=448 length=79 addr = x24E2263E
(TTHD000[P]) column[1] type=497 length=8 addr = x24E22693
(TTHD000[P]) column[2] type=497 length=4 addr = x24E226A1
(TTHD000[P]) column[3] type=449 length=1331 addr = x24E226AB
(TTHD000[P]) "SYSPROC"."ADMIN_UTL_MONITOR"
(TTHD000[P]) call stored procedure, SQLCODE = 0
(setSQLStatus) DSNT400I SQLCODE = 000, SUCCESSFUL EXECUTION
(TTHD000[P]) out parm[1] = "0x0000011D"
(TTHD000[P]) out parm[2] = "0x00000000"
(TTHD000[P]) out parm[3] = ""
(TTHD000[P]) disconnected]
(TTHD000) logged out
(TTHD000) Execution status COMPLETED
(TTHD000) Execution ends at time 2012-09-21-09.38.00.000000
Figure A-17 ADMIN_UTL_MONITOR ADMT trace information
The query output that is shown in Example A-20 on page 507 confirms a status of COMPLETED
for both of our statistics monitoring tasks.
We created the SQL table UDF shown in Example A-21 to retrieve the RUNSTATS utility
output of a table space. We provide the table space name and qualifier as input parameters in
the SQL table UDF interface.
In Example A-22, we use the SQL table UDF from Example A-21 to obtain the RUNSTATS
utility output of the most recent utility run for table space DSNADMDB.DSNADMTS.
Example A-22 Query for recent RUNSTATS for table space DSNADMDB.DSNADMTS
SELECT output
FROM TABLE(UTILOUTPUT('DSNADMDB','DSNADMTS')) AS A
OUTPUT
2012-09-21 09:38:02.487888> 1DSNU000I 265 09:38:01.88 DSNUGUTC - OUTPUT START
2012-09-21 09:38:02.487899> DSNU1045I 265 09:38:01.96 DSNUGTIS - PROCESSING S
2012-09-21 09:38:02.487910> 0DSNU050I 265 09:38:02.11 DSNUGUTC - RUNSTATS TA
2012-09-21 09:38:02.487920> PROFILE
2012-09-21 09:38:02.487930> DSNU1361I -D0Z2 265 09:38:02.11 DSNUGPRF - THE STAT
2012-09-21 09:38:02.487940> ADMIN_TASKS HAS BEEN USED
2012-09-21 09:38:02.487950> DSNU1368I 265 09:38:02.11 DSNUGPRB - PARSING STAT
2012-09-21 09:38:02.487961> DSNU1369I 265 09:38:02.11 DSNUGPRB - PARSING STAT
Figure B-2 provides an overview of the DayTrader application workload.
Our environment
The environment consists of the following elements:
z/OS R13
WebSphere Application Server V8.5
IBM Data Server Driver for JDBC and SQLJ Fix Pack 6
Note: The steps show the installation of the DayTrader application and assume that the
software that is listed above is already installed.
DayTrader installation
Download your copy of the DayTrader application from the following web page:
https://cwiki.apache.org/GMOxDOC20/daytrader.html
Tip: The installation script does not offer an option to use a type 4 driver for DB2 for z/OS,
but you must still choose DB2 for z/OS or your application will not run. After you run the
installation script, you can modify your data source settings from the WebSphere Application
Server administrative console. We show how to change them later.
Username
This is the user ID that is used to install the DayTrader application; in our example, we
used “mzadmin”.
Password
This is the password for the user name.
WebSphere Application Server installation
If you are using global security or a cluster installation, you are prompted for this option.
WebSphere Application Server node
DB name
The location name of DB2 for z/OS. In our installation, “D0ZG” is the location name for our
DB2 data sharing group.
DB username
The user ID that is used to connect to DB2 for z/OS. In our installation, we used “Rajesh”.
DB password
The password that is associated with the DB username.
Tip: If your scripts do not complete, check your WebSphere Application Server
administrative console to see whether any of the configurations or installations completed. If
you rerun your script, you must manually delete whatever was configured or installed by
your previous run.
After you finish running the scripts, go to your WebSphere Application Server
administrative console. The DayTrader application should be installed, but it might not be
started after the installation, as shown in Figure B-3. You do not need to start the DayTrader
application now.
Customize your data source settings so that you can connect to DB2 for z/OS through the
network, because the default installation for “db2zos” uses the type 2 driver. In our example,
we also performed the setup for sysplex workload balancing and a type 4 connection.
Click TradeDataSource to access the settings for your DayTrader application data source. At
the bottom, you find the Driver type, Server name, and Port number. Change the Driver type
to “4”, the Server name to an IP address or the domain name for your DB2 for z/OS, and the
Port number to the DRDA service port. In our example, the settings are the ones that are
shown in Figure B-6. In addition, we added two properties that are related to sysplex
workload balancing in the data source custom properties.
Figure B-6 Modify the data source for the type 4 connection
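For reference, the following minimal sketch shows the equivalent type 4 and sysplex workload
balancing settings defined programmatically on a JCC data source. The sketch is illustrative only
and is not part of the DayTrader installation steps; the host name and the maxTransportObjects
value are placeholder assumptions, and the setters correspond to the documented JCC data
source properties (enableSysplexWLB and maxTransportObjects). In our environment, these
settings are made on the WebSphere Application Server data source instead.

// Illustrative sketch only: type 4 connectivity and sysplex workload balancing
// properties set programmatically on a JCC data source. Host name and the
// maxTransportObjects value are placeholders.
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class TradeDataSourceSketch {
    public static DB2SimpleDataSource createDataSource() {
        DB2SimpleDataSource ds = new DB2SimpleDataSource();
        ds.setDriverType(4);                   // JDBC type 4 (DRDA over TCP/IP)
        ds.setServerName("d0zg.example.com");  // IP address or domain name of DB2 for z/OS (placeholder)
        ds.setPortNumber(39000);               // DRDA service port
        ds.setDatabaseName("D0ZG");            // DB2 location name
        ds.setEnableSysplexWLB(true);          // enable sysplex workload balancing
        ds.setMaxTransportObjects(80);         // example value for the transport pool
        return ds;
    }
}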
Go to Servers → Application servers, and restart the application server in which the
DayTrader application is installed. After restarting, navigate to Applications → Enterprise
Application to verify that the DayTrader application started.
Your DayTrader application should now be accessible from your browser. You must access it
to finish the installation.
Tip: If your application does not work, you might need to ask your WebSphere Application
Server administrator for help.
Click Go Trade! from the left menu pane of the window shown in Figure B-8.
The window that is shown in Figure B-9 opens. Click Log in (the Username and Password
are already entered) to get started, or create an account by clicking Register With Trade.
After you verify your installation (by clicking all the menus), you can start your workload by
using the Test Trade Scenario.
Click Configuration in the left menu pane (shown in Figure B-8 on page 519) and then click
Test Trade Scenario, as shown in Figure B-11. A new window opens. In that window,
click Reload.
You can use any load testing tool to create a workload to run by clicking Test Trade Scenario.
For our test, we used Apache JMeter. This is a Java-based application that tests functional
behavior and measures performance. It is available from the following web page:
http://jakarta.apache.org/
Apache JMeter can load test and performance test various server types, but because the
DayTrader application provides the Test Trade Scenario, you can use it to test a scenario by
using an HTTP request.
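To illustrate the idea, the following minimal sketch drives the Test Trade Scenario with repeated
plain HTTP requests, which is essentially what a load testing tool such as JMeter automates. The
URL (host, port, and context root) and the request count are placeholder assumptions; adjust
them for your installation.

// Minimal sketch: drive the DayTrader Test Trade Scenario with repeated HTTP
// requests. The URL and the number of requests are placeholder assumptions.
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ScenarioDriver {
    public static void main(String[] args) throws IOException {
        // Placeholder URL; adjust host, port, and context root for your installation
        URL scenario = new URL("http://your.appserver.example.com:9080/daytrader/scenario");
        for (int i = 1; i <= 100; i++) {
            HttpURLConnection conn = (HttpURLConnection) scenario.openConnection();
            int rc = conn.getResponseCode();          // run one scenario step
            try (InputStream in = conn.getInputStream()) {
                while (in.read() != -1) {
                    // drain and discard the response body
                }
            }
            conn.disconnect();
            System.out.println("Request " + i + " returned HTTP " + rc);
        }
    }
}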
After you download the compressed file that contains the server tools repository
(wdt-update-site_8.5.0.WDT85-I20120530_0920.zip, in our case), you can install it into
IBM Data Studio by completing the following steps:
1. Click Help → Install new software.
2. In the Add Repository window, click Archive.
3. Browse to the location of the compressed file of IBM WebSphere Application Server
Developer Tools for Eclipse. Select the file and then click Open.
4. You should see a selection menu, as shown in Figure C-1, where you can select the
versions of the server adapters that you need. No server installation is done now; you get
only the adapter software, which lets you connect to a separate server installation. Thus,
you must use an existing application server installation or install either a full version of
WebSphere Application Server or the new Liberty profile, which is optimized for developer
productivity and web and mobile application deployment.
The installation of the application server is well documented and does not need to be
repeated here. For more information, go to the following website:
http://www.ibm.com/software/webservers/appserv/developer/index.html
C.1.2 WebSphere Application Server Liberty Profile
WebSphere Application Server V8.5 includes the Liberty profile, which is a highly composable
and dynamic application server profile. It is a stand-alone package that is installed
independently of the full WebSphere Application Server product. The full product also has
the concept of profiles, but the Liberty profile is different.
In the directory that you chose to download the file into, run java -jar
wlp-developers-8.5.0.0.jar and follow the installation instructions, which
are straightforward.
Appendix D
In this appendix, we introduce the OMEGAMON PDB and outline how to create the PDB
database and how to extract, transform, and load (ETL) DB2 trace information into the PDB
tables. We used this functionality to implement the activity that is described in 4.4, “Tivoli
OMEGAMON XE for DB2 Performance Expert for z/OS” on page 201.
Figure D-1 DB2 trace data categories that are processed into the performance database (accounting,
audit, locking, record trace, statistics, exception, and system parameters); Save-file data sets exist for
accounting and statistics only
As indicated in Figure D-1, ETL processes non-aggregated (FILE format) and aggregated
(SAVE format) information.
Aggregated information
Several records are summarized by specific identifiers. In a report, each entry represents
aggregated data. You run the SAVE subcommand to generate a VSAM data set that
contains the aggregated data. When the data is saved, you use the Save-File utility to
generate a DB2-loadable data set. As you might have noticed in Figure D-1, this format is
supported only for statistics and accounting trace information. This option is useful if you
must process huge volumes of accounting information.
Non-aggregated information
For non-aggregated data, each record is listed in the order of occurrence. In a trace, each
entry represents non-aggregated data. You run the FILE subcommand to generate a data
set that contains non-aggregated data. This format is supported for all DB2 trace
information. Analyzing non-aggregated accounting information can be useful if you want to
use the report capabilities of SQL to drill down on thread level accounting information. In
our scenario, the volume of DB2 trace information is not expected to be large. We
therefore decided to load the PDB tables with non-aggregated information.
With PDB ETL, you can process DB2 trace data of the following input formats:
System Measurement Facility (SMF) record types 100 (statistics), 101 (accounting), and
102 (performance and audit).
Generalized Trace Facility (GTF).
OMPE ISPF interface (collect report data).
Batch program FPEZCRD. For an example of how to run program FPEZCRD in batch,
refer to the JCL sample that is provided in the RKO2SAMP library, member FPEZCRDJ.
Near term history sequential data sets.
In our DB2 environment, we processed DB2 traces that we collected through SMF and GTF.
For this book, we focused on using non-aggregated accounting and statistics information. If
you need details about using the PDB for the other information categories, see Chapter 5,
“Advanced reporting concepts. The Performance Database and the Performance
Warehouse”, in IBM Tivoli OMEGAMON XE for DB2 Performance Expert on z/OS Reporting
User's Guide, SH12-6927.
Accounting tables
Figure D-2 shows the accounting table categories that are provided by the performance
database. PDB stores each data type in its own DB2 table.
Figure D-2 Accounting table categories: general data (one record per thread) related 1:M to package
data, DDF data, Resource Limit Facility data (Save-File only), group buffer pool data, and buffer pool
data
Statistics tables DDL and load statements
Figure D-3 shows the structure of each of the statistics tables in the performance database.
PDB stores each data type in its own DB2 table.
Figure D-3 Statistics table categories: general data (one delta record per statistics interval) related 1:M
to Distributed Data Facility (DDF) data, group buffer pool data, buffer pool data, and buffer pool data set
data
In our environment, we generate loadable input records in the FILE data format. In that
format, each table type that is shown in Figure D-3 stores the following information:
General data: One row for each Statistics delta record, containing data from IFCID 1 and
2. A delta record describes the activity between two consecutive statistics record pairs.
Group buffer pool data: One row per group buffer pool that is active at the start of the
corresponding delta record.
DDF data: One row per remote location that is participating in distributed activity by using
the DB2 private protocol and one row for all remote locations that used DRDA.
Buffer pool data: One row per buffer pool that is active at the start of the corresponding
delta record.
Buffer pool data set data: One row for each open data set that has an I/O event rate of at
least one event per second during the reporting interval. To obtain that statistics trace
information, you must activate statistics trace class 9.
OMEGAMON provides sample create table DDL, load utility control statement templates, and
table metadata descriptions in the RKO2SAMP library members that are shown in Table D-3.
We used these templates to create and load these statistics tables.
Example D-2 PDB generate create table DDL data set
//S1GEN EXEC PGM=IEBGENER
//SYSUT1 DD *
SET CURRENT SCHEMA = 'PDB';
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFBU)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFDF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFGE)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFGP)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACFPK)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSBU)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSDF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSGE)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSGP)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSPK)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOACSRF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCBUF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCDDF)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCGBP)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCGEN)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOSCSET)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWCSFP)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC106)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC201)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC202)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC230)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOWC256)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCBND)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCBRD)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCCHG)
// DD DISP=SHR,DSN=DB2R3.SG.PM.DDL(SEMIKOLO)
// DD DISP=SHR,DSN=<omhlq>.RKO2SAMP(DGOXCCNT)
PMPDB.TSPSGEN
PMPDB.TSPSSET
As these table spaces do not yet exist, we used the create table space DDL template that is
shown in Example D-3 to create these table spaces. The template supports table space
compression and uses the primary and secondary space quantity sliding scale feature to take
advantage of autonomic space management.
Example D-4 Batch JCL PDB accounting and statistics table creation
//S10TEP2 EXEC PGM=IKJEFT1B,DYNAMNBR=20,TIME=1440
//STEPLIB DD DISP=SHR,DSN=DB0ZT.SDSNEXIT
// DD DISP=SHR,DSN=DB0ZT.SDSNLOAD
// DD DISP=SHR,DSN=DB0ZT.RUNLIB.LOAD
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
DSN SYSTEM(D0ZG)
RUN PROGRAM(DSNTEP2) PLAN(DSNTEP10)
//SYSPRINT DD SYSOUT=*
//SYSUDUMP DD SYSOUT=*
//SYSIN DD DISP=SHR,DSN=DB2R3.PM.CNTL($04DDLTB)
D.3.1 Extracting and transforming DB2 trace data into the FILE format
We ran the batch JCL that is shown in Example D-5 to extract and to transform SMF DB2
accounting and statistics data into the OMEGAMON XE for DB2 PE FILE format. We
obtained the accounting and statistics data in a sequential data set that we later use as input
for the DB2 LOAD utility to load the data into the DB2 Performance Database accounting and
statistics tables.
Example D-5 OMPE extract and transform DB2 trace data into FILE format
//* --------------------------------------------------------------
//*DOC Extract and transform accounting and statistics trace data
//*DOC into Omegamon PE FILE format
//* --------------------------------------------------------------
//DB2PM1 EXEC PGM=DB2PM,REGION=0M
//STEPLIB DD DISP=SHR,DSN=<omhlq>.RKANMOD
//INPUTDD DD DISP=SHR,DSN=SMF.DUMP.G0033V00
//STFILDD1 DD DISP=(NEW,CATLG,DELETE),DSN=DB2R3.PM.STAT.FILE,
// SPACE=(CYL,(050,100),RLSE),
// UNIT=SYSDA,
// DATACLAS=COMP /*trigger DFSMS compression */
//ACFILDD1 DD DISP=(NEW,CATLG,DELETE),DSN=DB2R3.PM.ACCT.FILE,
// SPACE=(CYL,(050,100),RLSE),
// UNIT=SYSDA,
// DATACLAS=COMP /*trigger DFSMS compression */
//JOBSUMDD DD SYSOUT=A
//DPMLOG DD SYSOUT=A
//SYSOUT DD SYSOUT=A
//SYSIN DD *
GLOBAL
INCLUDE(SSID(DB1S)) TIMEZONE(-1)
STATISTICS
FILE DDNAME(STFILDD1)
ACCOUNTING
FILE DDNAME(ACFILDD1)
EXEC
D.3.2 Extracting and transforming DB2 trace data into the SAVE format
We ran the batch JCL that is shown in Example D-6 to extract and to transform SMF DB2
accounting data into OMEGAMON XE for DB2 PE accounting SAVE format. We obtained the
accounting data in a sequential data set, which we later use as input for the DB2 LOAD utility
to load the data into the DB2 Performance Database SAVE accounting tables.
Example D-7 Merge statistics and accounting file load utility control statements
//S1GEN EXEC PGM=IEBGENER
//SYSUT1 DD *
--OPTIONS PREVIEW
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLBUF)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLDDF)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLGBP)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLGEN)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOSLSET)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFBU)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFDF)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFGE)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFGP)
// DD DISP=SHR,DSN=<omhlq>.TKO2SAMP(DGOALFPK)
//SYSUT2 DD DISP=SHR,DSN=DB2R3.PM.CNTL($08LOATB)
//SYSPRINT DD SYSOUT=*
//SYSIN DD DUMMY
We then modified the generated data set to reflect the table qualifier and the appropriate input
DD statement, and implemented the load utility options that we needed. We used the
following load options:
RESUME YES
LOG NO
KEEPDICTIONARY
NOCOPYPEND
Image copy
We ran the batch JCL that is shown in Example D-9 to perform image copy on PDB
accounting and statistics tables.
Runstats
Because we configured the administrative scheduler to perform autonomic statistics
maintenance on non-catalog table spaces, there was no need to plan any further
Runstats activity.
Reorg
We ran the batch JCL that is shown in Example D-10 to perform Reorg on PDB accounting
and statistics tables.
, "RowsUpdated" INTEGER
, "RowsDeleted" INTEGER
, "GetPage" INTEGER
,"AVG-Time" DECIMAL(15, 6)
,"AVG-CPU" DECIMAL(15, 6)
,"Time/SQL" DECIMAL(15, 6)
,"CPU/SQL" DECIMAL(15, 6)
,"AVG-SQL" DECIMAL(15, 6)
,"LOCK/Tran" DECIMAL(15, 6)
,"LOCK/SQL" DECIMAL(15, 6)
,"GETP/Tran" DECIMAL(15, 6)
,"GETP/SQL" DECIMAL(15, 6)
)
LANGUAGE SQL READS SQL DATA NO EXTERNAL ACTION
DETERMINISTIC
RETURN
WITH
Q1 AS
(SELECT
substr(char(INTERVAL_TIME),1,16 ) AS DATETIME
, CLIENT_TRANSACTION
, DECIMAL(CLASS1_ELAPSED,9,2 ) AS ELAPSED
, DECIMAL(CLASS1_CPU_NNESTED+CLASS1_CPU_STPROC+CLASS1_CPU_UDF
+CLASS1_IIP_CPU,9,2 ) AS CPU
, DECIMAL(CLASS1_IIP_CPU,9,2 ) AS ZIIP
, DECIMAL(CLASS2_CPU_NNESTED+CLASS2_CPU_STPROC+CLASS2_CPU_UDF
+CLASS2_IIP_CPU,9,2 ) AS DB2CPU
, DECIMAL(CLASS2_IIP_CPU,9,2 ) AS DB2ZIIP
, DECIMAL(COMMIT,9,2 ) AS COMMIT
, DECIMAL(SELECT+INSERT+UPDATE+DELETE+FETCH+MERGE,9,2) AS SQL
, DECIMAL(LOCK_REQ,9,2 ) AS LOCKS
, INTEGER(ROWS_FETCHED ) AS ROWS_FETCHED
, INTEGER(ROWS_INSERTED ) AS ROWS_INSERTED
, INTEGER(ROWS_UPDATED ) AS ROWS_UPDATED
, INTEGER(ROWS_DELETED ) AS ROWS_DELETED
FROM DB2PMSACCT_GENERAL
WHERE CONNECT_TYPE = ACCOUNTING.CONNTYPE
AND CLIENT_TRANSACTION = ACCOUNTING.CLIENTAPPLICATION
AND COMMIT > 0 ),
Q2 AS
(SELECT
substr(char(INTERVAL_TIME),1,16 ) AS DATETIME
, CLIENT_TRANSACTION
, decimal(SUM(BP_GETPAGES),9,2 ) AS GETPAGE
FROM DB2PMSACCT_BUFFER
WHERE CONNECT_TYPE = ACCOUNTING.CONNTYPE
AND CLIENT_TRANSACTION = CLIENTAPPLICATION
GROUP BY substr(char(INTERVAL_TIME),1,16), CLIENT_TRANSACTION ),
Q3 AS
(SELECT Q1.*, Q2.GETPAGE FROM Q1, Q2 WHERE
(Q1.DATETIME,Q1.CLIENT_TRANSACTION) =
(Q2.DATETIME,Q2.CLIENT_TRANSACTION) AND Q1.SQL > 0),
Q4 AS
(SELECT Q3.*,
ELAPSED/COMMIT as "AVG-Time",
For more information about how we used the UDF in our application scenario, see Chapter 8,
“Monitoring WebSphere Application Server applications” on page 361.
The second parameter of the PLUGIN option indicates the file to which the output is directed.
We provide this information to give you a better feel for the different types of information that
are available through the different SMF 120 subtype records. We provide both the summary
and the detailed output for each of the subtype records that are mentioned previously.
Example E-1 shows the summary output of an SMF 120 subtype 1 (120.1) server
activity record.
SMF -Record Time Server Bean/WebAppName Bytes Bytes # of El.Time CPU_Time(uSec) Other SMF 120.9
Numbr -Type hh:mm:ss Instance Method/Servlet toSvr frSvr Calls (msec) Tot-CPU zAAP Sections Present
1---+----1----+----2----+----3----+----4----+----5----+----6----+----7----+----8----+----9----+ ----------------
679 120.1 19:58:06 MZSR014 882
942 8038
Example E-2 shows the detailed output of an SMF 120 subtype 1 (120.1) server activity
record. It is the detailed output of the same record that is shown in Example E-1.
#Triplets: 4;
Triplet #: 1; offsetDec: 76; offsetHex: 4c; lengthDec: 32; lengthHex: 20; count: 1;
Triplet #: 2; offsetDec: 108; offsetHex: 6c; lengthDec: 216; lengthHex: d8; count: 1;
Triplet #: 3; offsetDec: 324; offsetHex: 144; lengthDec: 100; lengthHex: 64; count: 1;
Triplet #: 4; offsetDec: 424; offsetHex: 1a8; lengthDec: 28; lengthHex: 1c; count: 2;
#ServerRegions: 1;
ASID1: 252; ASID2: 0; ASID3: 0; ASID4: 0; ASID5: 0;
UserCredentials: ;
ActivityType: 1 (method request);
ActivityID * ca00265f 3cf7d4eb 000002a4 00000062 *
* 090c0609 -------- -------- -------- *
WlmEnclaveToken * 0000011c 012075bf -------- -------- *
ActivityStartTime * ca00265f 3cf7d4eb 00000000 00000000 *
ActivityStopTime * ca00265f 418d6820 00000000 00000000 *
#InputMethods : 1;
#GlobalTransactions: 0;
#LocalTransactions : 2;
WLM enclave CPU time: 882;
--------------------------------------------------------------------------------
#Triplets: 3;
Triplet #: 1; offsetDec: 64; offsetHex: 40; lengthDec: 32; lengthHex: 20; count: 1;
Triplet #: 2; offsetDec: 96; offsetHex: 60; lengthDec: 308; lengthHex: 134; count: 1;
Triplet #: 3; offsetDec: 404; offsetHex: 194; lengthDec: 144; lengthHex: 90; count: 1;
#HeapIdSections: 2;
Triplet #: 3.1; offsetDec: 32; offsetHex: 20; length: 56; count: 1;
Triplet #: 3.2; offsetDec: 88; offsetHex: 58; length: 56; count: 1;
--------------------------------------------------------------------------------
Example E-6 shows the detailed output of an SMF 120 subtype 7 (120.7) WebContainer
activity record. It is the detailed output of the same record that is shown in Example E-5.
# Triplets: 4;
Triplet #: 1; offsetDec: 76; offsetHex: 4c; lengthDec: 32; lengthHex: 20; count: 1;
Triplet #: 2; offsetDec: 108; offsetHex: 6c; lengthDec: 156; lengthHex: 9c; count: 1;
Triplet #: 3; offsetDec: 264; offsetHex: 108; lengthDec: 16; lengthHex: 10; count: 1;
Triplet #: 4; offsetDec: 280; offsetHex: 118; lengthDec: 868; lengthHex: 364; count: 1;
# Servlets: 2;
Triplet #: 4.1; offsetDec: 284; offsetHex: 11c; length: 292; count: 1;
Triplet #: 4.2; offsetDec: 576; offsetHex: 240; length: 292; count: 1;
--------------------------------------------------------------------------------
/welcome.jsp 2 1 9 -Av
/portfolio.jsp 38 1 70 -Av
/quote.jsp 1155 3 262 -Av
DayTrader-EE6#web.war
Example E-8 shows the detailed output of an SMF 120 subtype 8 (120.8) WebContainer
interval record. It is the detailed output of the same record that is shown in Example E-7 on
page 550.
# Triplets: 4;
Triplet #: 1; offsetDec: 76; offsetHex: 4c; lengthDec: 32; lengthHex: 20; count: 1;
Triplet #: 2; offsetDec: 108; offsetHex: 6c; lengthDec: 128; lengthHex: 80; count: 1;
Triplet #: 3; offsetDec: 236; offsetHex: ec; lengthDec: 44; lengthHex: 2c; count: 1;
Triplet #: 4; offsetDec: 280; offsetHex: 118; lengthDec: 3544; lengthHex: dd8; count: 1;
# Servlets loaded: 0;
# Servlets: 10;
Triplet #: 4.1; offsetDec: 384; offsetHex: 180; length: 316; count: 1;
Triplet #: 4.2; offsetDec: 700; offsetHex: 2bc; length: 316; count: 1;
Triplet #: 4.3; offsetDec: 1016; offsetHex: 3f8; length: 316; count: 1;
Triplet #: 4.4; offsetDec: 1332; offsetHex: 534; length: 316; count: 1;
Triplet #: 4.5; offsetDec: 1648; offsetHex: 670; length: 316; count: 1;
Triplet #: 4.6; offsetDec: 1964; offsetHex: 7ac; length: 316; count: 1;
Triplet #: 4.7; offsetDec: 2280; offsetHex: 8e8; length: 316; count: 1;
Triplet #: 4.8; offsetDec: 2596; offsetHex: a24; length: 316; count: 1;
Triplet #: 4.9; offsetDec: 2912; offsetHex: b60; length: 316; count: 1;
Triplet #: 4.10; offsetDec: 3228; offsetHex: c9c; length: 316; count: 1;
Loaded since: Fri Aug 10 19:56:21 EDT 2012;
Average CPU Time: 20;
Minimum CPU Time: 13;
Maximum CPU Time: 38;
--------------------------------------------------------------------------------
Appendix F. Sample IBM Data Server Driver for JDBC and SQLJ trace
Example F-1 contains this sample trace output. It was captured with the TRACE_ALL setting,
so it contains the most detailed information.
There is much information in a JCC trace, but the trace data sets can grow quickly.
Therefore, complete the following tasks (a minimal example of enabling the trace follows the
list):
Activate the traces for the shortest possible time.
Try to make the trace as selective as possible, both with regard to which applications are
traced and the level of detail that is specified for the trace.
Set up circular tracing, as described in “Specifying the JCC trace at the driver
configuration properties level” on page 463.
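As a point of reference, the following minimal sketch shows one way to enable the driver trace
programmatically on a JCC data source with the TRACE_ALL level that produced the output in
this appendix. This is an assumption-based illustration; the trace file path is a placeholder, and in
our environment the trace was specified through the driver configuration properties instead, as
described on page 463.

// Assumption-based sketch: enable the JCC trace with TRACE_ALL on a data source.
// The trace file path is a placeholder.
import com.ibm.db2.jcc.DB2BaseDataSource;
import com.ibm.db2.jcc.DB2SimpleDataSource;

public class JccTraceSketch {
    public static void enableFullTrace(DB2SimpleDataSource ds) {
        ds.setTraceFile("/tmp/jcc/daytrader.trc");      // placeholder output file
        ds.setTraceFileAppend(false);                   // start a new trace file
        ds.setTraceLevel(DB2BaseDataSource.TRACE_ALL);  // most detailed trace level
    }
}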
[jcc][Time:2012-11-16-21:49:08.224][Thread:WebSphere WLM Dispatch Thread t=007bd580][Connection@13361385]setDB2ClientAccountingInformation
(TraderClientAccountingInformation) called
[jcc][t4] [time:2012-11-16-21:49:08.224][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:208]T4XAResource setting
XACallInfo[0] Xid: {DB2Xid: formatID(-1), gtrid_length(0), bqual_length(0), data()} freeEntry: false
[jcc][t4][time:2012-11-16-21:49:08.224][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:1][Request.flush]
[jcc][t4] SEND BUFFER: PRPSQLSTT (ASCII) (EBCDIC)
[jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF 0123456789ABCDEF
[jcc][t4] 0000 0056D05100010050 200D004421134442 .V.Q...P ..D!.DB ..}....&........
[jcc][t4] 0010 305A202020202020 2020202020202020 0Z .!..............
[jcc][t4] 0020 4E554C4C49442020 2020202020202020 NULLID +.<<............
[jcc][t4] 0030 20205359534C4E33 3030202020202020 SYSLN300 .....<+.........
[jcc][t4] 0040 202020205359534C 564C303100060008 SYSLVL01.... .......<.<......
[jcc][t4] 0050 1900A0000000 ...... ......
[jcc][t4]
[jcc][t4] SEND BUFFER: SQLATTR (ASCII) (EBCDIC)
[jcc][t4] 0000 0037D05300010031 2450000000002746 .7.S...1$P....'F ..}......&......
[jcc][t4] 0010 4F52205245414420 4F4E4C5920574954 OR READ ONLY WIT |.......|+<.....
[jcc][t4] 0020 4820455854454E44 454420494E444943 H EXTENDED INDIC ......+.....+...
[jcc][t4] 0030 41544F525320FF ATORS . ..|....
[jcc][t4]
[jcc][t4] SEND BUFFER: SQLSTT (ASCII) (EBCDIC)
[jcc][t4] 0000 00A3D0430001009D 2414000000009373 ...C....$......s .t}...........l.
[jcc][t4] 0010 656C656374202A20 66726F6D206F7264 elect * from ord .%........?_.?..
[jcc][t4] 0020 6572656A62206F20 7768657265206F2E erejb o where o. ...|..?.......?.
[jcc][t4] 0030 6F72646572737461 747573203D202763 orderstatus = 'c ?....../........
[jcc][t4] 0040 6C6F736564272041 4E44206F2E616363 losed' AND o.acc %?......+..?./..
[jcc][t4] 0050 6F756E745F616363 6F756E746964203D ount_accountid = ?.>../..?.>.....
[jcc][t4] 0060 202873656C656374 20612E6163636F75 (select a.accou ....%...././..?.
[jcc][t4] 0070 6E7469642066726F 6D206163636F756E ntid from accoun >......?_./..?.>
[jcc][t4] 0080 74656A6220612077 6865726520612E70 tejb a where a.p ..|../......./..
[jcc][t4] 0090 726F66696C655F75 7365726964203D20 rofile_userid = .?..%...........
[jcc][t4] 00A0 3F29FF ?). ...
[jcc][t4]
[jcc][t4] SEND BUFFER: OPNQRY (ASCII) (EBCDIC)
[jcc][t4] 0000 0091D0510002008B 200C004421134442 ...Q.... ..D!.DB .j}.............
[jcc][t4] 0010 305A202020202020 2020202020202020 0Z .!..............
[jcc][t4] 0020 4E554C4C49442020 2020202020202020 NULLID +.<<............
[jcc][t4] 0030 20205359534C4E33 3030202020202020 SYSLN300 .....<+.........
[jcc][t4] 0040 202020205359534C 564C303100060008 SYSLVL01.... .......<.<......
[jcc][t4] 0050 211400007FFF0005 215D0100081900E0 !.......!]...... ...."....).....\
[jcc][t4] 0060 0000000005214703 0005214BF1000C21 .....!G...!K...! ............1...
[jcc][t4] 0070 3700000000001000 00000C2136000000 7..........!6... ................
[jcc][t4] 0080 0000003000000C21 340000000000A000 ...0...!4....... ................
[jcc][t4] 0090 00 . .
[jcc][t4]
[jcc][t4] SEND BUFFER: SQLDTA (ASCII) (EBCDIC)
[jcc][t4] 0000 0027D00300020021 2412001000100676 .'.....!$......v ..}.............
[jcc][t4] 0010 D03F7FFF0671E4D0 0001000D147A0000 .?...q.......z.. }."...U}.....:..
[jcc][t4] 0020 00057569643A30 ..uid:0 .......
[jcc][t4]
[jcc][t4] [time:2012-11-16-21:49:08.224][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:101]Request flushed.
[jcc][t4] [time:2012-11-16-21:49:08.224][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:102]Reply to be filled.
[jcc][t4][time:2012-11-16-21:49:08.225][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:2][Reply.fill]
[jcc][t4] RECEIVE BUFFER: SQLCARD (ASCII) (EBCDIC)
[jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF 0123456789ABCDEF
[jcc][t4] 0000 0057D05300010051 2408000000000030 .W.S...Q$......0 ..}.............
[jcc][t4] 0010 3030303044534E20 2020202000000000 0000DSN .... ......+.........
[jcc][t4] 0020 0000000000000000 054056CDEC000000 .........@V..... ......... ......
[jcc][t4] 0030 0000000000202020 2020202020202020 ..... ................
[jcc][t4] 0040 00104442305A2020 2020202020202020 ..DB0Z .....!..........
[jcc][t4] 0050 202000000000FF ..... .......
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: MONITORRD (ASCII) (EBCDIC)
[jcc][t4] 0000 0046D04300010040 1C00000C19010000 .F.C...@........ ..}.... ........
[jcc][t4] 0010 00000000006C000C 1915000000000000 .....l.......... .....%..........
[jcc][t4] 0020 00070018116D4430 5A312E4442305A2E .....mD0Z1.DB0Z. ....._..!.....!.
[jcc][t4] 0030 44305A312E444230 5A47000C112E4453 D0Z1.DB0ZG....DS ..!.....!.......
[jcc][t4] 0040 4E3130303135 N10015 +.....
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: OPNQRYRM (ASCII) (EBCDIC)
[jcc][t4] 0000 002CD05200020026 2205000611490000 .,.R...&"....I.. ..}.............
[jcc][t4] 0010 0006210224170005 215001000C215B00 ..!.$...!P...![. .........&....$.
[jcc][t4] 0020 00024177CE4A3000 05214BF1 ..Aw.J0..!K. ...........1
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: QRYDSC (ASCII) (EBCDIC)
[jcc][t4] 0000 0078D05300020072 241A077800050101 .x.S...r$..x.... ..}.............
[jcc][t4] 0010 250C705090000000 2501000020077800 %.pP....%... .x. ...&............
[jcc][t4] 0020 050101330C705191 0000002501017FFF ...3.pQ....%.... .......j......".
[jcc][t4] 0030 077800050201D024 76D00F0E0250001A .x.....$v....P.. ......}..}...&..
[jcc][t4] 0040 5100FF5100FF0F0E 020A000850001A02 Q..Q........P... ............&...
[jcc][t4] 0050 00040300045100FF 0300040778000503 .....Q......x... ................
[jcc][t4] 0060 01E00971E0540001 D000010778000504 ...q.T......x... .\..\...}.......
[jcc][t4] 0070 01F00671F0E00000 ...q.... .0..0\..
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: QRYDTA (ASCII) (EBCDIC)
[jcc][t4] 0000 0085D05300028008 241B00000077FF00 ...S....$....w.. .e}.............
[jcc][t4] 0010 0000000000000249 5C00F2F0F1F260F1 .......I\.....`. ........*.2012-1
[jcc][t4] 0020 F160F1F660F2F14B F4F94BF0F14BF8F9 .`..`..K..K..K.. 1-16-21.49.01.89
[jcc][t4] 0030 F2F0F0F000000382 A4A8000006839396 ................ 2000...buy...clo
[jcc][t4] 0040 A285840000000000 0019421C42640000 ..........B.Bd.. sed.............
[jcc][t4] 0050 0000000000F2F0F1 F260F1F160F1F660 .........`..`..` .....2012-11-16-
[jcc][t4] 0060 F2F14BF4F94BF0F1 4BF2F9F9F0F0F000 ..K..K..K....... 21.49.01.299000.
[jcc][t4] 0070 001F420000000000 000005A27AF1F6F8 ..B.........z... ...........s:168
[jcc][t4] 0080 0000001771 ....q .....
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: ENDQRYRM (ASCII) (EBCDIC)
[jcc][t4] 0000 0026D05200020020 220B000611490004 .&.R... "....I.. ..}.............
[jcc][t4] 0010 001621104442305A 2020202020202020 ..!.DB0Z .......!........
[jcc][t4] 0020 202020202020 ......
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: SQLCARD (ASCII) (EBCDIC)
[jcc][t4] 0000 0057D05300020051 2408000000006430 .W.S...Q$.....d0 ..}.............
[jcc][t4] 0010 3230303044534E58 52464E2000FFFFFF 2000DSNXRFN .... ......+...+.....
[jcc][t4] 0020 9200000000000000 0000000000000000 ................ k...............
[jcc][t4] 0030 0000000000202020 2020202020202020 ..... ................
[jcc][t4] 0040 00104442305A2020 2020202020202020 ..DB0Z .....!..........
[jcc][t4] 0050 202000000000FF ..... .......
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: MONITORRD (ASCII) (EBCDIC)
[jcc][t4] 0000 006AD00300020064 1C00000C19010000 .j.....d........ .|}.............
[jcc][t4] 0010 0000000001060024 1914800000002012 .......$...... . ................
[jcc][t4] 0020 1114160835117280 0000000000000000 ....5.r......... ................
[jcc][t4] 0030 00A0000000000000 0000000C19150000 ................ ................
[jcc][t4] 0040 0000000000070018 116D44305A312E44 .........mD0Z1.D ........._..!...
[jcc][t4] 0050 42305A2E44305A31 2E4442305A47000C B0Z.D0Z1.DB0ZG.. ..!...!.....!...
[jcc][t4] 0060 112E44534E313030 3135 ..DSN10015 ....+.....
[jcc][t4]
[jcc][t4] [time:2012-11-16-21:49:08.225][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:510]initXid -
com.ibm.db2.jcc.t4.yb@7e1f760f xid_ = {DB2Xid: formatID(-1), gtrid_length(0), bqual_length(0), data()} t4Connection_.currXACallInfoOffset_
= 0
[jcc][t4] [time:2012-11-16-21:49:08.225][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:431]After Executing, AutoCommit=false
RLSCONV=240
[jcc][Time:2012-11-16-21:49:08.225][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@41d88590]executeQuery () returned
com.ibm.db2.jcc.t4.i@9a783140
[jcc][Thread:WebSphere WLM Dispatch Thread t=007bd580][SystemMonitor:stop] core: 3.0797499999999998ms | network: 0.921125ms | server:
0.37ms [STMT@1104709008]
[jcc][SystemMonitor:start]
[jcc][Time:2012-11-16-21:49:08.225][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]next () called
[jcc][Time:2012-11-16-21:49:08.225][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]next () returned true
[jcc][Thread:WebSphere WLM Dispatch Thread t=007bd580][SystemMonitor:stop] core: 0.15337499999999998ms | network: 0.0ms | server: 0.0ms
[STMT@1104709008]
[jcc][Time:2012-11-16-21:49:08.225][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getInt (orderID) called
[jcc][Time:2012-11-16-21:49:08.225][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getInt (8) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getInt () returned 8002
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getString (orderType) called
[jcc][SystemMonitor:start]
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getString (3) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getString () returned buy
[jcc][Thread:WebSphere WLM Dispatch Thread t=007bd580][SystemMonitor:stop] core: 0.03125ms | network: 0.0ms | server: 0.0ms
[STMT@1104709008]
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getString (orderStatus) called
[jcc][SystemMonitor:start]
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getString (4) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getString () returned closed
[jcc][Thread:WebSphere WLM Dispatch Thread t=007bd580][SystemMonitor:stop] core: 0.025875ms | network: 0.0ms | server: 0.0ms
[STMT@1104709008]
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getTimestamp (openDate) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getTimestamp (7) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getTimestamp () returned
2012-11-16 21:49:01.299
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getTimestamp (completionDate)
called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getTimestamp (2) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getTimestamp () returned
2012-11-16 21:49:01.892
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getDouble (quantity) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getDouble (6) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getDouble () returned 100.0
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getBigDecimal (price) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getBigDecimal (5) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getBigDecimal () returned 194.21
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getBigDecimal (orderFee) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getBigDecimal (1) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getBigDecimal () returned 24.95
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getString (quote_symbol) called
[jcc][SystemMonitor:start]
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getString (10) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]getString () returned s:168
[jcc][Thread:WebSphere WLM Dispatch Thread t=007bd580][SystemMonitor:stop] core: 0.028499999999999998ms | network: 0.0ms | server: 0.0ms
[STMT@1104709008]
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]getFetchSize () returned 0
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]setString (1, completed)
called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]setTimestamp (2,
2012-11-16 21:49:08.226) called
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]setInt (3, 8002) called
[jcc][SystemMonitor:start]
[jcc][Time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]executeUpdate () called
[jcc][t4] [time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:432]Before Executing,
AutoCommit=false RLSCONV=240
[jcc][t4] [time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:816]====== connected to primary
server = true
[jcc][t4][time:2012-11-16-21:49:08.226][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:1][Request.flush]
[jcc][t4] SEND BUFFER: PRPSQLSTT (ASCII) (EBCDIC)
[jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF 0123456789ABCDEF
[jcc][t4] 0000 0056D05100010050 200D004421134442 .V.Q...P ..D!.DB ..}....&........
[jcc][t4] 0010 305A202020202020 2020202020202020 0Z .!..............
[jcc][t4] 0020 4E554C4C49442020 2020202020202020 NULLID +.<<............
[jcc][t4] 0030 20205359534C4E33 3030202020202020 SYSLN300 .....<+.........
[jcc][t4] 0040 202020205359534C 564C303100100008 SYSLVL01.... .......<.<......
[jcc][t4] 0050 1900A0000000 ...... ......
[jcc][t4]
[jcc][t4] SEND BUFFER: SQLATTR (ASCII) (EBCDIC)
[jcc][t4] 0000 0029D05300010023 2450000000001957 .).S...#$P.....W ..}......&......
[jcc][t4] 0010 4954482045585445 4E44454420494E44 ITH EXTENDED IND ........+.....+.
[jcc][t4] 0020 494341544F525320 FF ICATORS . ....|....
[jcc][t4]
[jcc][t4] SEND BUFFER: SQLSTT (ASCII) (EBCDIC)
[jcc][t4] 0000 0059D04300010053 2414000000004975 .Y.C...S$.....Iu ..}.............
[jcc][t4] 0010 7064617465206F72 646572656A622073 pdate orderejb s ../...?.....|...
[jcc][t4] 0020 6574206F72646572 737461747573203D et orderstatus = ...?....../.....
[jcc][t4] 0030 203F2C20636F6D70 6C6574696F6E6461 ?, completionda .....?_.%...?>./
[jcc][t4] 0040 7465203D203F2077 68657265206F7264 te = ? where ord .............?..
[jcc][t4] 0050 65726964203D203F FF erid = ?. .........
[jcc][t4]
[jcc][t4] SEND BUFFER: EXCSQLSTT (ASCII) (EBCDIC)
[jcc][t4] 0000 005BD05100020055 200B004421134442 .[.Q...U ..D!.DB .$}.............
[jcc][t4] 0010 305A202020202020 2020202020202020 0Z .!..............
[jcc][t4] 0020 4E554C4C49442020 2020202020202020 NULLID +.<<............
[jcc][t4] 0030 20205359534C4E33 3030202020202020 SYSLN300 .....<+.........
[jcc][t4] 0040 202020205359534C 564C303100100005 SYSLVL01.... .......<.<......
[jcc][t4] 0050 2105F100081900E0 000000 !.......... ..1....\...
[jcc][t4]
[jcc][t4] SEND BUFFER: SQLDTA (ASCII) (EBCDIC)
[jcc][t4] 0000 0057D00300020051 2412001600100C76 .W.....Q$......v ..}.............
[jcc][t4] 0010 D03F7FFF25002003 00040671E4D00001 .?..%. ....q.... }.".........U}..
[jcc][t4] 0020 0037147A00000009 636F6D706C657465 .7.z....complete ...:.....?_.%...
[jcc][t4] 0030 6400323031322D31 312D31362D32312E d.2012-11-16-21. ................
[jcc][t4] 0040 34392E30382E3232 3630303030303030 49.08.2260000000 ................
[jcc][t4] 0050 30300000001F42 00....B .......
[jcc][t4]
[jcc][t4] [time:2012-11-16-21:49:08.227][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:101]Request flushed.
[jcc][t4] [time:2012-11-16-21:49:08.227][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:102]Reply to be filled.
[jcc][t4][time:2012-11-16-21:49:08.230][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:2][Reply.fill]
[jcc][t4] RECEIVE BUFFER: SQLCARD (ASCII) (EBCDIC)
[jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF 0123456789ABCDEF
[jcc][t4] 0000 0057D05300010051 2408000000000030 .W.S...Q$......0 ..}.............
[jcc][t4] 0010 3030303044534E20 2020202000000000 0000DSN .... ......+.........
[jcc][t4] 0020 0000000000000000 003FF83568000000 .........?.5h... ..........8.....
[jcc][t4] 0030 0000000000202020 2020202020202020 ..... ................
[jcc][t4] 0040 00104442305A2020 2020202020202020 ..DB0Z .....!..........
[jcc][t4] 0050 202000000000FF ..... .......
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: MONITORRD (ASCII) (EBCDIC)
[jcc][t4] 0000 0046D04300010040 1C00000C19010000 .F.C...@........ ..}.... ........
[jcc][t4] 0010 000000000018000C 1915000000000000 ................ ................
[jcc][t4] 0020 00070018116D4430 5A312E4442305A2E .....mD0Z1.DB0Z. ....._..!.....!.
[jcc][t4] 0030 44305A312E444230 5A47000C112E4453 D0Z1.DB0ZG....DS ..!.....!.......
[jcc][t4] 0040 4E3130303135 N10015 +.....
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: RDBUPDRM (ASCII) (EBCDIC)
[jcc][t4] 0000 0026D05200020020 2218000611490000 .&.R... "....I.. ..}.............
[jcc][t4] 0010 001621104442305A 2020202020202020 ..!.DB0Z .......!........
[jcc][t4] 0020 202020202020 ......
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: SQLCARD (ASCII) (EBCDIC)
[jcc][t4] 0000 0057D05300020051 2408000000000030 .W.S...Q$......0 ..}.............
[jcc][t4] 0010 3030303044534E20 2020202000000000 0000DSN .... ......+.........
[jcc][t4] 0020 0000000000000000 01FFFFFFFF000000 ................ ................
[jcc][t4] 0030 0000000000202020 2020202020202020 ..... ................
[jcc][t4] 0040 00104442305A2020 2020202020202020 ..DB0Z .....!..........
[jcc][t4] 0050 202000000000FF ..... .......
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: MONITORRD (ASCII) (EBCDIC)
[jcc][t4] 0000 006AD00300020064 1C00000C19010000 .j.....d........ .|}.............
[jcc][t4] 0010 000000000CCC0024 1914800000002012 .......$...... . ................
[jcc][t4] 0020 1114160835120025 0000000000000000 ....5..%........ ................
[jcc][t4] 0030 00A1000000000000 0000000C19150000 ................ .~..............
[jcc][t4] 0040 0000000000070018 116D44305A312E44 .........mD0Z1.D ........._..!...
[jcc][t4] 0050 42305A2E44305A31 2E4442305A47000C B0Z.D0Z1.DB0ZG.. ..!...!.....!...
[jcc][t4] 0060 112E44534E313030 3135 ..DSN10015 ....+.....
[jcc][t4]
[jcc][t4] [time:2012-11-16-21:49:08.230][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:432]After Executing, AutoCommit=false
RLSCONV=240
[jcc][Time:2012-11-16-21:49:08.230][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]executeUpdate () returned
1
[jcc][Thread:WebSphere WLM Dispatch Thread t=007bd580][SystemMonitor:stop] core: 4.1976249999999995ms | network: 3.7115ms | server:
3.3000000000000003ms [STMT@2065652867]
[jcc][Time:2012-11-16-21:49:08.230][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]getMoreResults () called
[jcc][Time:2012-11-16-21:49:08.230][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]getMoreResults () returned
false
[jcc][Time:2012-11-16-21:49:08.230][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]getUpdateCount () called
[jcc][Time:2012-11-16-21:49:08.230][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]getUpdateCount () returned
-1
[jcc][Time:2012-11-16-21:49:08.230][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@7b1f5c83]clearParameters () called
[jcc][SystemMonitor:start]
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]next () called
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]closeX ({DB2Xid: formatID(-1),
gtrid_length(0), bqual_length(0), data()}, com.ibm.db2.jcc.t4.T4XAConnection@13361385) called
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]markClosed () called
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]next () returned false
[jcc][Thread:WebSphere WLM Dispatch Thread t=007bd580][SystemMonitor:stop] core: 0.10062499999999999ms | network: 0.0ms | server: 0.0ms
[STMT@1104709008]
[jcc][SystemMonitor:start]
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]close () called
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][ResultSet@9a783140]closeX ({DB2Xid: formatID(-1),
gtrid_length(0), bqual_length(0), data()}, com.ibm.db2.jcc.t4.T4XAConnection@13361385) called
[jcc][Thread:WebSphere WLM Dispatch Thread t=007bd580][SystemMonitor:stop] core: 0.08199999999999999ms | network: 0.0ms | server: 0.0ms
[STMT@1104709008]
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@41d88590]getMoreResults () called
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@41d88590]getMoreResults () returned
false
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@41d88590]getUpdateCount () called
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@41d88590]getUpdateCount () returned
-1
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@41d88590]clearParameters () called
[jcc][SystemMonitor:start]
[jcc][Time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][Connection@13361385]commit () called
[jcc][t4][time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:1][Request.flush]
[jcc][t4] SEND BUFFER: RDBCMM (ASCII) (EBCDIC)
[jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF 0123456789ABCDEF
[jcc][t4] 0000 000FD00100010009 200E0005119FF2 ........ ...... ..}...........2
[jcc][t4]
[jcc][t4] [time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:101]Request flushed.
[jcc][t4] [time:2012-11-16-21:49:08.231][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:102]Reply to be filled.
[jcc][t4][time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:2][Reply.fill]
[jcc][t4] RECEIVE BUFFER: ENDUOWRM (ASCII) (EBCDIC)
[jcc][t4] 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF 0123456789ABCDEF
[jcc][t4] 0000 0030D0520001002A 220C000611490004 .0.R...*"....I.. ..}.............
[jcc][t4] 0010 001621104442305A 2020202020202020 ..!.DB0Z .......!........
[jcc][t4] 0020 2020202020200005 2115010005119FF2 ..!....... ...............2
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: SQLCARD (ASCII) (EBCDIC)
[jcc][t4] 0000 000BD05300010005 2408FF ...S....$.. ..}........
[jcc][t4]
[jcc][t4] RECEIVE BUFFER: SQLSTT (ASCII) (EBCDIC)
[jcc][t4] 0000 002FD00300010029 2414000000001F53 ./.....)$......S ..}.............
[jcc][t4] 0010 4554204355525245 4E5420534348454D ET CURRENT SCHEM ........+......(
[jcc][t4] 0020 41203D2027534732 343830373427FF A = 'SG248074'. ...............
[jcc][t4]
[jcc][t4] [time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:720]parseRLSCONV: 119f=f2
[jcc][Connection@13361385] DB2 LUWID: ::9.12.6.9.65123.CA7B405C24DB.0007
[jcc][t4] [time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:300]currXACallInfoOffset : 0commit
[jcc][t4] [time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:204]parseSQLSTTList :
{T4GTPK@e1897bf9: 9.12.4.138 39000 rajesh 3 DB0Z 1 153873318 38 false true DB2XADataSource@3cc7dc83:1,2147483647 a@e6c9ee52 true
com.ibm.db2.jcc.t4.vb@9a255c79[SET CURRENT SCHEMA = 'SG248074']
[jcc][t4] [time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:220]On Connection: : SET CURRENT SCHEMA = 'SG248074'
[jcc][t4] [time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:220]On Transport : : SET CURRENT SCHEMA = 'SG248074'
[jcc]determineMemberNumber [time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:250]WebSphere WLM Dispatch Thread t=007bd580 i= 0
[jcc][t4] [time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:222] freeing Transport - WebSphere WLM Dispatch Thread t=007bd580 bestMemberIndex: 0 {SWLBN@b81cb1cf: /9.12.4.138 39000 33 0.67346936 false 10 1 2 51 0
{T4GTPK@e1897bf9: 9.12.4.138 39000 rajesh 3 DB0Z 1 153873318 38 false true DB2XADataSource@3cc7dc83:1,2147483647 a@e6c9ee52 true
com.ibm.db2.jcc.t4.vb@9a255c79[SET CURRENT SCHEMA = 'SG248074']
[jcc][t4] [time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:220]On Connection: : SET CURRENT SCHEMA = 'SG248074'
[jcc][t4] [time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][tracepoint:220]On Transport : : SET CURRENT SCHEMA = 'SG248074'
[jcc][Time:2012-11-16-21:49:08.233][Thread:WebSphere WLM Dispatch Thread t=007bd580][Connection@13361385]commit () returned null
[jcc][Thread:WebSphere WLM Dispatch Thread t=007bd580][SystemMonitor:stop] core: 2.367375ms | network: 1.9597499999999999ms | server: 0.0ms
[jcc][Time:2012-11-16-21:49:08.269][Thread:WebSphere WLM Dispatch Thread t=007bd580][Connection@13361385]isInDB2UnitOfWork () returned false
[jcc][Time:2012-11-16-21:49:11.040][Thread:WebSphere WLM Dispatch Thread t=007bd580][Connection@13361385]clearWarnings () called
[jcc][Time:2012-11-16-21:49:11.041][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@41d88590]getFetchSize () returned 0
[jcc][Time:2012-11-16-21:49:11.041][Thread:WebSphere WLM Dispatch Thread t=007bd580][PreparedStatement@41d88590]setString (1, uid:0) called
G
* <GROUP>RACFGRP1</GROUP>
* <GROUP>RACFGRP2</GROUP>
* <GROUP>RACFGRP3</GROUP>
* </GROUPS>
*
* SQL DDL....:
* CREATE FUNCTION GRACFGRP ()
* RETURNS VARCHAR(32000)
* CCSID EBCDIC FOR SBCS DATA
* SPECIFIC GRACFGRP
* EXTERNAL NAME GRACFGRP
* LANGUAGE ASSEMBLE
* PARAMETER STYLE DB2SQL
* SECURITY USER
* FENCED
* CALLED ON NULL INPUT
* NO SQL
* ALLOW PARALLEL
* DBINFO
* WLM ENVIRONMENT WLMENV1
* ASUTIME NO LIMIT;
*
* Interface..:
* select gracfgrp() from sysibm.sysdummy1;
* --> returns the XML document shown above
*
* pureXML query:
* ==============
* SELECT T.* FROM XMLTABLE
* ('$d/GROUPS/GROUP'
* PASSING XMLPARSE (DOCUMENT gracfgrp()) AS "d"
* COLUMNS
* "RACF User" VARCHAR(08) PATH '../USER/text()',
* "RACF Group" VARCHAR(08) PATH './text()'
* ) AS T
* ;
*
* pureXML query result:
* =====================
* RACF User RACF Group
* --------- ----------
* JOSEF RACFGRP1
* JOSEF RACFGRP2
* JOSEF RACFGRP3
*
***********************************************************************
YREGS 01740000
GRACFGRP CEEENTRY AUTO=WORKSIZE,BASE=R11,MAIN=NO,PLIST=OS 01750000
USING WORKAREA,R13 01760000
L R9,0(R1) get pointer TO return parm
USING RACFGRP,R9 01760000
L R7,4(R1) get pointer to indicator variable
MVC 0(2,R7),=AL2(0) indicate return value
MVC XML01#,XML01
MVC XML11#(XML11L),XML11
LA R4,SGRPNEXT POINT TO NEXT SECONDARY AUTHID 05540000
A0027 DS 0H BYPASS UPDATING SECONDARY LIST 05570000
LA R2,L'CGRPENT(,R2) POINT TO NEXT CONNECT GROUP 05590000
AH R7,=AL2(SGRPLEN)
BCT R3,A0026 BR UNTIL ALL GROUP NAMES EXAMINED 05600000
B A0060 MOVING IS COMPLETED 05610000
DROP R2 DROP CGRPENTD BASE REG 05620000
A0060 DS 0H Moving groups is complete 02940000
AH R7,=AL2(L'XML02)
STH R7,RACFLEN
MVC 0(L'XML02,R4),XML02
B A0099
A0080 DS 0H Can't do groups 02940000
*---------------------------------------------------------------------* 02910000
* TERMINATION * 02920000
*---------------------------------------------------------------------* 02930000
A0099 DS 0H Terminate 02940000
MVC 0(2,R1),=H'0' RC=0 03020000
CEETERM RC=0 03030000
SECLEN DC Y(SGRPNEXT-SGRP) LENGTH OF A SECONDARY AUTHID ENTRY
PPA CEEPPA
LTORG 19981000
XML01 DC C'<GROUPS>'
XML02 DC C'</GROUPS>'
XML11 DC C'<USER>'
XMLUSER DC CL8' '
DC C'</USER>'
XML11L EQU *-XML11
XML01L DC AL2(*-XML01-L'XML02)
XML12 DC C'<GROUP>'
DC CL8' '
XML122 DC C'</GROUP>'
XML12L EQU *-XML12
*---------------------------------------------------------------------* 18200000
* VARIABLES * 18210000
*---------------------------------------------------------------------* 18220000
WORKAREA DSECT 18290000
ORG *+CEEDSASZ Space for dynamic save area 18300000
SAVEREGS DS 16F Copy of caller's registers
* 18310000
SECCOUNT DS F COUNT OF SECONDARY IDS
DS 0D On doubleword boundary 19800000
WORKSIZE EQU *-WORKAREA 19810000
* 19820000
*---------------------------------------------------------------------* 19830000
* DSECTs * 19840000
*---------------------------------------------------------------------* 19850000
RACFGRP DSECT
RACFLEN DS H
RACFGRPS DS CL1024 LIST OF RACF GROUPS
ORG RACFGRPS
XML01# DC C'<GROUPS>'
XML11# DC C'<USER>'
XML11U# DC CL8' '
DC C'</USER>'
SECURITY DB2
STOP AFTER SYSTEM DEFAULT FAILURES
INHERIT SPECIAL REGISTERS
;
006200***************************************************** 00620000
006300 Data Division. 00630000
006400 Working-Storage Section. 00640000
006500* EXEC SQL INCLUDE SQLCA END-EXEC. 00650000
013600*==============================================================* 01360000
013700 LINKAGE SECTION. 01370000
013800 01 UDFPARM1. 01380004
015500 49 UDFPARM1-LEN PIC 9(4) USAGE BINARY. 01390004
015600 49 UDFPARM1-TEXT PIC X(8). 01400004
014400 01 UDFPARM2 PIC S9(18) USAGE COMP. 01440008
01 UDFPARM2-X REDEFINES UDFPARM2 PIC X(8). 01441003
020000 02 FILLER PIC X(2). 02000000
020100* Database Platform 02010000
020200 02 UDF-DBINFO-PLATFORM PIC 9(9) USAGE BINARY. 02020000
020300* # of entries in Table Function column list 02030000
020400 02 UDF-DBINFO-NUMTFCOL PIC 9(4) USAGE BINARY. 02040000
020500* reserved 02050000
020600 02 UDF-DBINFO-RESERV1 PIC X(24). 02060000
020700* Unused 02070000
020800 02 FILLER PIC X(2). 02080000
020900* Pointer to Table Function column list 02090000
021000 02 UDF-DBINFO-TFCOLUMN POINTER. 02100000
021100* Pointer to Application ID 02110000
021200 02 UDF-DBINFO-APPLID POINTER. 02120000
021300* reserved 02130000
021400 02 UDF-DBINFO-RESERV2 PIC X(20). 02140000
021500* 02150000
021600 01 APPLICATION-ID PIC X(32). 02160000
021700 Procedure Division using UDFPARM1, 02170000
021800 UDFPARM2, 02180000
021900 UDF-RIND1, 02190000
022000 UDF-RIND2, 02200000
022100 UDF-SQLSTATE, 02210000
022200 UDF-FUNC, 02220000
022300 UDF-SPEC, 02230000
022400 UDF-DIAG, 02240000
022500 UDF-DBINFO. 02250000
023500 A00-CONTROL SECTION. 02350000
023600 A0010. 02360000
025100 SET UDF-SQLSTATE-FAIL TO TRUE 02361001
023700 IF UDF-RIND1 >= 0 02370000
025100 SET UDF-SQLSTATE-OK TO TRUE 02510001
INITIALIZE UDFPARM2 02520001
025500 MOVE UDFPARM1-TEXT(1:UDFPARM1-LEN) 02550004
TO UDFPARM2-X(9 - UDFPARM1-LEN:UDFPARM1-LEN) 02560005
030500 END-IF. 03050000
030600 A0099. 03060000
030700 goback. 03070000
We used IBM Rational Application Developer for WebSphere Software for servlet
development and tested the servlet functionality in WebSphere Application Server V8.5 on
Windows and on z/OS. Upon successful servlet testing, we exported the ClientInfo dynamic
web application and created the ClientInfo.war web archive file (WAR file). The WAR file
includes the Java source and class files. You can download the ClientInfo.war file as
described in Appendix I, “Additional material” on page 587.
In this appendix, we describe the ClientInfo dynamic web project, how to access the ClientInfo
Java source files by using standard tools, and how to install the ClientInfo application in
WebSphere Application Server.
If you use the Java EE perspective of Rational Application Developer for WebSphere
Software, the ClientInfo dynamic web project and its Java source files appear as shown in
Figure H-1.
H.2 Accessing the ClientInfo.war file from your workstation
After the WAR file is downloaded to your workstation, you can access its content by using
standard tools that can process archive files (see Figure H-2).
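If you prefer to inspect the archive programmatically rather than with a graphical archive tool, the following minimal sketch (our own illustration, not part of the ClientInfo additional material) lists the WAR entries by using the standard java.util.jar API. The file name and path are placeholders that you must adjust to your workstation.

import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class ListWarContents {
    public static void main(String[] args) throws Exception {
        // Adjust the path to the location of the downloaded ClientInfo.war file
        JarFile war = new JarFile("ClientInfo.war");
        try {
            Enumeration<JarEntry> entries = war.entries();
            while (entries.hasMoreElements()) {
                // Prints each archive entry, for example WEB-INF/web.xml
                System.out.println(entries.nextElement().getName());
            }
        } finally {
            war.close();
        }
    }
}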
5. The Step 1: Select installation options window opens (Figure H-5). Leave the options at
their default settings and click Next.
6. The Step 2: Map modules to servers window opens (Figure H-6). Select the server on
which you want to install the application and click Next.
7. The Step 3: Map context roots for Web modules window opens (Figure H-7). Enter the
/ClientInfo context root name and click Next.
Figure H-7 Step 3: Map context roots for Web modules window
10. A window opens and shows a successful application installation (Figure H-10).
Click Review.
11. In the New Application window that opens (Figure H-11), select Synchronize changes
with Nodes, and then click Save.
2. Upon successful completion, the ISC displays a green arrow in the application status
column to confirm that the application started. The job log of the servant region shows the
runtime messages that are listed in Figure H-14, which also confirm a successful
application start.
You can start each servlet by entering the corresponding URL in your browser, where <server> and <portno> are the host name and HTTP port of your application server:
ClientInfoJDBC30API http://<server>:<portno>/ClientInfo/JDBC30API
ClientInfoJDBC40API http://<server>:<portno>/ClientInfo/JDBC40API
ClientInfoWSAPI http://<server>:<portno>/ClientInfo/WSAPI
ClientInfoWLM http://<server>:<portno>/ClientInfo/WLM
For a description of the result that is returned by the ClientInfoJDBC30API servlet, see 5.5.3,
“Setting DB2 client information in a WebSphere Java application” on page 255.
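For reference, the JDBC 3.0 style of passing client information relies on the JCC proprietary methods of com.ibm.db2.jcc.DB2Connection, such as setDB2ClientUser and setDB2ClientWorkstation. The following fragment is a minimal sketch of that approach, not the ClientInfo servlet source itself; the values are placeholders, and in WebSphere Application Server the connection handle is wrapped, so the underlying DB2Connection must first be obtained from the wrapper.

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import com.ibm.db2.jcc.DB2Connection;

public class ClientInfoJdbc30Sketch {
    // Sets DB2 client information with the JCC proprietary (pre-JDBC 4.0) API
    static void tagConnection(DataSource ds) throws SQLException {
        Connection con = ds.getConnection();
        try {
            if (con instanceof DB2Connection) {
                DB2Connection db2Con = (DB2Connection) con;
                db2Con.setDB2ClientUser("tradeuser");                       // placeholder values
                db2Con.setDB2ClientWorkstation("myworkstation");
                db2Con.setDB2ClientApplicationInformation("ClientInfoJDBC30API");
                db2Con.setDB2ClientAccountingInformation("accounting-string");
            }
        } finally {
            con.close();
        }
    }
}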
Figure H-16 Testing the ClientInfoJDBC40API servlet
For a description of the result that is returned by the ClientInfoJDBC40API servlet, see 5.5.3,
“Setting DB2 client information in a WebSphere Java application” on page 255.
H.7.1 Common pitfalls when using the JDBC 4.0 setClientInfo API
During our testing of the ClientInfoJDBC40API servlet, we received the error message that is
shown in Figure H-17, which indicates that the JDBC 4.0 java.sql.Connection.setClientInfo
API was not supported by the application server runtime environment, even though the
JDBC provider that we used explicitly included the db2jcc4.jar file in its class path.
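For reference, the standard JDBC 4.0 call pattern that the servlet exercises looks similar to the following minimal sketch. This is our own illustration, not the servlet source; the property names shown (ApplicationName, ClientUser, ClientHostname, ClientAccountingInformation) are the generic client information properties that the JDBC 4.0 driver accepts, and the values are placeholders. The calls succeed only when the JDBC 4.0 driver (db2jcc4.jar) is the driver that is actually loaded at run time, which is exactly the pitfall that we describe here.

import java.sql.Connection;
import java.sql.SQLClientInfoException;
import javax.sql.DataSource;

public class ClientInfoJdbc40Sketch {
    // Sets DB2 client information with the standard JDBC 4.0 setClientInfo API
    static void tagConnection(DataSource ds) throws Exception {
        Connection con = ds.getConnection();
        try {
            con.setClientInfo("ApplicationName", "ClientInfoJDBC40API");   // placeholder values
            con.setClientInfo("ClientUser", "tradeuser");
            con.setClientInfo("ClientHostname", "myworkstation");
            con.setClientInfo("ClientAccountingInformation", "accounting-string");
        } catch (SQLClientInfoException e) {
            // Raised when the driver or wrapper in use does not support the JDBC 4.0 API
            e.printStackTrace();
        } finally {
            con.close();
        }
    }
}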
For a description of the result that is returned by the ClientInfoWSAPI servlet, see 5.5.3,
“Setting DB2 client information in a WebSphere Java application” on page 255.
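For completeness, a minimal sketch of the WebSphere-specific approach follows. We assume here that the servlet uses the com.ibm.websphere.rsadapter.WSConnection interface and its setClientInformation method; the property keys come from the WSConnection constants, and the values are placeholders only.

import java.util.Properties;
import javax.sql.DataSource;
import com.ibm.websphere.rsadapter.WSConnection;

public class ClientInfoWsApiSketch {
    // Sets DB2 client information with the WebSphere-specific WSConnection API
    static void tagConnection(DataSource ds) throws Exception {
        // In WebSphere Application Server, the returned handle implements WSConnection
        WSConnection con = (WSConnection) ds.getConnection();
        try {
            Properties props = new Properties();
            props.setProperty(WSConnection.CLIENT_ID, "tradeuser");             // placeholder values
            props.setProperty(WSConnection.CLIENT_LOCATION, "myworkstation");
            props.setProperty(WSConnection.CLIENT_APPLICATION_NAME, "ClientInfoWSAPI");
            props.setProperty(WSConnection.CLIENT_ACCOUNTING_INFO, "accounting-string");
            con.setClientInformation(props);
        } finally {
            con.close();
        }
    }
}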
H.9 Testing the ClientInfoWLM servlet
Upon successful ClientInfoWLM servlet start, you receive the browser output that is shown in
Figure H-19.
For a description of the result that is returned by the ClientInfoWLM servlet, see 5.5.3,
“Setting DB2 client information in a WebSphere Java application” on page 255.
Select Additional materials and open the directory that corresponds with the IBM Redbooks
form number, SG248074.
Related publications
The publications that are listed in this section are considered suitable for a more detailed
discussion of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topic in this
document. Some publications referenced in this list might be available in softcopy only.
Achieving the Highest Levels of Parallel Sysplex Availability in a DB2 Environment,
REDP-3960
DB2 9 for z/OS Data Sharing: Distributed Load Balancing and Fault Tolerant
Configuration, REDP-4449
DB2 9 for z/OS: Buffer Pool Monitoring and Tuning, REDP-4604
DB2 9 for z/OS: Resource Serialization and Concurrency Control, SG24-4725
DB2 10 for z/OS Performance Topics, SG24-7942
DB2 for z/OS: Data Sharing in a Nutshell, SG24-7322
DB2 for z/OS and WebSphere: The Perfect Couple, SG24-6319
A Deep Blue View of DB2 Performance: IBM Tivoli OMEGAMON XE for DB2 Performance
Expert on z/OS, SG24-7224
Extremely pureXML in DB2 10 for z/OS, SG24-7915
IBM Data Studio V2.1: Getting Started with Web Services on DB2 for z/OS, REDP-4510
IBM WebSphere Application Server V8 Concepts, Planning, and Design Guide,
SG24-7957
Implementing REXX Support in SDSF, SG24-7419
Security Functions of IBM DB2 10 for z/OS, SG24-7959
System z Parallel Sysplex Best Practices, SG24-7817
WebSphere Application Server V8.5 Concepts, Planning, and Design Guide, SG24-8022
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
DB2 10 for z/OS Administration Guide, SC19-2968
DB2 10 for z/OS Application Programming Guide and Reference for Java, SC19-2970
DB2 10 for z/OS Command Reference, SC19-2972
DB2 10 for z/OS Data Sharing: Planning and Administration, SC19-2973
Online resources
These websites are also relevant as further information sources:
DB2 10 for z/OS information
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp?topic=%2Fcom.ibm.db2z10.doc.comref%2Fsrc%2Fcomref%2Fdb2z_comref.htm
Download initial Version 10.1 clients and drivers
http://www.ibm.com/support/docview.wss?rs=4020&uid=swg21385217
IBM developerWorks DB2 for z/OS preferred practices presentations
http://www.ibm.com/developerworks/data/bestpractices/db2zos/
pureQuery
http://www.ibm.com/developerworks/data/library/techarticle/dm-0708ahadian/
System z Solution Edition for Application Development
http://www.ibm.com/systems/z/solutions/editions/appdev/index.html
WebSphere Application Server z/OS V8 Resource Adapter Failover Lab
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102033
WebSphere glossary
http://pic.dhe.ibm.com/infocenter/wasinfo/v7r0/index.jsp?topic=/com.ibm.websphere.help.glossary.doc/topics/glossary.html
WebSphere Portal zone
http://www.ibm.com/developerworks/websphere/zones/portal/
New to WebSphere
http://www.ibm.com/developerworks/websphere/newto/
Help from IBM
IBM Support and downloads
ibm.com/support