SG24-7972
Paolo Bruni
Isabelle Bruneel
Angie Greenhaw
Dougie Lawson
Jorge Alberto Luz Ribeiro
Egide Van Aershot
ibm.com/redbooks
International Technical Support Organization
October 2011
SG24-7972-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xxiii.
This edition applies to Version 12, Release 1 of IBM IMS Transaction and Database Servers (program number
5635-A03) and Version 2, Release 1 of IBM Enterprise Suite (program number 5655-T62).
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxvii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxviii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xxviii
6.2.3 IMS Connect definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.2.4 New and modified IMS Connect commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.2.5 Generic MSC name support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
6.2.6 IMS Connect security using RACF PassTickets . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.2.7 Configuration summary for MSC using TCP/IP. . . . . . . . . . . . . . . . . . . . . . . . . . 187
6.3 Other IMS Connect changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.3.1 IMS Connect Recorder Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.3.2 RACF user ID caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
6.3.3 RACF return codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
6.3.4 XML Converter Refresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
6.3.5 Partial read status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
6.3.6 Load modules for exits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
6.3.7 QUERY MSLINK STATISTICS changes for TCP/IP MSC links . . . . . . . . . . . . . 194
6.4 IMS Connect type-2 Single Point of Control commands . . . . . . . . . . . . . . . . . . . . . . . 195
6.4.1 Display commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
6.4.2 Start commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
6.4.3 Stop and close commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
6.4.4 Set, reset, and refresh commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
9.3.2 API for C reference for data types and functions . . . . . . . . . . . . . . . . . . . . . . . . 327
9.3.3 Reference material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
9.3.4 Prerequisites for Connect API for C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
9.4 IMS Enterprise Suite Connect APIs for Java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
9.4.1 Overview of API for Java . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
9.4.2 API for Java enhancements in Enterprise Suite V2.1 . . . . . . . . . . . . . . . . . . . . . 329
9.4.3 API for Java classes and methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
9.4.4 Reference material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
9.5 IMS Enterprise Suite DLIModel utility plug-in IMS Enterprise Suite V2.1 . . . . . . . . . . 332
9.6 IMS Enterprise Suite Explorer for Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
9.6.1 Building a project with IMS Explorer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
9.6.2 Importing the resources for DBD and PSB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
9.6.3 Input for IMS Explorer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
9.7 IMS Enterprise Suite SOAP Gateway 2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
9.7.1 SOAP Gateway components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
9.7.2 Web services security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
9.7.3 SOAP Gateway V2.1 security implementation . . . . . . . . . . . . . . . . . . . . . . . . . . 351
9.7.4 Security setup for Enterprise Suite V2.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
9.7.5 SOAP Gateway Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
9.7.6 SOAP Gateway management utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
9.7.7 Implementing a call-in web service for IMS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
9.7.8 SOAP Gateway Administrative Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
9.8 IMS Enterprise Suite Java Message Service API . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
Figures
10-2 IMS tools portfolio overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
10-3 IMS High Performance Fast Path Utilities components . . . . . . . . . . . . . . . . . . . . . . 391
10-4 Smart Reorg Driver services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
10-5 Data Recovery Facility at a glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
10-6 Instantaneous view of the log records with IMS PI . . . . . . . . . . . . . . . . . . . . . . . . . . 403
10-7 Drilling down in a log record with IMS PI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
10-8 IMS PA at a glance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
10-9 Event instrumentation with IMS Connect Extension . . . . . . . . . . . . . . . . . . . . . . . . . 406
10-10 GUI operation console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
10-11 Transaction Analysis Workbench reading all information in a z/OS environment. . 409
10-12 INDEXBLD command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
10-13 FPA INDEXBLD user scenario 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412
10-14 FPA INDEXBLD user scenario 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
10-15 Secondary index definition report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
10-16 Secondary Index processing report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
10-17 Secondary Index Analysis Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
10-18 Pointer Segment Dump Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
10-19 FPA ANALYZE command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
10-20 FPA ANALYZE user scenario 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
10-21 FPA ANALYZE function verification in user scenario 1. . . . . . . . . . . . . . . . . . . . . . 420
10-22 FPA ANALYZE function user scenario 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
10-23 FPA ANALYZE verification in user scenario 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
10-24 FPA ANALYZE function user scenario 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
10-25 FPA ANALYZE function verification in user scenario 3. . . . . . . . . . . . . . . . . . . . . . 423
10-26 MSC transaction life cycle under IMS PI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
10-27 Detail of a log record with MSC connection information . . . . . . . . . . . . . . . . . . . . . 426
10-28 Detail of a field showing ICON to ICON support . . . . . . . . . . . . . . . . . . . . . . . . . . 427
10-29 IMS CEX console GUI view with IMS 12 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
10-30 Logger Statistics in IRUR report with IMS 12 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
10-31 Transaction list report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
10-32 Transaction summary report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430
A-1 IMSplex configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
Examples
7-5 SCI initialization PROCLIB member CSLSI12D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
7-6 SCI startup procedure JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
7-7 OM initialization PROCLIB member CSLOI12D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
7-8 OM startup procedure JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
7-9 RM initialization PROCLIB member CSLRIRMX . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
7-10 RM startup procedure JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
7-11 System Definition PROCLIB member DFSDF12D . . . . . . . . . . . . . . . . . . . . . . . . . . 213
7-12 FRP messages displayed upon Repository Server initialization . . . . . . . . . . . . . . . . 216
7-13 Defining an IMSRSC repository to the RS catalog repository with FRPBATCH . . . . 217
7-14 RM successfully connecting to IMSRSC repository . . . . . . . . . . . . . . . . . . . . . . . . . 218
7-15 Populating an IMSRSC repository from an RDDS . . . . . . . . . . . . . . . . . . . . . . . . . . 219
7-16 RDDS to Repository utility (CSLURP10) job output . . . . . . . . . . . . . . . . . . . . . . . . . 219
7-17 JCL to allocate a non-system RDDS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
7-18 Populating an IMSRSC repository from MODBLKS using DFSURCM0 and CSLURP10 . . . 222
7-19 Job output from running DFSURCM0 and CSLURP10. . . . . . . . . . . . . . . . . . . . . . . 223
7-20 Batch ADMIN LIST command to display the information of a single IMSRSC repository . . . 225
7-21 Output for the batch ADMIN LIST REPOSITORY command . . . . . . . . . . . . . . . . . . 226
7-22 The batch ADMIN LIST STATUS command to display all IMSRSC repository information . . . 226
7-23 JCL to populate a repository with stored definitions within an RDDS . . . . . . . . . . . . 228
7-24 JCL to populate an RDDS with stored definitions within an IMSRSC repository . . . 228
7-25 JCL that executes the ADD, START, and LIST functions of the batch ADMIN utility 229
7-26 UPDATE RM command syntax. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
7-27 UPDATE RM command dynamically updating the AUDITACCESS setting . . . . . . . 231
7-28 QUERY RM command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
7-29 Displaying the audit access setting using the QUERY RM command . . . . . . . . . . . 232
7-30 Displaying the status of RM using the QUERY RM command . . . . . . . . . . . . . . . . . 232
7-31 Displaying all information associated with RM using the QUERY RM command . . . 232
7-32 UPDATE IMS command syntax for DRD-related functions . . . . . . . . . . . . . . . . . . . 233
7-33 QUERY IMS command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
7-34 Showing repository attributes using the QUERY IMS command . . . . . . . . . . . . . . . 233
7-35 Showing the automatic export setting using the QUERY IMS command . . . . . . . . . 234
7-36 Syntax for displaying attribute values for resources or descriptors. . . . . . . . . . . . . . 234
7-37 Displaying both repository and local IMS definition information with QUERY. . . . . . 234
7-38 Displaying only repository IMS definition information with QUERY . . . . . . . . . . . . . 235
7-39 Displaying only local IMS definition information with QUERY. . . . . . . . . . . . . . . . . . 235
7-40 Displaying a list of IMS systems that have a specific resource defined with QUERY 235
7-41 Displaying repository and local IMS definitions with IMSIDs using QUERY . . . . . . . 236
7-42 EXPORT command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
7-43 Displaying the time a resource was created with QUERY and SHOW(TIMESTAMP) . . 236
7-44 Exporting a resource to the repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
7-45 Failed export attempt due to a missing program resource . . . . . . . . . . . . . . . . . . . . 237
7-46 Query transaction resource to determine associated program . . . . . . . . . . . . . . . . . 237
7-47 Successful export of resources to repository . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
7-48 DELETE DEFN command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
7-49 Command sequence for deleting runtime and stored IMS resource definitions . . . . 239
7-50 IMPORT command syntax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
7-51 Syntax for the batch ADMIN utility ADD command . . . . . . . . . . . . . . . . . . . . . . . . . . 241
7-52 JCL to add a user repository to the RS catalog repository . . . . . . . . . . . . . . . . . . . . 242
7-53 Syntax for the batch ADMIN utility UPDATE command . . . . . . . . . . . . . . . . . . . . . . 242
9-13 Purge command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
9-14 Messages on the console for purge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
9-15 Options for the iogmgmt utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
9-16 BPXbatch job control language (JCL) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
9-17 Starting the managed tool without parameters on Windows. . . . . . . . . . . . . . . . . . . 373
9-18 Updated correlator file for extendedProperty. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
9-19 Creation of the connection bundle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375
9-20 Create, delete, update options for the connection bundle . . . . . . . . . . . . . . . . . . . . 376
9-21 Additional options available on create connection bundle . . . . . . . . . . . . . . . . . . . . 376
9-22 Parameters not used in combination with AT-TLS in TCP/IP . . . . . . . . . . . . . . . . . . 376
9-23 Deployment of the web service with a SAML11SignedTokenTrustAny token . . . . . 378
9-24 Changes in IMS Connect configuration for SOAP gateway . . . . . . . . . . . . . . . . . . . 379
9-25 XML adapter as a BPE exit routine in member HWSEXIT0 . . . . . . . . . . . . . . . . . . . 380
9-26 Excerpt of IMS Connect BPE member . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 380
9-27 Command to retrieve information from the file system configuration of a stopped server . . 381
9-28 Start the SOAP Gateway Administrative Console from browser. . . . . . . . . . . . . . . . 381
9-29 WebSphere MQ and IMS extension classes for JMS . . . . . . . . . . . . . . . . . . . . . . . . 382
9-30 JMS Code excerpt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
10-1 JCL for the INDEXBLD command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
10-2 FPA INDEXBLD user scenario 1 JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
10-3 FPA INDEXBLD scenario 2 JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
10-4 JCL for FPA ANALYZE function user scenario 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . 419
10-5 JCL for FPA ANALYZE function user scenario 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 421
10-6 JCL for FPA ANALYZE function user scenario 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
A-1 z/OS LPAR configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
A-2 Coupling facility configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
A-3 Stage 1 macros for MSC in IMS system I12A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
A-4 Stage 1 macros for MSC in IMS system I12B . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
A-5 Stage 1 macros for MSC in IMS system I12C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
A-6 Stage1 macros for MSC in IMS system I12D . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
A-7 MSC links (output from the MSC Verification Utility) . . . . . . . . . . . . . . . . . . . . . . . . . 441
A-8 Application and transaction definitions (only I12A shown) . . . . . . . . . . . . . . . . . . . . . 441
A-9 DBRC job control language (JCL) using BPE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
A-10 IMS.PROCLIB member DSPBII2X . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
A-11 Repository JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
A-12 IMS Connect JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
A-13 IMS Connect BPE configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
A-14 IMS Connect IM12AHW1 configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
A-15 IMS Connect IM12BHW1 configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
A-16 IMS Connect IM12CHW1 configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
A-17 IMS Connect IM12DHW1 configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
A-18 IMS Connect configuration for IMS SOAP Gateway . . . . . . . . . . . . . . . . . . . . . . . . 449
A-19 IMS shared queues coupling facility structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
A-20 IMS shared queues z/OS log streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
A-21 IMS CQS JCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
A-22 CQSSG12X global shared queues configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
A-23 CQSSL12A local shared queues configuration for I12A. . . . . . . . . . . . . . . . . . . . . . 451
A-24 CQSSL12C local shared queues configuration for I12C . . . . . . . . . . . . . . . . . . . . . 452
A-25 CQSIP12A CQS initialization parameters for I12A . . . . . . . . . . . . . . . . . . . . . . . . . . 452
A-26 CQSIP12C CQS initialization parameters for I12C. . . . . . . . . . . . . . . . . . . . . . . . . . 453
A-27 IMS.PROCLIB member DFSSQ12A . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
A-28 IMS.PROCLIB member DFSSQ12C . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
BookManager®, CICS®, DB2®, Distributed Relational Database Architecture™, DRDA®, DS8000®, ECKD™, Enterprise Storage Server®, Enterprise Workload Manager™, FlashCopy®, GDPS®, Geographically Dispersed Parallel Sysplex™, IBM®, IMS™, Language Environment®, MVS™, OMEGAMON®, Parallel Sysplex®, ProductPac®, RACF®, Rational®, Redbooks®, Redbooks (logo)®, RETAIN®, RMF™, S/390®, System Storage®, System z®, System z9®, SystemPac®, TotalStorage®, VTAM®, WebSphere®, z/Architecture®, z/OS®, z/VM®, z9®
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel
SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
With IMS 12, integration and open access improvements provide flexibility and support
business growth requirements. Manageability enhancements help optimize system staff
productivity by improving ease of use and autonomic computing facilities and by providing
increased availability. Scalability improvements have been made to the well-known
performance, efficiency, availability, and resilience of IMS by using 64-bit storage.
IBM IMS Enterprise Suite for z/OS® V2.1 components enhance the use of IMS applications
and data. In this release, components (either orderable or downloaded from the web) deliver
innovative new capabilities for your IMS environment. They enhance connectivity, expand
application development, extend standards and tools for a service-oriented architecture
(SOA), ease installation, and provide simplified interfaces.
This IBM Redbooks® publication explores the new features of IMS 12 and Enterprise Suite 2.1
and provides an overview of the IMS tools. In addition, this book highlights the major new
functions and assists database administrators in planning for installation and migration.
Paolo Bruni is an ITSO Project Leader specializing in IMS with the ITSO and is based in the
Silicon Valley Lab in San Jose, CA. Since 1998, Paolo has authored Redbooks publications
about IMS, IBM DB2® for z/OS, and related tools and has conducted workshops worldwide.
During his many years with IBM in development and in the field, Paolo's work has been
mostly related to database systems.
Isabelle Bruneel is a Technical Sales Specialist in IMS and IMS Tools with IBM Software
Group in France. She has 25 years of experience in IT, providing customer services, product
support, and education for IBM Global Services in z/VM®, DB2 for z/OS, and IMS. Before
joining IBM, she worked for 6 years as a consultant at a large French bank with a large
workload in shared queues. From this experience, she gained a deep understanding of
customer expectations in high availability environments. She is now involved in several
projects that include IMS tools and IMS modernization with SOAP Gateway.
Angie Greenhaw is an IT Specialist in the IBM IMS Advanced Technical Skills group. She is
a primary resource in the areas of IMS security, dynamic resource definition, Common
Service Layer, and Online Change. Previously, she worked in IMS development, specializing
in the Online Change function, contributing to new IMS functionality, and devising solutions as
a Level 3 Service Representative. She also spent three years as the IMS Development
Representative for SHARE. Angie has written a white paper about global online change
implementation and has coauthored three Redbooks publications. Angie joined IBM in 2000
after receiving a bachelor's degree in Computer Information Systems from Arizona State
University.
Jorge Alberto Luz Ribeiro is an IT specialist in IMS with IBM Global Services in Brazil. He
joined IBM in 2009 after providing IT services for some of the largest national and
multinational companies in Brazil. He has more than 30 years of experience in the IT field with
expertise in support for IMS, IBM CICS®, and z/OS software, in system application
development, and in software quality assurance. Jorge is a Six Sigma Black Belt certified
professional. He holds a Master of Business Administration degree from the University of São
Paulo and a Finance specialization from Methodist University of Piracicaba (UNIMEP) in
Brazil.
Egide Van Aershot is a contractor for Zinteg C.V., where he teaches classes about IBM
System z® WebSphere® Application Server and WebSphere MQ in Northern Europe. His
technical experience began in 1967 when he joined IBM and became responsible for many
computer installations related to teleprocessing and database management in Belgium. In
1997, he moved from IBM Belgium to IBM France, where he worked as an architect and
consultant at the IBM Program Support Center in Montpellier. Since 1997, he has specialized
in Java, SOA, IMS, and WebSphere applications, mainly on z/OS systems, and has
participated in many projects related to the Internet. Egide is co-owner of the patent
“Methods, systems, program product for transferring program code between computer
processes.” Egide holds an engineering degree in Electricity and Nuclear Physics from the
University of Leuven, Belgium.
Rich Conway
Bob Haimowitz
Emma Jacobs
IBM ITSO
Carlos Alvarado
Jim Bahls
John Barmettler
Marilyn Basanta
Tom Bridges
John Butterweck
Dave Cameron
Kyle Charlet
Cedric Chen
Himakar Chennapragada
Nathan Church
Demetrios Dimatos
Bach Doan
Jeffrey Fontaine
Shirley Gaw
David Hanson
Bill Huynh
Barbara Klein
Terry Krein
Janet Leblanc
Rose Levin
Ademir Galante
IBM Brazil
Rafael Avigad
Fundi Software, Perth, Western Australia
Yoshiko Yaegashi
Information Management, IBM Japan
Kenneth Blackman
Glenn Galler
Rich Lewis
Nancy Stein
Suzie Wendler
Advanced Technical Skills, Americas
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
With IMS 12, IBM offers enhancements to both major components (transaction manager and
database manager) and to other operational areas, to ensure you have the growth,
availability, and systems management that newer environments and cost measures require.
You can find more details about each of these areas in the other chapters of this book.
IMS Database Manager uses hierarchical organization technology. DB2 and some other
systems use relational database technology. Hierarchical and relational databases have each
continued to grow, with their specific characteristics and different roles to play. Hierarchical is
best used for mission-critical work and for work that requires the utmost in performance.
Relational is best used for decision support.
Hierarchical databases can offer a significant performance edge over relational databases
when queries are known beforehand. Query optimization for relational databases ensures
good performance where the query is not known in advance. Each type is best at what it
does. The products supporting these technologies are enhanced to address different
application requirements, and they continue to overlap more and more in their capabilities.
However, the product that was originally designed for a particular capability generally remains the stronger choice for it.
Relational and hierarchical technologies can work together for optimum solutions. Users can
efficiently store operational data in hierarchical form, which can be accessed easily by their
favorite relational decision support tools, with minimal impact on the production hierarchical
data. IMS data can be accessed directly or propagated and replicated with relational data for
summarizing, enhancing, and mining. IBM provides standard application interfaces for
accessing IMS and other data. Both relational data and hierarchical IMS data can be accessed
efficiently, together or independently, by using the IMS Transaction Manager (TM) and
WebSphere servers.
IMS Transaction Manager and WebSphere servers are both strategic application managers,
and they are enhanced to take advantage of each other. They each have inherently different
characteristics. IMS is more efficient in application management, data storage, and data
access but applies strict rules for this access. WebSphere makes it easier to serve the web
and integrate data that might have been less defined in advance. Thus, they play different
roles in the enterprise. Clients use both application managers: WebSphere for newer
web-based applications, and IMS for more mission-critical, high-performance, high-availability,
low-cost transactional applications and data.
IMS and WebSphere products together have been providing tools to make this combination
an optimum environment. Using IMS Connect and IMS TM Resource Adapter, WebSphere
development tooling can develop web applications that can serve the web and easily access
existing and new critical IMS applications. Using JDBC and IMS Open Database Access,
WebSphere applications can also access IMS DB data directly.
IBM demonstrates its commitment to you by continuing to enhance IMS, as highlighted in the
following sections.
The main goal of the IMS repository is to simplify resource management by eliminating the
need for multiple resource definition data sets (RDDS) for each IMS system. All IMS systems
in the IMSplex share the same IMS repository for their resource definitions. These IMS
systems can be cloned or non-cloned. With the IMS repository, users can maintain different
attributes for the same resource name for each IMS in the IMSplex.
If you have never implemented DRD, you can use the definitions that exist in your MODBLKS
data set as a starting point, eventually porting them to the new repository. Much like the
RDDS, a repository contains resource definitions for database, program, routing code and
transaction resources and descriptors.
IMPORT UPDATE
IMS 12 brings the new UPDATE option for the type-2 IMPORT command. This enables users to
change runtime resources and descriptors with the attributes from an imported definition. If
no runtime resource or descriptor exists when the IMPORT OPTION(UPDATE) command is
issued, IMS creates the resource or descriptor with the attributes from the imported definition.
For the update of an existing runtime resource or descriptor to succeed, the resource or
descriptor must not be in use.
LOGGER
IMS 12 improves the functionality of IMS logging by adding Extended Format support for
online data sets (OLDS) and system log data sets (SLDS), allowing them to be striped and
allowing log buffers to be allocated above the 31-bit bar, among other improvements.
Also, IMS 12 changes the way that write-ahead data set (WADS) writes are done. The
concept of track groups is not used with IMS 12. This changes the calculation for the space
required for the WADS and changes the data written by log ahead requests. The benefit of
this enhancement is to increase logging rates while releasing storage in the extended
common service area (ECSA) for other users.
With z/OS 1.12, two new data set types allow applications such as IMS to recognize when
data sets are located in the extended addressing space (EAS) of an extended address volume (EAV). Data sets in the EAS are addressed by using a new
28-bit cylinder/track addressing format. With EAV volumes, it is now possible to have more
than 65 K cylinders, and specific IMS overflow sequential access method (OSAM) data sets
can benefit from this capability. Support was added for VSAM data sets to reside in EAS in
IMS 11.
The overall benefits of these enhancements are improved problem diagnosis and resolution.
All existing trace entries that are not related to client activity are moved to one of the new
trace tables and are expanded to benefit from the larger 64-byte trace entry length. The
existing CQS structure trace table (STR) now contains only client activity trace entries. The
benefit of these separate traces is to provide the IMS support team with valuable diagnostic
data for client problem resolution, with the possibility that shared queues-related problems
might be resolved faster.
From an MSC perspective, the operation of TCP/IP physical links and VTAM physical links is
similar. Depending on various factors such as network traffic and the distance between the
two connected IMS systems, a TCP/IP physical link is likely to provide better performance
than a VTAM physical link.
Previous IMS versions did not coordinate the numbers in RECON and the partition data set.
The reorganization number in the data set was updated from the RECON value by the first
IMS subsystem that updated the partition.
The Index Builder tool created index pointers based on the reorganization number in the
data set.
Index entries needed healing when the reorganization number was changed by the
updater.
The IMS 12 Database Recovery utility sets the reorganization number of the partition based
on the value in RECON.
GENJCL enhancements
DBRC uses, among other elements, partitioned data set (PDS) members as templates or
skeletal job control language (JCL) to correctly run recovery utilities. Usually, you modify the
skeletal JCL to reflect your installation's system configuration.
When you issue a GENJCL command, it uses a skeletal JCL execution member, which contains
symbolic keywords. You can define your own symbolic keywords and use the symbolic
keywords that already exist. DBRC substitutes current information for the symbolic keywords.
IMS 12 increases the number of user keys in skeletal JCL from 32 to 64, keeping the same
conventions and restrictions applied to earlier versions. In addition, the existing %DBTYPE
key can be used when selecting database data sets (DBDS) allocation.
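For illustration only, the following is a hedged sketch of a GENJCL command that supplies a value for a user-defined key. The database name, DD name, and key name are hypothetical; inside the skeletal JCL member, the key would be coded as %UKEY1 and replaced with the supplied value when the JCL is generated:

  GENJCL.RECOV DBD(DBX01) DDN(DBX01A) USERKEYS((%UKEY1,'IMSP'))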
IMS 12 supports an output larger than 32 K for the /RMLIST online command, but only when
the command is entered through the OM API. The output size is restricted by the DBRC private
storage that is available for buffering the output message and by OM limitations.
IMS 12 provides the NORCVINF keyword for LIST.DB and LIST.DBDS commands. This
keyword suppresses recovery-related information. ALLOC, IC, RECOV, and REORG records
are not listed, which reduces command output.
IMS 12 adds full-precision timestamps and more information about HALDB databases, such as
the active DBDS, the DDNAMEs of inactive DBDSs, and the current reorganization number for
the partition, to the LIST.HISTORY command output.
IMS 12 includes the number of registered databases in the LIST.RECON output so that users
can see whether they are nearing the 32,767 limit of registered databases.
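The following commands are hedged illustrations of the LIST enhancements described above; the database, partition, and DD names are hypothetical:

  LIST.DB DBD(DBX01) NORCVINF
  LIST.DBDS DBD(DBX01) DDN(DBX01A) NORCVINF
  LIST.HISTORY DBD(PARTDB1) DDN(PARTDB1A)
  LIST.RECON STATUS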
IMS Connect
The biggest change is for Open Database. IMS Connect becomes the entry point for IBM
Distributed Relational Database Architecture™ (IBM DRDA®) requests for access to IMS
databases. This means that IMS Connect becomes important also for database-only users of
IMS. Two Java drivers, named the IMS Universal drivers, provide Open Database Access
(ODBA) either directly using ODBA (if running on the same LPAR) or using DRDA with IMS
Connect (if not running on the same LPAR). The drivers handle the various protocols so that
the programmer does not have to.
Open Database
The main change introduced by IMS Open Database is the ability to easily access IMS
databases from platforms other than z/OS. This access is through TCP/IP by using the DRDA
protocol; applications are shielded from the complexity of DRDA by a new set of Java drivers,
known as IMS Universal drivers.
User exits
IMS 11 introduced three user exits:
An initialization/termination exit (invoked at IMS initialization and termination)
An IMS Common Queue Server (CQS) event exit (invoked when IMS receives a CQS event)
An IMS CQS structure event exit (invoked when IMS receives a CQS structure event)
DBRC
Using IMS 11, you can benefit from Base Primitive Environment (BPE) features for your
existing DBRC user exits, such as refreshing them dynamically. You can also collect DBRC
performance data by writing a new (or modifying an existing) BPE statistics exit. If you have
implemented RECON security, IMS 11 enables you to more easily use copies of the RECON
that you might have made for recovery or problem diagnosis. The new CLEANUP.RECON
command helps you remove obsolete data from the RECONs, which can be tedious to
remove by using other means.
IMS commands
IMS 11 introduced four type-2 commands that help you to manage your OTMA environment.
These commands provide information about message volumes and associated transaction
instances, so that you can take action to prevent potential problems from arising because of
high message volumes.
OTMA
IMS 11 introduced OTMA capabilities that improve consistency in the IMS shared queues
environment, reduce overall transaction processing costs, and increase resiliency when
certain conditions arise that, if left untreated, can result in delays.
IMS 12 adds support for the IMS repository function and enhances the UPDATE option of the
IMPORT DEFN command. The associated benefit is that you no longer need to manually
coordinate individual resource definitions across the IMSplex.
The Time Sharing Option (TSO) SPOC application provides a set of resource management
panels that you can use to query IMS resources. Additionally, if dynamic resource definition
(DRD) is enabled in your IMS systems, you can also use the TSO SPOC panels to create,
delete, export, import, and update IMS resources. You can access the TSO SPOC panels
directly from the IMS Application Menu (Figure 2-1).
(Figure 2-1: the IMS Application Menu panel)
The TSO SPOC communicates with a single Operations Manager (OM), which in an IMSplex
is a Common Service Layer (CSL) component that provides an application programming
interface (API) for automated operator programs (AOPs). Through the Structured Call
Interface (SCI), which is a CSL component that manages communications between the
IMSplex members, OM then communicates with all of the other IMS control regions in the
IMSplex.
(Figure: the TSO SPOC communicating with the IMS control region through OM and SCI)
You can issue both IMS type-1 commands and type-2 commands by using the TSO SPOC
interface. You can issue commands in the IMS TSO SPOC application in the following ways:
By using the command line
By retrieving a command
By defining and using command shortcuts
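For example, either of the following commands can be typed on the SPOC command line. They are shown here as hedged illustrations: the first is a type-1 command, and the second is a type-2 command.

  /DISPLAY ACTIVE REGION
  QUERY MEMBER TYPE(IMS) SHOW(ALL)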
More than one type of SPOC can be in an IMSplex, and any number of SPOCs can be active.
For information about the commands on TSO SPOC, see IMS Version 12 Commands,
Volume 1: IMS Commands A-M, SC19-3009, and IMS Version 12 Commands, Volume 2: IMS
Commands N-V, SC19-3010.
QUERY commands
You use the IMS QUERY commands to display information about IMS resources. The QUERY
commands are type-2 commands and return information based on the keyword specified.
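As hedged illustrations, the following QUERY commands display resource information; the resource names are hypothetical:

  QUERY TRAN NAME(DLINQ) SHOW(ALL)
  QUERY DB NAME(BANK*) SHOW(STATUS)
  QUERY PGM NAME(*) SHOW(DEFN)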
You can create, delete, export, import, and update the following IMS resources and resource
descriptors in DRD-enabled systems:
Application programs
Databases
Fast Path routing codes
Transactions
For more information about DRD, see 2.2, “Dynamic resource definition” on page 17.
You can use the IMPORT command to create resource and descriptor definitions or replace
existing resource and descriptor definitions in an online IMS system by using the definitions in
a resource definition data set (RDDS) or the IMSRSC repository. The IMPORT command can
be issued through TSO SPOC or the Manage Resources options in the IMS Applications
menu. This command can also be issued to an IMSplex by using the batch SPOC utility.
The UPDATE option indicates that if the definition being imported is for a resource or descriptor
that already exists in IMS, the imported definition should be used to replace the existing
runtime resource or descriptor definition. If the definition being imported is for a resource or
descriptor that does not exist, the imported definition should be used to create the runtime
resource or descriptor definition. If the UPDATE option is not specified and a runtime definition
already exists for the resource or descriptor, the import of the resource or descriptor definition
fails.
To minimize the likelihood of the import of a resource definition failing, complete the following
steps before issuing the IMPORT command:
1. Stop the resource.
2. Query the resource to check for work in progress.
3. Complete the work, if any.
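A hedged example of this sequence for a transaction named DLINQ (the name is hypothetical, and command output checking is omitted):

  UPDATE TRAN NAME(DLINQ) STOP(Q,SCHD)
  QUERY TRAN NAME(DLINQ) SHOW(WORK)
  IMPORT DEFN SOURCE(REPO) TYPE(TRAN) NAME(DLINQ) OPTION(UPDATE)
  UPDATE TRAN NAME(DLINQ) START(Q,SCHD)

If the QUERY TRAN command reports work in progress, allow that work to complete before issuing the IMPORT command.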
If the imported definition is for a resource or descriptor that already exists in IMS, the import
time stamp in the existing runtime definition is replaced with the time the IMPORT command
was received by OM. If one or more of the attributes in the existing runtime definition are
different from the attributes in the imported definition, the update time stamp is also updated
with the time the IMPORT command was received by OM. The access and create time stamps
in the existing runtime definition are unchanged.
If the imported definition is for a resource or descriptor that does not exist in IMS, the import
time stamp in the newly created runtime definition is set to the time the IMPORT command was
received by OM. The create time stamp is obtained from the imported definition and stored in
the new runtime definition.
If the imported definition is for a descriptor that has DEFAULT(N) defined and the runtime
descriptor is the current default descriptor, the default value is not updated. The runtime
descriptor remains the default descriptor. Other attributes are updated, but the default value
remains unchanged. To change the default descriptor so that it is no longer the default
descriptor, you must update another descriptor to be the default descriptor. If the imported
definition has DEFAULT(Y) defined, the updated runtime descriptor becomes the current
default descriptor. The DEFNTYPE of a newly created definition is set to IMPORT. The DEFNTYPE is
set to IMPORT when an existing definition is replaced with a new definition.
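As a hedged illustration, the following command makes a user-defined transaction descriptor the current default; the descriptor name is hypothetical:

  UPDATE TRANDESC NAME(MYTRDESC) SET(DEFAULT(Y))

After this command, the previously default transaction descriptor is no longer the default.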
Figure 2-5 shows the IMPORT UPDATE option through TSO SPOC. In this example, the
resource already exists.
The runtime resource and descriptor definitions that are created by the IMPORT command
exist in the online system until IMS terminates, unless they are deleted using a DELETE
command. They are recoverable across an IMS warm start or emergency restart.
To preserve the resource and descriptor definitions across a cold start, export the definitions
to an RDDS or the repository before IMS terminates. Then import the stored definitions from
the RDDS or the repository back into IMS either during cold start processing by using the
automatic import function or, when IMS is up and running, by using the IMPORT command.
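A hedged example that exports all resource and descriptor definitions to the IMSRSC repository (specify TARGET(RDDS) to write to an RDDS instead):

  EXPORT DEFN TARGET(REPO) TYPE(ALL) NAME(*)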
The aggregated benefits of providing support for the IMS Resource Definition Repository
(Repository) are to simplify the management of IMS systems and to provide a single safe
source of information about IMS resources.
IMS systems use the online change process to add, change, and delete resource definitions
while the system is running. The online change process requires that you perform the
following tasks:
Generate the resources into IMS and store them in a MODBLKS staging library data set.
Run the Online Change Copy utility (DFSUOCU0) to copy the staging library into an inactive library.
Run a series of online change commands to cause the change to take effect.
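As a hedged sketch, the classic command sequence for a MODBLKS online change, issued after the copy utility has populated the inactive library, is similar to the following:

  /MODIFY PREPARE MODBLKS
  /DISPLAY MODIFY
  /MODIFY COMMIT

In an IMSplex that uses global online change, the type-2 INITIATE OLC command performs the equivalent function, as described later in this book.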
DRD is an IMS function that enables users to create, update, query, and delete IMS
resources and their descriptors dynamically, without using the batch system definition or
online change processes. Using DRD, you can manage the following IMS resources:
Transactions
Application programs
Databases
Fast Path routing codes
With DRD enabled, the APPLCTN, DATABASE, RTCODE, and TRANSACT macros are optional in the
IMS system generation input deck. You can either use type-2 commands to define resources
to IMS dynamically or import resource definitions into IMS from an RDDS.
(Figure: DRD flow. Type-2 commands from the SPOC are routed through OM and RM to the IMS control region; resource definitions are stored in an RDDS.)
You can issue these type-2 commands either from a TSO SPOC or by using the IMS Manage
Resources application that is available from the IMS Application Menu at option 2.
(Figure: the IMS Manage Resources panel for IMS system IM12X)
Resource descriptors are templates that can be used to define new resources and
descriptors. IMS supplies four resource descriptors, one for each resource type. The
descriptors contain IMS-system default values for each resource attribute. These
IMS-supplied descriptors cannot be deleted or modified. The following resource descriptors
are IMS-supplied:
DFSDSDB1 (database descriptor)
DFSDSPG1 (application program descriptor)
DBFDSRT1 (Fast Path routing code descriptor)
DFSDSTR1 (transaction descriptor)
When you create a resource without specifying a model (that is, you do not specify the LIKE
keyword), any attribute values that are not specified on the CREATE command are inherited
from the default descriptor. When you create a resource that is modeled from a descriptor
(using the LIKE keyword), any attribute values that are not specified on the CREATE command
are inherited from the descriptor.
Similarly, when you create a resource using an existing resource as a model (using the LIKE
keyword), any attribute values that are not specified on the CREATE command are inherited
from the existing resource.
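Hedged examples of these three cases follow; the transaction, program, and descriptor names are hypothetical:

  CREATE TRAN NAME(TRANA) SET(PGM(PGMA))
  CREATE TRAN NAME(TRANB) LIKE(DESC(MYTRDESC)) SET(PGM(PGMB))
  CREATE TRAN NAME(TRANC) LIKE(RSC(TRANA)) SET(CLASS(5))

In the first command, unspecified attributes come from the default descriptor DFSDSTR1; in the second, from the MYTRDESC descriptor; in the third, from the existing transaction TRANA.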
These resource definitions can then be imported from the RDDS during cold start processing.
To ensure that your changes are recovered across a cold start, set up your system to
automatically export your resource and descriptor definitions at checkpoint time and to
automatically import the definitions from the RDDS or the repository during cold start
processing.
Specify parameters related to DRD in the DFSDFxxx member of the IMS.PROCLIB data set.
The DFSDFxxx member of the IMS.PROCLIB data set contains parameters for the IMS CSL,
shared queues, databases, restart exit routines, DRD, the IMSRSC repository, dynamic
database buffer pools, the Fast Path 64-bit buffer manager, and the IMS abend search and
notification procedure.
The MODBLKS= keyword enables either DRD (MODBLKS=DYN) or the online change
process (MODBLKS=OLC). The MODBLKS= keyword can only be changed as part of a cold
start. MODBLKS=OLC and MODBLKS=DYN are mutually exclusive. You can specify the
MODBLKS= keyword in either the DFSCGxxx member, or in the CSL section of the
DFSDFxxx member. If you specify a value for the MODBLKS= keyword in the DFSCGxxx
member, that value overrides the value specified for the MODBLKS= keyword in the
DFSDFxxx member.
If the online change process is disabled (DRD is enabled), the IMS, DBC, and DCC
procedures no longer require the DD statements for the IMS.MODBLKS, IMS.MODBLKSA,
and IMS.MODBLKSB data sets.
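A hedged sketch of the related DFSDFxxx statements follows; the data set names are hypothetical, and only the parameters discussed here are shown:

  <SECTION=COMMON_SERVICE_LAYER>
  MODBLKS=DYN
  <SECTION=DYNAMIC_RESOURCES>
  AUTOIMPORT=AUTO
  AUTOEXPORT=AUTO
  RDDSDSN=(IMS12.RDDS1,IMS12.RDDS2,IMS12.RDDS3)

With these settings, DRD is enabled, definitions are exported automatically at checkpoint time, and they are imported automatically during cold start processing.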
The following Manage Resources ISPF panels are enhanced in IMS 12:
DELETE DEFN
EXPORT DEFN
IMPORT DEFN
QUERY DB
QUERY DBDESC
QUERY PGM
QUERY PGMDESC
QUERY RTC
QUERY RTCDESC
QUERY TRAN
QUERY TRANDESC
For information about the IMPORT command with the new UPDATE option, see 2.1, “TSO SPOC”
on page 12. For information about the IMS resource definition (IMSRSC) repository, see
Chapter 7, “IMS repository” on page 199.
If you use the DRD function for the first time when running on IMS Version 12, you can use
the IMS repository function without ever using an RDDS. Although support for RDDSs
continues in IMS Version 12, the IMSRSC repository is a strategic alternative to the RDDS.
Following are examples of creating and updating resources in a DRD-enabled IMS
environment.
(Figure: the Manage Resources panel for creating a resource; resource type 4, Transaction, is selected from Database, Program, Routing Code, and Transaction)
Figure 2-10 shows the panel to define the properties of the DLINQ transaction.
(Figure 2-10: panel for defining the properties of the DLINQ transaction)
(Figure: the Import panel, showing Source: RDDS; resource types ALL, ALLRSC, ALLDESC, DB, DBDESC, PGM, PGMDESC, RTC, RTCDESC, TRAN (selected), and TRANDESC; and options ABORT, ALLRSP, and UPDATE (selected))
Attention: DRD changes are not necessarily made across all IMS systems in an IMSplex.
Changes might be successful on some IMS systems but fail on others. You must verify that
changes have been made across all systems.
If you plan to use DRD in your IMS system, be aware that several restrictions apply to your
use of DRD, as explained here:
When DRD is enabled, you cannot perform online change for the resources that are
typically defined in the IMS.MODBLKS data set. If you have a simple system such as a
single IMS and online change meets your requirements, consider continuing to use online
change and not enabling DRD.
In an IMSplex with only one IMS and DRD enabled, the /MODIFY PREPARE MODBLKS and
INITIATE OLC PHASE(PREPARE) TYPE(MODBLKS) commands are rejected. The /MODIFY
PREPARE ALL and INITIATE OLC PHASE(PREPARE) TYPE(ALL) commands do not apply to the
IMS.MODBLKS data set.
When DRD is disabled, CREATE, DELETE, IMPORT, and most UPDATE commands that change
the definitional attributes of a resource are rejected. The following parameters for the
UPDATE TRAN command are permitted regardless of whether DRD is enabled:
– CLASS(class)
– CPRI(value)
– LCT(value)
– LPRI(value)
– MAXRGN(number)
– MSNAME(name)
– NPRI(value)
– PARLIM(value)
– PLCT(value)
– SEGNO(number)
– SEGSZ(size)
– TRANSTAT(Y | N)
You cannot delete an IMS-supplied descriptor. The only attribute you can update on an
IMS-supplied descriptor is the DEFAULT attribute.
Because of the default OM routing in a sysplex, DRD commands are routed to all the IMS
systems, unless you specify otherwise. However, the commands are not coordinated
across the sysplex, so a command can succeed on some IMS systems and fail on others.
The Multiple Systems Verification utility (DFSUMSV0) can verify resources defined only by
the batch system definition process; it cannot verify resources that were created using
DRD. Use the /MSVERIFY command to verify resources that are created dynamically.
You can update a resource or a descriptor definition, but the changes to that resource or
descriptor definition are not propagated to the resource or descriptor definitions that were
built from the updated resource or descriptor definition.
You can import a resource or descriptor and it affects any resource or descriptor explicitly
specified in the IMPORT DEFN command or RDDS, but it does not affect any resources or
descriptors built from the specified resource or descriptor.
The benefit is that in an IMSplex environment, you can use the application control block
(ACB) member online change (OLC) function with OPTION(NAMEONLY) specified on the
appropriate command to process only the DBDs and PSBs that are specified.
With global online change, the master IMS control region coordinates online changes of the
following resources:
Databases (data management block (DMB) in the ACB library (ACBLIB))
Database directories (DL/I database directory (DDIR) in MODBLKS)
MFS formats (FMTLIB)
Programs (PSB in ACBLIB)
Program directories (PSB directory (PDIR) in MODBLKS)
Transactions (scheduler message blocks (SMBs) in MODBLKS)
Routing codes (Fast Path routing codes (RCTEs) in MODBLKS)
Use the INITIATE OLC command to initiate the global online change process.
The INITIATE OLC command master usually performs the online change phase locally. If it
fails locally, the command master usually skips sending the online change phase to the other
IMS systems, sets a completion code for each other IMS indicating that the online change
phase was not attempted, and terminates command processing. However, if the INITIATE
OLC PHASE(COMMIT) command fails on the local IMS because of work in progress for
resources that are directly affected by the online change, the command master still sends the
commit phase 1 to the other IMS systems. The purpose is to report work in progress for all
the IMS systems in the IMSplex, to facilitate completion of the work in progress.
Before you can use online change, you must create three copies of each of the following
libraries:
IMS.MODBLKS This library contains the control blocks to support online change of
databases, programs, transactions, and routing codes.
IMS.ACBLIB This library contains database and program descriptors.
IMS.FORMAT This library contains your MFS maps produced by the MFS Language
and Service utilities.
These libraries are for the exclusive use of IMS offline functions and are called the staging
libraries. Two copies are made of each library, producing data sets with a data set name
suffixed with A and B, for example, IMS.MODBLKSA and IMS.MODBLKSB. These two copies
of each library are used by the IMS online system.
The following components are required for a global online change in the IMSplex:
An IMSplex defined with CSL and at least one OM
Common Queue Server (CQS), if there is a resource structure in the IMSplex
OLCSTAT data set, which must be initialized by the Global Online Change utility
(DFSUOLC0)
OLC=GLOBAL and OLCSTAT= parameters in the DFSCGxxx PROCLIB member data set
CSLG= parameter in the IMS EXEC statement
If you exclude RM from the IMSplex by specifying RMENV=N, each IMS system must have its
own OLCSTAT data set and that OLCSTAT data set must contain the IMSID of the IMS
system that owns it and no other IMSIDs.
At least one RM and a resource structure are recommended for global online change, but
they are not required. Take into consideration that, if you do not use an RM in your IMSplex,
the OLCSTAT data set can contain only the IMSID of the IMS system that owns that
OLCSTAT data set. Any attempt to restart an IMS system that contains an OLCSTAT data set
with a different or multiple IMSIDs results in an abend. IMS rejects INITIATE OLC and
TERMINATE OLC commands that are issued by an IMS system other than the IMS system that
owns the OLCSTAT data set.
To enable an IMSplex for global online change, you must perform the following tasks:
Run the Global Online Change utility (DFSUOLC0) to initialize the OLCSTAT data set.
For each IMS in the IMSplex, perform these steps:
– Remove the MODSTAT DD and MODSTAT2 DD statements from the IMS control
region JCL.
– Define the parameters related to global online change in a DFSCGxxx IMS.PROCLIB
member, or in the CG section of the DFSDFxxx member (see the sketch after this list).
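A minimal sketch of the global online change parameters, assuming an IMSplex named PLEX1 and a placeholder OLCSTAT data set name, might look like the following DFSCGxxx statements:
IMSPLEX=PLEX1,OLC=GLOBAL,OLCSTAT=IMSPLX1.OLCSTAT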
The INITIATE OLC (initiate online change) command is provided to support global online
change, where online change is coordinated across IMS systems in the IMSplex. The
INITIATE OLC command is similar to the /MODIFY PREPARE and /MODIFY COMMIT commands,
except that it applies to an IMSplex-wide global online change.
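For example, a global online change of the MODBLKS resources might be driven from an OM command client such as SPOC with a sequence like the following sketch:
INITIATE OLC PHASE(PREPARE) TYPE(MODBLKS)
INITIATE OLC PHASE(COMMIT)
If the prepare or the commit fails before the OLCSTAT data set is updated, the TERMINATE OLC command backs the change out, as described next.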
If the INITIATE OLC PHASE(COMMIT) command fails for any IMS before the OLCSTAT data set
is updated, either correct the errors and try the commit again or terminate the online change
with a TERMINATE OLC command. If the INITIATE OLC PHASE(COMMIT) command fails for any
IMS after the OLCSTAT data set has been updated, correct the errors and try the commit
again. The online change cannot be terminated.
If you want to fall back to the previous version of the changed resource, perform a full online
change process with a full library switch.
With IMS 12, the ACB member online change function is enhanced to process only the DBDs
and PSBs that are specified in the NAME() keyword of the INITIATE OLC TYPE(ACBMBR) command.
If a DBD is being changed and OPTION(NAMEONLY) is not specified, you do not have to specify
the associated PSBs on the command because all of the PSBs that are associated with the
changed DBD are copied automatically from the staging ACB library to the active ACB library.
If a DBD that is being changed or added has external references and OPTION(NAMEONLY) is
not specified, the secondary index DBD does not have to be specified on the acbmbr
parameter. The INIT OLC TYPE(ACBMBR) command processing copies all externally referenced
members of the DBD from the staging ACB library to the active ACB library.
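A minimal sketch of a member online change that processes only a named DBD, assuming a placeholder DBD name ACCTDBD, might look like this:
INITIATE OLC PHASE(PREPARE) TYPE(ACBMBR) NAME(ACCTDBD) OPTION(NAMEONLY)
INITIATE OLC PHASE(COMMIT)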
The commit phase consists of commit phase 1, the OLCSTAT data set update, commit
phase 2, and commit phase 3. The OLCSTAT data set is updated with the new current online
change libraries and the list of IMS systems that are current with the current online change
libraries. The commit phase 2 switches the online environment from the active ACBLIB,
FORMAT, or MODBLKS libraries to the inactive libraries containing the new or changed
resource descriptions.
Establish an OLCSTAT data set recovery procedure to deal with the loss of the OLCSTAT
data set. After every successful global online change, record the modify ID, the active online
change library suffixes, and the list of IMS systems that are current with the online change
libraries. If the OLCSTAT data set is destroyed, run the initialize function of the Global Online
Change utility with the saved data to re-initialize the OLCSTAT data set.
The /DIAGNOSE command alleviates this situation by allowing you to take a snapshot of IMS
system resources at any time without affecting system availability. The /DIAGNOSE command
enables users to retrieve diagnostic information for system resources such as IMS control
blocks, user-defined nodes, or user-defined transactions at any time without taking a console
dump.
The /DIAGNOSE command SNAP function takes a current snapshot of system resources and
displays the response into the issuing LTERM. Optionally, the resource information can be
sent to either an online log data set (OLDS) or trace data sets as type X'6701' log records
The /DIAGNOSE command SNAP function captures information for the following resources:
A specific IMS control block.
A user-defined resource.
Primary control blocks for a dependent region.
Any area of storage within the region address space.
Prolog information for an IMS load module.
A user-defined shared queues structure.
The /DIAGNOSE command SNAP function takes a current snapshot of system resources at any
time without negatively impacting the IMS system. The SNAP function of the /DIAGNOSE
command captures storage information and shows information about the issuing LTERM.
Optionally, the resource information can be sent to either an OLDS or trace data sets as type
X'6701' log records.
The /DIAGNOSE command is a standard type-1 command. It can be issued from an IMS
terminal, a console WTOR, APPC and OTMA clients, an AOI program, MCS/EMCS consoles,
and any OM command clients including SPOC.
The /DIAGNOSE command SNAP function captures information for the following areas:
A specific IMS control block. The /DIAGNOSE SNAP BLOCK(CSCD) command captures
storage information for the APPC/OTMA SMQ SCD Extension control block.
A user-defined database, communication line, logical link, node, program, transaction,
logical terminal (LTERM), or USER.
Primary control blocks for a dependent region.
Any area of storage within the control region address space (by specifying the address of
that storage area).
Prolog information for an IMS load module. The /DIAGNOSE SNAP MODULE(modname)
command identifies the entry point address and captures prolog information for the
specified IMS module. The prolog information contains the current maintenance level for a
module on your system, which can help you to determine if any maintenance is missing.
A user-defined shared queues structure. The /DIAGNOSE SNAP STRUCTURE(structurename)
command captures storage information for the DFSSQS control block storage for the
specified shared queues structure.
The SNAP function of the command captures storage, both addresses and raw data, for the
requested IMS control blocks and resources. The information in the blocks is copied to a copy
storage area to avoid holding enqueues, locks, latches, and others. The environment is
further protected by a separate ESTAE routine that protects the copy process and also
prevents an IMS failure.
You can also use the /DIAGNOSE command SNAP function to perform these tasks:
Show filtered resource information captured by the SNAP function.
Specify a limit for the number of lines to display.
Specify the control blocks to be captured by the SNAP function
Table 2-1 shows the valid environments (DB/DC, DBCTL, and DCCTL) for the /DIAGNOSE command and its keywords. The /DIAGNOSE command and the ADDRESS, BLOCK, JOBNAME, MODULE, OPTION, PGM, REGION, SET, SHOW, and SNAP keywords are valid in all three environments. The AOSLOG, AREA, DB, LINE, LINK, LTERM, NODE, STRUCTURE, TRAN, and USER keywords are valid in two of the three environments; the restrictions for several of these keywords are described in the text that follows.
The /DIAGNOSE command SNAP function has the following new options:
A DISPLAY option to route output back to the issuing LTERM
A LIMIT option to restrict the number of lines of output going to LTERM
A SHOW parameter to control the type and amount of output produced
The /DIAGNOSE SNAP AREA() command is available in a DB/DC or DBCTL environment where
Fast Path is defined. The DEDB extended area control block (EMAC) is available only in an
RSR tracker environment.
The /DIAGNOSE SNAP LINE() command is available in a DB/DC or DCCTL environment. If the
/DIAGNOSE SNAP LINE() command is issued in a DBCTL environment, a DFS110I error
message is issued in response. The extended communication name table (ECNT) is available
only in an IMS system where Fast Path is defined.
The /DIAGNOSE SNAP LINK() command is available in a DB/DC or DCCTL environment. If the
/DIAGNOSE SNAP LINK() command is issued in a DBCTL environment, a DFS110I error
message is issued in response.
The dependent region might also be identified using the SNAP JOBNAME(jobname) format of the
SNAP REGION() resource type. The jobname parameter specified must be alphanumeric, no
longer than eight characters, and identify a currently active dependent region. Multiple
jobname parameters can be specified with each parameter separated by a comma or a blank.
The REGION(region#) and JOBNAME(jobname) formats can both be specified on the same
command.
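For example, the following command sketches (the job name MPP01 is a placeholder) snap the APPC/OTMA SMQ SCD extension control block and the primary control blocks of a dependent region:
/DIAGNOSE SNAP BLOCK(CSCD)
/DIAGNOSE SNAP REGION(1) JOBNAME(MPP01)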
A command such as /DIAGNOSE SNAP LINE(3) can write its DIAG records for the resource LINE(3) to the OLDS as type X'6701' log records.
Example 2-2 shows the sample JCL to extract these DIAG records with DFSERA10 and print
them with exit DFSERA30.
Example 2-2 JCL for printing DIAG records with exit DFSERA30
//P010 EXEC PGM=DFSERA10
//STEPLIB DD DISP=SHR,DSN=IMS12Q.SDFSRESL
//SYSUT1 DD DISP=SHR,DSN=IMS12Q.IMS12A.OLP05
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
CONTROL CNTL
OPTION PRINT O=5,V=6701,L=2,C=M,E=DFSERA30
OPTION PRINT O=9,V=DIAG,L=4,T=C,C=E,E=DFSERA30
END
/*
IMS 12 improves the functionality of IMS logging by adding Extended Format support and by
providing log buffer allocation above the 31-bit boundary, among other improvements. The
benefit of this enhancement is to increase logging rates while releasing storage in the
extended common service area (ECSA).
An extended format data set is a type of SMS-managed physical sequential data set that,
externally, has the same characteristics as other sequential data sets. However, records are not
necessarily stored in the same format or order as they appear. Sequential data striping is
implemented through the use of extended format data sets.
Data striping is the technique of segmenting logically sequential data, such as a file, so that
accesses to consecutive segments are made to different physical storage devices.
IMS uses a set of OLDSs in cyclical processing, which enables IMS to continue logging after
an individual OLDS is filled. Also, if an I/O error occurs while writing to an OLDS, IMS can
continue logging by isolating the defective OLDS and switching to another one. After an
OLDS is used, it is available for archiving to a SLDS on DASD or tape by the IMS Log Archive
utility.
The utility can be executed automatically through an IMS startup parameter (ARC=). When
IMS is close to filling the last available OLDS, it warns you so that you can ensure that
archiving completes for used OLDSs or add new OLDSs to the system. You can also
manually archive the OLDSs to SLDSs using the IMS Log Archive utility. An SLDS can reside
on DASD or tape. After an OLDS is archived, it can be reused for new log data. You use
SLDSs as input to the database recovery process.
IMS uses the OLDS only in the online environment. The OLDS contains all the log records
required for restart, recovery, and both batch and dynamic backout. The OLDS holds the log
records until IMS archives them to the SLDS. Define all of the OLDSs in the IMS procedure
library (IMS.PROCLIB) using the OLDSDEF statement. The OLDS must be preallocated on a
direct-access device. You can also dynamically allocate additional OLDSs while IMS is
running by using the /START OLDS command.
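For example, assuming that an OLDS pair with identifier 05 has been allocated and defined, a command such as the following brings it into use:
/START OLDS 05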
IMS uses the Basic Sequential Access Method (BSAM) to write log records to the OLDS, and
the Overflow Sequential Access Method (OSAM) to read the OLDS when IMS performs
dynamic backout. Although referred to as a data set, the OLDS is actually made up of multiple
data sets that wrap around, one to the other. You must allocate at least three, but no more
than 100 data sets for the OLDS.
You can specify that the OLDS use dual logging, which is the duplication of information for two
logs. When you use dual logging, an I/O error on the primary or secondary data set causes
IMS to close the non-error OLDS and mark the error OLDS in the recovery control (RECON)
data set as having an I/O error and a close error. IMS then continues logging with the next
available OLDS pair.
IMS uses as many OLDSs as you allocate. IMS issues a message each time it changes the
current OLDS. This message identifies the OLDS being closed and the next OLDS to be
used.
IMS continually reuses WADS space after writing the appropriate log data to the OLDS. The
log write-ahead function ensures that all log records are on the log before IMS writes changes
to a database. IMS updates a database in any of the following situations:
When IMS needs to reuse the database buffer (if this is before commit)
During commit
During VSAM background write
If IMS fails, you use the log data in the WADS to complete the content of the OLDS and then
close the OLDS as part of an IMS emergency restart or as an option of the Log Recovery
utility. If you close the OLDS during emergency restart, you must include the WADS in use at
the time of the failure.
You can change any of the following specifications for the WADS during an IMS restart:
Number of WADSs
Sequence of WADSs
WADS names
Use of single or dual WADSs
The DFSVSMxx PROCLIB member contains the log data set definition information, and it
specifies the allocation of OLDS and WADS and the number of buffers to be used for the
OLDS. It also specifies the mode of operation of the OLDS (single or dual).
Data striping
Usually, sequential access processing does not allow for any type of parallelism for I/O
operations. This means that when an I/O operation is executed for an extent in a volume, no
other I/O activity from the same task or same data set is scheduled. In a situation where I/O is
the major bottleneck, and there are available resources in the channel subsystem and
controllers, it is a waste of these resources. Data striping addresses this sequential access
performance problem by adding two modifications to the traditional data organization:
The records are organized in stripes along the volumes.
Parallel I/O operations are scheduled to sequential stripes in different volumes.
Sequential data striping can reduce the processing time required for long-running batch jobs
that process large, physical sequential data sets. Smaller sequential data sets can also
benefit because of the improved buffer management of DFSMS for QSAM and BSAM access
methods for striped extended-format sequential data sets. By striping a data set, the access
method can spread simultaneous I/Os across multiple devices.
With this format, a single application request for records in multiple tracks and records can be
satisfied by concurrent I/O requests to multiple volumes. The result is improved performance
by achieving data transfer into the application at a rate greater than any single I/O path.
Data striping distributes data for one data set across multiple volumes.
A data set can have a maximum of 59 stripes.
Each stripe must reside on one volume and cannot be extended to another volume.
I/O can be done in parallel with striped data for better performance.
To specify extended format, the data set type (DSNTYPE) in the data class needs to be set to
“Extended”, and the sustained data rate (SDR) in the storage class needs to be set to the
number of stripes that are needed.
Striping
An OLDS can be defined as a DFSMS extended-format, striped data set. Set the data set type of
the OLDS data class to EXT to define it as an extended-format data set, and set the storage
class SDR to a value that results in multiple stripes. In JCL allocation, the data class is
specified with the DATACLAS parameter.
Example 2-3 shows the sample JCL to allocate OLDS as a striped data set.
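As an illustration only, an allocation of this kind, assuming a data class DCEXT that requests extended format and a storage class SCSTRIPE whose sustained data rate yields multiple stripes (both class names and the data set name are placeholders), might look like the following JCL:
//DEFOLDS  EXEC PGM=IEFBR14
//* Allocate one OLDS as an SMS-managed, extended-format (striped) data set
//DFSOLP06 DD DSN=IMS12Q.IMS12A.OLP06,DISP=(NEW,CATLG),
//            DATACLAS=DCEXT,STORCLAS=SCSTRIPE,
//            SPACE=(CYL,(200))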
Example 2-4 shows a sample of the DFSVSMxx using the new BUFSTOR parameter.
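As a rough sketch only, assuming that BUFSTOR is coded on the OLDSDEF statement and that a value of 64 requests log buffers that are backed above the 2 GB bar, the member might contain a statement such as:
OLDSDEF OLDS=(00,01,02,03,04),BUFNO=50,MODE=DUAL,BUFSTOR=64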
WADS management
IMS 12 changes the way that WADS writes are done; it no longer uses the concept of track
groups. This changes the calculation for the space required for the WADS and also changes
the data written by log ahead requests.
In IMS 12, the WADS should be sized to provide enough space for the data in the OLDS
buffers that have not yet been written to disk at any one time, plus one track. In previous
versions, the WADS was sized by using the WADS track group concept.
A track group was the OLDS block size divided by the WADS segment size, plus 1. A WADS
segment size was 2 K for OLDS buffers below the bar in real storage and 4 K for OLDS
buffers above the bar in real storage.
In previous versions of IMS, the WADS was written in segments from the OLDS buffers.
Successive writes were to different tracks. The scheme is much simpler in IMS 12. Each
WADS write is to the next block in the data set. The data written to the WADS includes the
data that was not previously written up to the last record in the buffer.
Using striped data sets for OLDS and moving log buffers above the 2 GB boundary increases
the log rate and frees ECSA storage for other users, respectively. Striping can be beneficial
when your IMS system is experiencing a significant number of Wait-for-Writes or
Wait-for-Buffers. For Geographically Dispersed Parallel Sysplex (GDPS®) users, keep the
number of stripes low.
IMS 12 adds three separate trace tables for shared queues. In addition, you can move Base
Primitive Environment (BPE) tracing to an external data set. The benefit of these separate
traces is to provide the IMS support team with valuable diagnostic data for client problem
resolution, which can help to resolve shared queues-related problems more quickly.
The CQS handles elements on the shared queue structures for its clients. CQS is a general
purpose programming facility that provides clients such as IMS with an API for accessing the
shared queues.
All IMS subsystems register and communicate with the CQS to receive messages from or
send messages to the shared queues. CQS notifies registered clients when there is work for
them on the shared queue structures. In an IMS shared-queues environment, the IMS online
subsystem identifies itself to the CQS subsystem during IMS initialization or restart.
The BPE configuration parameter member of the IMS PROCLIB data set can be used
whenever you are using an IMS address space that uses BPE, such as CQS.
Several different resources can be stored in the resource structure. It is important to create a
resource structure of sufficient size to accommodate these resources. An IMS system with a
defined resource structure stores transaction names in the resource structure. Additional
resources can be stored in the resource structure, depending on the IMS functions that you
enable.
Each resource is stored on the resource structure using a 128-byte entry, and either zero,
one, or more 512-byte data elements. The 128-byte entry contains 64 bytes for z/OS control
information, and 64 bytes for user data in an adjunct area. Both IMS and CQS use a portion of
the 512-byte data element as a prefix. The remaining bytes are available for client data.
Use the entry-to-element ratio when allocating the resource structure to reserve portions for
entries and data elements. The more accurate the ratio is for actual resources stored on the
resource structure, the less storage is wasted. The number of entries is equal to the number
of resources. The number of data elements depends on the number of resources for each
resource type.
CQS produces SDUMPs for internal errors. The CQS dumps are in the SYS1.DUMP data
sets. CQS can also produce LOGREC data set entries for errors.
CQS-related problems
For a CQS environment, related problems might include:
IMS WAIT problems
CQS WAIT or HANG problems
CQS checkpoint problems
CQS restart problems
CQS structure rebuild problems
Trace record eye-catchers in a formatted dump provide clues about which functions resulted
in errors. You might be able to correct environmental problems immediately. Refer problems
that appear to be internal IBM errors to IBM with appropriate documentation, such as system
console logs and dumps.
Each CQS trace record is 32 bytes long, except records in the SEVT and OFLW tables.
Those tables use an expanded 64-byte format. In a standard 32-byte trace entry, the first byte
is the trace code and the second byte is the trace subcode. The expanded 64-byte format
used by trace records in the SEVT and OFLW tables contains 16 words of trace data.
Use the BPE configuration parameter member of the IMS PROCLIB data set to define the
BPE execution environment, as explained here:
Whether trace entries are written to an external data set
The external data set to which trace entries are written
The language used for BPE and IMS component messages
The trace level settings for BPE and IMS component internal trace tables
The name of a BPE exit list member of the IMS PROCLIB data set where configuration
information for IMS component user exit routines is stored
The time interval between calls to the BPE statistics exit routines
Specify the member name by coding BPECFG=member_name on the EXEC PARM= statement in
the address space startup JCL (Example 2-5).
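Assuming that CQS is the name of the CQS startup procedure and that BPECFGCQ is the BPE configuration member (both names are placeholders), the EXEC statement might be coded as in this sketch:
//CQS      EXEC CQS,PARM='BPECFG=BPECFGCQ'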
These two new trace event tables are automatically generated by IMS to contain structure
event trace entries and structure overflow trace entries. These new tables retain critical trace
entries for a longer period of time, which improves CQS serviceability.
All existing trace entries that are not related to client activity are moved to one of the new
trace tables and are expanded to take advantage of the larger 64-byte trace entry length. The
existing CQS structure trace table (STR) now contains only client activity trace entries.
The following IMS components are updated to support the new trace event table:
The BPE configuration PROCLIB member
The CQS BPE EXTERNAL TRACE FORMATTING MENU panel of the IMS Dump
Formatter
The BPE DISPLAY TRACETABLE and UPDATE TRACETABLE commands
The BPE DISPLAY TRACETABLE and UPDATE TRACETABLE commands are enhanced to
display information about the trace tables for the Repository Server address space and the
new Repository Server DIAG trace table.
TIP: Request response processing for authorized CQS clients in IMS 12 is executed under
enclave service request blocks (SRBs). In IMS 12 and subsequent releases, IMS requests
z/OS to process such work on an available IBM System z Integrated Information
Processor (zIIP).
Example 2-6 shows the BPE configuration with new trace entries.
You can dynamically change the external trace data set specification in the BPE configuration
PROCLIB member and refresh the member while an address space is running. For example,
if you are running without an external trace data set, you can edit your BPE PROCLIB
member, add an external trace data set specification, and start using external trace without
having to restart the address space.
You can dynamically modify BPE tracing by using the z/OS MODIFY command with the UPDATE
TRACETABLE command as shown in Example 2-7.
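For example, assuming a CQS job name of CQS1A, the level of the new SEVT trace table could be raised with a sketch such as:
F CQS1A,UPDATE TRACETABLE NAME(SEVT) LEVEL(HIGH)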
The SEVT includes activity related to CQS structure events. CQS defines one SEVT trace
table for each structure pair defined to CQS. Trace entries in this table are 64 bytes long. The
default number of pages for this table is 12.
The client (STR) event trace table includes CQS client activity events. CQS defines one STR
trace table for each structure pair defined to CQS. The default number of pages for this table
is 8.
You cannot set the level for the ERR trace table. BPE forces the level to HIGH to ensure
that error diagnostics are captured.
IBM raised the limit on volume sizes by introducing a new DASD family with more than 65,520
cylinders of capacity. The amount of addressable DASD storage per volume is increased
beyond 65,520 cylinders by changing how tracks on extended count key data (ECKD™)
volumes are addressed.
Exploiting the capabilities of a new 3390 device model A on IBM System Storage® DS8000
storage subsystems, EAV is designed to provide a new architectural limit of hundreds of TB
per volume, while keeping the fully compatible access to data residing on cylinders below
65,520.
z/OS V1.10 provides support for SMS-managed and non-SMS-managed VSAM data sets
(entry-sequenced data set (ESDS), key-sequenced data set (KSDS), relative record data set
(RRDS), and linear data set (LDS)) at any location on an EAV. z/OS V1.12 adds support for
non-VSAM data sets.
A track address is a 32-bit number that identifies each track within a volume.
The combination of the 16 low-order bits and the 12 high-order bits of the cylinder number
represents a 28-bit cylinder number.
For an EAV, the EAS is cylinders whose addresses are equal to or greater than 65,536. The
ccc portion is non-zero for the cylinders of EAS. These cylinder addresses are represented by
28-bit cylinder numbers. For compatibility with older programs, the ccc portion is hexadecimal
000 for tracks in cylinders whose addresses are below 65,536. These cylinder addresses are
represented by 16-bit cylinder numbers. This is the base addressing space on an EAV.
The cylinder-managed space is space on the volume that is managed only in multicylinder
units (a multi-cylinder unit is a fixed unit of disk space that is larger than a cylinder).
Cylinder-managed space begins at cylinder address 65,520. Each data set occupies an
integral multiple of multicylinder units. Space requests targeted for the cylinder-managed
space are rounded up to the next multicylinder unit. The cylinder-managed space only exists
on EAVs.
The track-managed space is space on a volume that is managed in tracks and cylinders.
Track-managed space ends at cylinder address 65,519. Each data set occupies an integral
multiple of tracks. Track-managed space exists on all volumes.
Additionally, the user must specify an EAV on the appropriate volume specification using:
AMS DEFINE CLUSTER (VOL (xxxxxx))
ALLOCATE
JCL (VOL=SER=xxxxxx)
Dynamic allocation
Data class
The following sections describe how some IMS sequential data sets can benefit from EAV
support.
Define the initial set of OLDSs to be acquired by restart initialization in the OLDSDEF control
statement in the DFSVSMxx member of IMS.PROCLIB. You can dynamically allocate this set
of OLDSs, or specify them through DD statements.
DASD space for each OLDS must be contiguous, and secondary extents are not permitted.
Pairs of OLDSs (primary and secondary) must have the same space allocation.
You can enable your WADS to use EAVs that are available in z/OS V1.12 or later by
specifying an EAV volume on the VOLSER parameter of the DFSWADSnn DD statement
when you allocate the data set. In addition, you can specify the attribute EATTR to indicate
whether the data set supports extended attributes.
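A minimal allocation sketch, in which the data set name, volume serial, and space values are placeholders, might look like the following:
//DEFWADS  EXEC PGM=IEFBR14
//* Allocate a WADS on an EAV volume with extended attributes allowed
//DFSWADS0 DD DSN=IMS12Q.IMS12A.WADS0,DISP=(NEW,CATLG),
//            UNIT=SYSDA,VOL=SER=EAV001,EATTR=OPT,
//            SPACE=(CYL,(30))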
A minimum of five tracks must be allocated to the restart data set because it contains the
checkpoint-ID table and other control information.
Allocate message queue data set space in terms of contiguous cylinders for most efficient
operation. Secondary allocation is ignored unless the secondary space has been
preallocated (that is, multiple volume data sets with preallocated space on both volumes).
You can allocate up to 10 data sets for the long message queue and 10 data sets for the short
message queue. Each data set requires an additional DD statement. Ensure that all data sets
of a given message queue type are the same size. If the data sets have different sizes, the
smallest size is used for all.
Important: Even if you are not going to use the non-VSAM support, IMS 12 requires that
you install DFSMS APAR/PTF OA33409/UA55338 on z/OS V1R11.
Data sets with EATTR=OPT specified cannot be shared with an IMS Version 10 or IMS
Version 11 system because those IMS versions do not support extended attributes.
In IMS 12, the storage for certain database pools is now obtained in 31-bit virtual storage
backed by 64-bit real storage. Also, IMS 12 has changed the storage requirements for OSAM
data extent blocks (DEBs).
As an immediate benefit, large database pools can now be fixed. Previously, these pools were
unable to be fixed due to a shortage of 31-bit real storage.
For an IMS system, each application program requires its PSB, the PSBW, the intent list, and
the DMBs to be scheduled. These control blocks are loaded into some of the buffer pools
existing in the IMS system.
The PSB pool holds all PSBs for scheduled application programs. A specific scheduled PSB
remains in the pool until space is needed to load another PSB. Then, if all space is used, the
least-referenced inactive PSB is freed.
PSBs that are made resident and are not parallel scheduled are exceptions to this rule. These
PSBs do not take from the pool allocation and are usually designated for highly referenced
(usually, preloaded) programs. If the DL/I address space option is used, two resident PSB
pools exist, one in the z/OS common area and one in the DL/I address space.
A portion of the PSB work pool is required by each active PSB, but the total pool size does not
need to be larger than the maximum amount that is in use at one time. If you use the technique
of starting with a large pool and reducing it to some value greater than the maximum amount
used, be aware that scheduling stops when any pool space failure occurs.
The DMB pool holds the DMB that describes and controls a physical database. We
recommend that you make the DMBs for all databases resident. If all the DMBs are made
resident, the inactive DMBs can be allowed to page out without significantly affecting the
active DMBs, but paging them back into storage can suspend processing in the control region
or the DL/I secondary address space (SAS). Opening and closing databases suspends either
the IMS control region or the DL/I SAS, depending on the specification of the LSO= option,
and increases DL/I call elapsed time.
Allocate sufficient buffers in the IMS buffer pools to prevent I/O. If the IMS pools are subject to
paging, you can consider page fixing the significant buffers, if necessary.
If the DL/I address space option is used, two PSB pools exist: DLMP in the z/OS common
area, and DPSB in the DL/I address space.
DMB This specifies the size of the DMB control block pool. The default is
10,000 bytes. The maximum allowable specification is 9,999,000
bytes. The minimum allowable specification is 8 bytes.
PSB This specifies the size of the PSB control block pool if the DL/I address
space option is not used. The SASPSB parameter specifies the size of
the PSB control block pool when the DL/I address space option is
used. The default is 10,000 bytes, with a maximum of 9,999,000 bytes.
The minimum allowable specification is 8 bytes.
PSBW This specifies the size of the PSB work area pool. The default is
10,000 bytes. The maximum allowable specification is 9,999,000
bytes. The minimum allowable specification is 8 bytes.
SASPSB This is used only if the DL/I separate address space option is selected.
If you are not using this option, the size of the single PSB control block
pool is specified with the PSB parameter. With the DL/I address space
option, two PSB control block pools exist. Size1 is the size of the pool
in the z/OS common storage area (CSA). Size2 is the size of the pool
in DL/I local storage. The maximum allowable for either is 9,999,000
bytes.
Using EXEC parameters, you can override buffer pool definitions that were initially
set during IMS system definition. Table 2-10 shows the cross-reference between BUFPOOLS
macro definitions and EXEC parameters.
BUFPOOLS macro definition   EXEC parameter
DBWP
DMB                         DMB
PSB                         PSB
PSBW                        PSBW
SASPSB(size1)               CSAPSB
SASPSB(size2)               DLIPSB
Use the DFSFIXnn member of the IMS PROCLIB data set to specify that portions of the
control region (for example, certain control blocks, buffer pools, loaded modules, and part of
the IMS nucleus) are to be fixed in address space during initialization.
Figure 2-21 shows the BLOCKS specification syntax. For more information, see IMS Version
12 System Definition, GC19-3021.
POOLS The dynamic area acquired by IMS during initialization and used for
various buffer pools.
DBWP DMB work pool.
DLDP DMB pool.
DLMP With the DL/I address space option, that portion of the PSB pool in the
z/OS common area.
DPSB With the DL/I address space option, the portion of the PSB pool in DL/I
local storage. Specification is ignored if the DL/I address space option
is not in effect.
GLBP The WKPL, DLMP, PSBW, DLDP, and DBWP storage pools are
included. If the DL/I address space is used, DLDP and DBWP are not
included.
The 31-bit virtual storage for the following database pools is now backed by 64-bit real
storage:
DBWP DB work pool
DLDP DMB pool
DLMP PSB CSA pool
DPSB DLI PSB pool
PSBW PSB work pool
Large database pools that were unable to be page fixed previously due to 31-bit real storage
constraints, might now be able to be fixed because the fixed pages are backed by 64-bit real
storage.
The Offline Dump Formatter utility is invoked as a verb exit from the Interactive Problem
Control System (IPCS).
The Offline Dump Formatter utility modules are included in the dumped storage to ensure that
the modules used for formatting the dump match the level of the dumped IMS control blocks.
These modules can be relocated from the dumped storage, or a fresh copy can be loaded
from the program library.
Using the IMS Dump Formatter gives you a menu-driven way to run the Offline Dump
Formatter utility without complicated editing of the DFSFRMAT file. IPCS uses menus to run
the IMS Dump Formatter. With these menus, you can specify the information to be contained
in the dump. The IMS Dump Formatter calls the Offline Dump Formatter utility to perform the
required formatting tasks. The output is returned in a format that you can read on the
terminal.
IMS 12 adds support to IMS Dump Formatter for the following areas:
The Repository Server address space by using the Other IMS Components (6) section of
the Dump Formatter
The Repository Client address space by using the Other IMS-Related Products (7) section
of the Dump Formatter
The OTMA C/I by using the Other IMS Components (6) section of the Dump Formatter
IMS 12 adds support for the end-of-task (EOT) process similar to that employed by the EOM
process. This function traces the step-by-step flow through the EOT process by updating a
nibble trace double-word in the IDT (IDTEOTTR) of the region with a unique marker for each
successful step processed or for each decision point reached and evaluated.
The EOM/EOT trace information captured by the service is recorded, using WTO, in message
DFS0798I, which is issued at the end of the End of Memory call (no action is required):
DFS0798I eee PROCESSING COMPLETE FOR jjjjjjjj RC=0000 RSN=00000000 ASCB=aaaaaaaa
ASID=dddd TRC=tttttttttttttttt:zzzzzzzz
This message indicates that an EOM event for an IMS dependent region has been detected
and processed. In the message text, note the following explanation:
eee SSI call type: EOM (end of memory) or EOT (end of task).
jjjjjjjj Dependent region job name.
aaaaaaaa Dependent region ASCB (address space control block).
dddd Dependent region ASID (address space identifier).
tttttttttttttttt Trace string (IDTEOMTR or IDTEOTTR).
zzzzzzzz Address of the IDT entry.
System action The results of EOM processing are displayed in the message. The
region has terminated, clean up processing was successful, and the
IDT and VTD entries for the region are cleared.
Response No action is required.
If something goes wrong, the EOM or EOT trace information captured by the service is
recorded, using WTO, in message DFS0798W, which is issued at the end of the EOM call.
Contact IBM Software Support for help.
DFS0798W eee PROCESSING COMPLETE FOR jjjjjjjj RC=rrrr RSN=ssssssss ASCB=aaaaaaaa
ASID=dddd TRC=tttttttttttttttt:zzzzzzzz
This message indicates that an EOM event for an IMS-dependent region has been detected
and processed. In the message text, note the following explanation:
eee SSI call type: EOM or EOT.
rrrr Return code.
ssssssss Reason code.
jjjjjjjj Dependent region job name.
aaaaaaaa Dependent region ASCB (address space control block).
dddd Dependent region ASID (address space identifier).
tttttttttttttttt Trace string (IDTEOMTR or IDTEOTTR).
zzzzzzzz Address of the IDT entry.
System action The results of EOM processing are displayed in the message.
IMS 12 has changed the storage requirements for OSAM DEBs. IMS 12 eliminates one of the
DEBs for each OSAM data set. Because each DEB requires approximately 300 bytes, this
reduction might be significant for users with many OSAM database data sets.
IMS 12 also starts to remove as many aliases as possible from IMS load modules.
IMS 12 has both functional and performance enhancements in the database support area.
The performance enhancements benefit from the z/Architecture (64 real and virtual bit
support, new instructions), and also offer new implementations that are transparent for
existing applications. Functional enhancements introduce additional exploitation capabilities
of the existing databases.
For reference documentation that relates to this chapter, see the following publications:
IMS Version 12 Database Administration, SC19-3013
IMS Version 12 Exit Routines, SC19-3016
IMS Version 12 System Utilities, SC19-3023
The 31-bit virtual storage for the following database pools is now backed by 64-bit real
storage:
DBWP: DB work pool
DLDP: DMB pool
DLMP: PSB CSA pool
DPSB: DLI PSB pool
PSBW: PSB work pool
Large database pools that were unable to be page fixed before, due to 31-bit real storage
constraints, might be able to be fixed because the fixed pages are backed by 64-bit real
storage.
You can use the DFSFIXnn member of the IMS PROCLIB data set to specify the buffer pools
to be fixed in address space during initialization.
In IMS 11 and earlier releases, the Virtual Storage Access Method (VSAM) and overflow
sequential access method (OSAM) buffer pool definitions were only stored in the DFSVSMxx
proclib member. This member is loaded only one time during IMS initialization. No facility is
available to change the buffer pool definitions without first changing the DFSVSMxx member
in proclib and then restarting IMS.
IMS 12 offers a new feature for dynamically adding, updating, and deleting VSAM and OSAM
buffer pools. The initial VSAM and OSAM buffer pool specifications still exist in the
DFSVSMxx proclib member and they are loaded during normal restart. However, new VSAM
and OSAM buffer pools can be added and existing buffer pools can be changed using
specifications in one or more DFSDFxxx proclib members with the type-2 UPDATE POOL
command.
Example 3-1 shows the initial buffer pool specifications for VSAM.
OSAM buffers can be treated the same way. Example 3-3 shows the initial pool specifications
for OSAM.
The specifications in Example 3-3 can be dynamically changed using buffer pool
specifications in the DFSDFxxx proclib member section (Example 3-4).
For OSAM, the specifications in DFSDFxxx are also located behind a <SECTION=xxxxxxx>
indication. The information is similar to that in DFSVSMxx.
The VSAM and OSAM buffer pool specifications can be placed into different DFSDFxxx
members in proclib, because with the UPDATE POOL command, the user can specify the
MEMBER keyword identifying the suffix of the DFSDFxxx proclib member in the proclib data
set.
Alternatively, the user can specify multiple VSAM and OSAM sections within one or more
DFSDFxxx members.
The update of the pool specifications is done with the type-2 UPDATE POOL command
identifying the statement sections. The UPDATE POOL command can be issued individually for
specific VSAM and OSAM sections, or the command can be issued for both VSAM and
OSAM sections in the same command. The UPDATE POOL command can also reference a
specific DFSDFxxx proclib member in the proclib data set using the MEMBER(yyy) keyword.
In this case, yyy is the suffix used in DFSDFyyy. The default for yyy is 000.
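A minimal sketch of the command, assuming that DBAS is the pool type value and that the DFSDF001 member contains sections named VSAM01 and OSAM01 (both assumptions), might look like this:
UPDATE POOL TYPE(DBAS) SECTION(VSAM01,OSAM01) MEMBER(001)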
It is possible to delete a VSAM buffer pool by specifying a POOLID in a VSAM section with the
VSRBF statement for the size of the buffer and a “0” for the number of buffers. The UPDATE
POOL command is needed to complete the deletion of the VSAM buffer pool.
The database data set association with a subpool is established when the database data set
is opened. If a database data set using a subpool is to be deleted, the UPDATE POOL command
must wait until the access to the subpool is completed before it can delete the subpool.
The QUERY POOL command (Example 3-6) can be used to query information about the new
and changed VSAM and OSAM buffer pools. The user can specifically limit the output to:
OSAM or VSAM buffer pools
Buffers of a particular size
Specific pool IDs
By using the options for SHOW, you can show only statistical information that is similar to the
current /DIS POOL DBAS command. Alternatively, you can show the proclib member
information used to add or update a buffer pool specification. It is also possible to show both
statistical and member information using the ALL parameter. Example 3-7 shows a Time
Sharing Option (TSO) single point of control (SPOC) example. The command can also be
issued by an OM API command, in which case the output is XML tagged.
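For example, again assuming the DBAS pool type value, both the statistics and the member information for all database buffer pools might be requested with:
QUERY POOL TYPE(DBAS) SHOW(ALL)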
The buffer pool statistics are handled differently for VSAM and OSAM following an UPDATE
POOL command. For VSAM, the buffer pool statistics are reset and the old statistics are not
carried over. Use the QUERY POOL command for the VSAM buffer pool statistics before issuing
the UPDATE POOL command. The OSAM statistics are carried over and are not reset with the
UPDATE POOL command.
Committed buffer pool changes are written to restart data set (RDS).
Emergency restart restores buffer pools using RDS.
Normal restart initializes buffer pools from DFSVSMxx.
The UPDATE POOL command logs information in the x’22’ log record for informational purposes
only. The UPDATE POOL command itself is non-recoverable.
For HALDB, no dynamic allocation members are used; instead, all HALDBs must be
registered in the recovery controls (RECONs). The dynamic allocation is based on
information in the RECON and the partition ID that is stored in the RECON and in the “A” data
set of the partition for verification. This partition ID is defined by the creation pattern of the
HALDBs and its value has no sequence meaning.
DDNAMES are defined by the partition name appended with the functional letters as shown
in Figure 3-2. Data set names contain a prefix, chosen by the user, followed again by the
functional letter and a partition ID. All allocations are dynamic, except for a few areas where
the DDNAMES must be used.
HALDB databases can be reorganized online, and have a HALDB self-healing pointer
process.
In IMS 12, the RELOLROWNER=Y|N parameter in the <DATABASE> section of the DFSDFxxx
PROCLIB member indicates whether ownership of an in-progress OLR is released when the
owning IMS system terminates, so that the OLR can be restarted on a non-owning IMS system
(Example 3-8).
RELOLROWNER=N is the default and does not release ownership when the IMS system
terminates. The RELOLROWNER= value can be overridden by specifying one of the following
values on the INIT OLREORG, /INIT OLREORG, UPD OLREORG, or /UPD OLREORG command:
OPTION(REL)
OPTION(NOREL)
When RELOLROWNER=Y is specified, OLR is not automatically restarted unless it was overridden
with OPTION(NOREL) on the command. If the OLR is not automatically restarted by IMS restart,
it must be restarted with the INIT OLREORG or /INIT OLREORG command.
An execution of HD Unload can be restricted to a range of keys when the MIGRATE=YES control
statement is included for migration to HALDB. The low (FROMKEY) and high (TOKEY) values
must be specified in hexadecimal. These values can be obtained from the output of a LIST.DB
DBRC command for the partition.
Before this enhancement, the migration unload of a logically related database often took a
long time. When a logical child segment is unloaded, its logical parent must be read in most
cases. This is a random read. These random reads account for almost all of the elapsed time
of the unload.
The logically related database must be read unless the unload is for the physical logical child
of a bidirectional logical relationship with virtual pairing when the PHYSICAL option was
specified in the database description (DBD) to include the concatenated key in the physical
logical child. The logically related database is always read with the following logical
relationships:
Unidirectional logical relationships
Physically paired bidirectional relationships
Virtually paired bidirectional logical relationships when reading the virtual logical child
Virtually paired bidirectional logical relationships when reading the physical logical child
and the VIRTUAL option has been specified in the DBD (the concatenated key of the
logical parent is not stored in the physical logical child).
(Figure: four parallel HD Unload jobs, each unloading a key range (for example, keys A-E through keys Q-Z) of the non-HALDB database, feed four HD Reload jobs that load the corresponding partitions of the new HALDB database.)
The four HD Unload jobs process different key ranges in the non-HALDB database. They are
run in parallel. Their individual outputs are fed to four different HD Reload jobs which load the
four partitions in the new HALDB database. The reload jobs are also run in parallel.
The new capability cannot be used for unloading HALDB databases. Also, MIGRATX=YES
cannot be used with the KEYRANGE statement. MIGRATX=YES is used only for databases with
secondary indexes. It cannot be used with the KEYRANGE statement because the key
ranges appropriate for a secondary index would not be those used for the unload of the
indexed database. If your database has secondary indexes, you can use MIGRATE=YES and
create the secondary indexes with a tool such as IBM IMS Index Builder.
The records unloaded must match the HALDB partition boundaries. This is true because only
one output data set from HD Unload can be used as input to HD Reload. Similarly, all of the
records for a partition must be unloaded with one execution of HD Unload.
The job control language (JCL) must specify DISP=SHR for the unloads to run in parallel. If the
data sets are VSAM, you must also specify share options that allow concurrent reads.
SHAREOPTIONS(3 3) can be used for this purpose.
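For VSAM database data sets, the share options can be changed with an IDCAMS ALTER job similar to this sketch (the data set name is a placeholder):
//ALTER    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER PROD.CUSTDB.DBHIDAM1 SHAREOPTIONS(3 3)
/*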
This capability was retrofitted into IMS 10 (APAR PM06635) and IMS 11 (APAR PM06639).
Before IMS Version 12, when a HALDB partition was removed from the HALDB master and
information about the partition was removed from the RECON data set, IMS retained
information about the deleted partition name and prevented the name from being reused for a
non-HALDB database.
IMS 12 now discards the residual information about the partition name so that it can be
reused for a non-HALDB database. When a HALDB master is deleted, all of its partition
names become available for reuse.
The partition reorganization number is used to ensure that secondary index and logical
relationship pointers are accurate. The reorganization number for a partition is stored in the
partition data set. It is incremented by each reorganization of the partition. The reorganization
number is also stored in the EPS of secondary index entries and logical children. If the value
in the EPS does not match the value in the partition data set, the pointer is healed by
updating it from the ILDS. The reorganization number is also stored in the partition database
record in the RECON.
When a timestamp recovery is done to a time before the last reorganization, the
reorganization number in the partition data set is returned to its previous value by the actions of
the Database Recovery utility (DFSURDB0) in previous versions of IMS. IMS 12 changes this.
The IMS 12 Database Recovery Utility takes the reorganization number from the RECONs,
increments it, and stores it in both the RECONs and the partition data set. This makes the
reorganization numbers in the partition data set and the RECON match. Previously, a
mismatch occurred until the first update job for the partition was executed. An update batch
job or online system takes the value from the RECON and writes it to the partition data set.
This is specified with new values on the EXIT= parameter of the DBD or SEGM macro of
DBDGEN (Example 3-10).
IMS 12 provides new options to log entire DEDB segments when a REPL call replaces some
of the data in a segment. Previously, only the changed data was logged in the x’5950’ log
record. The option to log the entire segment is specified for either the DEDB database or the
area.
Example 3-11 shows the new keywords for the INIT.DB, CHANGE.DB, INIT.AREA, and
CHANGE.AREA DBRC commands.
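As a sketch only, assuming that the new keyword is FULLSEG (with NOFULLSEG as its opposite) and using a placeholder DEDB name, the option might be enabled at the database level as follows:
CHANGE.DB DBD(DEDBCUST) FULLSEG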
The new option is especially useful for users of the Log Archive exit who want access to the
entire segment for REPL calls. Without this enhancement, users must invoke Asynchronous
Changed Data Capture. It writes x’99’ log records in addition to the x‘5950’ log records.
Notify messages are used to send messages between data sharing IMS systems. For
example, they are sent to synchronize the initialization of the use of unit of work (UOW) locks.
These messages are sent by one IMS system to all of its data sharing partners. All partners
must respond before the processing can continue. IMS sends a DFS3770W message if all
partners do not respond within a time limit. The DFS3770W tells the user that the timeout
situation has occurred but does not identify which IMS system has not responded.
If a DFS3770W message is issued, the user can identify the system that has not responded by
determining for which system the DFS0066I message has not been sent.
All DFS0066I messages are issued by the IMS system that sent the original notify message.
This new message makes it easier for users to identify the failing system and to resolve the
problem more quickly. You might need to cancel the IMS that has not issued this message.
In IMS 12, you can have a secondary index on a primary DEDB database to process a
segment type in a sequence other than the one defined by the segment's key. A DEDB
database can be composed of 1 (optional) sequential dependent segment (SDEP) and up to
127 direct dependent (DDEP) segments, spread over up to 15 levels.
You can have a maximum of 32 secondary indexes per segment and 255 secondary indexes
per DEDB. Secondary indexes are only supported for DDEPs, and not for the SDEP. The
secondary index databases must be “root-only” VSAM-based HISAM or SHISAM databases.
HISAM databases must be used if you are confronted with duplicate secondary index values.
A SHISAM database is composed of one KSDS data set. A HISAM database can have overflow
in an ESDS data set.
Having too many duplicate secondary index values can cause performance problems.
Techniques can be used to avoid the creation of duplicate keys by concatenating the logical
key value with generic /CK values, thereby making the real stored physical key value unique.
Creation of the secondary index pointers can be suppressed by using an exit routine or a
NULLVAL parameter; this is known as “sparse” secondary indexing.
When designing a secondary index on a database, you must deal with three segments:
Target segment This is a segment in the primary DEDB, regardless of whether it is the
root, that you try to access by using the secondary key.
Source segment This is a segment in the primary DEDB that is the source for building
the search (secondary) key. The source segment can be the target, or
it can be a dependent of the target.
Index (pointer) segment
This is the root segment of the secondary index; the starting point for
the alternative processing access.
The pointer from the secondary index to the DEDB is a symbolic pointer. That is, it is the
concatenated key of the target segment. By using a symbolic key the secondary index is not
affected by reorganizations of the DEDB.
Indices are updated when a source segment is inserted, deleted, or replaced. For the
creation of secondary indexes on an existing DEDB, however, a tool (such as IBM IMS Index
Builder) or program is required to add a secondary index to an existing database.
When accessing a segment through the secondary index, the key feedback area uses the
secondary index key for the key of the target segment (the root) and uses the dependent
segment keys when they are accessed. The concatenated key in the key feedback area is
composed of the secondary index key and the keys of the dependent segments.
Figure 3-5 on page 80 illustrates the physical structure of a database and the structure as
viewed when accessing the database through the secondary index. In this case, they are the
same because the root segment is also the target segment.
(Figure 3-5: the physical structure of the database and the structure as viewed through the secondary index; they are identical in this case because the root segment A is also the target segment.)
The key feedback area is composed of the secondary index key and the keys of the
dependent segments. The key feedback area for segment C is composed of the secondary
index key, the key of segment B and the key of segment C.
For DEDB inverted structure access, the target segment, its direct parent segments from the
target segment to the root segment, and all its child segments from the target segment are
accessible. SDEPs are not accessible through the secondary index. When the target
segment is not the root, SDEPs are dependents of the root. They have no children and
cannot be source segments. This means that they cannot be target segments.
Important: Fast Path is different from full function. With full function, you have access to all
children of the root segment, even those that are not direct parents or dependents of the
target segment. The DEDB inverted structure access is limited to a subset of segments.
Figure 3-6 on page 81 illustrates the physical structure of a database and the structure as
viewed when accessing the database through the secondary index. In this case, the target is
not the root segment. The target is segment G.
The structure as seen from the secondary index path has the target, segment G, as the root.
Segment D is the parent of G in the physical structure, so it is the first parent in the secondary
structure. Its parent is A, so it is a dependent of D. Segments H and I are the only children of
segment G in the physical structure. They are also children in the secondary structure.
The illustration shows the key feedback areas for segments A and H. The key feedback area
for A includes the secondary index key, the key of segment D, and the key of segment A. The
key feedback area for H includes the secondary index key and the key of segment H.
(Figure 3-6: the physical structure with target segment G, and the secondary index structure with G as its root, D and A above it in the inverted structure, and H and I as its children. Key feedback for A: secondary index key + key of D + key of A.)
Attention: With full function databases, the SENSEG statements must be in the
secondary structure order.
In Figure 3-7, the secondary structure order is G, D, A, H, and I. Nevertheless, the SENSEG
statement order is A, D, G, H, and I. This matches the physical structure order for these
segments.
Figure 3-7 SENSEG sequence different from full function, but not processing sequence
When accessing a database through the secondary index, the SENSEG statements in PSB
must be in the physical structure order of the DEDB.
This is different from the requirement for full function databases, where the SENSEG statements must be in the secondary structure order.
Attention: Even though the SENSEG statements in the PSB are in physical structure
order, the segments are accessed in the secondary structure order when accessing the
database through the secondary index. For example, a program using unqualified GN calls
accesses segments in the order G, D, A, H, and I, which does not match the order of the
SENSEG statements of A, D, G, H, and I.
The order of the segments in the PCB is different, and the order in which the programmer
accesses them is unique with DEDB secondary indexes.
The LCHILD statement follows the SEGM statement for the segment that is the target of
the secondary index. The NAME= parameter specifies the segment in the secondary index
and the secondary index database name. The PTR= parameter must specify SYMB. This
indicates that the pointer is symbolic; the pointer is the concatenated key of the target
segment.
The XDFLD statement follows the LCHILD statement. The XDFLD statement defines the
field or fields on which the secondary index is built.
If the source segment is not the target segment, it is specified in the SEGMENT= parameter.
The source segment can be either the target segment or one of its dependents. If
SEGMENT= is not included, the source segment is the target segment.
The field or fields on which the secondary index is built are defined in the SRCH= parameter.
Up to five fields can be specified. All fields must be in the same segment. The DDATA=,
SUBSEQ=, and NULLVAL= parameters are optional; they are illustrated in the sketch that follows this list.
– DDATA= defines duplicate data fields. These are fields in the source segment that are
also included in the secondary index segment. They are available to programs that
process the secondary index as a database.
– SUBSEQ= defines subsequence fields. These fields are used to sequence secondary
index segments that have the same secondary index key. Concatenated key (/CKxxxxx)
fields can be specified as SUBSEQ= fields, as explained in the following section.
– Index suppression is optional. It can be specified in two ways: with the NULLVAL=
parameter and with the EXTRTN= parameter. Either, neither, or both can be specified.
NULLVAL= specifies a one-byte value. If it is specified, a SRCH= field that contains this
value in all of its bytes causes index suppression; that is, no index entry is created for
the source segment. If multiple fields are specified in SRCH=, all of them must contain
this value for index suppression to be invoked.
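To show how these optional parameters fit together, the following sketch uses hypothetical segment and field names (SRCSEG, FLD1, FLD2, FLD3) and shows NULLVAL=BLANK as one common form of index suppression; it is an illustration only and is not taken from the examples in this chapter:
XDFLD NAME=XNAME1,SEGMENT=SRCSEG,SRCH=(FLD1,FLD2),
      DDATA=FLD3,SUBSEQ=/CKAAAAA,NULLVAL=BLANK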
The secondary index has only one SEGM statement. This statement specifies the
segment name, PARENT=0, and the size (BYTES=) of the segment. The segment size
includes the size of the secondary index key plus the size of the concatenated key of the
target segment. It also includes the size of any subsequence and duplicate data fields. It
can be larger than the sum of these fields. If it is larger, the remaining space can be used
for user data.
One or more FIELD statements must be included. The secondary index key is defined with
SEQ included in the NAME= parameter. If the key including the subsequence field is unique,
specify U. If it is not unique, specify M. The BYTES= parameter specifies the size of the
sequence field. This includes the search fields and subsequence fields. Always specify
START=1.
The LCHILD statement NAME= parameter specifies the target segment name and the target
database. The INDEX= parameter value must match what is specified in the NAME=
parameter on the XDFLD statement in the target database. PTR=SYMB must be specified.
See Example 3-15.
The /SX field: Secondary indexes for full function databases can use a /SX field to create
unique keys. The /SX field is either the four-byte relative byte address (RBA) of the source
segment or the 8-byte indirect list key (ILK) of the segment. The ILK is used by HALDB.
The /SX field cannot be used with DEDBs. DEDB segments do not have an ILK, and their
location can be changed by a replace (REPL) call.
Duplicate data can be included in a secondary index by including DDATA= on the XDFLD
statement. Duplicate data is only available to application programs when they process the
secondary index as a database. It is not available to them when they use the secondary index
to process the DEDB.
SHISAM does not support CI reclaim with data sharing. Therefore, when all index entries
from a large range of keys are deleted, data sharing users might prefer to choose HISAM
even when the secondary index has unique keys.
HISAM secondary indexes support non-unique keys. They require an overflow data set.
When using non-unique keys, the DATASET statement includes both the DD1= parameter
and the OVFLW= parameter. They specify the DD names for these data sets. If unique keys
are used, the DATASET statement specifies only the DD1= parameter. Unique keys are
supported with both HISAM and SHISAM.
When accessing a segment through the secondary index, the key feedback area uses the
secondary index key for the key of the target segment. The key feedback area uses the keys
of the other segments when they are accessed. The concatenated key in the key feedback
area is composed of the secondary index key and the keys of the other segments in the path
from the secondary index.
The accompanying figure shows the EDUCATDB DEDB, with the target root segment COURSE (fields COURSENO and COURNAME), its dependent CLASS (field CLASSNO), and the secondary index segment INSTXSEG with key INSTSKEY, which points back to EDUCATDB.
The index is built on the INSTPHNO field. In this example, a /CK field is used to create unique
keys.
Figure 3-10 DBD for DEDB with target ROOT, source INSTRUCT
The LCHILD and XDFLD statements follow the SEGM statement for COURSE. The LCHILD
statement specifies the secondary index segment, INSTXSEG, and database, INSTSXDB.
PTR=SYMB is specified, as required.
The XDFLD statement specifies the name of the indexed field, XINST, for use with the
secondary index. Because the source segment is not the target segment, the following
parameters are used (see the sketch after this list):
SEGMENT=INSTRUCT is specified to indicate that INSTRUCT is the source segment.
SRCH=INSTPHNO specifies that field INSTPHNO is used to build the search field.
SUBSEQ=/CKAAAAA is used to create a unique key.
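Based on that description, the indexing statements in the EDUCATDB DBD (immediately after the SEGM statement for COURSE) are along the following lines. This is a sketch reconstructed from the text, not the exact listing in Figure 3-10:
LCHILD NAME=(INSTXSEG,INSTSXDB),PTR=SYMB
XDFLD NAME=XINST,SEGMENT=INSTRUCT,SRCH=INSTPHNO,SUBSEQ=/CKAAAAA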
Figure 3-11 DBD for index with target ROOT, source INSTRUCT
Figure 3-11 shows the DBD for the secondary index database, which is HISAM. Therefore,
ACCESS=(INDEX,VSAM) is specified.
DD1= on the DATASET statement specifies the DD name of the secondary index data set.
Because the key is unique, the OVFLW= parameter is not specified on the DATASET
statement.
The SEGM statement defines the segment in the secondary index. Its size is 30 bytes. It is
composed of the secondary index key (10 bytes), the concatenated key of the INSTRUCT
segment (/CKAAAAA field which is 15 bytes), and the symbolic pointer to the target (5
bytes).
The FIELD statement defines the sequence field in the secondary index. Because the
keys are unique, U is specified in the NAME= parameter. The size of the sequence field is
25 bytes. This is the size of the INSTPHNO field (10 bytes) in the INSTRUCT segment and
the /CKAAAAA field (15 bytes).
The LCHILD statement specifies the target segment, INSTRUCT, and database,
EDUCATDB, in the NAME= parameter. The INDEX= parameter specifies the NAME= value on
the XDFLD statement of the target database. This is XINST. PTR=SYMB is required for Fast
Path secondary index databases.
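Putting these statements together, the index DBD described here is approximately as follows. The data set DD name INSTKSDS is illustrative, and the sketch is reconstructed from the text rather than copied from Figure 3-11:
DBD NAME=INSTSXDB,ACCESS=(INDEX,VSAM)
DATASET DD1=INSTKSDS
SEGM NAME=INSTXSEG,PARENT=0,BYTES=30
FIELD NAME=(INSTSKEY,SEQ,U),BYTES=25,START=1
LCHILD NAME=(INSTRUCT,EDUCATDB),INDEX=XINST,PTR=SYMB
DBDGEN
END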
The accompanying figure shows the EDUCATDB structure for this example: the COURSE root (COURSENO, COURNAME), the target segment CLASS (CLASSNO), and its dependents INSTRUCT (INSTNO, INSTPHNO) and STUDENT (STUDNO, STUDPHNO, STUDNAME), together with the secondary index segment STUDXSEG with key STUDSKEY in database STUDSXDB.
The index is built on the STUDPHNO field. We do not use a /CK field, so we can have
synonyms for the key.
Figure 3-13 DBD for DEDB with target CLASS, source STUDENT
Figure 3-14 shows the DBD for the secondary index database. It must be HISAM because the
secondary index keys are not unique; therefore, ACCESS=(INDEX,VSAM) is specified.
Figure 3-14 DBD for index with target CLASS, source STUDENT
DD1= on the DATASET statement specifies the DD name of the secondary index data set.
Because the key is not unique, the OVFLW= parameter must be specified on the DATASET
statement.
The SEGM statement defines the segment in the secondary index. Its size is 21 bytes. It is
composed of the secondary index key (10 bytes) and the symbolic pointer to the target
(11 bytes).
The FIELD statement defines the sequence field in the secondary index. Because the
keys are not unique, M is specified in the NAME= parameter.
The LCHILD statement specifies the target segment CLASS and database EDUCATDB in
the NAME= parameter. The INDEX= parameter specifies the NAME= value on the XDFLD
statement of the target database. This is XSTUD. PTR=SYMB is required for Fast Path
secondary index databases.
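For comparison, a sketch of the index DBD described for Figure 3-14 follows. The DD names STUDKSDS and STUDOVFL are illustrative, and the sketch is reconstructed from the text rather than copied from the figure:
DBD NAME=STUDSXDB,ACCESS=(INDEX,VSAM)
DATASET DD1=STUDKSDS,OVFLW=STUDOVFL
SEGM NAME=STUDXSEG,PARENT=0,BYTES=21
FIELD NAME=(STUDSKEY,SEQ,M),BYTES=10,START=1
LCHILD NAME=(CLASS,EDUCATDB),INDEX=XSTUD,PTR=SYMB
DBDGEN
END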
Secondary indexes for Fast Path databases are invisible to the application program. When a
DEDB database needs to be accessed by using its Fast Path secondary index, the PROCSEQD=
parameter must be specified on the PCB statement in the PSB.
When a primary DEDB database is accessed through its secondary index by using a PCB with
the PROCSEQD= parameter, the primary DEDB database is processed in an alternate
sequence. The way in which the application program perceives the database record changes.
If the target segment is a root segment in the primary DEDB database, the inverted structure
in the primary DEDB database that is using the secondary index is the same as the physical
structure of the primary DEDB database.
If the target segment is not a root segment in the primary DEDB database, the hierarchy in
the database record is conceptually restructured as an inverted structure. The DEDB inverted
structure access is limited to a subset of segments. For DEDB inverted structure access, the
target segment, its direct parent segments from the target segment to the root segment, and
all its child segments from the target segments are accessible.
Figure 3-15 shows the inverted sequence for target segment CLASS.
In the inverted structure, the target segment CLASS becomes the root, and its physical parent COURSE and its physical children INSTRUCT and STUDENT become its dependents.
Nevertheless, the SENSEG statements in the PCB must be specified in the hierarchical
sequence of the main DEDB (Example 3-16).
Example 3-16 PCB for processing by using the secondary index STUDXSEG
PCB TYPE=DB,DBDNAME=EDUCATDB,PROCOPT=G,KEYLEN=30,PROCSEQD=STUDXSEG
SENSEG NAME=COURSE,PARENT=0
SENSEG NAME=CLASS,PARENT=COURSE
SENSEG NAME=STUDENT,PARENT=CLASS
PSBGEN LANG=ASSEM,PSBNAME=CLASSTUD
END
When you use a Fast Path secondary index, you must specify the PROCSEQD= parameter on
the PCB statement in the PSB. Notice that you must specify PROCSEQD, and not PROCSEQ. The
D is added to indicate that the index is for a DEDB.
The order of the SENSEG statements is the physical order in the indexed database, not
the secondary index processing order. In this example, the SENSEG statements appear in
the order COURSE, CLASS, and STUDENT.
The secondary index processing order is CLASS, COURSE, and STUDENT.
Secondary indexes for Fast Path databases have capabilities that are not available with
secondary indexes for full function databases.
The first of these capabilities is user data partitioning (Figure 3-16) so that a secondary index
can be spread across multiple physical databases. This supports large indexes.
Each index database contains a range of keys. Either HISAM or SHISAM can be used, but all
“partitions” in an index must be the same type. Index keys are assigned to an index database
by a user partition selection exit routine. The key that is passed to the exit routine is the secondary index key.
Each database in an index must have the same structure and attributes. The databases might
have different sizes to accommodate the number of entries that can exist in the different key
ranges.
Figure 3-16 depicts one secondary index for a DEDB spread across five user partition databases, INDXDB1 through INDXDB5.
To use this facility, you must use special keywords in the DBD definition of the DEDB
(Example 3-17).
User data partitioning is defined by specifying multiple database names on the LCHILD
statement in the indexed database. The order of the databases in the LCHILD statement
determines the order in which the index partitions are processed by get next calls. This
means that a get next call that reaches the end of an index database will continue to the next
database defined in the LCHILD statement.
The partition selection routine is called when an insert or qualified get call is issued. The
routine determines which secondary index database is used for the call. That is, it is used to
specify in which database a secondary index key is stored. The routine can be shared by
multiple secondary indexes. These indexes might be on different databases.
The partition selection option controls whether only one secondary index user partition is
used for a call. It is only used with secondary index processing. This means that PROCSEQD=
must be specified in the PCB used for the call.
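As a sketch only, the definition in the indexed DEDB might look similar to the following lines. The segment, database, field, and exit names are hypothetical, the nesting of the multiple database names on NAME= is shown schematically, and the partition selection keywords are assumed here to be PSELRTN= and PSELOPT=; verify the exact syntax in the DBDGEN reference:
LCHILD NAME=(XSEG,(INDXDB1,INDXDB2,INDXDB3,INDXDB4,INDXDB5)),PTR=SYMB
XDFLD NAME=XFLD1,SRCH=KEYFLD1,PSELRTN=PSELEXIT,PSELOPT=MULT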
The second of these capabilities is support for multiple secondary index segments. This is
one index created from different fields in the same source segments.
Because the index has one key size, the search fields must be the same size. With this
capability, you can build one index from similar data that is stored in different fields. An
example is an index based on telephone numbers. Multiple phone numbers, such as home
phone, work phone, and cell phone can be stored in different fields, but only one index is
used. The index might have an entry for each phone number.
The accompanying figure shows the ACCTDB database, with the ACCT root segment and the OWNER segment (fields OWNNAME, HOMEPHN, WORKPHN, and CELLPHN), and the PHONINDX secondary index database with segment PHONSEG and key PHONEKEY.
Multiple secondary index segments is the support for one index created from different fields in
the same source segments. Each search field (or set of search fields) is used to create an
entry in the secondary index. These fields or sets of fields must be the same size. In this
example, telephone numbers in the OWNER segment have three fields. The index might
contain an entry for each phone number.
Multiple secondary index segments are defined by including multiple LCHILD and XDFLD
statement pairs in the indexed database (Figure 3-18).
Each LCHILD statement must include the MULTISEG=YES parameter. MULTISEG=NO is the default
for this parameter.
The NULLVAL= parameter is useful for avoiding the creation of secondary index entries that
have no meaningful value. With this parameter, you suppress the creation of index pointer
segments when the source segment data that is used in the search field of an index pointer
segment contains the specified value.
DBD NAME=ACCTDB,ACCESS=DEDB,RMNAME=RMD4
AREA DD1=ACCT1,SIZE=1024,UOW=(100,10),ROOT=(236,36)
SEGM NAME=ACCT,PARENT=0,BYTES=100
FIELD NAME=(ACCTNO,SEQ,U),BYTES=12,START=1
LCHILD NAME=(PHONKEY,PHONINDX),PTR=SYMB,MULTISEG=YES
XDFLD NAME=XPHON,SEGMENT=OWNER,SRCH=HOMEPHN
LCHILD NAME=(PHONKEY,PHONINDX),PTR=SYMB,MULTISEG=YES
XDFLD NAME=XPHON,SEGMENT=OWNER,SRCH=WORKPHN
LCHILD NAME=(PHONKEY,PHONINDX),PTR=SYMB,MULTISEG=YES
XDFLD NAME=XPHON,SEGMENT=OWNER,SRCH=CELLPHN
SEGM NAME=OWNER,BYTES=300,PARENT=ACCT
FIELD NAME=OWNNAME,BYTES=40,START=1
FIELD NAME=HOMEPHN,BYTES=10,START=41
FIELD NAME=WORKPHN,BYTES=10,START=51
FIELD NAME=CELLPHN,BYTES=10,START=61
DBDGEN
Multiple LCHILD and XDFLD statements refer to the same secondary index. The LCHILD
statements are the same. The XDFLD statements have the same NAME= and SEGMENT=
parameters. The SRCH= parameters point to the various fields that are used for indexing.
Example 3-19 DBD for index database with multiple secondary indices
DBDSX DBD NAME=PHONINDX,ACCESS=(INDEX,VSAM)
DATASET DD1=PHONKSDS,OVFLW=PHONOVFL
SEGM NAME=PHONSEG,PARENT=0,BYTES=22
FIELD NAME=(PHONEKEY,SEQ,U),BYTES=10,START=1
LCHILD NAME=(OWNER,ACCTDB),INDEX=XPHON,PTR=SYMB
DBDGEN
END
In this example, the XDFLD statements in the DEDB DBD might include the NULLVAL= ' '
parameters to specify that a secondary index entry is not built when the phone field contains
blanks.
To load the secondary index databases, you must either write a program or use a tool. The
program or tool must fill the segments of the secondary indexes; the fields that must be filled
depend on the XDFLD description of the contents of the secondary index.
Figure 3-19 shows an overview of the different segment layouts that are possible, depending
on the index organization (HISAM or SHISAM) and on whether duplicate secondary index
values exist (possible only with HISAM).
The IMS High Performance Fast Path Utilities offering is used for validating and building
secondary indexes on Fast Path databases. Although this tool is described in Chapter 10,
“Tools for IMS 12” on page 385, a brief overview is provided here.
FPA Secondary Index (FPSI) support provides build and analyze capabilities for the Fast Path
secondary index databases that are supported in IMS 12 with the following features:
HISAM/SHISAM secondary index databases
Unique Key/Non-Unique Key
Sparse Indexing
Symbolic Pointer
Multiple Secondary Index Segments
Partition Selection
For more information, see IBM Fast Path Solution Pack for z/OS V1R1, IMS High
Performance Fast Path Utilities User’s Guide, SC19-2914.
The HPFPU Build Index function (INDEXBLD) is related to Fast Path secondary indexes. This
function (Figure 3-20 on page 96) builds all FPSI databases of DEDB areas when the
secondary indexes are defined against the existing DEDB. It can build multiple secondary
index databases in one job step, and can rebuild the specific secondary index databases in
case of system failures. APAR PM37894 has enabled the new INDEXBLD function for users
of the IMS Database Recovery Facility under IMS 12.
Figure 3-20 shows the HPFPU INDEXBLD function: it reads the DEDB areas and the ACBLIB, builds the secondary index databases, and produces statistics reports.
This function reduces the amount of time that it takes to build multiple secondary index
databases using both parallel sorting and parallel loading. It also provides the following new
reports:
Secondary Index Definition Report
Secondary Index Processing Report
FPSI data set names are identified with the member in the DFSMDA library. You can also
specify them with JCL DD statements. The NOTIFY.REORG command is issued for FPSI databases
when the DBRC=YES parameter is specified (Example 3-20).
To create a new DEDB with a secondary index and to create entries in the secondary index,
complete these steps:
1. Create a DEDB DBD with the LCHILD and XDFLD statements.
2. Create a secondary index DBD.
3. Allocate the DEDB and secondary index data sets.
4. Create a PSB with the PROCSEQD= parameter in PCB.
5. Run the DBDGEN and PSBGEN utilities, and then run the ACBGEN utility.
An alternative method for adding a secondary index to an existing DEDB without requiring a
tool is to unload the DEDB database and then follow the same steps that are documented here.
IMS 11
In any IMS system with the database manager, the buffers that are needed to fulfill requests
from database calls are obtained from a global pool called the Fast Path buffer pool.
If you are using the Fast Path 64-bit buffer manager, IMS creates and manages the Fast Path
buffer pools for you and places DEDB buffers in 64-bit storage. The buffers are placed in
virtual storage above the 2 GB bar. The 64-bit buffer manager creates multiple subpools
with different buffer sizes. It creates an initial allocation of subpools based on the number of
areas of each CI size and automatically creates more buffers in a subpool when they are
needed. This is only possible when z/Architecture is enabled. When the Fast Path 64-bit
buffer manager is enabled, you do not need to design DEDB buffer pools or specify the DBBF,
DBFX, and BSIZ parameters that define Fast Path buffer pools.
In IMS 11, you can enable the Fast Path 64-bit buffer manager by specifying FPBP64=Y in the
Fast Path section of DFSDFxxx PROCLIB member (Example 3-21).
When the Fast Path 64-bit buffer manager is enabled, IMS ignores the DBBF, DBFX, and BSIZ
parameters, if specified.
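For illustration, the enablement in the DFSDFxxx member might look similar to the following minimal sketch; it is not the actual Example 3-21, and only the parameter that is discussed here is shown:
<SECTION=FASTPATH>
FPBP64=Y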
IMS 12
IMS 12 enhances the Fast Path 64-bit buffer manager, which was introduced in IMS 11. With
IMS 12, you can specify the initial amount of 64-bit storage used for the buffer pool. Buffer
pools are pre-expanded, that is, expanded in anticipation of future needs. They are
compressed when the use of a subpool drops. IMS 12 moves some buffers that were still in
extended common service area (ECSA) to 64-bit storage. Finally, IMS 12 enhances the QUERY
POOL TYPE(FPBP64) command output.
IMS 11 expands a 64-bit subpool when a DL/I call requires a buffer but none is available.
IMS 12 has an option to expand subpools in anticipation of the need for more buffers. When a
subpool is almost out of available buffers, the extension process is initiated asynchronously.
As the volume of buffer requests increase, the subpool extension process will increase the
pace at which subpools are extended. By the time the additional buffers are required, the
subpool should have been extended, avoiding wait-for-buffer conditions.
IMS 12 also adds the capability to compress subpools when there is a substantial number of
unused buffers. IMS resizes a subpool after 24 hours if it is grossly overallocated. Subpools
are compressed by reducing the number of buffers in the subpool. Subpools can also be
deleted; that is, all of the buffers of a certain size can be deleted. This is only done when none
of the buffers of that size have been used for more than 24 hours. If buffers of that size are
needed again, the subpool is rebuilt. The compression and deletion actions are the default.
They can be disabled by specifying FPBP64C=N in the FASTPATH section of the DFSDFxxx
PROCLIB member.
The use of the 64-bit Fast Path buffer manager in IMS 11 did not put all buffers in 64-bit
storage. Buffers in ECSA were used for MSDBs, SDEP inserts and FLD calls by IMS 11.
Additionally, ECSA storage was used for buffer headers and control blocks.
IMS 11 emergency restart uses ECSA buffers for all SDEP processing.
IMS 12 uses 64-bit buffers for FLD calls.
IMS 12 emergency restart uses 64-bit buffers for SDEPs unless FPBP64SR=N is
specified.
IMS 12 increases the size of the extents if too many extents exist, which keeps the number of
extents to a minimum.
IMS 12 enhances the QUERY POOL TYPE(FPBP64) command from IMS 11. The new
SHOW(STATISTICS) option provides a subset of the data returned with the SHOW(ALL) option.
The QUERY command in IMS 12 also provides data on extended private (EPVT) use. It
provides new status information for subpools. The status shows if a pool is being
compressed, expanded, or deleted.
If you are not using the Fast Path 64-bit buffer manager, you must specify the characteristics
of the pool yourself during IMS system definition and during IMS startup.
The scalability enhancements in CICS Transaction Server Version 4.2 fall into two broad
areas: increased exploitation of Open Transaction Environment (OTE) and increased
exploitation of 64-bit storage. For more information, see IBM CICS Scalability: New Features
in V4.2, REDP-4787.
CICS OTE is an architecture that was introduced for the following purposes:
To allow parallelism in using the mainframe resources, increasing the throughput of work
through the system
To improve the performance of existing applications accessing external resource
managers, such as IMS and DB2
To enrich the CICS application programming interface by providing application interfaces
supplied by other software components and allowing CICS applications to use these
interfaces
To benefit from OTE, your applications need to be threadsafe. OTE enhancements make
more of the CICS API and system programming interface (SPI) threadsafe, including access
to IMS databases through the CICS-DBCTL interface. This CICS threadsafe function was
added with CICS V4.2 for IMS V12 with APARs PM31420 and PM47327.
With these APARs, IMS 12 provides the coordinator controller (CCTL) database resource
adapter (DRA) Open Thread TCB (OTT) enhancement. IMS 12 now provides the option for
CCTL exploiters to direct the DRA not to attach dedicated DRA thread task control blocks
(TCBs). This option avoids the overhead of TCB switching for IMS database calls and leads to
improved parallel processing (Figure 3-21 on page 100).
CICS TS 4.2 extends the threadsafe support to the DBCTL/DRA interface taking advantage
of the DRA OTT support. The enhancement applies to both EXEC DLI and CALL DLI.
Without this enhancement, the IMS CCTL DRA attaches dedicated thread TCBs in the CCTL
address space. PSB schedule requests are assigned one of these thread TCBs. All
subsequent thread-related DL/I requests result in the application's task being suspended, and
processing switches onto the DRA thread TCB to complete the DL/I request. Upon returning,
the application's task is resumed, switching processing off the DRA thread TCB which is then
suspended awaiting the next DL/I request. This sequence of events is repeated for each DL/I
request. With the OTT enhancement, this TCB switching is avoided, and the expected
resulting benefits are lower CPU usage and increased throughput.
Figure 3-21 TCB switching with threadsafe support for CICS accessing IMS database
When the database is opened, either LOADED or SHARED appears in the DFS2838I message:
LOADED appears when the routine is loaded as a result of the open of the database.
SHARED appears when the routine is already in memory due to its use by another
database.
The DFS2838I message is issued for full function databases as a result of the following
commands:
/DBR DB dbname
/DBD DB dbname
/STO DB dbname
/STA DB dbname
UPDATE DB NAME(dbname) STOP(ACCESS|UPDATES|SCHD)
UPDATE DB NAME(dbname) START(ACCESS)
When the database is closed, either GONE or SHARED appears in the DFS2838I message:
GONE appears when the routine is deleted from memory.
SHARED appears when the routine remains in memory due to its use by another
database.
If an HALDB database uses a partition selection exit routine, the DFS2406I message is issued
when the database is opened or closed as the result of a command. When the database is
opened, either LOADED or SHARED is displayed in the message. The following commands are
among those that might cause the DFS2406I message to be issued:
/START DB HALDBmaster OPEN
UPDATE DB NAME(HALDBmaster) START(ACCESS) OPTION(OPEN)
UPDATE DB NAME(HALDBmaster) STOP(ACCESS|UPDATES|SCHD)
/DBR DB HALDBmaster
/DBD DB HALDBmaster
These messages are especially useful when replacing a shared exit routine. They clearly
indicate if the old routine has been deleted and if a new routine has been loaded
(Example 3-24).
When the database is opened, either LOADED or SHARED appears in the message.
LOADED appears when the routine is loaded as a result of the opening of the database.
SHARED appears when the routine is already in memory due to its use by another
database.
When the database is closed, either GONE or SHARED appears in the message.
GONE appears when the routine is deleted from memory.
SHARED appears when the routine remains in memory due to its use by another
database.
If the wait for a lock exceeds the IMS LOCKTIME value when using the internal resource lock
manager (IRLM), then the lock request is rejected and either the waiter is abended with a
U3310 or a BD status code is returned to the program.
The first example is for the multiple line message. If the same lock has other waiters, they are
also listed with the word WAITER where VICTIM appears in this example. The second
example is for the single line or “short” message.
In this example, BMP IMSBLK65 using PSB DFSIVP65 on IMS I12B holds a local and global
root lock in database IVPDB1. The elapsed time of this update is now 35 seconds. Another
BMP IMSDLK65 using the same PSB DFSIVP65 on IMS I12D is waiting on this lock. Its
elapsed time is now 35 seconds. The victim on IMS I12A is a BMP IMSALK64 using PSB
DFSIVP64 on IMS system I12A.
Example 3-26 shows a similar problem except this time IMS is configured to only issue the
short form of the DFS2291I message.
Previous IMS releases provide information only through IBM RMF™ reports. The RMF II
ILOCK (IRLM Long Lock Detection) Report includes information about waiters and blockers
when a lock request exceeds the IRLM TIMEOUT value. RMF records (type 79.15) can be
formatted for more information about the task that is holding (or waiting for) the lock.
By using the IRLM Lock Timeout function, you can interrupt processes that are waiting for
locks for longer than a specified number of seconds, with the integer value for the LOCKTIME
parameter, specified in the DFSVSMxx member of the IMS PROCLIB data set (or
DFSVSAMP DD statement for IMS batch procedures). See Example 3-27.
The number of seconds can also be changed after IMS initialization by issuing the following
command:
MODIFY irlmproc,SET,TIMEOUT=nn,imsid
The DFS2291I messages are only issued if they are requested by specifying MSG2291=ISSUE or
MSG2291=SHORT in the DIAGNOSTIC_STATISTICS section of the DFSDFxxx member
(Example 3-28).
If the user has chosen the ABEND option for timeouts, IMS TM transactions are retried with
three exceptions (IFP regions, CPI-C driven applications, and protected conversations).
With IMS 12, batch jobs can survive when these structure accesses fail. Like online systems,
they wait for the resolution to the problem. When the problem is resolved, the batch jobs
continue processing. For example, when a rebuild of a structure completes, the batch jobs
continue.
The reason code in the message is used to identify the type of failure that occurred when the
batch job attempted to access the structure.
With this enhancement, you can rebuild OSAM and VSAM cache structures while your data
sharing batch jobs are executing. You might run this task to address coupling facility failures
or to move structures between coupling facilities for reconfigurations. In previous versions of
IMS, batch jobs did not survive these rebuilds.
LOG is a parameter of the DBD macro in the DBDGEN utility (Example 3-29).
In previous versions of IMS, the IBM Resource Access Control Facility (IBM RACF®) user ID
only appears in changed data capture log records (type x’9904’) when the log is produced by
an online system. IMS 12 adds the RACF user ID to these log records when they are
produced by batch (DLI or DBB) jobs. Changed data capture writes log records when the
LOG (asynchronous) option is chosen in the DBDGEN.
The RACF user ID is specified by including the USER= parameter on the JOB statement of the
batch job.
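For example, a batch DL/I job might carry the RACF user ID on its JOB statement as in the following sketch; the job name, program, PSB, and user ID are illustrative:
//IVPBATCH JOB (ACCT),'DLI BATCH',CLASS=A,MSGCLASS=X,USER=IMSUSR01
//STEP1    EXEC DLIBATCH,MBR=MYPGM,PSB=MYPSB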
VSAM buffer pools are defined with POOLID statements in the DFSVSMxx member or
DFSVSAMP data set. With IMS 12, you can specify up to 255 of these POOLID statements.
IMS 12 has changed this processing. When such problems occur, the database is closed and
marked recovery needed. The abend does not occur. Additionally, message DFS0730I is
issued for open or close problems. X is included as the first character in the reason code. The
yy and z values identify the actual problem (Example 3-30).
In previous versions, the DFS0730I message was not issued with the U0080 abend, which
made it more difficult to determine the database and data set with the problem.
When the data set cannot be extended as part of EOV processing, message DFS0842I is
issued:
DFS0842I OSAM DATASET CANNOT BE EXTENDED, REASON=x, z dbdname
DFS0842I ddname, dsname
This message is enhanced in IMS 12 to include a subcode (z) to further explain the reason for
extension failure.
The DFS993I message is issued when the PSB Work, CSA PSB, or DLI PSB pool is too small.
With IMS 12, DBCTL users can easily determine why a PSB schedule failure occurs because
of insufficient space in one of these pools.
By using the CA Reclaim feature in z/OS 1.12, you can reuse the free CA space. With CA
Reclaim, space fragmentation caused by erasing records from a KSDS will be minimized to
reduce the need to reorganize the data set. When the freed CAs are placed in a free chain to
be reused, the index structure can be shrunk to facilitate faster data accesses. When space is
needed for a new CA, a CA from the free chain is reused so there will be fewer calls to EOV to
extend the KSDS.
There is no requirement for all of the systems in a sysplex to be at the same z/OS release
level. z/OS 1.10 and z/OS 1.11 have compatibility maintenance so that they can process data
sets for which CA reclaim is being used with z/OS 1.12. However, CA reclaim is only
processed on systems that have z/OS 1.12.
These codes change the actions that might otherwise occur for calls qualified on the key of
the root segment. Consider a call qualified with key >= 1000 and key <=2000. Without the A
or G command codes, the call randomizes using key=1000. It returns the first segment found
if it satisfies the SSAs. Otherwise, it returns a GE (not found) status code. If a segment is
returned, successive calls using the same SSAs move forward in the database. If a segment
not satisfying the call is encountered, a GE status code is returned. However, the use of the A
command code causes the call to begin at the beginning of the database. The G command
code causes the call to ignore roots which do not satisfy the SSAs and continue to following
roots until one is found that satisfies the SSAs.
If you use these new command codes, use the A command code on the first GN or GHN call
and do not use it on successive calls, because if A is used on every call, the calls always start
at the beginning of the database and return the same segment.
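As an illustration only (the segment and field names are hypothetical, and the fixed-length padding of SSA fields is simplified for readability), the first GN call for such a range might use an SSA similar to the following, with the A command code dropped on subsequent calls:
ROOTSEG *GA(ROOTKEY >=00001000&ROOTKEY <=00002000)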
The JDBC driver uses the new command codes when converting a JDBC call to a DL/I call.
This is done so that the JDBC call returns the same results with an IMS database that it
returns with a relational database. Without these command codes, a search on a range of
root segment keys is done with logic that assumes the roots are in key sequence. A call
qualified on a range of root segment keys attempts to begin the search with the key at the
bottom of the range.
For example, a search for roots with keys >= 1000 and <= 2000 begins by using the partition
selection exit routine or the randomizer with key 1000. Subsequent GN or GHN calls with the
same qualification move forward in the database. If a root segment with key>2000 is found,
the search ends. The JDBC driver uses the new command codes to change the logic of the
search. With the new command codes all root segments in the database are examined. This
provides the expected results from the JDBC call.
Searching the entire database comes with a performance cost. The performance cost for the
search can be eliminated by using an alternative. If many of these calls are issued by a
program, you can create a secondary index on this key. Then you can use the secondary
index for the calls by specifying PROCSEQ=, referencing the secondary index for full function
databases or PROCSEQD=, referencing the secondary index for DEDBs. This way, the search
can be performed without examining root segments that are not in the key range. It avoids the
sequential scan of the database. This approach is analogous to placing an index on a column
in a relational database. In fact, with a relational database, create the index when this type of
JDBC call is made.
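For example, a PCB that processes a full function database through a secondary index on the root key might look like the following sketch; the database, index, and segment names are hypothetical. For a DEDB, PROCSEQD= is used instead, as shown in Example 3-16:
PCB TYPE=DB,DBDNAME=CUSTDB,PROCOPT=G,KEYLEN=20,PROCSEQ=CUSTXDB
SENSEG NAME=CUSTROOT,PARENT=0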
IRLM 2.3 is required by DB2 10 for z/OS. However, IRLM 2.2 can be used by the IMS
database manager when DB2 is using IRLM 2.3. IRLM 2.3 supplies a 64-bit caller interface
that is required by DB2 Version 10. IMS does not use this interface.
IRLM 2.3 must run under z/OS 1.10 or later. IRLM 2.3 provides some improved performance.
An ACB is built for each DBD and each PSB. Each ACB is stored as an ACB library member
in the IMS.ACBLIB data set. ACBs are loaded into storage for online use either when the IMS
control region is initialized or when an application program that requires the ACB is
scheduled, depending on whether the ACB is resident or non-resident.
In online storage, the ACBs for PSBs and the ACBs for DBDs are stored separately in a PSB
pool and DMB pool, respectively. IMS loads the non-resident ACB into 31-bit storage only
when an application program that requires it is scheduled. Non-resident ACB members
persist in storage only until the storage allocated for the non-resident ACBs is exhausted, at
which time IMS frees storage by removing the ACBs that have remained unused for the
longest period of time.
To enable your ACBs to use 64-bit storage, specify an ACBIN64=ggg parameter in the
<DATABASE> section of the DFSDFxxx PROCLIB member. The value specified (ggg) is the
amount of 64-bit storage (in GB) to be allocated for PSB and DMB ACB members. Because
PSBs and DMBs are stored together in 64-bit storage pools, the size of a 64-bit storage pool
must be large enough to contain all of the non-resident PSBs and DMBs combined. The size
of an ACB member is reported by the ACB Maintenance utility when the ACB members are
built.
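A minimal sketch of the DFSDFxxx entry follows; the value 4 (for 4 GB of 64-bit storage) is only an example:
<SECTION=DATABASE>
ACBIN64=4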
To migrate ACB libraries to use dynamic allocation, you create DFSMDA members for the
ACBLIBA and ACBLIBB data sets. Remove the IMSACBA and IMSACBB DD statements
from the IMS and DL/I JCL procedures. IMS must be stopped and restarted with the new
ACBIN64 parameter in DFSDFxxx and with the DFSMDA members.
You can display information about the ACBs cached in 64-bit storage by using the type-2
QUERY POOL TYPE(ACBIN64) command.
For batch application programs, IMS does not support 64-bit caching of ACBs.
Before IMS 12, this local database number is never reused when its database is deleted by
online change or DRD. A cold start is required when the local database number reaches the
limit of 32 K-1. In IMS 12, the local database numbers deleted by online change or DRD can
be reused when databases are added by online change or DRD. This change affects only
local DMB numbers.
DBRC RECON data set: Concurrent access to databases by systems in one or more
z/OS operating systems is controlled with the shared Database Recovery Control (DBRC)
RECON data set.
IMS systems automatically sign on to DBRC, ensuring that DBRC knows which IMS
systems and utilities are currently participating in shared access. A given database must
have a DMB number that uniquely identifies it to all the sharing IMS systems. Global DMB
numbers are reused in previous versions of IMS. A global DMB number is assigned to a
database when it is registered with DBRC. The reuse of global DMB numbers was
introduced in IMS 9. The DMB number that DBRC records in its RECON data set is related
to the order in which databases are registered to DBRC.
The capability to use shared queues for synchronous messages was introduced with IMS 8
and was based on Resource Recovery Services (RRS) to support cascaded transactions
between front-end and back-end IMS systems. However, RRS has a CPU cost and
complicates diagnostics when problems occur. Various clients implemented pseudo wait for
input (WFI) regions for part of the workload to favor the front-end IMS, to reduce cascading
in transaction processing, and to lower RRS consumption.
OTMA requests from remote clients can be either Send-Receive or Send-Only. The
Send-Receive interaction supports the use of both Commit Mode 1 and Commit Mode 0.
Send-Only interactions only support Commit Mode 0. Commit Mode 1 is considered to be
a synchronous transaction request and Commit Mode 0 is asynchronous.
IMS 12 removes the RRS dependency for the synchronous transactions with a
synchronization level NONE or CONFIRM. IMS 12 uses cross-system coupling facility (XCF)
communication between shared queue’s front-end and back-end systems instead of RRS.
The benefit of this enhancement is a performance improvement and a simplification of the
syncpoint process.
As mentioned, the synchronous APPC and OTMA shared queues enhancement was
introduced with IMS 8. Before IMS 12, APPC and OTMA shared message queue enablement
used RRS Multisystem Cascaded Transaction support to synchronize IMS systems in a
sysplex.
RRS had to be turned on with the DFSPBxxx RRS parameter. Then, the control of this
enablement was done through the DFSDCxxx AOS parameter. It had three possible values
for this sharing:
Y Activate
N Not activate
F Force
The values Y, N, and F are still available with IMS 12, and three new values offer the
possibility to use XCF communications for transactions that use SL0 and SL1.
Shared queues environments that are set with IMS versions before IMS 12 can still run with
the new IMS 12 members with no new settings. You can plan the synchronous shared queues
migration to XCF as a separate project.
The AOS= parameter includes the options listed in Figure 4-1 to manage the synchronous
APPC/OTMA shared message queue (SMQ) support.
With IMS 12, the AOS parameter has new values to indicate that XCF will be used rather than
RRS to process APPC and OTMA synchronous transactions.
The front-end designates an IMS system that receives a message, sets the XCF indicator
according to the DFSDCxxx settings, sets or unsets affinity, and then puts the message in the
shared queue.
The back-end designates an IMS system that gets a message from the shared queue,
schedules this message in a region, and then sends the response back. In some cases, the
back-end is the same as the front-end.
The result of the AOS setting depends on the registration of IMS to RRS, which is controlled
with the RRS parameter in the DFSPBxxx proclib member. It can be set to Yes or No.
Table 4-1 explains the meaning of the combination of setting the two parameters.
For APPC and OTMA transactions, the new DFSDCxxx AOSLOG parameter specifies
whether the front-end system writes an X'6701' log record for the following cases (using a
value of Y or N as appropriate; see the sketch after this list):
A response message is returned from the back-end system by XCF for transactions with
all synchronization levels.
An error message is returned from the back-end system by XCF for transactions with all
synchronization levels of NONE, CONFIRM, and SYNCPT.
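As a sketch, the relevant DFSDCxxx entries for a member that uses XCF and logs the back-end responses might be as follows; the values shown are illustrative:
AOS=X
AOSLOG=Y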
Important: Do not mix RRS=Y and RRS=N in a shared queue group with members having
AOS=Y or AOS=F.
SL2 messages always need RRS. If RRS is not present, SL2 messages cannot be
processed.
Color guide for Figure 4-2: Figure 4-2 uses the following color scheme:
IMS members in blue have RRS=N, and IMS members in purple have RRS=Y.
The upper blue dash-dotted lines represent how a member enqueues the message to
the shared queue, with affinity or not, with the XCF bit set or unset. In such cases, the
member is front-end.
The lower green dotted lines represent all the kinds of messages a member can
schedule from the shared queue.
The red dashed lines show two cases where abend U0711 occurs. In such cases, the
member is the back-end.
Figure 4-2 summarizes, for each combination of the AOS= setting (N, Y, F, X, B, or S) and the RRS= setting (Y or N), whether the front-end member enqueues CM1 SL0/SL1/SL2 messages with or without affinity and with the XCF indicator on or off, and which of those messages a back-end member can schedule or rejects with abend U0711.
Important: IMS systems in a shared queues group must be at MINVERS 12.1 to enable
this enhancement.
If MINVERS is not at version 12.1, message DFS2088I RSN40 might be sent, depending on the
AOS/RRS combination.
You can check RECON MINVERS by using the LIST.RECON STATUS command (Example 4-1).
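If MINVERS must be raised, a DBRC command along the following lines can be used; this is shown as a sketch, so verify the exact syntax in the DBRC commands reference:
CHANGE.RECON MINVERS('12.1')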
Important: The new AOS parameter values in the PROCLIB member require an IMS restart (/NRE or /ERE) to be activated.
Keep the setting RRS=Y until all members in the shared queues group are at AOS=X. At that point,
and only if no applications or functions work with RRS or SL2 messages, there is no reason to
keep RRS active. However, if the workload includes SL2 messages, use AOS=B.
4.1.3 Operations
This section illustrates the old process flow that uses RRS and the new process flow that
uses XCF, and shows how each flow appears in the OLDS when viewed with the interactive
IBM IMS Problem Investigator tool. For more information about this tool, see “IMS Problem
Investigator” on page 402.
Assume that tests have been run on a two-way shared queues IMSplex, and that IVP
transactions have been sent from an IMS Connect client by using a REXX program that sends
a message including an IRM with CM1 and SL0.
For more considerations about processing in this environment, see “Managing APPC and
OTMA messages in a sysplex environment” in IMS Version 12 System Administration,
SC19-3020.
The figure outlines the RRS-based process flow: the front-end receives the input message (allocate-send-receive or send-then-commit), determines that it is a synchronous shared queues environment, registers with RRS, and waits; the back-end retrieves the message from the global queue, processes it, and requests commit; RRS then drives sync point phase 1 and phase 2 (or backout) on both systems before the conversation is deallocated or the next message is retrieved.
Example 4-2 Startup messages and OTMA CM1 SL0 transaction monitoring messages with RRS
First IMS Startup messages:
05.36.04 STC02823 DFS0653I PROTECTED CONVERSATION PROCESSING WITH RRS/MVS ENABLED I12A
05.36.04 STC02823 DFS2089I APPC/OTMA SMQ Enablement active. RRS is used I12A
/DIS A DC shows:
05:40:24.77 STC02823 00000090 DFS000I APPC/OTMA SHARED QUEUE STATUS - LOCAL=FORCE-RRS
GLOBAL=ACTIVE-RRS I12A
05:40:24.78 STC02823 00000090 DFS000I APPC/OTMA SHARED QUEUES LOGGING=N I12A
/DIS A DC shows:
999 + APPC/OTMA SHARED QUEUE STATUS - LOCAL=FORCE-RRS GLOBAL=ACTIV
999 E-RRS
999 APPC/OTMA SHARED QUEUES LOGGING=N
Example 4-3 shows the log records that you can find at the front-end.
Example 4-3 Message flow on the front-end of I12A for an RRS scenario
---- ------------------------------------------------------ ----------------
01 Input Message 11.45.04.859413
UTC=11.45.04.859406 TranCode=IVTNO Userid=IMSR6 LTerm=7100
Terminal=7100 OrgUOWID=I12A/C8324BB4B3CEA546 Port=7100
LogToken=C8324BA78ABD2D04 SSN=07 Socket=TRAN CM=1 SL=0 Source=Connect
Message associated with CQSPUT MSGUFLG1=x’80’
----------------------------------------------------------------------------
35 Input Message Enqueue 11.45.04.859425
UTC=11.45.04.859406 TranCode=IVTNO Userid=IMSR6 LTerm=7100
Terminal=7100 OrgUOWID=I12A/C8324BB4B3CEA546 Port=7100
LogToken=C8324BA78ABD2D04 SSN=07 Socket=TRAN CM=1 SL=0
Message enqueued to the shared queue
----------------------------------------------------------------------------
33 Free Message 11.45.04.859694
OrgUOWID=I12A/C8324BB4B3CEA546
----------------------------------------------------------------------------
Example 4-4 shows the log records you can get at the back-end.
Error condition
When an output IOPCB reply is sent, it is always sent before sync point. If the client partner is
unavailable or the output reply cannot be sent, the following actions occur:
If the transaction that replies to the IOPCB executes on the front-end, the default action is
to abend the transaction with a U0119 and discard the output reply. User exit DFSCMUX0
(Message Control/Error Exit Routine) can be implemented to change the default action.
The exit can change the default abort action (except for sync-level of sync point, which
must continue with the abort). This is how errors with synchronous message replies have
been processed for several releases of IMS.
If the transaction that replies to the IOPCB executes in a back-end IMS and the front-end
IMS cannot deliver the message, an RRS Take Backout command is issued which forces
backout processing to occur on the back-end. The back-end aborts the transaction with
U0711-1E.
For more information about coding DFSCMUX0, see IMS Version 12 Communications and
Connections, SC19-3012, and IMS Version 12 Exit Routines, SC19-3016.
The figure shows the general XCF-based process flow for a synchronous SL0 transaction: the client sends the transaction to the front-end IMS, which queues the input to the shared queue; the back-end IMS retrieves the input and processes the message; commit completion is then confirmed between the back-end and front-end by using XCF communications.
Figure 4-5 illustrates the general process flow for a synchronous SL1 transaction.
In the SL1 flow, the client sends the transaction to the front-end IMS, which queues the input to the shared queue; the back-end retrieves the input, processes the message, and sends the reply, which the front-end delivers to the client; the client ACK is then passed back to the back-end by using XCF communications.
Example 4-5 Startup messages and OTMA CM1 SL0 transaction monitoring messages with XCF
First IMS Startup messages I12A AOS=S RRS=Y
09.54.28 STC03112 DFS0653I PROTECTED CONVERSATION PROCESSING WITH RRS/MVS ENABLED I12A
09.54.28 STC03112 DFS2089I APPC/OTMA SMQ Enablement active. XCF and RRS are used I12A
/DIS A DC shows:
09.57.46 STC03112 DFS4444I DISPLAY FROM ID=I12A 924
924 VTAM STATUS AND ACTIVE DC COUNTS
924 VTAM ACB OPEN -LOGONS ENABLED
924 IMSLU=N/A.N/A APPC STATUS=DISABLED TIMEOUT= 0
924 OTMA GROUP=I12XOTMA STATUS=ACTIVE
924 + APPC/OTMA SHARED QUEUE STATUS - LOCAL=FORCE-RRS/XCF GLOBAL=ACTIV
924 E-RRS/XCF
924 APPC/OTMA SHARED QUEUES LOGGING=Y
924 + APPC/OTMA RRS MAX TCBS - 40 ATTACHED TCBS - 2 QUEUED RRSWKS-
924 0
924 APPLID=APPLI12A GRSNAME=IMS12XCF STATUS=ACTIVE
/DIS A DC shows:
10:52:58.00 STC03125 00000090 DFS4444I DISPLAY FROM ID=I12C 598
598 00000090 VTAM STATUS AND ACTIVE DC COUNTS
598 00000090 VTAM ACB OPEN -LOGONS ENABLED
598 00000090 IMSLU=N/A.N/A APPC STATUS=DISABLED TIMEOUT= 0
598 00000090 OTMA GROUP=I12XOTMA STATUS=ACTIVE
598 00000090 + APPC/OTMA SHARED QUEUE STATUS - LOCAL=ACTIVE-XCF GLOBAL=ACTIV
598 00000090 E-RRS/XCF
598 00000090 APPC/OTMA SHARED QUEUES LOGGING=Y
598 00000090 + APPC/OTMA RRS MAX TCBS - 40 ATTACHED TCBS - 2 QUEUED RRSWKS-
598 00000090 0
598 00000090 APPLID=APPLI12C GRSNAME=IMS12XCF STATUS=ACTIVE
Example 4-6 and Example 4-7 show the log records for a message processing where XCF is
used between front-end and back-end. Example 4-6 shows the message flow on the front
end.
Diagnostic improvements
Logging and the /DISPLAY command output have been improved, and a new abend has been
added to fit with the XCF case.
Front-end logging
When the AOSLOG=Y parameter is set on the front-end, or the /DIAGNOSE SET AOSLOG(ON)
command is run, log record x'6701' is cut when the response messages or error messages
are returned from the back-end by using XCF (Example 4-8 on page 123). This is true for all
synchronization levels (none, confirm or syncpt).
With AOSLOG=Y, IMS also generates a new record of TIB3. APAR PM45923 and PTF UK71520
must be applied to have the log record x'6701' cut properly without setting a trace.
Example 4-8 X'6701' diagnostic log record in a front-end system process flow
---- ------------------------------------------------------ ----------------
01 Input Message 17.26.34.001126
UTC=17.26.34.001122 TranCode=IVTNO Userid=IMSR6 LTerm=7100
Terminal=7100 OrgUOWID=I12A/C8329808AB82338C Port=7100
LogToken=C83297F6AD385306 SSN=04 Socket=TRAN CM=1 SL=0 Source=Connect
----------------------------------------------------------------------------
35 Input Message Enqueue 17.26.34.001137
UTC=17.26.34.001122 TranCode=IVTNO Userid=IMSR6 LTerm=7100
Terminal=7100 OrgUOWID=I12A/C8329808AB82338C Port=7100
LogToken=C83297F6AD385306 SSN=04 Socket=TRAN CM=1 SL=0
----------------------------------------------------------------------------
33 Free Message 17.26.34.001508
OrgUOWID=I12A/C8329808AB82338C
----------------------------------------------------------------------------
6701 YSND XCF message sent to OTMA client 17.26.34.007908
UTC=17.26.34.007905 Terminal=7100 Port=7100 LogToken=C83297F6AD385306
SSN=05 ID=YSND
----------------------------------------------------------------------------
For more information about the /DIAGNOSE command, see IMS Version 12 Commands,
Volume 1: IMS Commands A-M, SC19-3009.
By using the /DIAGNOSE command, you can capture diagnostic information for system
resources, such as IMS control blocks, user-defined nodes, or user-defined transactions, at
any time without taking a console dump.
The transaction instance is ended, but the program and transaction are not stopped.
For more information about this abend, see IMS Version 12 Messages and Codes, Volume 3:
IMS Abend Codes, GC19-9714.
New statuses are available for APPC/OTMA SHARED QUEUE STATUS – LOCAL=status1
GLOBAL=status2:
Status1 can be one of the following options:
– ACTIVE-RRS
– ACTIVE-XCF
– ACTIVE-RRS/XCF
– FORCE-RRS
– FORCE-RRS/XCF
– INACTIVE
– UNSUPPORTED
Status2 can be one of the following options:
– ACTIVE-RRS
– ACTIVE-XCF
– ACTIVE-RRS/XCF
– CHECK
– INACTIVE
WAIT-XCF is a new status that is shown when an application program is in the middle of
processing a sync point for an APPC or OTMA transaction with a synchronization level of
NONE or CONFIRM.
TERM-WAIT XCF is another new status that is displayed when a dependent region
termination is in progress and the application in the region is still processing a sync point for
an APPC or OTMA transaction with sync level of NONE or CONFIRM. Sync point can
continue after the client issues either an ACK or NAK. When a dependent region is found in
this state, a continuation line is inserted into the display, which shows either the transaction
member (TMEMBER) and the transaction pipe (TPIPE) for OTMA client or the network ID
(NETWORKID) and the logic unit name (LUNAME) for APPC client, in addition to the
originating IMS system ID (ORIGIN) that is associated with the transaction processing in the
dependent region.
Example 4-9 illustrates the new statuses for the /DIS A REG command when using XCF.
Example 4-9 New statuses for the /DIS A REG command when using XCF
24,/DIS A REG
DFS000I REGID JOBNAME TYPE TRAN/STEP PROGRAM STATUS CLASS IMS1
DFS000I JMPRGN JMP NONE IMS1
DFS000I 1 IMSMPPA TPI APOL11 IMS1 APOL1 WAIT-RRS/PC 1,2,3,4
DFS000I URID: C2D6B6917DE820000000000001010000 ORIGIN: IMS2
DFS000I 2 IMSMPPB TPI APOL12 IMS1 APOL1 TERM-WAIT RRS 1,2,3,4
DFS000I URID: C2D6B6917DE830000000000001010000 ORIGIN: IMS2
DFS000I 3 IMSMPPC TPI APOL13 IMS1 APOL1 WAIT-XCF 1,2,3,4
DFS000I TMEM: HWS1 TPIPE: CLIENT01 ORIGIN: IMS2
DFS000I 4 IMSMPPD TPI APOL14 IMS1 APOL1 TERM-WAIT XCF 1,2,3,4
DFS000I LUNAME: IMSNETWK.LU62IMS1 ORIGIN: IMS2
DFS000I JBPRGN JBP NONE IMS1
DFS000I BATCHREG BMP NONE IMS1
DFS000I FPRGN FP NONE IMS1
DFS000I DBTRGN DBT NONE IMS1
DFS000I DBRICTAB DBRC IMS1
DFS000I DLISDEP DLS IMS1
All measurements were conducted within a stable and isolated environment using a
full-function workload.
The ETR is the actual arrival rate of transactions per second. The ITR is calculated by
determining the throughput if the processor were running at 100% utilization.
Important: Do not confuse this capability with the ACEE caching that is done by the IBM
IMS Connect Extensions tool, which is related to IMS Connect security validation.
ACEE caching
To fully benefit from the availability, scalability, and performance advantages of the z/OS
environment, multiple instances of OTMA member clients (TMEMBERs), such as IMS
Connect, DB2 Stored Procedures, and WebSphere MQ, can run in parallel on multiple
servers or LPARs while still being able to connect to the same IMS.
In previous releases, IMS OTMA isolated the security environment of one member client from
another. This meant that an Access Control Environment Element (ACEE) for a user ID had to
be created for each OTMA member client instance if the same user ID accessed IMS through
more than one path (IMS Connect and WebSphere MQ). Creating multiple copies of the
same ACEE resulted in increased storage usage in subpool 249 in Extended Private storage.
Important: In IMS 12, OTMA caches the ACEE so that only one copy exists for the same
user even when messages from that user are sent in through different member clients. The
cached ACEEs also reside in subpool 249.
For example, if WebSphere MQ sets its ACEE aging value to 5 days while IMS Connect sets
its ACEE aging value to 1 day, any ACEEs that only WebSphere MQ uses have an aging
value of 5 days and any ACEEs that are only invoked through IMS Connect have an aging
value of 1 day. After an ACEE that was previously used only through WebSphere MQ is
invoked on behalf of the same user sending a message through IMS Connect, the aging
value is set to 1 day, the lower of the two values.
This enhancement is important because some OTMA clients, such as the DB2 Stored
Procedure DSNAIMS, set X'7FFFFFFF' seconds (68 years) as the ACEE aging value for
users sending IMS transactions and commands. The enhancement also detects obsolete
ACEEs, improving security.
Challenge addressed
The IMS 11 scenario that is illustrated in Figure 4-6 describes the issue that arises when a user
accesses IMS through several paths. With sysplex distributor and multiple servers, a connection
can be routed through more than one IMS Connect instance for the same IMS system.
Creating multiple ACEEs occupies storage in subpool 249 and requires calls to RACF to
create each instance of an ACEE. For example, we have two IMS Connect member clients
connected to IMS and each one submitted IMS transactions on behalf of the same 10,000
RACF user IDs. In this case, IMS OTMA creates 20,000 RACF ACEEs. Because the same
user sends in messages through WebSphere MQ, a third ACEE is created for that user. The
system storage for multiple copies of ACEEs on behalf of the same user can keep growing.
For example, an IMS input transaction for a user ID entering through one instance of an OTMA
member client application can be rejected by OTMA. However, the same user ID sending in a
message through a different path can be accepted by OTMA, because the ACEE on that path
was created before the profile change and before a refresh of the ACEE was requested.
Figure 4-6 contrasts the two environments: in IMS 11, the same RACF user ID (Suzie in the example) entering through the sysplex distributor, IMS Connect, and WebSphere MQ gets a different ACEE for each OTMA client, which means more storage, more RACF calls to create each instance of an ACEE, and potentially different versions of the ACEE depending on which OTMA client is used; in IMS 12, a single cached ACEE is shared.
However, as shown with the connections to IMS 12, the issues described previously no longer
apply to this environment. The ACEE is created when the first transaction message on behalf of a
user is received, and it is then cached for reuse by subsequent messages from that user.
(Figure: three IMS Connect datastore connections to the same IMS, HWSA with OAAV=300,
HWSB with OAAV=600, and HWSC with OAAV=0; the cached ACEE for user ANGIE has an
aging value of 300, the ACEE for user ISABELLE is cached with an aging value of 600 and then
lowered to 300, and an aging value of 0 produces a one-shot ACEE that is not cached.)
GROUP/MEMBER XCF-STATUS USER-STATUS SECURITY TIB INPT SMEM DRUEXIT T/O ACEEAGE
I12XOTMA
-I12AOTMA ACTIVE SERVER FULL 0 8000 N/A 0
-HWBI12A1 ACTIVE ACCEPT TRAFFIC FULL 0 5000 I12S 120 300
-HWBI12A3 ACTIVE ACCEPT TRAFFIC FULL 0 5000 120 600
-HWBI12A2 ACTIVE ACCEPT TRAFFIC FULL 0 5000 I12S 120 0
-HWBI12C3 NOT DEFINED SMQ BACKEND FULL 0 0 0 0
-I12S SUPER MEMBER I12S
Performance
Because cached ACEEs apply to all OTMA TMEMBER clients, fewer RACROUTE REQUEST=VERIFY
requests are needed to create ACEEs for user IDs when a new OTMA client instance connects,
which also reduces possible RACF I/Os.
When an input transaction message carries a user ID whose ACEE has expired, OTMA uses
an asynchronous request to create a new ACEE for this user ID, which will be kept in the
cache for subsequent transaction requests. For the current transaction request, an ACEE is
created for the authorization process and deleted when the authorization is complete.
From an operational perspective, the /SECURE OTMA REFRESH TMEMBER membername and
/SECURE OTMA REFRESH commands have an identical impact because both commands work on
the single OTMA ACEE table for all users.
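As a quick illustration of these commands (the member name shown is only an example), either
form refreshes entries in the single cached ACEE table:
/SECURE OTMA REFRESH
/SECURE OTMA REFRESH TMEMBER HWSI12A1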
Security
OTMA detects obsolete RACF ACEEs by using an internal IMS timer. Every two minutes,
OTMA performs an ACEE cleanup operation for 10 expired user IDs.
IMS 12 changes the maximum input ACEE aging value to 999999 seconds (about 11.5 days). This
limit can increase the number of RACF I/Os and impact performance if more refreshes are done based
on the aging value.
Mixed environments
In a mixed version environment, the difference in maximum aging value can cause an ACEE
to be refreshed in an IMS 12 system but not in the IMS 10 or IMS 11 environments. (It can be
999999 seconds in IMS 12 and 2147483647 seconds in IMS 10 or 11.)
Benefits
Cached ACEEs reduce system storage requirements while providing better security and
performance, as explained here:
Having only one copy of the ACEE, instead of one per OTMA client, helps reduce
storage usage and security exposures, and improves performance.
The same security result, regardless of which OTMA client is used, provides consistency.
This section explains why the DFS2082 message has been implemented for Commit Mode 0
OTMA transactions and how it is requested.
IMS commit modes: OTMA can control how IMS commits transactions: They can be
either commit-then-send (Commit Mode 0) or send-then-commit (Commit Mode 1).
For CM0 transactions, IMS processes the transaction and commits the data before
sending a response to the OTMA client. It commits the transaction output as part of
sync-point processing, and then delivers the output to the client later.
For CM1 transactions, IMS processes the transaction and sends a response to the
OTMA client before committing the data. It delivers the transaction output first, receives
an acknowledgment from the client, and then completes the syncpoint processing.
When converting remote programs that use CM1 to CM0 (commit-then-send), a problem can
arise. Before IMS 12, a remote program that sends a CM0 transaction message and then
waits for a reply can wait a long time, until a timeout occurs, if the IMS application
does not send any reply to the IOPCB.
This DFS2082 message for a commit-then-send transaction only occurs for the original input
transaction and does not support the program-to-program switch. This restriction means that
there is no DFS2082 message for a switched-to transaction, even if the switched-to transaction
fails to reply and the original transaction does not reply either.
This function eases the CM1-to-CM0 application conversion by reducing the time that
remote applications must otherwise wait for a timeout.
Optional flag: TMAMHRSP can be set in the OTMA state data prefix by OTMA clients such as
the IMS TM Resource Adapter (IMS TM RA).
The new optional flag TMAMHRSP is for OTMA input commit-then-send (CM0) transactions
only. When TMAMHRSP is specified for an input send-then-commit (CM1) transaction, IMS
OTMA ignores it. The DFS2082 message is already supported for OTMA send-then-commit
messages without the need to set any OTMA message prefix flag.
IMS Connect uses this function by introducing a new IRM flag, IRM_F3_DFS2082. With this
flag, the client applications can request the DFS2082 message for a CM0 input transaction.
After the flag is set, if the IMS application neither replies to the IOPCB nor
message-switches to another transaction, OTMA sends a DFS2082 message to the client
regardless of the IMS transaction response mode, so that the client application does not
need to wait for the timeout. This DFS2082 message for a CM0 input occurs only for the
original input transaction and does not support IMS program-to-program switches.
With the enhancement, OTMA performs tpipe validation only when a new tpipe name is
received. For the subsequent transactions using the same tpipe name, OTMA no longer does
tpipe name validation checking. Even for OTMA tmember clients, such as IMS Connect, that
use only one port tpipe for CM1 messages, validation checking is invoked only one time
instead of for every message from that port tpipe.
This enhancement is also available for IMS 10 and IMS 11 with the APARs PM20292 (V10)
and PM20293 (V11).
4.4.2 TM and MSC Message Routing and Control exit routine (DFSMSCE0)
modified for shared queues
Users can also use the DFSMSCE0 TM and MSC Message Routing and Control user exit to
specify that APPC synchronous conversations or OTMA CM1 transactions with a sync level of
NONE or CONFIRM are to be queued with the RRS or the XCF indicator option.
On input to the exit, the following indicators are set:
MSCE3XCF EQU X'80' A global XCF-enabled IMS system.
MSCE3RRS EQU X'40' A global RRS-enabled IMS system.
On output, the exit can set the following indicators:
MSTR2XCF EQU X'02'
Message is to be queued globally to the SQ with the XCF indicator.
Only for a sync level of NONE or CONFIRM.
MSTR2RRS EQU X'01'
Message is to be queued globally to the SQ with the RRS indicator.
Only for a sync level of NONE or CONFIRM.
If both options are set, IMS uses the XCF indicator option and ignores the RRS indicator
option.
If the XCF indicator option is selected, IMS determines whether the message can be queued
globally with the XCF indicator. If not, IMS determines whether the message can be queued
globally with the RRS indicator. If not, IMS queues the message locally.
If the exit selects the RRS indicator option, the input transaction message queuing is
determined as follows:
1. Queue the input globally to the SQ with the RRS indicator if eligible.
2. Queue the input locally with the affinity to the FE system where the input is received.
You can use this exit routine to perform the following tasks:
Change the APPC local LU name of an asynchronous LU 6.2 outbound conversation.
Change the synchronization level of an asynchronous LU 6.2 conversation.
View the contents of a message segment and continue processing.
Change the contents of a message segment and continue processing.
Discard a message segment.
Perform a DEALLOCATE_ABEND of the LU 6.2 conversation.
For input messages, IMS calls the LU 6.2 Edit exit routine for each message segment before
the message segment is inserted to the IMS message queue. The exit routine can edit
message segments as necessary before the application program processes the input
message.
For output messages, IMS calls the LU 6.2 Edit exit routine for each message segment before
the message segment is sent to the LU 6.2 program. The exit routine can intercept the data
sent by the application program and edit it for the particular destination.
For more information, see IMS Version 12 Exit Routines, SC19-3016, and IMS Version 12
Communications and Connections, SC19-3012.
In IMS 12, an enhancement to the LU6.2 Input/Output Edit Exit (DFSLUEE0) supports a new
return code (RC=2) that requests the dequeuing of an undeliverable asynchronous output
message. Previously, IMS always requeued the message.
Important: The new return code RC=2 is only valid for asynchronous conversations.
Transaction expiration
IMS Connect can adjust the expiration time for IMS transactions to match the timeout value of
the socket connection on which the transaction is submitted. If an expiration time is specified,
IMS can detect that the time has passed and discard the input message instead of processing it.
A client application can submit a transaction request to IMS. IMS receives the transaction,
processes it, and sends a reply. If IMS does not have the resources to process the transaction
in the allotted time frame, the client application might time out the transaction call.
However, this transaction request might have been received by an IMS, but by the time the
transaction is processed and a reply is sent, the client application no longer requires the
response message. Processing unwanted transactions in IMS increases processing costs
and CPU cycles.
The transaction expiration function has been introduced for the following versions:
In IMS 10 for OTMA messages with APAR PK74017/UK50901
In IMS 11 for all messages
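The expiration time itself is an attribute of the transaction. As a hedged sketch (this command
is not one of this book's examples, and the transaction name and 120-second value are
hypothetical), it can be set with a type-2 command such as:
UPDATE TRAN NAME(TRANA) SET(EXPRTIME(120))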
Important: The purpose of the IMS Transaction Expiration SPE is to issue message
DFS3688I including tpipe and tmember information instead of DFS555I or DFS2224I when
the transaction expiration time is reached at GU time. See Example 4-11.
Example 4-11 Message DFS3688I sent for expired OTMA message at GU time
DFS3688I Transaction aaaaaaaa expired: EXPRTIME=nnnnnn, ELAPSE=ssssss
Tmember xxxxx Tpipe xxxx
APARs: This enhancement is retrofitted in IMS 11 and IMS 10 with APAR PM05984 and
APAR PM05985 respectively.
Expired non-OTMA messages already receive the message DFS3688I since APAR
PK86426/UK47070 for IMS 11.
For details about how to use the transaction expiration support, see IMS Version 12
Communications and Connections, SC19-3012.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105559
Important: In WebSphere MQ, the Message Expiry facility provides a new service
parameter that indicates whether WebSphere MQ takes advantage of the OTMA
message expiry interface. This new service parameter, SERVICE=0000000001, is specified in
the ZPARM macro CSQ6SYSP or set by using the SET SYSTEM SERVICE command.
With this service parameter, WebSphere MQ can tolerate the receipt of an OTMA
NACK_FOR_TRANS_EXPIRED response when WebSphere MQ passes a message by using
OTMA to IMS.
A unique expiration time can also be set at the MQ message level. In this case, the desired
expiration interval is passed by the application to MQ using the MQMD.Expiry field and the
service parameter should be set. The time is expressed in tenths of a second and can be
thought of as a time to live (TTL) for the message.
In Figure 4-9, from the remote application perspective, the MQPUT application is unaware of
an expiry unless it specifies a Report option. This option can entail the following actions:
Include the generation of an expiry report, which is sent to the specified reply-to queue.
Pass the remaining expiry interval from a request message to a response message.
Discard the expired message.
(Figure 4-9: an application on Linux issues an MQPUT with MQMD.Expiry=100 (10 seconds);
after 4 seconds on the transmission queue the residual expiry is 60 (6 seconds); after a further
0.2 seconds on the IMS bridge queue the remaining 5.8 seconds is rounded so that an
EXPRTIME of 6 seconds is passed to OTMA; IMS checks expiration at input receipt, message
enqueue, and application GU time.)
An attempt to perform an MQGET of a message after its expiration time has passed results
in the MQ message being removed from the queue and the execution of expiry processing.
As an MQ message flows between queue managers in an MQ network, the “time to live”
expiration field is decremented even while the message waits on a transmission queue for
movement between queue managers. MQ expiry processing is driven by the Report options
specified by the application in the MQMD header of the message. The putting application can
request a report on expiration, and a report message with the expiration information is then
sent to the reply-to queue.
If the service parameter for OTMA transaction expiration is set, then WebSphere MQ also
looks to see if an MQ expiry time has been defined for the message. If an MQ expiry time
value exists, then WebSphere MQ calculates the residual value (MQ expiry time minus the
time the message has already spent in MQ) and uses that value when building the OTMA
interface. If no MQ expiry value has been set, then WebSphere MQ does not send a
user-specified value to IMS.
This chapter reviews the new DBRC functions that you can use to add valuable user
information to the recovery control (RECON) data set records, to take advantage of greater
flexibility in user-written skeletal job control language (JCL) with new keys, to perform a
deeper cleanup when removing obsolete information, and to better control change
accumulation (CA) records.
In addition, it explains the DBRC commands and functions that have changed to improve
serviceability. They now eliminate the need to type unnecessary information, and they provide
improved information from some of the LIST commands.
DBRC is not required for IMS batch jobs and for some offline utilities. However, if batch jobs
and utilities that access registered databases are allowed to run without DBRC, the
recoverability and integrity of the databases can be lost. Even if your configuration does not
require the use of DBRC, you can simplify your recovery process by using DBRC to supervise
recovery and protect your databases.
DBRC includes the RECON data sets, the DBRC utility (DSPURX00), and skeletal job control
language (JCL):
DBRC stores recovery-related information in the RECON data sets. DBRC uses two
RECON data sets to increase availability and recoverability. They contain identical
information. If you want to continue operations in dual mode after an error occurs on one
of the two active RECON data sets, you can define a third RECON data set. DBRC does
not use this spare data set unless an error occurs. The RECON data sets are critical
resources for both DBRC and IMS.
The DBRC utility (DSPURX00) is used to issue commands that build and maintain the
RECON data set, add information to the RECON data set, and generate jobs for utilities.
Skeletal JCL members are input models or templates that are stored as members in a
partitioned data set (PDS) and used to generate input for some of the recovery utilities.
DBRC uses the skeletal JCL, information from the RECON data set, and instructions from a
GENJCL command to generate the JCL and control statements that are needed to run some
of the recovery utilities.
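As a minimal sketch of how the DBRC utility is typically run (the data set names are
placeholders and not taken from this book), commands such as LIST.RECON are passed
through SYSIN:
//DBRC     EXEC PGM=DSPURX00
//STEPLIB  DD DISP=SHR,DSN=IMS12.SDFSRESL
//SYSPRINT DD SYSOUT=*
//RECON1   DD DISP=SHR,DSN=IMS12.RECON1
//RECON2   DD DISP=SHR,DSN=IMS12.RECON2
//RECON3   DD DISP=SHR,DSN=IMS12.RECON3
//SYSIN    DD *
 LIST.RECON STATUS
/*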
The RECON data set contains many types of records. Certain records, such as header
records, exist primarily to control processing of the RECON data set. Other records exist to
define the various data sets used in the recovery of database data sets (DBDSs). Still other
records exist to record events related to the use of DBDSs.
RECON considerations
Allocate the RECON data sets with different amounts of space so that if one becomes full, the
system can continue using the other RECON data set while you provide a replacement.
Allocate secondary extents for the RECON data set when you define space for it.
Back up the RECON data sets frequently. They are a critical resource. Always make backup
copies of the RECON data set after performing any RECON record maintenance, such as
registering databases and adding or deleting CA groups.
You need to reorganize the RECON data sets periodically. Many of the record keys in the
RECON data set include the date and time. DBRC recording of IMS log and database activity
can cause control interval (CI) and CA splits, which can degrade performance. In addition,
deleting unnecessary records might not prevent the RECON data set from filling up because
VSAM does not always reuse the space freed.
The NOTIFY.RECOV command adds information about the recovery of the database.
The NOTIFY.REORG command adds a record about the reorganization of the database.
In Example 5-3, the LIST.DBDS command shows how the UDATA is displayed in a report.
ALLOC
ALLOC =10.053 19:07:28.472125 * ALLOC LRID =0000000000000000
DSSN=0000000001 USID=0000000002 START = 10.053 14:51:54.800000
IMAGE
RUN = 10.049 21:12:28.460952 * RECORD COUNT =0
STOP = 00.000 00:00:00.000000 BATCH USID=0000000001
USERDATA= FIRST CYCLE USING IMS NEW VERSION
The DSPAPQCA, DSPAPQIC, DSPAPQRV, and DSPAPQRR DSECTs are changed for the
UDATA enhancement. Therefore, reassemble any user-written programs that use them.
When you issue a GENJCL command, the command uses a skeletal JCL execution member,
which contains symbolic keywords. You can define your own symbolic keywords and use the
symbolic keywords that already exist. DBRC substitutes current information for the symbolic
keywords.
IMS 12 increases the number of user keys in skeletal JCL from 32 to 64, keeping the same
conventions and restrictions applied to earlier versions. In addition, the existing %DBTYPE
key can be used when selecting DBDS allocation.
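For example (a sketch only; the key name, value, and data set names are hypothetical), a
GENJCL command can pass a user-defined key that the skeletal JCL member then references
as %MYKEY:
GENJCL.IC DBD(DBX01) DDN(DBX01DD) USERKEYS((%MYKEY,'TESTVALUE'))
The USERKEYS keyword itself is described next.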
The keyword USERKEYS(%key1,'value' | %key2) is an optional keyword that you use to set
the value of keywords you have defined, where:
%key1 This is a user-defined keyword that is assigned a value. The maximum
length of the keyword is 8 characters, including the percent sign (%).
The first character after the % must be alphabetic (A–Z). The
remaining characters must be alphanumeric (A–Z, 0–9).
'value' This is the value assigned to the user-defined keyword when it is
encountered. The value can be any character string enclosed in single
quotation marks. The maximum length of the value is 132 characters
(excluding the quotation marks). If the value contains a quotation mark, code it as two
single quotation marks.
IMS 12 adds the ability to delete CA execution record information by using the new CAGRANGE,
CAONLY, and LASTCA keywords. In addition, IMS 12 issues the message DSP0115I, instead
of setting a fatal return code, when a DELETE.LOG INACTIVE or TOTIME command is issued
and a LOGALL record does not exist for the inactive log.
For more examples of the CLEANUP.RECON command, see IMS Version 12 Commands,
Volume 3: IMS Component and z/OS Commands, SC19-3011.
Example 5-6 shows a DELETE.LOG command that deletes the records of all inactive
recovery log data sets (RLDSs) and system log data sets (SLDSs) with a start time older
than the time specified with the TOTIME keyword.
Evaluate the usage of the CLEANUP.RECON command first on a copy of the RECONs, to avoid
unintended deletions and to estimate the amount of time needed to process the command.
The last CA execution record for a CA group is deleted only when specifically requested.
To improve control of the relevant information stored in the CA group record, IMS 12 provides
the optional RECOVPD keyword, which can be used to set the recovery period for a specified
CA group. The recovery period is the amount of time before the current date for which
DBRC maintains CA information in the RECON data set.
The CHANGE.CAGRP and INIT.CAGRP commands are affected, which you use to modify
information contained in a specified CA group record. Figure 5-6 shows the new syntax of
CHANGE.CAGRP command. For the syntax of other DBRC commands, see IMS Version 12
Commands, Volume 3: IMS Component and z/OS Commands, SC19-3011.
The CHANGE.CAGRP command has the RECOVPD(0 | value) option. You use this optional
keyword to modify the recovery period for a specified CA group. The recovery period is the
amount of time before the current date for which DBRC maintains CA information in the
RECON data set. For example, if the recovery period of a CA group is 14 days, DBRC
maintains sufficient CA execution records for at least 14 days.
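A hedged sketch of such a command (the group name and the 14-day value are illustrative
only):
CHANGE.CAGRP GRPNAME(CAGRP01) RECOVPD(14)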
To determine whether a CA execution record falls within the recovery period, subtract the
RECOVPD value from the current time. Any CA execution records with stop times that are
newer than the calculated time are kept in the RECON data set.
If you issue the CHANGE.CAGRP command and specify GRPMAX and RECOVPD values
that are less than the existing values, any used CA data sets with stop times that are beyond
the recovery period are deleted until the number of remaining CA data sets equals the
specified GRPMAX value.
If you issue the DELETE.CA command, any specified CA data set record is deleted regardless
of the RECOVPD or GRPMAX values.
If the GRPMAX limit is reached, but the RECOVPD for the oldest CA record has not expired,
DBRC issues an informational message (DSP1232I) and does not discard the record. If the
DSP1232I message appears frequently, you might need to tune the GRPMAX or RECOVPD
values by using the CHANGE.CAGRP command.
Attention: If the GRPMAX value is lowered by using the CHANGE.CAGRP command, the
GRPMAX value is recorded regardless of whether the oldest CA data sets can be deleted
because they are within the recovery period.
To reduce the unnecessary information required to create and manipulate image copy and
CA records, IMS 12 makes the VOLLIST keyword parameter optional if the RECON status
record indicates that the data sets are to be treated as cataloged (that is, CATDS is in effect).
The INIT.RECON command has the following mutually exclusive, optional keywords that you
use to indicate whether image copy, CA, and log data sets are cataloged:
NOCATDS This keyword specifies that these data sets, regardless of whether
they are cataloged, are not to be treated as cataloged. DBRC verifies
that the volume serial and file sequence numbers appearing in the job
file control block are the same as the information recorded in the
RECON data set.
CATDS This keyword specifies that these data sets are cataloged or
SMS-managed. If the data set is allocated by the catalog and the
CATDS option is used, DBRC bypasses volume serial and file
sequence verification for the data set. For the CATDS option to be
effective, the data set must be cataloged, and volume serial
information for the data set must be omitted from the JCL.
– If the data set is cataloged, CATDS is specified, and volume serial
information is included in the JCL, DBRC ignores CATDS and
allocates the data set by using the JCL. Normal volume serial and file
sequence checking occurs.
– If the data set is not cataloged, CATDS is not effective, and DBRC
allocates the data set by using the JCL, with volume serial and file
sequence checking. If log data sets are SMS-managed, select the
CATDS option and remove the %LOGVOLS keyword from skeletal
JCL member CAJCL.
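As an illustrative sketch (not one of this book's examples; the subsystem ID is hypothetical),
the CATDS attribute can be set when the RECON data sets are initialized, or changed later:
INIT.RECON SSID(IMSA) CATDS
CHANGE.RECON CATDS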
Unlike earlier versions, IMS 12 does not allow the INIT.IC command to be used without
VOLLIST when NOCATDS is in effect. In this sense, the command is now more restrictive.
IMS 12 allows an output larger than 32 K for the /RMLIST online command, but only when the
command is entered through the Operations Manager (OM) API. The output size is restricted
by the DBRC private storage available for buffering the output message or OM limitations.
IMS 12 provides the NORCVINF keyword for the LIST.DB and LIST.DBDS commands, which
suppresses recovery-related information. That is, ALLOC, IC, RECOV and REORG records
are not listed, which reduces command output.
IMS 12 adds full-precision timestamps and more information about HALDB databases, such
as the active DBDSs, the DD names of inactive DBDSs, and the current reorganization
number for each partition, to the LIST.HISTORY command output.
In the LIST.RECON command output, IMS 12 includes the number of registered databases,
which helps you control the DBRC limit of 32,767 registered databases.
The LIST.DB command has the NORCVINF option. This option suppresses recovery-related
records (ALLOC, IC, RECOV, and REORG) for those DBDSs or areas in the RECON data set
that are associated with the specified database. If the LIST.DB command output is truncated
and message DSP0057I is returned, you can specify the NORCVINF keyword to minimize the
size of the output.
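For example (the database name is hypothetical), the following command lists the database
registration without the recovery-related records:
LIST.DB DBD(DBX01) NORCVINF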
Example 5-10 shows the output of the LIST.HISTORY command, which includes full precision
timestamps.
...
DSP0180I NUMBER OF RECORDS LISTED IS 4
DSP0203I COMMAND COMPLETED WITH CONDITION CODE 00
DSP0220I COMMAND COMPLETION TIME 11.208 19:16:28.844639
Example 5-11 shows the LIST.RECON STATUS command to verify how many databases are
already registered into the RECON data sets.
...
NUMBER OF REGISTERED DATABASES = 8
5.4.1 Coexistence
IMS 12 can coexist with IMS 10 and IMS 11, so that existing applications and data can be
used without change. However, coexistence considerations apply to each of the IMS versions.
For example, in a mixed-version environment, cross-system coupling facility (XCF) use by
APPC synchronous conversations and OTMA CM1 (send-then-commit) transactions is not
available.
Considerations
The MINVERS level must be set to the lowest level of IMS that uses or shares the RECON
data sets.
DBRC applications compiled with Version 1.0 DSPAPI macros work without modification or
reassembly with Version 2.0 of the DBRC API. However, these applications cannot use any of
the newer functions or options that are supported in Version 2.0 macros, which are available
only with IMS 10 and later.
5.4.2 Migration
IMS 12 requires you to upgrade the RECON data set before you start a control region. A new
CHECKUP keyword was introduced on the CHANGE.RECON UPGRADE command to inform you in
advance whether the RECON data set from an earlier version is in a state that allows an
upgrade. The benefit is that you can react and take the appropriate actions before actually
doing the upgrade, and thereby avoid facing unexpected conditions.
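A minimal sketch of the two steps, checking first and then performing the upgrade (the
keyword placement shown is an assumption based on the description above):
CHANGE.RECON UPGRADE CHECKUP
CHANGE.RECON UPGRADE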
If you have to coexist with earlier versions of IMS, you must apply the DBRC migration or
coexistence APARs. You are not required to change the MINVERS value to '12.1' when you
migrate to IMS 12. Change this value only after you verify that you do not need to coexist with
an earlier version of IMS. When you do not need to fall back to an earlier version and you
need to use new functions, the MINVERS value must be set to '12.1'.
Before you issue the CHANGE.RECON UPGRADE command against the production RECON data
sets, upgrade a copy of the production RECON data sets to verify how long it will take to
perform the upgrade. Issue the CHANGE.RECON UPGRADE command by using either the IMS 12
DBRC Recovery Control utility (DSPURX00) or the IMS 12 DBRC Command API request.
Issuing it as an IMS online command is not allowed.
After this command successfully completes, DBRC sets the value for MINVERS (the
minimum version of IMS that can sign on to DBRC) to '10.1' if the value was less than '10.1'.
Ensure that you have two active RECON data sets (COPY1 and COPY2) and a spare data
set when you upgrade the RECON data sets while other jobs are accessing them.
Upgrading from IMS 10 can increase the size of the RECONs. Evaluate whether the current
space allocation can accommodate the IMS 12 records.
The upgrade process reads all database records to ensure that the high-order bit is on in all
DMB numbers. If the high-order bit is not on (this should not occur), the bit is turned on if the
database is not authorized and the DSP1235W message is displayed:
DSP1235W THE INTERNAL REPRESENTATION OF THE DMB NUMBER FOR DATABASE xxxxxxxx IS
INCORRECT
The high-order bit is not turned on if the database is authorized; instead, the upgrade fails and
the DSP1236E message is displayed:
DSP1236E THE INTERNAL REPRESENTATION OF THE DMB NUMBER FOR DATABASE xxxxxxxx COULD
NOT BE CORRECTED BECAUSE THE DATABASE IS AUTHORIZED
Attention: After RECONs are upgraded to IMS 12, IMS 11 or IMS 10 Log Archive jobs use
additional memory because versions of RECON records are kept in memory. Use
REGION=0M for IMS 10 and IMS 11 archive jobs.
The need for a customer-written gateway is removed with IMS 12. You can now define a
remote IMS system in IMS Connect and an Open Transaction Manager Access (OTMA)
descriptor in IMS (Figure 6-1). This way, an application program can push a message to the
remote system by using a normal sequence of CHNG, ISRT, and PURG calls. (PURG is valid
only for an EXPRESS=YES PCB.)
(Figure 6-1: the existing method, in which a customer gateway application uses ISRT to an
ALTPCB and a resume tpipe to move messages between IMS1 and IMS2, compared with the
new method, in which an OTMA descriptor in IMS1 routes the message through IMS Connect1
over TCP/IP to IMS Connect2, which delivers the transaction output to IMS2 as a send-only
client.)
You also need a matching entry in the other system (Example 6-2) so that the remote IMS
can send replies. The reply message is not sent back to the originating transaction in the local
IMS.
Example 6-3 defines the same OTMA descriptors created with the static definitions in
Example 6-1.
Example 6-4 defines the same OTMA descriptors created with the static definitions in
Example 6-2.
Notice that, in these examples, we used OPTION(WILDCARD) as part of that command. With any
OTMA descriptor, we can use an asterisk (*) at the end of the name of the descriptor.
Therefore, a descriptor called OTMAD* matches OTMAD1 or OTMADX on a CHNG call. To
display that one descriptor, we use NAME(OTMAD*) with the OPTION(NOWILDCARD). If we want the
command to look for any descriptors that start with OTMAD, we must specify NAME(OTMAD*)
with OPTION(WILDCARD). OPTION(NOWILDCARD) is the default and is supported for the DELETE
OTMADESC command.
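A hedged sketch of the wildcard behavior described above (the descriptor name is
illustrative):
QUERY OTMADESC NAME(OTMAD*) OPTION(NOWILDCARD)
QUERY OTMADESC NAME(OTMAD*) OPTION(WILDCARD)
The first form displays only the descriptor that is literally named OTMAD*; the second form
displays every descriptor whose name starts with OTMAD.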
Example 6-8 Local IMS Connect TCP/IP, data store, and remote IMS definitions
TCPIP=(EXIT=(HWSSMPL1),HOSTNAME=TCPIP,PORT(ID=7301))
DATASTORE=(GROUP=I12XOTMA,ID=I12C,
MEMBER=HWBI12C3,TMEMBER=I12COTMA)
RMTIMSCON=(ID=HWSI12D1,
HOSTNAME=WTSC64.ITSO.IBM.COM,PORT=7401,
AUTOCONN=Y,PERSISTENT=Y,
IDLETO=60000,RESVSOC=10)
Example 6-9 Remote IMS Connect TCP/IP, data store, and remote IMS definitions
TCPIP=(EXIT=(HWSSMPL1),HOSTNAME=TCPIP,PORT(ID=7401))
DATASTORE=(GROUP=I12XOTMA,ID=I12D,
MEMBER=HWBI12D4,TMEMBER=I12DOTMA)
RMTIMSCON=(ID=HWSI12C1,
HOSTNAME=WTSC63.ITSO.IBM.COM,PORT=7301,
AUTOCONN=Y,PERSISTENT=Y,
IDLETO=60000,RESVSOC=10)
Example 6-16 Output from the QRY IMSCON TYPE(RMTIMSCON) NAME(HWSI12C1) SHOW(ALL) command
Response for: QUERY IMSCON TYPE(RMTIMSCON) NAME(HWSI12C1) SHOW(ALL)
RmtImsCon MbrName CC CCText IpAddress HostName
Port AutoConn Persist IdleTO ResvSoc NumSoc Appl UserID Status
TotSClnts TotRecv TotConn TotXmit TotOther SendClnt SendUID Second SendPort SendStatus
---------------------------------------------------------------------------------------------------------
HWSI12C1 HWSI12D1 0 9.12.6.70 WTSC63.ITSO.IBM.COM
7301 Y Y 60000 10 1 ACTIVE
1 0 1 0 0
Example 6-19 Output from QRY IMSCON TYPE(SENDCLNT) NAME(*) SHOW(ALL) command
Response for: QUERY IMSCON TYPE(SENDCLNT) NAME(*) SHOW(ALL)
SendClnt MbrName CC UserID MscName Second SendPort RmtImsCon Status
-------------------------------------------------------------------------------------
OTMA9644 HWSI12C1 0 IMSR3 94 8148 HWSI12D1 CONN
OTM73C4B HWSI12D1 0 IMSR3 207 35616 HWSI12C1 CONN
When you create the OTMA descriptor, you need to add the new SMEM(Y) parameter and
change the value for the TMEMBER parameter to be the super member name. Example 6-20
shows the modified CREATE OTMADESC command. We can also activate super member support
on an existing OTMA descriptor with the UPDATE OTMADESC command.
If you define the OTMA descriptor in the IMS.PROCLIB(DFSYDTx) member, you must add
SMEM=YES (the default is SMEM=NO) in that OTMA descriptor definition (Example 6-21).
For an explanation of how to define USERID=xxx, APPL=yyy, and RACF PTKTDATA definitions to
create a secure connection, see 6.2.6, “IMS Connect security using RACF PassTickets” on
page 186.
(Figure: IMS1 defines OTMA descriptor DESC1 with TMEMBER=ICON1, RMTIMSCON=ICON2,
RMTIMS=IMS2, RMTTRAN=TRANABC, and USERID=USERXYZ; the application ISRTs to an
ALTPCB and the message flows over XCF to ICON1, whose configuration contains
RMTIMSCON=(ID=ICON2,HOSTNAME=ICON2.IBM.COM,PORT=9999); ICON2 is configured with
HWS=(ID=ICON2,XIBAREA=100,RACF=Y),
TCPIP=(HOSTNAME=TCPIP,PORTID=(9999),MAXSOC=50,TIMEOUT=5000,EXIT=(HWSSMPL0,HWSSMPL1)),
and DATASTORE=(ID=IMS2,GROUP=XCFGRP1,MEMBER=ICON2,TMEMBER=IMS2,DRU=HWSYDRU0,APPL=APPLID1).)
ICON1 has a RMTIMSCON definition that defines ICON2 and gives the TCP/IP host name (or IP
address) and the PORT of the remote IMS Connect. ICON2 is configured as normal with a
TCP/IP statement that defines the PORT it should listen on and a DATASTORE to define the
OTMA connection to its local IMS system IMS2.
In this example, if we need IMS2 to send transactions back to IMS1, we must add an
RMTIMSCON in ICON2 and an OTMA descriptor in IMS2.
In addition, a new function in IMS Connect establishes network connectivity between two IMS
systems using TYPE=TCPIP MSPLINK. Figure 6-3 shows a general overview of how the TCP/IP
MSC link support in IMS 12 is configured.
(Figure 6-3: two IMSplexes, PLEX1 and PLEX2, connected by the TCP/IP MSC link through
their IMS Connect instances.)
There are several changes to the MSPLINK macro to support a TCP/IP MSC link. The NAME
parameter now identifies the remote IMS system and must match the RMTIMS value in the MSC
statement of the IMS Connect configuration. Two additional parameters, LCLICON and
LCLPLKID, identify the local IMS Connect and the MSC statement in the IMS Connect
configuration that defines the link.
Example 6-22 shows the IMS stage1 macros that are needed to define an MSC link using
TCP/IP for the local IMS system. The MSLINK and MSNAME macros have no special
requirements.
Example 6-22 IMS stage 1 macros for a TCP/IP MSC link (local system)
LINKBA MSPLINK TYPE=TCPIP,NAME=I12B,SESSION=2,BUFSIZE=4096, X
LCLICON=HWSI12A1, X
LCLPLKID=MSC2A2B
MSLINK PARTNER=BA,MSPLINK=LINKBA
MSCBA MSNAME SYSID=(102,101)
Example 6-23 IMS stage 1 macros for a TCP/IP MSC link (remote system)
LINKBA MSPLINK TYPE=TCPIP,NAME=I12A,SESSION=2,BUFSIZE=4096, X
LCLICON=HWSI12B1, X
LCLPLKID=MSC2B2A
MSLINK PARTNER=BA,MSPLINK=LINKBA
MSCBA MSNAME SYSID=(101,102)
Restriction: When the first TYPE=TCPIP MSPLINK is added to your IMS stage1, you must
run an ALL or NUCLEUS system generation and restart IMS.
The relationship between the MSPLINK macro and the MSC configuration statements in IMS
Connect is created by the IMSPLEX, TMEMBER, NAME, LCLPLKID, LCLICON, and LCLIMS
specifications.
The IMS MSVERIFY utility and the /MSVERIFY command are enhanced in IMS 12 to support the new
TCP/IP MSC links. To use them, you must add an MSVID operand to the
IMSCTRL macro before running a stage1 or stage2 generation. This method causes additional
modules to be written to the MODBLKS data set. You can then run the MSVERIFY utility
(Example 6-24).
The output from running the MSVERIFY utility shows the configuration and assignments that
are active when the linked IMS systems are restarted (Example 6-25).
...
Example 6-26 Output from the /DIS ASSIGNMENT MSPLINK ALL command
DFS4444I DISPLAY FROM ID=I12A
LINK PLINK TYPE ADDR MAXSESS NODE
1 LINKAB VTAM 36000007 2 APPLI12B
2 LINKBA TCPIP **** 2 I12B
3 LINKAC VTAM 00000000 2 APPLI12C
4 LINKCA TCPIP **** 2 I12C
5 LINKAD VTAM 58000006 2 APPLI12D
6 LINKDA TCPIP **** 2 I12D
*11214/165528*
Example 6-28 shows the MSPLINK being stopped, updated to have a new local IMS Connect
LCLPLKID, and then restarted.
Example 6-31 IMS Connect configuration for a TCP/IP MSC link (local system)
HWS=(ID=HWSI12A1,PSWDMC=N,RACF=N,RRS=Y,UIDCACHE=Y,XIBAREA=50)
TCPIP=(EXIT=(HWSSMPL1,HWSSOAP1),HOSTNAME=TCPIP,PORT(ID=7102))
MSC=(LCLPLKID=MSC2A2B,RMTPLKID=MSC2B2A,
LCLIMS=I12A,RMTIMS=I12B,
IMSPLEX=(MEMBER=HWSI12A1,TMEMBER=IM12X),
RMTIMSCON=HWSI12B1)
RMTIMSCON=(ID=HWSI12B1,
HOSTNAME=WTSC64.ITSO.IBM.COM,PORT=7202,
PERSISTENT=Y,RESVSOC=1)
Example 6-32 IMS Connect configuration for a TCP/IP MSC link (remote system)
HWS=(ID=HWSI12B1,PSWDMC=N,RACF=N,RRS=Y,UIDCACHE=Y,XIBAREA=50)
TCPIP=(EXIT=(HWSSMPL1,HWSSOAP1),HOSTNAME=TCPIP,PORT(ID=7202))
MSC=(LCLPLKID=MSC2B2A,RMTPLKID=MSC2A2B,
LCLIMS=I12B,RMTIMS=I12A,
IMSPLEX=(MEMBER=HWSI12B1,TMEMBER=IM12X),
RMTIMSCON=HWSI12A1)
RMTIMSCON=(ID=HWSI12A1,
HOSTNAME=WTSC63.ITSO.IBM.COM,PORT=7102,
PERSISTENT=Y,RESVSOC=1)
The relationship between the MSC and RMTIMSCON configuration statements in IMS Connect is
created by the RMTPLKID, RMTIMS, RMTIMSCON, and ID specifications.
The number of reserved sockets (RESVSOC) is determined by the number of logical links
(MSLINK macros) that are defined for the MSC link in the IMS stage1 and by the number of
existing MSLINK macros that are assigned, by using the /MSASSIGN command, to the new
TCP/IP MSC link.
The use of AUTOCONN=N, PERSISTENT=N, and a non-zero IDLETO is disallowed for RMTIMSCON
connections used by MSC. IMS Connect resets the values (Example 6-33).
Notice that IMS reacted by disconnecting the MSC link, as shown in message DFS2169I.
Example 6-44 Output from the QRY IMSCON TYPE(CONFIG) SHOW(ALL) command
Response for: QUERY IMSCON TYPE(CONFIG) SHOW(ALL)
MbrName CC Version IconID IPAddress MaxSoc TimeOut NumSoc WarnSoc WarnInc UidCache
UidAge RACF PswdMc RRS RRSStat Recorder SMem Cm0Atoq Adapter ODBMAC
ODBMTO
---------------------------------------------------------------------------------------------------------
HWSI12A1 0 V12 HWSI12A1 009.012.006.070 50 0 5 80 5 Y
2147483647 N N Y REGISTERED N Y Y
18000
HWSI12B1 0 V12 HWSI12B1 009.012.006.009 50 0 5 80 5 Y
2147483647 N N Y REGISTERED N Y Y
18000
HWSI12C1 0 V12 HWSI12C1 009.012.006.070 50 0 5 80 5 Y
2147483647 N N Y REGISTERED N Y Y
18000
HWSI12D1 0 V12 HWSI12D1 009.012.006.009 50 0 5 80 5 Y
2147483647 N N Y REGISTERED N Y Y
18000
Example 6-45 Output from the QRY IMSCON TYPE(MSC) NAME(*) SHOW(ALL) command
Response for: QRY IMSCON TYPE(MSC) NAME(*) SHOW(ALL)
MscName MbrName CC CCText RmtPlkID LclIMS RmtIMS GenIMSID
Affin IpMember IMSplex RmtImsCon IpAddress HostName Port
Status Link Partner SendClnt RecvClnt LinkStatus
-----------------------------------------------------------------------------------------------------
MSC2A2B HWSI12A1 0 MSC2B2A I12A I12B
HWSI12A1 IM12X HWSI12B1 9.12.6.9 WTSC64.ITSO.IBM.COM 7202
ACTIVE
MSC2A2B HWSI12A1 0
Example 6-46 Output from the QRY IMSCON TYPE(MSC) NAME(MSC2B2A) SHOW(LINK) command
Response for: QRY IMSCON TYPE(MSC) NAME(MSC2B2A) SHOW(LINK)
MscName MbrName CC CCText Link Partner SendClnt RecvClnt
LinkStatus
MSC2B2A HWSI12B1 0
Example 6-47 Output from the QRY IMSCON TYPE(RMTIMSCON) NAME(*) SHOW(ALL) command
Response for: QRY IMSCON TYPE(RMTIMSCON) NAME(*) SHOW(ALL)
RmtImsCon MbrName CC IpAddress HostName Port AutoConn Persist IdleTO
ResvSoc NumSoc Appl UserID Status TotSClnts TotRecv TotConn TotXmit TotOther
SendClnt LclPlkID Second SendPort SendStatus
---------------------------------------------------------------------------------------------------------
HWSI12B1 HWSI12A1 0 9.12.6.9 WTSC64.ITSO.IBM.COM 7202 N Y 0
10 1 ACTIVE 1 0 1 0 0
HWSI12B1 HWSI12A1 0
HWSI12A1 HWSI12B1 0
Example 6-48 Output from the QRY IMSCON TYPE(SENDCLNT) NAME(*) SHOW(ALL) command
Response for: QUERY IMSCON TYPE(SENDCLNT) NAME(*) SHOW(ALL)
SendClnt MbrName CC UserID MscName Second SendPort RmtImsCon Status
-------------------------------------------------------------------------------------
MSC15F4C HWSI12A1 0 MSC2A2B 3030 8137 HWSI12B1 CONN
MSC1C844 HWSI12B1 0 MSC2B2A 3030 35474 HWSI12A1 CONN
Example 6-50 Output from the UPD IMSCON TYPE(LINK) NAME(x) MSC(m) STOP(COMM) command
Response for: UPDATE IMSCON TYPE(LINK) NAME(DFSL0002) MSC(MSC2A2B) STOP(COMM)
Link MscName MbrName CC CCText
---------------------------------------------------------------------------------------------------------
DFSL0002 MSC2A2B HWSI12A1 0
Example 6-55 Output from UPD IMSCON TYPE (SENDCLNT) NAME(cl) RMTIMSCON(rmt) STOP(COMM)
Response for: UPDATE IMSCON TYPE(SENDCLNT) NAME(MSCCCAC6) RMTIMSCON(HWSI12A1) STOP(COMM)
SendClnt RmtImsCon MbrName CC CCText
MSCCCAC6 HWSI12A1 HWSI12B1 0
(Figure: an overview of the TCP/IP MSC and RMTIMSCON relationships, with IMS2 and its
local IMS Connect connected through SCI in one IMSplex, and ICON3 and IMS4 in IMSplex 3
on MVS3, each side defining matching MSC and RMTIMSCON statements for the TCP/IP
links.)
In the new queue sharing IMS system, we add the MSPLINK, MSLINK, and MSNAME as shown in
Example 6-56. The existing definition remains unchanged.
Example 6-56 IMS stage 1 macros for a TCP/IP MSC link (shared queues)
LINKCB MSPLINK TYPE=TCPIP,NAME=I12B,SESSION=2,BUFSIZE=2048, X
LCLICON=HWCF12C1, X
LCLPLKID=MSC2C2B
MSLINK PARTNER=CB,MSPLINK=LINKCB
MSCBC MSNAME SYSID=(102,103)
If this is the first time you add an MSC link with TYPE=TCPIP, you must run a stage1 or stage2
ALL or NUCLEUS system definition and restart IMS to make the MSC link available.
You must also update the IMS.PROCLIB(DFSDCxxx) member for both queue sharing IMS
systems to define their generic IMS ID. This update requires an IMS WARM restart to activate
the generic IMS ID. Example 6-57 shows the new specification.
This generic IMS ID is only needed in the queue sharing systems. The /DISPLAY ACTIVE
command shows the generic IMS ID when it is activated.
Example 6-58 Output from the UPD MSPLINK NAME(*) START(GENLOGON) command
Response for: UPD MSPLINK NAME(*) START(GENLOGON)
MSPLink MbrName CC CCText
LINKBA I12A 0
LINKCB I12C 0
Example 6-59 Output from the UPD MSPLINK NAME(*) STOP(GENLOGON) command
Response for: UPD MSPLINK NAME(*) STOP(GENLOGON)
MSPLink MbrName CC CCText
LINKBA I12A 0
LINKCB I12C 0
MSC=(LCLPLKID=MSC2C2B,RMTPLKID=MSC2B2C,
LCLIMS=I12C,RMTIMS=I12B,
GENIMSID=ACIM,
IMSPLEX=(MEMBER=HWSI12A1,TMEMBER=IM12X),
RMTIMSCON=HWSI12B1)
RMTIMSCON=(ID=HWSI12B1,
HOSTNAME=WTSC64.ITSO.IBM.COM,PORT=7202,
AUTOCONN=Y,RESVSOC=10)
The shared queues system that is running for I12A and I12C now has a generic IMS ID of
ACIM. A single IMS Connect is used by both IMS systems.
RMTIMSCON=(ID=HWSI12A1,
HOSTNAME=WTSC63.ITSO.IBM.COM,PORT=7102,
AUTOCONN=Y,RESVSOC=10)
The RMTIMSCON definitions are unchanged but are included in both examples for clarity.
You must create PTKTDATA and APPL definitions in the RACF database. At the local system,
you need PTKTDATA so that system can generate a PassTicket (Example 6-62).
At the remote system, create PTKTDATA (used to verify the incoming PassTicket) and an
APPL, and permit the remote user ID read access to the APPL (Example 6-63).
SETROPTS CLASSACT(PTKTDATA)
SETROPTS RACLIST(PTKTDATA)
RDEFINE PTKTDATA APPLI12B SSIGNON(KEYMASKED(E001193519561977)) UACC(N)
SETROPTS REFRESH RACLIST(PTKTDATA)
SETROPTS CLASSACT(APPL)
SETROPTS RACLIST(APPL)
RDEFINE APPL APPLI12B UACC(N)
PERMIT APPLI12B ACCESS(READ) CLASS(APPL) ID(USER002)
SETROPTS RACLIST(APPL) REFRESH
RLIST APPL APPLI12B AU
KEYMASKED value: The value used for KEYMASKED must be identical on both systems.
IMS1 has MSPLINK with the new parameters for TCP/IP. The NAME matches the RMTIMS
parameter defined on the MSC definition in the local IMS Connect.
The LCLICON matches the MEMBER parameter of the IMSPLEX statement in the MSC definition
in the local IMS Connect. The LCLPLKID matches the LCLPLKID parameter in the MSC
definition.
The local IMS Connect has the MSC definition and the RMTIMSCON definition. On the MSC
definition, the LCLIMS name must match the IMSID of the local IMS system. The RMTIMS
defines the IMS ID of the remote system (or the generic name if GENIMSID is used). The
RMTIMSCON name must match the ID parameter of the RMTIMSCON definition. On the RMTIMSCON
definition you define the ID and the TCP/IP parameters so that you can find the remote
system within the network.
The remote IMS Connect has the MSC definition with RMTIMS, LCLIMS, LCLPLKID, and
RMTIMSCON. It also has the RMTIMSCON definition to define the other IMS Connect.
Of the new records, those capturing the entire message, and therefore generating a lot of
data, are only recorded in the Base Primitive Environment (BPE) External trace. They have
the following IDs (eyecatchers):
ICONTR TCP/IP Receive
ICONTS TCP/IP Send
ICONIR IMS OTMA Receive
ICONIS IMS OTMA Send
The new records generating small amounts of data are captured by using the existing, old
method. They have the IDs (eyecatchers):
ICONMS MSC send
ICONMR MSC receive
ICONRR Remote IMS Connect to local IMS Connect
The existing trace points are still recorded in both the new recorder trace and the existing
HWSRCORD trace:
ICONRC: User Msg Exit Receive
ICONSN: User Msg Exit XMIT
The support for a BPE-based recorder trace was included in IMS 11.
Tip: Remove the existing recorder trace, and replace it with the new BPE-based trace.
2. Update the BPE configuration to add a TRCLEV and EXTTRACE definition to define the
GDG base for the new recorder trace (Example 6-66).
3. Define the GDG base data set by using IDCAMS or option 6 of ISPF (Example 6-67).
4. Restart IMS Connect, which then creates a recorder trace data set.
New commands
When you want to activate the recorder trace, you must issue a modify command
(Example 6-68) that is actioned by BPE. Regardless of the specification in the BPE
configuration, the recorder trace is always inactive when IMS Connect starts. It can only be
activated by an explicit UPDATE TRACETABLE command.
To terminate the recorder trace, you issue another BPE UPDATE command (Example 6-69).
Example 6-71 shows the QUERY MEMBER (or equivalent VIEWHWS) command with the current
values for UIDCACHE and UIDAGE.
The REXX application was amended to send in a bad password. The IMS Connect
configuration member (HWSCFG) was updated to specify RACF=Y. In addition, the REXX
error handler was updated to display the header when IMS Connect returns a *REQSTS* RSM.
Example 6-73 shows the new REXX code used to handle the *REQSTS*. RSM_RetCod and
RSM_RsnCod were already being interpreted.
Example 6-74 shows that, after the RACF error, IMS Connect returns LLLL=24 (x'00000018')
because HWSSMPL1 is used, with RSM_Len=20 (x'14'), RSM_FLG1=x'00', and
RSM_RACFRC=x'08'.
IMS Connect in IMS 12 now has a function to deal with clients that have not completed their
send of data or that passed the wrong length value to the IMS Connect exit. You can
see which clients are waiting with a partial read.
Example 6-75 shows the output from a VIEWPORT command with a client using HWSSMPL1
that has an invalid full length value (LLLL). The client hangs waiting for IMS Connect, and IMS
Connect is waiting for the client to send the missing data.
Example 6-75 Output from the VIEWPORT command with a partial read client
R 822,VIEWPORT 7400
IEE600I REPLY TO 822 IS;VIEWPORT 7400
HWSC0001I PORT=7400 STATUS=ACTIVE KEEPAV=0 NUMSOC=2 EDIT=
TIMEOUT=0
HWSC0001I CLIENTID USERID TRANCODE DATASTORE STATUS
SECOND CLNTPORT IP-ADDRESS APSB-TOKEN
HWSC0001I DELDUMMY READ
386 9180 127.000.000.001
HWSC0001I TOTAL CLIENTS=1 RECV=0 READ=1 CONN=0 XMIT=0 OTHER=0
Example 6-76 shows the same status by using the QUERY IMSCON command from the IMS
batch SPOC. You can also use the QRY IMSCON TYPE(PORT) NAME(7400)
SHOW(STATUS,CLIENT) command to reduce the response to just the information you need.
Example 6-76 Output from the QRY IMSCON TYPE(PORT) command with a partial read client
Response for: QRY IMSCON TYPE(PORT) NAME(7400) SHOW(ALL)
Port MbrName CC CCText KeepAv NumSoc Edit TimeOut
Status TotClnts TotRecv TotRead TotConn TotXmit TotOther ClientID UserID Trancode DataStore
CStatus Second ClntPort IpAddress ApsbToken
---------------------------------------------------------------------------------------------------------
7400 HWSI12D1 0 0 2 0
ACTIVE 1 0 1 0 0 0
7400 HWSI12D1 0
DELDUMMY
READ 262 9180 127.0.0.1
Example 6-77 shows usage of the STOPCLNT command to terminate the connection from the
remote client and break the hang condition.
Example 6-78 The UPD IMSCON TYPE(CLIENT) command to terminate a partial read client
Response for: UPD IMSCON TYPE(CLIENT) NAME(DELDUMMY) PORT(7400) STOP(COMM)
ClientID Port MbrName CC CCText
----------------------------------------------------------------------------
DELDUMMY 7400 HWSI12D1 0
In IMS 12, the exits are repackaged as load modules and sample source. Users who
want to modify the IBM-supplied samples can still do so. Customers who assemble and bind
the current samples without modification can remove a task from their migration and
implementation plans.
If you need to assemble the sample source for any reason, the JCL used for this task
must include IMS.SDFSMAC, SYS1.MACLIB, and SYS1.MODGEN. Otherwise, the assembly
will fail.
HLISIOT HiLocIconSendIOTime The longest interval of time that the local IMS Connect instance is required
to process a message from SCI and send it to TCP/IP.
HLSSIOT HiLocSciSendIOTime The longest interval of time that the local SCI instance is required to
process a message from IMS and send it to the local IMS Connect.
HRISIOT HiRmtIconSendIOTime The longest interval of time that the remote IMS Connect instance is
required to process a message from TCP/IP and send it to the remote SCI.
HRSSIOT HiRmtSciSendIOTime The longest interval of time that the remote SCI instance is required to
process a message from the remote IMS Connect instance and send it to
the remote IMS system.
HTCSIOT HiTcpipSendIOTime The longest interval of time that a message is required to travel from the
local IMS Connect instance to the remote IMS Connect instance on the
TCP/IP network.
LLISIOT LowLocIconSendIOTime The shortest interval of time that the local IMS Connect instance is required
to process a message from SCI and send it to TCP/IP.
LRISIOT LowRmtIconSendIOTime The shortest interval of time that the remote IMS Connect instance is
required to process a message from TCP/IP and send it to the remote SCI.
LRSSIOT LowRmtSciSendIOTime The shortest interval of time that the remote SCI instance is required to
process a message from the remote IMS Connect instance and send it to
the remote IMS system.
LTCSIOT LowTcpipSendIOTime The shortest interval of time that a message is required to travel from the
local IMS Connect instance to the remote IMS Connect instance on the
TCP/IP network.
TLISIOT TotLocIconSendIOTime The total amount of time that the local IMS Connect instance is required to
process all messages from SCI and send them to TCP/IP.
TLSSIOT TotLocSciSendIOTime The total amount of time that the local SCI instance is required to process
all messages from IMS and send them to the local IMS Connect.
TRISIOT TotRmtIconSendIOTime The total amount of time that the remote IMS Connect instance is required to
process all messages from TCP/IP and send them to the remote SCI.
TRSSIOT TotRmtSciSendIOTime The total amount of time that the remote SCI instance is required to
process all messages from the remote IMS Connect instance and send
them to the remote IMS system.
TTCSIOT TotTcpipSendIOTime The total amount of time that all messages are required to travel
from the local IMS Connect instance to the remote IMS Connect instance
on the TCP/IP network.
Type-2 command                                               WTOR reply       Modify command
QUERY IMSCON TYPE(DATASTORE) NAME(*) SHOW(ALL | showparm)    VIEWDS ALL       QUERY DATASTORE NAME(*) SHOW(ALL)
QUERY IMSCON TYPE(MSC) NAME(*) SHOW(ALL | showparm)          VIEWMSC ALL      QUERY MSC NAME(*)
QUERY IMSCON TYPE(MSC) NAME(mscid) SHOW(ALL | showparm)      VIEWMSC mscid    QUERY MSC NAME(mscid)
QUERY IMSCON TYPE(PORT) NAME(*) SHOW(ALL | showparm)         VIEWPORT ALL     QUERY PORT NAME(*) SHOW(ALL)
QUERY IMSCON TYPE(PORT) NAME(LOCAL) SHOW(ALL | showparm)     VIEWPORT LOCAL   QUERY PORT NAME(LOCAL) SHOW(ALL)
QUERY IMSCON TYPE(RMTIMSCON) NAME(*) SHOW(ALL | showparm)    VIEWRMT ALL      QUERY RMTIMSCON NAME(*)
QUERY IMSCON TYPE(UOR) NAME(*) SHOW(ALL | showparm)          VIEWUOR ALL      QUERY UOR NAME(*) SHOW(ALL)
QUERY IMSCON TYPE(UOR) NAME(uorid) SHOW(ALL | showparm)      VIEWUOR uorid    QUERY UOR NAME(uorid) SHOW(ALL)
Table 6-8 IMS Connect type-2 set, reset, and refresh commands
Type-2 command WTOR reply Modify command
Dynamic resource definition (DRD) exploits this new function by using the IMSRSC type of
repository to simplify management of MODBLKS resources among multiple IMS systems.
DRD also maintains the resource definitions of all IMS systems in a single centralized store.
The IMSRSC repository is a strategic alternative to the resource definition data set (RDDS).
If you are already using DRD with separate RDDS for each IMS system, you can perform a
few simple migration steps to begin using the repository instead. This will eliminate the need
to manually coordinate and manage several RDDSs across different IMS systems. If you have
never implemented DRD, you can use the definitions that exist in your MODBLKS data set as
a starting point, eventually porting them to the new repository.
Similar to the RDDS, the IMSRSC repository contains MODBLKS definitions of the following
resources:
Databases
Database descriptors
Programs
Program descriptors
Routing codes
Routing code descriptors
Transactions
Transaction descriptors
Implementation of the repository is in alignment with the future direction of IMS in that it
provides a simplified, more dynamic method of managing resource definitions. Using DRD
with the repository provides the highest availability for your IMS system MODBLKS
resources. You no longer need to perform sysgen or online change to manage these
resources. The repository centralizes all of these definitions in a single location.
(Figure 7-1: IMS1 and IMS2 connect through the Structured Call Interface (SCI) to the
Operations Manager (OM) and the Resource Manager (RM); RM and the batch ADMIN utility
(FRPBATCH) communicate over XCF with the Repository Server (RS), which maintains an
audit log and the primary and secondary IMSRSC repository data sets.)
The Repository Server is responsible for managing registrations and connections to the
repository. It ensures data integrity within the repository and restricts access to the repository
so that only authorized users can retrieve data from it. The RS also has an audit trail
capability, which you can set up to track certain events that occur when the repository is
accessed. Finally, because the RS is based on the Base Primitive Environment (BPE), it
provides a BPE tracing capability.
As shown in Figure 7-1, the RS communicates with the Resource Manager component of the
Common Service Layer using z/OS cross-system coupling facility (XCF) services to process
incoming requests from RM and from the batch ADMIN utility. The RM utilities can be used to
populate the repository and to read from the repository. The RM utilities are useful in
migration and fallback because they work with both the RDDS and repository. The batch
ADMIN utility performs administration tasks for both the user and the RS catalog repository.
Both types of repositories consist of two pairs of VSAM key-sequenced data sets (KSDS): a
primary repository data set pair and a secondary repository data set pair. Each data set pair
is made up of an index data set, which contains all of the search fields for the members that
exist in the repository (including the member name), and a member data set, which contains
the member data that is indexed by the index data set. In addition to its primary and
secondary repository data sets, the user repository can have a spare repository data set pair
for availability.
RS catalog repository: The RS catalog repository does not have spare capability.
The following sections explore these two types of repositories further, starting with the RS
catalog repository followed by the IMSRSC repository.
The catalog repository must be defined before any user repositories are defined, and it will
consist of two pairs of VSAM key sequenced data sets: a primary pair and a secondary or
duplex pair.
Like the RS catalog repository, the IMSRSC repository consists of a primary and secondary
pair of VSAM KSDS repository data sets, but it also has an optional spare pair. If a repository
write failure occurs on the primary or secondary data sets, the data set that still contains valid
data is copied to the spare data set and the failed copy is marked as discarded. In the event of
a read error, the remaining valid data set is read and no copy to spare is initiated. For
information about the recovery process, see “Recovery activities” on page 257.
You can have more than one IMSRSC repository per Repository Server, for example, for test
and production IMS systems. However, one Repository Server per IMSplex is recommended,
so typically you have one IMSRSC repository per Repository Server per IMSplex.
So far, the IMSRSC repository has been described in general terms as containing
MODBLKS stored resource definitions. The following sections examine the contents of the
IMSRSC repository from a structural perspective.
Repository data
An IMSRSC repository keeps track of the MODBLKS stored resource definitions for each IMS
system in the IMSplex. So, how does it do this? For each DRD-enabled IMS system that is
using the repository, the following two entities are in the IMSRSC repository:
IMS resource lists
These lists contain the names of resources that are defined to each IMS. Up to eight
resource lists can be in the repository for each IMS, one for each resource type: DB,
DBDESC, TRAN, TRANDESC, PGM, PGMDESC, RTC, and RTCDESC. If an IMS does
not have a resource of a certain type, the resource list for that type of resource does not
exist in the repository. A DBCTL system does not have transaction or routing code
resource lists or the corresponding descriptor resource lists.
Resource definitions
The resource definition of each resource is defined in the repository. The resource
definition consists of the generic definition that applies to all IMS systems that have the
resource defined. If an IMS system has one or more attributes that differ from the generic
definition, a specific definition is maintained for that IMS. The entire resource definition,
with its generic and specific sections, is stored as one entity: the key and other search
information are kept in the repository index data set, and the attribute information is kept
in the repository member data set.
You can query the resource definitions from the repository and the IMS systems that own the
resources by using the type-2 QUERY command for the resources in question. For more
information, see “QUERY for resources and descriptors” on page 234, and Example 7-37 on
page 234.
When IMS restarts, it reads its own IMS resource list in the repository to determine the list of
resource names to be autoimported. The stored resource definitions of the resources owned
by the IMS are read from the repository during autoimport. When these definitions are read
into the running IMS system, they are referred to as runtime resource definitions.
Important: SCI is not used for communication between RM and the Repository Server.
Communication between these two address spaces is handled by using z/OS XCF
services.
A single point of control (SPOC) is also required for entering certain types of repository
commands. For example, you can issue the EXPORT DEFN TARGET(REPO) command by using a
SPOC to write stored resource definitions to the repository.
Repository Server in command output: The Repository Server can be included in the
command output for a QUERY IMSPLEX command, where it is shown as an active member of
the IMSplex. To be included in this command output, however, the RS must first register
with SCI; registration is optional. This is the only instance in which SCI accesses the RS
component of the repository environment. The RS registers to SCI if IMSPLEX(NAME=) is
specified in the FRPCFG member.
If you are already familiar with the CSL and with the RM address space in particular, you
might be aware of the ability of RM to work in conjunction with a coupling facility resource
structure for certain functions. If a resource structure is available in your repository
environment, it will be used to provide repository name and repository type consistency within
the IMSplex.
Other utilities, provided by RM, are available for assistance in migrating to the repository
environment and for falling back from it. The RDDS to Repository utility (CSLURP10) populates
a user repository with the contents of a specified RDDS, and the Repository to RDDS RM
utility (CSLURP20) reverts definitions from a repository back to an RDDS.
For information about both of these utilities, see 7.8.2, “Offline access through RM utilities” on
page 228, and “Batch ADMIN commands” on page 241.
This section explores the implementation steps for using the repository in IMS 12 for both
environments: one in which DRD has already been implemented with RDDSs, and one in
which DRD will be implemented for the first time.
To begin, the following section outlines the general setup steps that apply to both of these
environments.
Regardless of the repository data set type, remember that each repository data set consists
of an index data set and a member data set. The IMSRSC repository can contain a spare in
addition to the primary and secondary data set, but a spare is not permitted with the
Repository Server catalog repository. Therefore, a maximum of 10 data sets must be defined:
two each for the primary, secondary and spare IMSRSC repository data sets, and two each
for the primary and secondary RS catalog repository data sets. If a spare is not used, then the
user only needs eight data sets.
Tip: Define the primary, secondary and spare data set pairs on different volumes, to
ensure availability. Also make sure that the size of the secondary index and member data
sets is greater than the size of the primary index and member data sets. Finally, the size of
the spare index and member data sets should be greater than the size of the secondary
index and member data sets.
In addition, for our RS catalog repositories we used the following data set names to maintain
consistency with our naming convention:
Primary Repository Server catalog repository index data set:
IMS12Q.IMS12X.REPO.CATPRI.RID
Primary Repository Server catalog repository member data set:
IMS12Q.IMS12X.REPO.CATPRI.RMD
Secondary Repository Server catalog repository index data set:
IMS12Q.IMS12X.REPO.CATSEC.RID
Secondary Repository Server catalog repository member data set:
IMS12Q.IMS12X.REPO.CATSEC.RMD
Example 7-1 JCL for allocating user repository and RS catalog repository data sets
//**********************************************************************
//* FUNCTION: ALLOCATE DATA SETS NEEDED FOR THE REPOSITORY USAGE FOR DRD
//**********************************************************************
//RDSALLOC JOB ACTINFO1,'PGMRNAME',CLASS=A,MSGCLASS=H,MSGLEVEL=(1,1),
// NOTIFY=&SYSUID,REGION=128M
//*
/*JOBPARM S=SC64
// JCLLIB ORDER=IMS12Q.PROCLIB
//*
//* ALLOCATE DATA SETS
//ALLOCATE EXEC PGM=IDCAMS,DYNAMNBR=200
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DEFINE CLUSTER(NAME(IMS12Q.IMS12X.REPO.CATPRI.RID) REUSE INDEXED -
KEYS(128,0) FREESPACE(10 10) RECORDSIZE(282 282) -
SHAREOPTIONS(2 3) CONTROLINTERVALSIZE(8192) -
VOLUMES(SBOXI5) CYLINDERS(1 1))
DEFINE CLUSTER(NAME(IMS12Q.IMS12X.REPO.CATPRI.RMD) REUSE INDEXED -
KEYS(12,0) FREESPACE(20 20) RECORDSIZE(8185 8185) -
SHAREOPTIONS(2 3) CONTROLINTERVALSIZE(8192) -
VOLUMES(SBOXI5) CYLINDERS(1 1))
DEFINE CLUSTER(NAME(IMS12Q.IMS12X.REPO.CATSEC.RID) REUSE INDEXED -
KEYS(128,0) FREESPACE(10 10) RECORDSIZE(282 282) -
SHAREOPTIONS(2 3) CONTROLINTERVALSIZE(8192) -
VOLUMES(SBOXI5) CYLINDERS(2 1))
DEFINE CLUSTER(NAME(IMS12Q.IMS12X.REPO.CATSEC.RMD) REUSE INDEXED -
KEYS(12,0) FREESPACE(20 20) RECORDSIZE(8185 8185) -
SHAREOPTIONS(2 3) CONTROLINTERVALSIZE(8192) -
VOLUMES(SBOXI5) CYLINDERS(2 1))
DEFINE CLUSTER(NAME(IMS12Q.IMS12X.REPO.IMSPRI.RID) REUSE INDEXED -
KEYS(128,0) FREESPACE(10 10) RECORDSIZE(282 282) -
SHAREOPTIONS(2 3) CONTROLINTERVALSIZE(8192) -
Now that the user repository data sets and RS catalog repository data sets have both been
allocated, the next step is to set up the Repository Server.
Example 7-2 shows the BPE configuration member that we used in our test environment for
our Repository Server address space.
Example 7-2 BPE configuration member for Repository Server address space
# DEFINITIONS FOR REPO TRACES
TRCLEV=(*,HIGH,REPO,PAGES=300)/* DEFAULT ALL TRACES TO HIGH */
For more information about BPE, see IMS Version 12 System Administration, SC19-3020.
Important: The IMSplex that you specify in the FRPCFG configuration member must be
the same IMSplex that you specify in your Common Service Layer configuration member
(either in the CSL section in the DFSDFxxx member or in the DFSCGxxx member) and the
CSL initialization members for SCI, RM, and OM.
Also note in the example that we have specified the names of the RS catalog repository
primary and secondary data sets that we allocated in the previous section. The RSNAME=
parameter represents the REPOID that will be appended to all messages issued by the
Repository Server. The characters “RP” will be added as a suffix to the RSNAME that you
specify. Our REPOID will be IMS12XRP because we specified RSNAME=IMS12X. Because
IMSPLEX(NAME=) is specified, RS will attempt to register to the SCI address space on the same
LPAR as the RS if SCI is available.
Another important parameter to specify is the XCF group name, which must be the same in
the RM CSLRIxxx member, on the RS FRPCFG parameter XCF_GROUP_NAME, and on the
XCFGROUP= parameter in the FRPBATCH JCL.
Important: The XCF group name specified on the XCF_GROUP_NAME= parameter in the
FRPCFG member must match the XCF group name specified in both the RM initialization
member CSLRIxxx and in the FRPBATCH (batch ADMIN) utility JCL that defines the user
repositories to the RS catalog repository (using the ADD function).
For a complete description of the other parameter values included in the FRPCFG
configuration member, see IMS Version 12 System Administration, SC19-3020.
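Putting the parameters described above together, the FRPCFG member contains entries of the
following general form. This is a minimal sketch that uses the values from this chapter; the RS
catalog repository data set name parameters and any other required parameters are not shown,
and the IMSplex name is a placeholder:
IMSPLEX(NAME=plexname)
RSNAME=IMS12X
XCF_GROUP_NAME=IM12XREP
AUDIT_DEFAULT=READ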
Next, we must create the startup procedure JCL for the Repository Server, so that we can
later start this address space as a task. Example 7-4 shows the RS startup procedure that we
used in our test environment.
Resource Manager: If you are already using the SCI and OM components of the CSL in
your environment, you only need to complete the Resource Manager setup steps in this
section. You can skip directly to “Resource Manager” on page 211.
Example 7-5 shows the SCI initialization member that we used in our test system.
Example 7-6 shows the SCI startup procedure that we used in our test system (used later
when we start the SCI address space).
Operations Manager
Define the CSLOIxxx PROCLIB member and the startup procedure JCL for this address
space. Later you start the OM address space as a task using the startup procedure, which
then reads the initialization member that you defined.
Example 7-7 shows the OM initialization member that we used in our test system.
Example 7-8 shows the OM startup procedure that we used in our test system (used later
when we start the OM address space).
Resource Manager
Define the CSLRIxxx PROCLIB member and the startup procedure JCL for this address
space. Later you start the RM address space as a task using the startup procedure, which
then reads the initialization member that you defined.
Example 7-9 shows the RM initialization member that we used in our test system.
The REPOSITORY section shown after the other RM initialization parameters is new for
IMS 12. Here, we define the name of the IMSRSC repository, the XCF group name, and
optional audit log access parameters. There can only be one REPOSITORY= statement. The
parameter values (new in this IMS release) have the following meanings:
REPOSITORY=() This value defines the IMS repository parameters for RM initialization. It is
specified within a section with the header <SECTION=REPOSITORY>.
NAME= This value specifies the repository name that is managed by RM. This name
must be same as the repository name defined to the RS. A repository is
defined to the RS with the batch ADMIN utility ADD command (see “ADD
command” on page 241). The repository name can be up to 44 characters
long and can contain the alphanumeric characters (A–Z, 0–9) and the
following symbols: period (.), at sign (@), number sign (#), underscore (_),
and dollar sign ($). The alphabetic characters A–Z can be uppercase only.
TYPE= This value specifies the repository type. The only valid value is IMSRSC.
GROUP= This value specifies the Repository Server z/OS XCF group name. This value
must be the same as the XCF group name specified on the
XCF_GROUP_NAME parameter of the FRPCFG member. RM and the RS
must be in the same XCF group. The value must be eight characters padded
on the right with blanks. Valid characters are A-Z (uppercase only), 0 - 9, and
the following symbols: number sign (#), dollar sign ($), and at sign (@).
AUDITACCESS= This is an optional parameter. It specifies the repository audit access level for
the specified repository. If this value is not specified, the audit access level
defaults to the value of the AUDIT_DEFAULT parameter in the FRPCFG member.
You can specify AUDITACCESS= in the CSLRIxxx member if you have more than one IMSRSC
repository being managed by the RS address space and want to have different audit level
specifications for each repository. Example 7-10 shows the RM startup procedure that we
used in our test system (executed later when we started the RM address space).
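Based on these parameter descriptions, the REPOSITORY section of a CSLRIxxx member
might look like the following sketch, which uses the repository name and XCF group name
from this chapter (AUDITACCESS is optional and is shown only for illustration):
<SECTION=REPOSITORY>
REPOSITORY=(NAME=IMS12XRP,TYPE=IMSRSC,GROUP=IM12XREP,AUDITACCESS=READ)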
Setting up IMS
Upon initialization, IMS reads the DFSDFxxx PROCLIB member (Example 7-11) to apply
specific processing options for various functions. This member contains multiple sections that
apply to these functions. Here, we focus on the sections that apply to the CSL, DRD, and
repository, again with emphasis on the parameters that are new in IMS 12.
For each of the following sections, refer back to Example 7-11 on page 213. For more
information about other parameters that are not expanded upon here, see IMS Version 12
System Definition, GC19-3021.
Define the following parameters in the CSL section of the DFSDFxxx member for repository
enablement:
MODBLKS=DYN to indicate DRD usage (versus MODBLKS online change)
RMENV=Y to indicate that an RM address space will be used
In addition, specify AUTOIMPORT=REPO (in the dynamic resources section of the DFSDFxxx
member) so that the stored resource definitions are automatically imported from the repository
at IMS initialization; see the sketch after this list. Otherwise, automatic import is attempted from
the RDDS or MODBLKS if certain other conditions are true.
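A minimal sketch of these DFSDFxxx settings follows. The section names and parameter
placement reflect the standard DFSDFxxx layout and are shown here as an assumption; other
required parameters are omitted:
<SECTION=COMMON_SERVICE_LAYER>
MODBLKS=DYN
RMENV=Y
<SECTION=DYNAMIC_RESOURCES>
AUTOIMPORT=REPO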
The AUTOIMPORT=REPO parameter specifies that stored resource and descriptor definitions are
imported (automatic import) from the IMSRSC repository. In this case, the CSLRIxxx member
and the REPOSITORY section of the DFSDFxxx member must also be defined with the
TYPE=IMSRSC parameter. Take into account the following considerations when the
AUTOIMPORT=REPO parameter is specified:
If the repository does not contain the stored resource definitions for the IMS, then IMS
comes up with no resources. IMS issues the DFS4404I message if the repository is empty.
If the IMS resource list in the repository is not empty, IMS processes the resource
definitions returned by the RM.
If an error occurs while processing the returned definitions, the action that is taken is
based on the IMPORTERR= parameter setting.
If there is an error reading from the repository, other than if the IMS resource list is not
found, a DFS4401E message is issued with the RM return and reason code. Action is taken
based on the REPOERR= parameter setting.
If the AUTOIMPORT=REPO parameter is specified and no REPOSITORY section is defined, or
the REPOSITORY= statement for the repository is not defined, the DFS4403E message is
issued. IMS initialization abends with U0071 with return code X'27'. The DFS2930 message
is issued with completion code 27,2108 before the abend.
If the AUTOIMPORT=REPO parameter is specified and RMENV=N is specified in the CSL section,
IMS initialization abends with U0071 with return code X'27', because IMS cannot access
the repository for the stored resource definitions. IMS requires the CSL RM address space
to access the repository. The DFS2930 message is issued with completion code 27,210C
before the abend.
For detailed information about automatic import from these other sources, see IMS Version
12 System Definition, GC19-3021.
If you have not yet enabled DRD, you can disregard these parameters in the sample
because your environment does not have them currently defined. For more information
about the post-migration items mentioned here, see 7.7.9, “Cleaning up the DFSDFxxx
member” on page 227.
Repository section
There is a new repository section flagged by a <SECTION=REPOSITORY> header that
specifies the type of IMS repository that will be used. As mentioned earlier, the type of
repository used with DRD is TYPE=IMSRSC, which is currently the only valid option. There can
be only one TYPE= statement within this section.
The second parameter is the actual name of the IMSRSC repository that holds the DRD
definitions for the IMSplex. Figure 7-2 illustrates each parameter in more detail.
The parameter for the IMSRSC repository specifies the IMSRSC repository name that is
managed by RM. The repository name can be up to 44 characters long. The name of the
IMSRSC repository is defined in the RM initialization PROCLIB member (CSLRIxxx), and
must match the repository name specified when you invoke the batch ADMIN utility.
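As a sketch, and assuming that NAME= is the statement used for the repository name
(following the CSLRIxxx convention), the repository section of the DFSDFxxx member looks
similar to the following, using the repository name from this chapter:
<SECTION=REPOSITORY>
TYPE=IMSRSC
NAME=IMS12XRP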
Now that you have defined the PROCLIB members required for the repository environment,
you can begin initializing the address spaces associated with them.
Next, we initialized our OM address space named IM12DOM using the startup procedure JCL
in Example 7-8 on page 210.
Example 7-13 Defining an IMSRSC repository to the RS catalog repository with FRPBATCH
//DEFNREPO JOB ACTINFO1,
// 'PGMRNAME',
// CLASS=A,
// MSGCLASS=H,MSGLEVEL=(1,1),
// NOTIFY=IMSR2,
// REGION=128M
//*
/*JOBPARM S=SC64
// JCLLIB ORDER=IMS12Q.PROCLIB
//*
//REPOADD EXEC PGM=FRPBATCH,PARM='XCFGROUP=IM12XREP'
//STEPLIB DD DISP=SHR,DSN=IMS12Q.SDFSRESL
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
ADD REPOSITORY(IMS12XRP) +
REPDSN1RID(IMS12Q.IMS12X.REPO.IMSPRI.RID) +
REPDSN1RMD(IMS12Q.IMS12X.REPO.IMSPRI.RMD) +
REPDSN2RID(IMS12Q.IMS12X.REPO.IMSSEC.RID) +
REPDSN2RMD(IMS12Q.IMS12X.REPO.IMSSEC.RMD) +
REPDSN3RID(IMS12Q.IMS12X.REPO.IMSSPR.RID) +
REPDSN3RMD(IMS12Q.IMS12X.REPO.IMSSPR.RMD) +
AUTOOPEN(YES)
START REPOSITORY(IMS12XRP)
//*
You can also control whether the repository data sets you are specifying with this command
are opened when the repository is started (AUTOOPEN(YES) is the default, which we have
included for clarity) or when a user first connects to it (AUTOOPEN(NO)). Lastly, after adding our
user repository to the RS catalog repository, we started it with the START command.
Now that we have defined our IMSRSC repository to the RS catalog repository, we can
initialize the RM address space.
When the RM address space is initializing, RM attempts to connect to the repository name
specified on the REPOSITORY= statement in the CSLRIxxx member. In our case, it connects to
the IMSRSC repository defined to the RS catalog repository (Example 7-13 on page 217).
RM then issues message CSL2500I; Example 7-14 shows the message that we received when
we initialized RM in our test environment. If the repository is empty and this is the first time an
RM has initialized it, you also see a CSL2501I message.
Attention: If you are already using RM in your shop, you do not need to restart the RM
address space to enable it for repository usage. Instead, you can issue a type-2 UPDATE RM
command to dynamically enable it using the following syntax:
UPDATE RM TYPE(REPO) REPOTYPE(IMSRSC) SET(REPO(Y))
Ensure that your CSLRIxxx and DFSDFxxx members have been defined with the required
repository enablement parameters before issuing this command.
With APAR PM41952 applied: If you started RM before the RS is started, RM initialization
will issue a CSL2502A highlighted message and will attempt to register to RS every
5 seconds until it is successful or RM is terminated. Also, if RS is started, but the repository
name RM is trying to connect to is not defined to the RS or is not available, RM will issue a
CSL2503A highlighted message and attempt to connect to the repository every 5 seconds.
Now that RM has been started, we can begin working with the IMSRSC repository in various
ways. We begin by populating the IMSRSC repository with resource definitions.
Tip: Choose the most current system RDDS for input to the RDDS to Repository utility
(CSLURP10). You can determine which system RDDS is the most current by browsing each
RDDS and viewing the timestamp information, which is included in clear text in the header
record. Determine the system RDDS names by viewing the RDDSDSN=() parameter in the
DFSDFxxx member of the IMS system.
To test this utility in our environment, we used the most current system RDDS associated with
our IM12D system, which was IMS12Q.IMS12D.RDDS1. Example 7-15 shows the JCL that
we used to run the RDDS to Repository utility (CSLURP10).
After we ran the utility, we browsed the output to confirm that the definitions were successfully
copied from the specified RDDS to the IMSRSC repository, as shown in Example 7-16.
DB COUNT : 15
DBDESC COUNT : 0
PGM COUNT : 53
PGMDESC COUNT : 0
RTC COUNT : 3
RTCDESC COUNT : 0
If you had already implemented DRD in your shop and you want to populate the IMSRSC
repository while IMS is active, you can do so dynamically by using a type-2 EXPORT command,
as explained in the following section.
Important: Route the command to the IMS system whose runtime resource definitions you
want to capture. If you do not specify any command routing, ensure that you include the
IMSID on the SET(IMSID()) parameter.
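For example, a command of the following general form writes all of the runtime resource
definitions of the targeted IMS to the repository. This is a sketch only; narrow the NAME() and
TYPE() filters as appropriate for your environment:
EXPORT DEFN TARGET(REPO) NAME(*) TYPE(ALL)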
We issued this type of EXPORT command and routed it to our repository-enabled IMS
(I12D); the output is shown in Figure 7-3.
Attention: Migration from MODBLKS online change to DRD requires an IMS cold start
and some other general setup. For more information about migrating from MODBLKS
online change to DRD, see IMS Version 12 System Definition, GC19-3021.
Currently no utility is available that uses a MODBLKS data set as input and directly generates
an equivalent IMSRSC repository. Two utilities must be used: one that creates a non-system
RDDS from MODBLKS, and the other that generates an IMSRSC repository from the
non-system RDDS created with the first utility.
As you can see, a non-system RDDS must be used as a temporary bridge between the
MODBLKS data set and the IMSRSC repository. Begin by allocating a non-system RDDS for
use with these utilities. Then use the Create RDDS from MODBLKS utility (DFSURCM0) to
generate an RDDS with contents equivalent to your MODBLKS data set. You can then use this
RDDS, in turn, as input to the RDDS to Repository utility (CSLURP10) to generate the
equivalent stored resource definitions in the IMSRSC repository.
To populate our IMSRSC repository with the stored definitions in our MODBLKS, we ran the
JCL shown in Example 7-18. This JCL invoked the Create RDDS from MODBLKS utility
(DFSURCM0) and the RDDS to Repository utility (CSLURP10).
Tip: The SUFFIX= parameter (can be abbreviated as SUF=) shown in Example 7-18
specifies a 1-character value for the suffix that is associated with the members in the
MODBLKS data set. Ensure that this parameter is specified correctly by confirming the
suffix character for the members that exist within your MODBLKS data set. The SUFFIX
default value is 0.
Example 7-18 Populating an IMSRSC repository from MODBLKS using DFSURCM0 and CSLURP10
//POPREPOS JOB ACTINFO1,
// 'PGMRNAME',
// CLASS=A,
// MSGCLASS=H,MSGLEVEL=(1,1),
// NOTIFY=IMSR2,
// REGION=128M
//*
/*JOBPARM S=SC64
// JCLLIB ORDER=IMS12Q.PROCLIB
//*********************************************************************
//* FUNCTION: Populate data into the IMS repository
//*********************************************************************
//STEP1 EXEC PGM=DFSURCM0
//STEPLIB DD DSN=IMS12Q.SDFSRESL,DISP=SHR
//MODBLKS DD DISP=SHR,DSN=IMS12Q.IMS12D.MODBLKSA
//RDDSDSN DD DSN=IMSR2.TEMPRDDS,DISP=SHR,
// UNIT=SYSDA,VOL=SER=SBOX79,
// SPACE=(CYL,(1,1),RLSE),
// DCB=(LRECL=32756,BLKSIZE=32760,RECFM=VB)
//SYSPRINT DD SYSOUT=*,
// DCB=(LRECL=133,BLKSIZE=6118,RECFM=FBA)
//REPORT DD SYSOUT=*,
// DCB=(LRECL=133,BLKSIZE=6118,RECFM=FBA)
//CONTROL DD *
IMSID=I12D
DB COUNT : 15
DBDESC COUNT : 0
PGM COUNT : 50
PGMDESC COUNT : 0
RTC COUNT : 3
RTCDESC COUNT : 0
TRAN COUNT : 35
TRANDESC COUNT : 0
DB DUPLICATES: 0
DBDESC DUPLICATES: 0
PGM DUPLICATES: 0
PGMDESC DUPLICATES: 0
RTC DUPLICATES: 0
RTCDESC DUPLICATES: 0
TRAN DUPLICATES: 0
TRANDESC DUPLICATES: 0
Attention: If your IMS system is DRD-enabled and you have been using RDDSs up to this
point, you do not need to restart your IMS system to enable it for repository usage. Instead,
you can issue a type-2 UPDATE IMS command to dynamically enable it using the following
syntax:
UPDATE IMS SET(LCLPARM(REPO(Y) REPOTYPE(IMSRSC)))
Ensure that your DFSDFxxx member has been defined with the required repository
enablement parameters before issuing this command. Afterwards, if you have not already
populated your repository, you can use the EXPORT command to export all of the runtime
resource definitions of your IMS to it at this time.
Example 7-20 Batch ADMIN LIST command to display the information of a single IMSRSC repository
//LISTREPO JOB ACTINFO1,
// 'PGMRNAME',
// CLASS=A,
// MSGCLASS=H,MSGLEVEL=(1,1),
// NOTIFY=IMSR2,
// REGION=128M
//*
/*JOBPARM S=SC64
// JCLLIB ORDER=IMS12Q.PROCLIB
//*
//* FUNCTION: List the information of the IMS repository
//REPOLST EXEC PGM=FRPBATCH,PARM='XCFGROUP=IM12XREP'
//STEPLIB DD DISP=SHR,DSN=IMS12Q.SDFSRESL
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
LIST REPOSITORY(IMS12XRP)
//*
Example 7-21 Output for the batch ADMIN LIST REPOSITORY command
LIST REPOSITORY(IMS12XRP)
Repository Name . : IMS12XRP
There is also a flavor of the batch ADMIN LIST command that displays general information
about all user repositories that are defined to the RS. In Example 7-20 on page 225, simply
replace the LIST REPOSITORY() parameter with LIST STATUS. We ran the batch ADMIN LIST
STATUS command in our test environment and received the output shown in Example 7-22.
Example 7-22 The batch ADMIN LIST STATUS command to display all IMSRSC repository information
LIST STATUS
Repository Status Changed ID RDS1 RDS2 RDS3
-------------------------------------------- -------- ---------- -------- -------- -------- --------
IMS12XRP OPEN 2011/07/26 IMSR3 COPY1 COPY2 SPARE
FRP4750I - LIST command processing completed successfully
Notice that the output indicates a status for the primary, secondary, and spare repository data
sets. When the user repository data sets are originally defined to the RS catalog repository,
the primary starts out with a COPY1 status, the secondary with COPY2 status, and the spare
with SPARE status. These statuses will change when a write error occurs on either the
primary or secondary repository data sets. In this case, the RS will drive recovery and the
statuses for the repository data sets will change.
For example, when a primary repository data set starts out with a COPY1 status and a write
error occurs on it, its status will change from COPY1 to DISCARD status. The current status
of the spare data set changes from SPARE to COPY1. You must then allocate and define a
new repository data set as the new spare and manually set its status to SPARE with the batch
ADMIN DSCHANGE command. For more information about this command, see “DSCHANGE
command” on page 244.
Tip: You can disable autoexport for an IMS after it is enabled to use the repository by
issuing the UPDATE IMS SET(LCLPARM(AUTOEXPORT(N))) command. Otherwise, if your RDDSs are
still defined, autoexport will continue to occur at each system checkpoint if definitional
changes have been made since the previous system checkpoint.
These commands are used for dynamic enablement and disablement of a repository, to
display status of RM and IMS, and to create, update, delete, and query the DRD stored
resource definitions in the IMSRSC repository. Remember that the DRD CREATE, UPDATE, and
DELETE commands work with runtime definitions, not the stored resource definitions in the
repository. For more information about these commands, see 7.9.1, “IMS repository
commands” on page 230.
CSLURP10
CSLURP10 takes the contents of a system or non-system RDDS and writes these definitions to
an IMSRSC repository. This utility can be used to initially populate an IMSRSC repository and
update definitions at some later time. Because CSLURP10 uses RM to communicate with the
repository, an SCI address space on the LPAR where the utility is being run and an RM
address space must be available. Example 7-23 shows a job that runs the CSLURP10 utility to
generate an IMSRSC repository.
Example 7-23 JCL to populate a repository with stored definitions within an RDDS
//RDDS2RPO JOB ,USER,CLASS=A,MSGCLASS=X,NOTIFY=USER
//*
//JOBLIB DD DSN=IMSTESTL.TNUC0,DISP=SHR
//*
//STEP1 EXEC PGM=CSLURP10,MEMLIMIT=4G
//SYSUDUMP DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//*
//***************************************************************/
//* SPECIFY A VALID RDDS DSN FOR INPUT */
//***************************************************************/
//RDDSDSN DD DSN=IMSTESTG.NONSYS.IMSRDDS1,DISP=SHR
//*
//***************************************************************/
//* IMSID MAY BE SPECIFIED ON SYSIN OR DEFAULT TO THE IMSID */
//* ON THE RDDS HEADER RECORD. SUBSTITUTE THE SYSIN STATEMENT */
//* BELOW TO CHANGE BEHAVIOUR */
//***************************************************************/
//SYSIN DD *
IMSPLEX(NAME=PLEX1 IMSID(SYS3,IMS2,IMS3))
//*
CSLURP20
CSLURP20 generates a non-system RDDS from an IMSRSC repository. It can be used for
backup or fallback. Because CSLURP20 uses RM to communicate with the repository, an SCI
address space and an RM address space must be available. Example 7-24 shows a job that
runs the CSLURP20 utility to generate an RDDS.
Example 7-24 JCL to populate an RDDS with stored definitions within an IMSRSC repository
//RPO2DDS JOB ,USER,CLASS=A,MSGCLASS=X,NOTIFY=USER
//*
//JOBLIB DD DSN=IMSTESTL.TNUC0,DISP=SHR
//*
//STEP1 EXEC PGM=CSLURP20
//SYSUDUMP DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
The ADD, UPDATE, RENAME, and DELETE commands provide the capability to manage IMSRSC
repository definitions. The DSCHANGE command provides the capability to change a data set
disposition (used to set up new SPARE data sets). The LIST command provides a display of
IMSRSC information. The START and STOP commands allow and prevent access to an
IMSRSC repository.
Example 7-25 shows JCL that you can use to run the batch ADMIN utility. The commands are
entered as part of the SYSIN DD statement.
Example 7-25 JCL that executes the ADD, START, and LIST functions of the batch ADMIN utility
//FRPBAT EXEC PGM=FRPBATCH,PARM='XCFGROUP=IM12XREP'
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
ADD REPOSITORY(IMS12XRP) +
REPDSN1RID(IMS12Q.IMS12X.REPO.IMSPRI.RID) +
REPDSN1RMD(IMS12Q.IMS12X.REPO.IMSPRI.RMD) +
REPDSN2RID(IMS12Q.IMS12X.REPO.IMSSEC.RID) +
REPDSN2RMD(IMS12Q.IMS12X.REPO.IMSSEC.RMD) +
REPDSN3RID(IMS12Q.IMS12X.REPO.IMSSPR.RID) +
REPDSN3RMD(IMS12Q.IMS12X.REPO.IMSSPR.RMD) +
AUTOOPEN(NO)
START REPOSITORY(IMS12XRP)
LIST REPOSITORY(IMS12XRP)
//*
For more information about a particular command beyond what is explained here, such as
available parameter values and their meanings, see IMS Version 12 Commands, Volume 1:
IMS Commands A-M, SC19-3009, or IMS Version 12 Commands, Volume 2: IMS Commands
N-V, SC19-3010.
UPDATE RM command
Use the new UPDATE RM command to dynamically enable the RM address space to use the
repository, or change the audit access level that was originally specified in the FRPCFG
member. Example 7-26 shows the command syntax.
Notice that the command must be issued with TYPE(REPO) and REPOTYPE(IMSRSC) to indicate
that we want to target a repository of type “IMSRSC”. These are the only valid values for
these parameters.
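For reference, the enabling form of the command, which was also shown earlier in this chapter, is:
UPDATE RM TYPE(REPO) REPOTYPE(IMSRSC) SET(REPO(Y))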
If the command is successful at the command master RM, the command master RM
communicates the changes to other active RMs in the IMSplex. All RMs in the IMSplex will
have the same repository settings. If RM is defined to use the resource structure, the
command master RM will update the resource structure with the repository name and
repository type that it is connected to. Subsequent RMs that are restarted after the change
will ensure that they are connected to the same repository name and repository type as read
from the resource structure.
If the RM repository usage is already enabled, you can dynamically disable it by specifying
SET(REPO(N)). The CSLRIxxx member is not reread or reprocessed in this case like it is when
SET(REPO(Y)) is specified. Therefore, as part of the repository disabling process, you can
remove the repository definitions from CSLRIxxx either before or after UPDATE RM is issued
with SET(REPO(N)). If the repository definitions specified on the REPOSITORY= statement are
still present in CSLRIxxx, any RMs that start after the UPDATE RM … SET(REPO(N)) command is
issued will reconnect to the repository during RM startup and enable the IMSRSC repository
at all RMs in the IMSplex.
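For example, the disabling form of the command is:
UPDATE RM TYPE(REPO) REPOTYPE(IMSRSC) SET(REPO(N))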
With the AUDITACCESS() parameter, you can dynamically change the audit level settings and
override what was originally specified on the AUDIT_DEFAULT parameter in the FRPCFG
member. Example 7-27 shows an UPDATE RM command being issued to change the
AUDITACCESS parameter value to read.
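Such a command might take the following form. This is a sketch that assumes AUDITACCESS()
is specified within the SET() parameter:
UPDATE RM TYPE(REPO) REPOTYPE(IMSRSC) SET(AUDITACCESS(READ))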
QUERY RM command
Issue the QUERY RM command to determine whether an RM address space is enabled for
repository usage, or to view the audit access level or status of RM. The RM can have one of
the following statuses:
CONNECTED RM is connected to the IMSRSC repository and the repository is
available for use.
CONNECT-INCOMPLETE
RM successfully connected to the repository, but RM failed to correctly
update the repository global entry in the resource structure.
DISCONNECT-INCOMPLETE
RM successfully disconnected from the repository, but RM failed to
correctly update the repository global entry in the resource structure.
NOTAVAIL The repository is not available.
RS-NOTAVAIL No master RS is available.
SPARERECOV The repository spare recovery process is in progress.
SPARERCVERR The repository spare recovery process resulted in an error and the
repository is not available for use.
For information about the repository spare recovery process, see “Recovery activities” on
page 257.
You can filter the information associated with RM by specifying the SHOW() parameter
accordingly. For example, display only the repository attributes in RM by using the command
shown in Example 7-29. The only attribute value that is displayed when the SHOW(ATTRIB)
parameter is applied is the audit access level.
Example 7-29 Displaying the audit access setting using the QUERY RM command
Response for: QUERY RM TYPE(REPO) SHOW(ATTRIB)
RepositoryType MbrName CC AuditAccess RepositoryName
IMSRSC RMXRM 0 READ IMS12XRP
Alternatively, you can choose to display only the status of RM using the command shown in
Example 7-30.
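A command of this kind might look like the following sketch, assuming that STATUS is the
corresponding SHOW() keyword:
QUERY RM TYPE(REPO) SHOW(STATUS)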
Lastly, you can display the audit access setting and the status, the name of the IMSRSC
repository with which RM is associated, and the repository group in which it is contained. See
Example 7-31.
Example 7-31 Displaying all information associated with RM using the QUERY RM command
Response for: QUERY RM TYPE(REPO) SHOW(ALL)
RepositoryType MbrName CC Status AuditAccess RepositoryName RepositoryGroup
IMSRSC RMXRM 0 CONNECTED READ IMS12XRP IM12XREP
For detailed information about the input and output of the QUERY RM command, see IMS
Version 12 Commands, Volume 2: IMS Commands N-V, SC19-3010.
XRF: In an XRF environment, the UPDATE IMS command is processed both at the XRF
active IMS and the XRF alternate IMS to dynamically enable and disable usage of
repository.
Important: Before you issue this command to dynamically enable IMS for repository
usage, you must add repository definitions to the DFSDFxxx member and RM must
already be repository-enabled.
To dynamically disable automatic export for an IMS system, issue the UPDATE IMS command
with the LCLPARM(AUTOEXPORT(N)) parameter included. You take this step after migration to the
repository has been completed to reduce I/O overhead that occurs with autoexport.
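For example, assuming the AUTOEXPORT keyword is nested within SET(LCLPARM()) in the same
way as the repository enablement keywords, the command looks like this sketch:
UPDATE IMS SET(LCLPARM(AUTOEXPORT(N)))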
IMS cold start: After automatic export is disabled with this command, an IMS cold start is
required to re-enable it.
The SHOW() parameter can be used to filter output shown in the command response. For
example, Example 7-34 shows the QUERY IMS command being issued to display whether the
IMS systems in an IMSplex are enabled for repository usage in addition to other information
about the repository.
Example 7-34 Showing repository attributes using the QUERY IMS command
Response for: QUERY IMS TYPE(LCLPARM) SHOW(REPO)
MbrName CC RepositoryType RepositoryName LastExportTime
I12A 0 IMSRSC IMS12XRP
I12B 0 IMSRSC IMS12XRP
I12C 0 IMSRSC IMS12XRP
I12D 0 IMSRSC IMS12XRP 2011.235 09:12:01.56
Example 7-35 Showing the automatic export setting using the QUERY IMS command
Response for: QUERY IMS TYPE(LCLPARM) SHOW(AUTOEXPORT)
MbrName CC AutoExport
I12A 0 Y
I12B 0 Y
I12C 0 Y
I12D 0 Y
To display all possible information about the repository, along with other IMS attribute settings,
specify the SHOW(ALL) parameter with the QUERY IMS command. For example command
output, see IMS Version 12 Commands, Volume 2: IMS Commands N-V, SC19-3010.
Tip: After an UPDATE IMS command is issued to dynamically enable repository usage, you
can confirm that the IMS is now repository-enabled by issuing the QUERY IMS command. If
IMS cannot be enabled for repository usage, you can detect the problem by observing the
non-zero UPDATE command completion code.
Example 7-36 Syntax for displaying attribute values for resources or descriptors
QUERY rsc-type | desc-type NAME() SHOW()
To display both repository and local IMS resource definitions using the QUERY command, specify
the SHOW(DEFN) parameter (Example 7-37). In this example, we filtered the output to display the
attribute values indicating the type of access to the database and the local runtime value for the
resident option. The repository definitions can be distinguished from the IMS runtime definitions
by the presence of a “Y” in the “Repo” column. The generic definitions that apply to all IMS
systems have a Y in the Repo column and blank in the IMSid column. The IMS-specific
definitions have the IMS ID in the IMSid column both for the repository and runtime definitions.
Example 7-37 Displaying both repository and local IMS definition information with QUERY
Response for: QUERY DB NAME(*) SHOW(DEFN,ACCTYPE,RESIDENT)
DBName MbrName CC Repo IMSid TYPE Acc LAcc Rsdnt LDRsdnt LRsdnt
AUTODB I12A 0 Y UPD N
AUTODB I12A 0 I12A DL/I UPD N
AUTODB I12B 0 I12B DL/I UPD N
AUTODB I12C 0 I12C DL/I UPD N
AUTODB I12D 0 I12D DL/I READ N
DBFSAMD1 I12A 0 Y EXCL N
DBFSAMD1 I12A 0 I12A EXCL Y N
DBFSAMD1 I12B 0 I12B EXCL Y N
DBFSAMD1 I12C 0 I12C EXCL Y N
DBFSAMD1 I12D 0 I12D EXCL N
To display only the stored resource definitions from the repository, include the
SHOW(DEFN,GLOBAL) parameter in the QUERY command, as shown in Example 7-38.
Example 7-38 Displaying only repository IMS definition information with QUERY
Response for: QUERY DB NAME(*) SHOW(DEFN,GLOBAL,ACCTYPE,RESIDENT)
DBName MbrName CC Repo IMSid Acc Rsdnt
AUTODB I12A 0 Y UPD N
DBFSAMD1 I12A 0 Y EXCL N
Alternatively, to display only IMS resource definitions local to IMS, include the
SHOW(DEFN,LOCAL) parameter in the QUERY command (Example 7-39). These definitions are
specific to each IMS system shown in the MbrName and IMSid columns in the output. To
maintain consistency with the previous two examples, we have again chosen to filter the
output to display the access type and resident attribute values.
Example 7-39 Displaying only local IMS definition information with QUERY
Response for: QUERY DB NAME(*) SHOW(DEFN,LOCAL,ACCTYPE,RESIDENT)
DBName AreaName PartName MbrName CC IMSid TYPE LAcc LDRsdnt LRsdnt
AUTODB I12A 0 I12A DL/I UPD N
AUTODB I12B 0 I12B DL/I UPD N
AUTODB I12C 0 I12C DL/I UPD N
AUTODB I12D 0 I12D DL/I READ N
DBFSAMD1 I12A 0 I12A EXCL Y N
DBFSAMD1 I12B 0 I12B EXCL Y N
DBFSAMD1 I12C 0 I12C EXCL Y N
DBFSAMD1 I12D 0 I12D EXCL N
If you want to see a list of IMS systems that have the resources specified in the command
defined to them, include the SHOW(IMSID) parameter in the QUERY command. Example 7-40
shows a command that includes this filter.
Example 7-40 Displaying a list of IMS systems that have a specific resource defined with QUERY
Response for: QUERY DB NAME(AUTODB) SHOW(IMSID)
DBName MbrName CC Repo IMSid
AUTODB I12A 0 Y I12A
AUTODB I12A 0 Y I12B
AUTODB I12A 0 Y I12D
AUTODB I12A 0 Y I12Z
Finally, to display all IMSIDs that have the specified resource defined, a list of the repository
resource definitions and any IMS-specific definitions, include the SHOW(DEFN,IMSID)
parameter as shown in Example 7-41. This example assumes that APAR PM41761 has been
applied.
To maintain consistency with the previous three examples, we have again chosen to filter the
output to display the access type and resident attribute values. In the example command
output, you can see that the AUTODB database is defined in the repository for IMSIDs I12B,
I12D, and I12Z as a stored resource definition. It is also defined at active IMS systems I12A,
I12B, I12C, and I12D as a runtime resource definition. The AUTODB database is not defined
in the repository as a stored definition for IMS systems I12A and I12C even though they have
a runtime definition in these systems.
EXPORT command
Issue the EXPORT command when resources or descriptors have been either created or
updated, and they need to be hardened to the repository. If hardening changes to offline
stored definitions is part of your change management process, use the EXPORT command
when definitional changes occur or at regular intervals during operations.
Important: In an XRF environment, the XRF active IMS can export its runtime resource
definitions to the repository using the EXPORT command, whereas the XRF alternate IMS
cannot. The XRF alternate IMS will only be able to export its runtime resource definitions to
the repository after it takes over for the active IMS. You can, however, update the XRF
active and alternate IMS systems’ stored resource definitions within the repository by
issuing an EXPORT command, specifying both of their IMSIDs on the SET(IMSID())
parameter.
The EXPORT command is processed by a single command master IMS and will write valid
resources and descriptors to the repository. Example 7-42 shows the command syntax.
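A sketch of the general form, limited to the parameters discussed in this section, follows. All of
the filters other than TARGET(REPO) are optional, and the complete syntax is in the commands
reference:
EXPORT DEFN TARGET(REPO) NAME(*) TYPE(ALL) SET(IMSID(imsid))
   STARTTIME(timestamp) ENDTIME(timestamp) OPTION(CHANGESONLY)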
The EXPORT command can also use data included in QUERY command output. If the QUERY
command is issued with the SHOW(TIMESTAMP) parameter included, you can determine the
exact time that a resource was created or updated. See the LTimeCreate column header
output in Example 7-43.
Example 7-43 Displaying the time a resource was created with QUERY and SHOW(TIMESTAMP)
Response for: QUERY PGM NAME(PGMADD) SHOW(TIMESTAMP)
PgmName MbrName CC LRgnType LTimeCreate
PGMADD I12A 0 MPP 2011.242 22:23:13.36
PGMADD I12B 0 MPP 2011.242 22:23:13.36
PGMADD I12C 0 MPP 2011.242 22:23:13.36
PGMADD I12D 0 MPP 2011.242 22:23:13.36
You are then able to use those timestamp values for the EXPORT STARTTIME() parameter, the
ENDTIME() parameter, or both. The STARTTIME() and ENDTIME() parameters can be as
specific as tenths and hundredths of a second, and they match the timestamp granularity
displayed in QUERY SHOW(TIMESTAMP) command output, so the exact values can be copied
directly. The STARTTIME() and ENDTIME() parameters are optional.
Next, you can indicate which IMS resource lists in the repository should be exported to by
listing them on the SET(IMSID()) parameter. This is an optional parameter and if omitted, the
runtime resource definitions of the command master IMSID are exported for the command
master IMS only. You can control which IMS is selected as command master by using the
ROUTE capability of the OM interface from which you are entering the command.
Wildcards are supported for the IMSID. However, the EXPORT command with a wildcard for the
IMSID will fail if no IMS resource lists in the repository match the wildcard name.
When a specific IMSID is specified, the IMS resource list for that IMS is created if it does not
exist, and the resource definitions are written to the repository.
When exporting a transaction resource to the repository with a type-2 EXPORT command,
ensure that the program resource associated with this transaction already exists within the
repository. Otherwise, the EXPORT command fails with a completion code such as the one
shown in Example 7-45.
If this occurs, determine which program should exist in the repository by querying the
transaction with a command such as the QUERY TRAN command (Example 7-46).
Lastly, attempt the EXPORT command a second time using a command such as the one shown
in Example 7-47.
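Conceptually, the corrective sequence exports the missing program definition first and then the
transaction, as in the following sketch (PGMA and TRANA are placeholder names):
EXPORT DEFN TARGET(REPO) TYPE(PGM) NAME(PGMA)
EXPORT DEFN TARGET(REPO) TYPE(TRAN) NAME(TRANA)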
Finally, when issuing the EXPORT command, you can optionally display a line of output for each
successfully exported resource and optionally export only the resources and descriptors that
had definitional changes since the last EXPORT command was issued. Automatic export is not
possible when using DRD with the repository. However, by issuing the EXPORT command with
the OPTION(CHANGESONLY) parameter at regular intervals, you can ensure that definitional
changes are captured and hardened to the repository.
The EXPORT command writes all of the definitions to the repository as a single unit of work, as
is the case with RDDS DRD. In either case (that is, using DRD with the repository or with an
RDDS), the export fails if one resource is in error, and no other resources will be exported.
The difference between these two DRD types is that, when exporting to the repository, only
one command master IMS processes the command. This command master IMS creates and
updates the definitions for each IMSID specified in the SET(IMSID()) parameter. With RDDS
DRD, each IMS that receives the command will process it. In this case, the command can
succeed on some systems and fail on other systems.
In some cases, resources or descriptors cannot be exported, such as when they are IMS
system-defined descriptors or HALDB partitions or have invalid names. Validation is done by
IMS before export occurs. Validation of resource attributes between associated resources
such as transactions and programs, or transactions and routing codes, or routing codes and
programs, is performed by RM before the resource definitions are written to the repository.
For example, a transaction cannot be written if the associated program does not exist in the
repository.
The EXPORT command should be used in different ways depending on whether you have a
cloned or non-cloned IMS environment. Specifically, in a cloned environment it is
recommended to always specify the SET(IMSID()) parameter in the EXPORT command to
ensure that all IMS resource lists are exported to. This keeps the definitions in each of the
cloned resource lists for IMS consistent with one another. However, in a non-cloned
environment, the IMS systems might have differing attribute values for their resources.
Therefore, omit the SET(IMSID()) parameter so that each IMS resource list can be updated
one at a time, separately, and the IMS-specific attribute values can be maintained. When
omitting the SET(IMSID()) parameter, remember to route the command to the correct IMS.
Because no IMS-specific information is in the command syntax, you can easily reissue this
command without modifying the command itself.
For best performance, export only the resources and descriptors that have been changed
since they were last hardened to the repository with an EXPORT command. You can more
easily pinpoint the changed resources or descriptors by including the OPTION(CHANGESONLY)
parameter. Another method of minimizing the total number of resources or descriptors to be
exported is to include the STARTTIME() parameter to target only those that have been created
or updated after the specified time.
To delete stored definitions for specific IMS systems that have resource definitions defined in
the IMSRSC repository, specify the IMSIDs associated with the IMS resource lists with the
FOR(IMSID()) parameter. As mentioned before, it is appropriate to issue this command when
you want to harden runtime definition deletes to the repository. It is recommended that you
first delete the runtime resource definitions using the DELETE commands at the IMS systems
before using the DELETE DEFN command to delete the resource definitions from the IMSRSC
repository. Example 7-48 shows the command syntax.
A series of commands is shown in Example 7-49. Assume that the first three commands
shown are routed to IMS1 and IMS2. Looking at the example, a program is first queried to
determine whether any work in progress exists for it on either IMS system. Then, the
scheduling is stopped for the program to prevent any new work in progress from occurring.
The example continues to show that the program is deleted from two online IMS systems with
the DELETE command, then deleted from the IMS resource lists associated with these two
systems in the offline repository with the DELETE DEFN command.
Example 7-49 Command sequence for deleting runtime and stored IMS resource definitions
QUERY PGM NAME(PGM1) SHOW(WORK)
UPDATE PGM NAME(PGM1) STOP(SCHD)
DELETE PGM NAME(PGM1)
DELETE DEFN TARGET(REPO) TYPE(PGM) NAME(PGM1) FOR(IMSID(IMS1,IMS2))
Wildcard support exists for the FOR(IMSID()) parameter. When issuing the DELETE DEFN
command with the NAME(*) parameter, you can ensure that a line of output is displayed for
each resource or descriptor that was processed by including the OPTION(ALLRSP) parameter.
Attention: Use the NAME(*) parameter with care because it can delete all of the resource
definitions in the IMSRSC repository and IMS will be restarted with no resources.
IMPORT command
The IMPORT command reads stored definitions that exist in the repository into running IMS
systems. It can be used to populate the control region with runtime resource definitions when
an IMS is restarted with no resources defined. Alternatively, if changes were made to the
repository offline and you want to roll the changes out to the systems in the IMSplex, the
IMPORT command can be used to accomplish this. An example of when this scenario might
arise is when you use the “RDDS to Repository” (CSLURP10) utility, introduced in 7.8.2, “Offline
access through RM utilities” on page 228, to write resource definitions to the IMSRSC
repository that have not yet been read into any IMS system.
You can control the output displayed by the IMPORT command by specifying the OPTION()
parameter accordingly. The OPTION(ABORT) and OPTION(ALLRSP) parameters used with RDDS
DRD import before IMS 12 are now also used with repository DRD import.
A new OPTION(UPDATE) parameter has been added for the IMPORT command. With this
parameter, an existing runtime resource definition can be updated with a stored resource
definition being imported from either the RDDS or the repository.
Attention: The OPTION(UPDATE) parameter is not the default and must be explicitly
specified to update a runtime resource definition with a stored resource definition.
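As an illustration, and assuming that SOURCE(REPO) is the keyword that identifies the repository
as the import source, a command that reads all stored definitions into the receiving IMS
systems and updates any existing runtime definitions looks like this sketch:
IMPORT DEFN SOURCE(REPO) NAME(*) TYPE(ALL) SCOPE(ALL) OPTION(UPDATE)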
The ROUTE capability of the OM API is used to route commands to specific IMS systems.
ROUTE=ALL is recommended when the SCOPE(ALL) parameter is included. If a ROUTE list is
specified (other than ROUTE=ALL), the command is processed only by the IMS systems in the
list that receives the command. Other IMS systems that have the resources defined but are
not included in the ROUTE list will not receive the command and therefore will not be
synchronized with the repository.
In IMS 12, the SCOPE(ALL) and SCOPE(ACTIVE) parameters are the same. The SCOPE(ACTIVE)
parameter applies the import to only the active IMS systems. The definitions of any inactive
IMS system are not synchronized with the definitions of the other IMS systems in the IMSplex
when it warmstarts or emergency restarts. To reestablish synchronization, you can issue an
IMPORT command to import the resources that the other active IMS systems imported while it
was inactive. If the inactive IMS is restarted, it is synchronized with the other IMS systems
because it reads its entire IMS resource list.
Table 7-1 shows the resource names associated with the new DELETE DEFN, QUERY RM, and
UPDATE RM commands that must be added to the RACF OPERCMDS class to prevent unauthorized
access. The required RACF permissions are also shown, and the IMSplex name must begin
with the characters CSL. For information about restricting access to other elements that are
part of the repository environment, see 7.10, “Security considerations” on page 258.
As explained in this section, user repositories are defined to the catalog repository using the
batch ADMIN ADD command and are started with the batch ADMIN START command.
Batch ADMIN commands focus on managing the individual user repository. In “Repository
Server commands issued through z/OS modify interface” on page 248, you see that the z/OS
modify interface commands have a similar function, but are geared toward managing the RS.
Batch ADMIN commands are available by starting the FRPBATCH utility with JCL statements.
Table 7-2 summarizes the batch ADMIN commands that are available for managing a user
repository.
ADD Define an IMSRSC repository to the RS catalog repository
UPDATE Modify a user repository definition in the RS catalog repository (data sets, auto-open
option, or security class)
RENAME Rename a user repository that is defined in the RS catalog repository
DELETE Delete a user repository definition from the RS catalog repository
DSCHANGE Change the disposition of a repository data set (for example, to SPARE or DISCARD)
LIST List status information for all user repositories or detailed information for a single user
repository
START Start a specific user repository
STOP Stop a specific user repository
ADD command
To define an IMSRSC repository to the RS catalog repository, execute the batch ADMIN ADD
command using the syntax shown in Example 7-51. Here, you must specify the IMSRSC
repository name in addition to the names of the IMSRSC repository primary and secondary
index and member data sets. The remainder of the parameters are optional. The repository
name is converted to uppercase if it is specified in lower or mixed case.
Example 7-51 Syntax for the batch ADMIN utility ADD command
ADD REPOSITORY(repository-name)
REPDS1RID(primaryRID-name)
REPDS1RMD(primaryRMD-name)
REPDS2RID(secondaryRID-name)
REPDS2RMD(secondaryRMD-name)
REPDS3RID(NULL | spareRID-name)
REPDS3RMD(NULL | spareRMD-name)
AUTOOPEN(NO | YES)
SECURITYCLASS(NULL | securityclassname)
You can also control whether the repository data sets you are specifying with this command
are opened when the repository is started (AUTOOPEN(YES)), which is the default, or when a
user first connects to it (AUTOOPEN(NO)).
If you are going to be restricting access to the IMSRSC repository, specify the name of the
8-byte security class that will be used to restrict access here. This class overrides the
SAF_CLASS= parameter value in the FRPCFG configuration member, if one was specified.
Alternatively, if you want to deactivate repository security, you can specify the
SECURITYCLASS(NULL) parameter on the batch ADMIN ADD command to accomplish this. For
more information about setting up security for the IMS repository, see 7.10, “Security
considerations” on page 258.
We used the JCL shown in Example 7-52 to define a user repository named IMS12XRP to
the RS catalog repository.
UPDATE command
Use the batch ADMIN UPDATE command to modify a user repository definition within the RS
catalog data sets (specifically, to change the data sets, auto-open option or security class
associated with a specific repository). Example 7-53 shows the syntax.
Example 7-53 Syntax for the batch ADMIN utility UPDATE command
UPDATE REPOSITORY(repository-name)
REPDS1RID(ds1_rid_dsname | NULL)
REPDS1RMD(ds1_rmd_dsname | NULL)
REPDS2RID(ds2_rid_dsname | NULL)
REPDS2RMD(ds2_rmd_dsname | NULL)
REPDS3RID(ds3_rid_dsname | NULL)
REPDS3RMD(ds3_rmd_dsname | NULL)
AUTOOPEN (YES | NO)
SECURITYCLASS(securityclassname | NULL)
The only required parameter for this command is the REPOSITORY parameter. The parameters
associated with this command have the same meaning as they do when issued with the batch
ADMIN ADD command, as described in “ADD command” on page 241.
RENAME command
You can use the batch ADMIN RENAME command to rename a user repository name defined
within the RS catalog repository. Before a user repository can be renamed, you must first stop
the repository.
Example 7-55 Syntax for the batch ADMIN utility RENAME command
RENAME REPOSITORY(repository-name) REPOSITORYNEW(repository-newname)
If you rename the IMSRSC repository at the RS, and you have one or more RM systems
enabled with the repository, modify the RM to refer to the repository by the new name.
To modify the RM to refer to the repository by the new name, complete the following steps:
1. Disable RM from using the repository:
UPDATE RM TYPE(REPO) REPOTYPE(IMSRSC) SET(REPO(N))
2. Ensure that all RMs are disabled from using the repository:
QUERY RM TYPE(REPO) SHOW(ALL)
3. Modify the CSLRIxxx member of the IMS PROCLIB data set at all RMs to have the new
repository name.
4. Enable RM to use the repository:
UPDATE RM TYPE(REPO) REPOTYPE(IMSRSC) SET(REPO(Y))
To test this command in our environment, we used the JCL shown in Example 7-56.
Example 7-56 JCL to stop and rename a user repository in the RS catalog repository
//*
//REPOSRN EXEC PGM=FRPBATCH,PARM='XCFGROUP=IM12XREP'
//STEPLIB DD DISP=SHR,DSN=IMS12Q.SDFSRESL
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
STOP REPOSITORY(IMS12XRP)
RENAME REPOSITORY(IMS12XRP) REPOSITORYNEW(IMS12XRP_NEW)
//*
DELETE command
Use the batch ADMIN DELETE command to delete a user repository definition from the RS
catalog repository. Example 7-57 shows the command syntax.
Example 7-57 Syntax for the batch ADMIN utility DELETE command
DELETE REPOSITORY(repository-name)
To test this command in our environment, we used the JCL shown in Example 7-58.
Example 7-58 JCL to delete a user repository from the RS catalog repository
//*
//REPODEL EXEC PGM=FRPBATCH,PARM='XCFGROUP=IM12XREP'
//STEPLIB DD DISP=SHR,DSN=IMS12Q.SDFSRESL
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DELETE REPOSITORY(IMS12XRP_NEW)
DSCHANGE command
At certain times it is appropriate to change the disposition of a repository data set (RDS) to
either SPARE or DISCARD, which can be done using the batch ADMIN DSCHANGE command.
If you want to replace an existing RDS with a different RDS, you must first stop the repository
and change the disposition or status of the RDS to DISCARD. This can be done by issuing
the batch ADMIN DSCHANGE command with the ACTION(DISCARD) parameter specified. When an
RDS has a disposition of DISCARD, it can be replaced with a newly defined data set.
Consider the following example scenario that begins with three RDSs that each have different
dispositions or statuses of COPY1 (primary RDS), COPY2 (secondary RDS), and SPARE
(spare RDS). A write error then occurs on the primary RDS and the following events ensue:
1. The RS changes the disposition of the primary data set from COPY1 to DISCARD.
2. The RS copies the definitions of the secondary data set to the spare data set.
3. The RS changes the disposition of the spare data set from SPARE to COPY1, so that the
spare takes over the role of the previous primary.
4. The user issues a command to determine which RDS has been discarded (using either
the batch ADMIN LIST command or the z/OS modify interface ADMIN,DISPLAY command).
5. The user deletes the discarded data set that was previously the primary data set.
6. The user defines a new data set to replace the previous spare; as a best practice, the new
spare is made larger than the previous one.
LIST command
Use the batch ADMIN LIST command to display all user repositories and their associated
statuses, or the details of a single repository. The statuses that will be shown are listed here:
User repository name
User repository status
Date of last update
USERID that last updated user repository
Data set disposition for each RDS
Example 7-60 Syntax for the batch ADMIN utility LIST command
LIST REPOSITORY(repository-name) | STATUS
In our test environment, we issued the LIST STATUS command using the JCL shown in
Example 7-61 to display all user repositories along with information about them.
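As a sketch of that JCL (it follows the same FRPBATCH pattern as Example 7-56, with the library
and XCF group names from our environment; the step name is arbitrary), the job looks similar to
the following statements:
//*
//REPOLST EXEC PGM=FRPBATCH,PARM='XCFGROUP=IM12XREP'
//STEPLIB DD DISP=SHR,DSN=IMS12Q.SDFSRESL
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
LIST STATUS
//*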
In our case, we only had a single user repository defined. Therefore, it is the only one shown
in the output generated from the batch ADMIN utility LIST STATUS command; see
Example 7-62. If additional user repositories had been defined in our environment, they would
also have been included in the output of this command.
Example 7-62 Output from the batch ADMIN utility LIST STATUS command
LIST STATUS
Repository Status Changed ID RDS1 RDS2 RDS3
-------------------------------------------- -------- ---------- -------- -------- --------- ---------
IMS12XRP OPEN 2011/07/26 IMSR3 COPY1 COPY2 SPARE
To display the details of a single user repository, specify its name on the LIST command;
Example 7-63 shows a portion of the resulting output in our environment.
Example 7-63 Output from the batch ADMIN utility LIST command when a user repository is specified
LIST REPOSITORY(IM12XRP)
Repository Name . : IM12XRP
START command
Use the batch ADMIN START command to start a specific user repository, for example after it
has been defined to the RS catalog repository with the batch ADMIN ADD command.
Example 7-64 shows the command syntax.
Example 7-64 Syntax for the batch ADMIN utility START command
START REPOSITORY(repository-name) OPEN(YES | NO)
MAXWAIT(seconds,IGNORE | CONTINUE | ABORT)
With the optional OPEN() parameter, you can override the AUTOOPEN= parameter value that was
originally specified when the repository was added to or last updated in the RS catalog (with
batch ADMIN ADD or UPDATE commands, respectively). This parameter value indicates whether
the RDSs of the user repository will be open when it starts (with OPEN(YES)), or when a user
first connects to it (with OPEN(NO)). You can only override the AUTOOPEN= parameter if it was
originally specified as AUTOOPEN=NO.
You can optionally include the MAXWAIT parameter to indicate how many seconds can elapse
before the action that you also specify is taken. You can specify a wait time of up to
9999 seconds and opt to have the command continue processing with a return code of
either 0 or 4, or opt to have it terminate with a return code of 8. By default, if 5 seconds have
elapsed after the command was issued, the command continues processing and gives a
return code of 4.
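For example (the repository name and the values shown are illustrative, not prescriptive), the
following control statement starts a repository, opens its RDSs immediately, and terminates with
rc=8 if the start has not completed within 10 seconds:
START REPOSITORY(IMS12XRP) OPEN(YES) MAXWAIT(10,ABORT)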
In our test environment, we started our user repository at the time that we defined it to the RS
with the batch ADMIN utility ADD command, which is shown in Example 7-52 on page 242.
STOP command
Use the batch ADMIN STOP command to stop a specific user repository that is defined to the
RS catalog. A stopped repository will reject user connection attempts. The command also
results in the repository being closed and deallocated by the RS. Example 7-65 shows the
command syntax. This command accepts the same MAXWAIT() parameter as the batch
ADMIN START command.
Example 7-65 Syntax for the batch ADMIN utility STOP command
STOP REPOSITORY(repository-name)
MAXWAIT(seconds,IGNORE | CONTINUE | ABORT)
Much like a /DBR command that prevents programs and transactions from accessing a
database, the batch ADMIN STOP command, when issued with MAXWAIT(xx,IGNORE) or
MAXWAIT(xx,CONTINUE), continues processing after xx seconds have elapsed and returns
rc=0 for IGNORE or rc=4 for CONTINUE. If ABORT is specified instead of IGNORE or
CONTINUE, the command terminates processing with rc=8 when xx seconds have elapsed.
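For example (the values are illustrative), a STOP that waits up to 30 seconds and then terminates
with rc=8 if the repository cannot be stopped in that time can be coded as follows:
STOP REPOSITORY(IMS12XRP) MAXWAIT(30,ABORT)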
In our test environment, we issued the batch ADMIN utility STOP command at the time that we
tested the RENAME command, because a user repository must be stopped before it can be
renamed. A sample of the JCL that we used is shown in Example 7-56 on page 243.
Table 7-3 summarizes the different z/OS modify interface commands that are available to
administer the user repositories, again from the RS perspective.
Command    Function
ADMIN      Administrative functions, such as changing data set dispositions, displaying data sets,
           and starting and stopping repositories
AUDIT      Dynamically change the audit level setting of the RS
SECURITY   Refresh the RACF in-storage profiles
SHUTDOWN   Shut down one or more RSs
STOP       Shut down a single RS (z/OS STOP command)
ADMIN command
Use the z/OS modify interface ADMIN command to perform various administrative tasks. The
syntax is shown in Example 7-67.
Example 7-67 Syntax for the z/OS modify interface ADMIN command
F reposervername,ADMIN
DSCHANGE(repositoryname, S|D, 1|2|3)
DISPLAY(repositoryname | <blank>)
START(repositoryname)
STOP(repositoryname)
Use the ADMIN command with the DSCHANGE parameter to change the disposition of a data set
contained in a repository to DISCARD or SPARE status. For details about the circumstances
under which this situation is appropriate, see “DSCHANGE command” on page 244.
The target repository data set pair is indicated by any of the following values:
1 Primary (also known as COPY1)
2 Secondary (also known as COPY2)
3 Spare
You can also use the ADMIN command with the DISPLAY parameter to display a list of user
repository names defined to the RS catalog. If a repository name is specified for the
DISPLAY() parameter, other details such as the RDS names and statuses are also shown. This
command is similar to the batch ADMIN LIST STATUS command. Sample output for the z/OS
modify interface ADMIN,DISPLAY command when a user repository name is specified is
shown in Example 7-68.
Example 7-68 Output from the z/OS modify interface ADMIN,DISPLAY command
/* Display the IMSRSC_REPOSITORY via the ADMIN cmd */
/* FRP2100I - ADMIN DISPLAY repository IMSRSC_REPOSITORY */
/* - Last updated date/time : USRT001 */
/* - Status . . . . . . . . : OPEN */
/* - Auto-open . . . . . . . : YES */
/* - Security Class . . . . : NOT DEFINED */
/* FRP2101I - ADMIN DISPLAY repository RDS1: */
/* - Index (RID) . . : IMSTESTS.FRP1.IMSPRI.RID */
/* - Member (RMD) . : IMSTESTS.FRP1.IMSPRI.RMD */
/* - Status . . . . : COPY1 */
/* FRP2101I - ADMIN DISPLAY repository RDS2: */
/* - Index (RID) . . : IMSTESTS.FRP1.IMSSEC.RID */
/* - Member (RMD) . : IMSTESTS.FRP1.IMSSEC.RMD */
/* - Status . . . . : COPY2 */
/* FRP2101I - ADMIN DISPLAY repository RDS3: */
/* - Index (RID) . . : */
/* - Member (RMD) . : */
/* - Status . . . . : NONE */
Notice that detailed information about this user repository is displayed, such as its status, its
auto-open value, and the different RDSs that it contains and their statuses (dispositions).
Lastly, you can use the ADMIN command with the START or STOP parameters to start and stop a
repository, respectively.
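For example (the RS and repository names are the ones used elsewhere in this chapter), the
following commands stop and then restart a user repository through the modify interface:
F REPOSVR1,ADMIN STOP(REPO1)
F REPOSVR1,ADMIN START(REPO1)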
AUDIT command
Use the AUDIT command to dynamically change the audit level setting specified on the
AUDIT_LEVEL parameter in the FRPCFG configuration member, following the syntax shown in
Example 7-69.
Example 7-69 Syntax for the z/OS modify interface AUDIT command
F reposervername,AUDIT LEVEL(NONE|HIGH) | RESTART
When the RS is starting and attempting to connect to the log stream, an error can occur.
Depending on what value you specified for AUDIT_FAIL in the FRPCFG member, the RS can
either continue starting or can terminate. If you specify AUDIT_FAIL=CONTINUE, logging is
suspended in the event that the RS encounters an error while attempting to connect to the log
stream. You can later activate the logging of records by issuing the z/OS modify interface
AUDIT command with RESTART included.
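For example (the RS name is the one used elsewhere in this chapter), the following commands
change the audit level to HIGH and restart logging after it has been suspended:
F REPOSVR1,AUDIT LEVEL(HIGH)
F REPOSVR1,AUDIT RESTART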
SECURITY command
For repository security, if you make changes to your RACF (or System Authorization
Facility (SAF) equivalent) definitions, the changes take effect only after you refresh the RACF
in-storage profiles. Use the z/OS modify interface SECURITY command to accomplish this. The
command refreshes the RACF profiles in storage to reflect the updated profile definitions.
Example 7-70 shows the command syntax.
Example 7-70 Syntax for the z/OS modify interface SECURITY command
F reposervername,SECURITY REFRESH
When the z/OS modify interface SECURITY command is issued, the RACF in-storage profiles
are refreshed, but neither the FRPCFG member nor the repository definitions are reread.
SHUTDOWN command
To shut down one or more RSs, issue the z/OS modify interface SHUTDOWN command.
Including the optional ALL keyword targets all RSs in the same XCF group. Omitting the ALL
keyword simply targets the specified RS for shutdown. If the master RS is shut down, one of
the subordinate servers becomes the new master. Example 7-71 shows the command
syntax.
Example 7-71 Syntax for the z/OS modify interface SHUTDOWN command
F reposervername,SHUTDOWN ALL
STOP command
The z/OS STOP command (P reposervername) is also available to shut down a single RS. Here
again, if the master RS is shut down, one of the subordinate servers becomes the new
master.
Table 7-4 Comparison of the batch ADMIN utility commands to the RS commands
Batch ADMIN utility commands: ADD, RENAME, DELETE, UPDATE
Repository Server z/OS modify interface commands: SECURITY, SHUTDOWN
The following section compares RDDS DRD with repository DRD and examines their
similarities and differences.
Deleting resources
To delete a resource with RDDS DRD, the runtime definition is deleted with a DELETE
command. This deletion is hardened to the system RDDS of IMS at system checkpoint by
using automatic export (if it is enabled) or with an EXPORT command including the
OPTION(OVERWRITE) parameter. The resource is then removed from the stored resource
definitions of the system RDDS.
With repository DRD, the runtime resource definition is also deleted with the DELETE
command, just as with RDDS DRD. However, because automatic export is not supported with
repository DRD and the EXPORT command cannot be used to harden deletions to the
repository, a different step needs to be taken to accomplish this. The DELETE DEFN command
must be used to remove stored resource definitions from the IMS resource list associated
with the runtime resource deletions that occurred in the running system.
First delete the runtime resource definitions at the IMS system using the DELETE command,
and then use the DELETE DEFN command to delete the resource definitions from the IMSRSC
repository.
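As a sketch of that sequence (the transaction name TRANA and the IMS identifier IMS1 are
illustrative; see the DELETE DEFN panel in Figure 7-14 for the full set of parameters), the two
commands might look like the following examples:
DELETE TRAN NAME(TRANA)
DELETE DEFN TARGET(REPO) NAME(TRANA) TYPE(TRAN) FOR(IMSID(IMS1))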
Importing resources
When you issue an IMPORT command in an RDDS DRD environment by using the IMPORT
DEFN SOURCE(RDDS) command, each IMS system that receives the command reads the stored
resource definitions from the specified RDDS into the control region, where they become
runtime resource definitions.
With repository DRD, each IMS that receives the IMPORT DEFN SOURCE(REPO) command reads
the stored resource definitions from the repository into the control region, where they
become runtime resource definitions.
Both RDDS DRD and repository DRD can create new and update existing runtime resource
definitions using an IMPORT command.
For clarity, Table 7-5 summarizes all possible results for an IMPORT DEFN command.
Table 7-5 Potential results for the IMPORT DEFN command
Existing runtime definition exists?  OPTION(UPDATE) specified?  IMPORT DEFN result
No                                   Yes or No                  The runtime definition is created
Yes                                  No                         The definition is not imported; the command fails
Yes                                  Yes                        The existing runtime definition is updated
If the imported definition is for a resource or descriptor that is unknown to IMS, IMS creates
the runtime definition for the resource, regardless of whether the OPTION(UPDATE) parameter
is specified.
If the imported definition is for a resource or descriptor for which IMS already has a runtime
definition (the resource or descriptor already exists in IMS) and the OPTION(UPDATE)
parameter is not specified, the definition is not imported and the command fails.
If the imported definition is for a resource or descriptor for which IMS already has a runtime
definition and the OPTION(UPDATE) parameter is specified, the existing runtime definition is
updated with the attributes from the imported definition.
The following scenario illustrates the use of the IMPORT DEFN command when the
OPTION(UPDATE) parameter is included.
An IMS application program exists on a test IMS (IMST) and on a development IMS (IMSD)
that are in the same IMSplex. Changes are made to this application program, requiring new or
changed resource definitions on both IMS systems. Testing is required on the test IMS
system before definitions are ported to the development IMS.
On the test IMS system:
a. Dynamically add new resources with the DRD CREATE command.
b. Dynamically update existing resources with the DRD UPDATE command.
c. Export these changes to a repository:
EXPORT DEFN TARGET(REPO) NAME(rsc-names) SET(IMSID(IMST,IMSD))
After successful testing, the definitions can be ported to the development IMS. Issue the
following command to the development IMS:
IMPORT DEFN SOURCE(REPO) NAME(rsc-names) OPTION(UPDATE)
Notice that the EXPORT command specifies the resource definitions that are written to the
repository, targeting the test (IMST) and development (IMSD) IMS systems. When the export
occurs, the following two actions occur:
New resource names and types are added to the IMS resource lists of IMST and IMSD
within the repository.
New stored resource definitions are either added to or updated within the repository.
The EXPORT command hardens the runtime definitional additions and changes made on IMST
to its stored definitions contained in the repository, and updates its IMS resource list. It does
the same for IMSD; that is, the stored resource definitions and IMS resource list of IMSD are
updated in preparation for a later import after successful testing has been completed on
IMST. Another use for the EXPORT command is to ensure that a user repository is not empty.
In most cases, when updating an existing runtime resource definition, the resource cannot be
in use or the IMPORT fails. However, the following transaction attributes can be updated even if
the transaction is in use: CLASS, LCT, LPRI, NPRI, MAXRGN, PARLIM, PLCT, PLCTTIME,
SEGNO, SEGSZ, and TRANSTAT. The TRANSTAT program attribute can be updated while
the program is in use.
When the IMPORT command is issued with the UPDATE option, all existing resources affected by
the update are quiesced. For example, if the import is updating a transaction definition, the
transaction and the associated program are quiesced. While quiesced, a resource cannot be
updated or deleted, and in most cases work cannot be run against the resource, which means
the resource cannot be scheduled. Certain latches are held during the import process that
prevent work from being done. No resources can be scheduled while the resources to be
updated are being quiesced. A system checkpoint is not allowed while an import is in progress.
Exporting resources
Exporting resource definitions to an RDDS is handled differently than exporting to a repository.
Exporting to RDDS
When exporting in an RDDS DRD environment, resources that have been newly created,
changed, or deleted can all be hardened to the RDDS by using automatic export if it is
enabled. They can also be hardened to the RDDS by issuing an EXPORT command, which can
either overwrite the entire contents of an RDDS or append to it.
With RDDS DRD, each IMS system has its own dedicated pair of system RDDSs that contain
the entire collection of MODBLKS definitions for the system. When an EXPORT command is
issued in a cloned environment, it should be routed to all of the IMS systems in the IMSplex so
that their system RDDSs remain synchronized with the same set of definitions. In this case,
however, the command can succeed on some IMS systems and fail on others, leaving the
system RDDSs out of synchronization.
Finally, RDDS DRD exporting only applies to active IMS systems, and there is no way for the
stored resource definitions of an inactive IMS to be updated. In IMS 12, several of these
limitations are alleviated when DRD is used with the repository instead of the RDDS.
With repository DRD, instead of each IMS having its own set of system RDDSs, each IMS has
its own IMS resource list that contains the resource names and resource types defined for the
IMS. The repository also has resource definitions for each resource defined to the repository.
The possibility is eliminated that the stored resource definitions of some IMS instances might
be different from others because only one IMS is processing the command as a single unit of
work, writing to the shared repository. RDDS DRD export also processes the command as a
single unit of work, but the difference is that multiple IMSs are each processing the command
separately, updating different RDDSs, which as previously stated can succeed or fail at the
different systems resulting in desynchronization.
Repository DRD export can update the stored definitions of an IMS that is inactive, unlike
RDDS DRD export. These updated stored definitions can be applied when the IMS restarts, but
remember, export does not handle resource deletions; it only handles additions or changes. To
delete stored resource definitions from the repository, a DELETE DEFN command is required.
The point of failure is important in determining whether the command needs to be reissued.
Use the QUERY SHOW(DEFN) command to determine when the resources or descriptors
involved in the import or export were last created, updated, or imported and compare these
timestamps to the point of failure. Look for the resource data under the following command
output column headers:
TimeCreate
TimeUpdate
TimeImport
This should indicate whether the IMPORT or EXPORT command should be issued again.
Next, we show another event sequence that illustrates a scenario in which IMPORT command
processing becomes indoubt. Then we explain how to resolve this situation.
1. You enter the following command:
IMPORT DEFN SOURCE(REPO) TYPE(TRAN) NAME(TRANA,TRANB)
2. IMS terminates during command processing.
3. Work in progress of the IMPORT command is indoubt.
4. You enter the following command:
QUERY TRAN NAME(TRANA,TRANB) SHOW(DEFN,TIMESTAMP)
5. You check the TimeImport column in the command response data.
In both of these scenarios, if a QUERY command response indicates that work in progress was
not committed, reissue the IMPORT/EXPORT command.
Important: When EXPORT is issued with the SET(IMSID(*)) parameter, every attribute value
except for SIDR/SIDL is written to the stored resource definitions of each IMS system,
because these values must always be unique to a particular IMS system.
For local transactions and transaction descriptors, the SIDR and SIDL values are saved as 0
in the repository for each IMS. When the stored resource definition is imported from the
repository either during AUTOIMPORT processing or during processing of the IMPORT
command, the SIDR and SIDL values are set to the lowest local SID value of the IMS system
where the runtime resource definition is created.
Keep in mind that when you use DRD with the repository, no automatic export occurs at
system checkpoint, unlike with RDDS DRD. To harden transactions that are dynamically
created with the DFSINSX0 user exit to the repository, the only option is to export them
with an EXPORT command.
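For example (the transaction name TRANX is illustrative), a transaction created through
DFSINSX0 can be hardened to the repository with a command such as the following one:
EXPORT DEFN TARGET(REPO) TYPE(TRAN) NAME(TRANX)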
If you are going to restrict access to a user repository defined in the RS catalog, you typically
designate the SAF class name to be used in the FRPCFG configuration member during initial
setup. However, if no security class was specified in the member at that time, you can update
it later on using the batch ADMIN UPDATE command.
Example 7-72 Using the batch ADMIN UPDATE command to designate a SAF security class
UPDATE REPOSITORY(REPO1) SECURITYCLASS(XFACILIT)
After the SAF security class is designated, you can update your RACF definitions to protect
your user repository (Example 7-73).
Example 7-73 Making RACF definitional changes after the security class has been designated
RDEFINE XFACILIT FRPREP.REPO1 UACC(NONE)
PERMIT FRPREP.REPO1 CLASS(XFACILIT) ID(ANGIE) ACCESS(READ)
Here, you can see that the SAF class XFACILIT that we just designated for use in
Example 7-72 is specified in the RACF definitions to prevent unauthorized user access to the
REPO1 repository. A user ID named ANGIE is then permitted to read it within these RACF
definitions.
After the definitional RACF changes are made, the in-storage RACF profiles must be
refreshed to reflect these updates. Use the z/OS modify interface SECURITY command to
accomplish this task (Example 7-74).
Example 7-74 Refreshing RACF in-storage profiles with z/OS modify interface SECURITY command
F REPOSVR1,SECURITY REFRESH
Important: The z/OS modify interface SECURITY command must be used any time the
RACF definitions are updated.
For more information about implementing repository security, see 7.10, “Security
considerations” on page 258.
Recovery activities
A repository data set pair (hereafter referred to as RDS) that is identified by the server as
having lost integrity is discarded. At this time, the disposition of the RDS will be changed to
DISCARD. If an RDS is discarded due to a write error, then the repository will be stopped at
this time to enable recovery. In this event, the RS drives recovery automatically if a spare
RDS is available and the only task of the user is to allocate and define a new spare data set,
assigning it to SPARE disposition with a DSCHANGE command. If no spare RDS is available, the
user repository is stopped and administrator intervention is required to restart the user
repository.
Read errors: Read errors are handled by using the other valid copy (primary or
secondary) to access the needed data; there is no spare switching for read errors. This is an
improvement for reads compared to previous IMS releases, because neither the RDDS nor the
MODBLKS data sets have a second copy.
The RS changes the disposition of the primary RDS from COPY1 to DISCARD and copies
the contents of the secondary RDS to the spare RDS automatically (if a spare is available). At
this point, the existing spare becomes the new primary data set that replaces the repository
data set that failed.
At this point in the recovery process, the user must take the following actions:
1. Issue either a batch ADMIN LIST or z/OS modify interface ADMIN,DISPLAY command to
determine which RDS has been discarded.
2. Delete the discarded data sets that were previously the primary, and define new data sets
to replace the old spare.
3. Change the disposition of this new data set to SPARE.
The batch ADMIN LIST command or the z/OS modify interface F xx,ADMIN DISPLAY() command
can be issued to show the dispositions, or statuses, of each repository data set.
You must then delete the bad repository data set whose disposition is now DISCARD.
Allocate and define a new repository data set (ideally, the size should be larger than the
previous spare) and assign a disposition of SPARE to this new data set using either the batch
ADMIN or z/OS modify interface commands shown in Example 7-75 and Example 7-76. Notice
that in each command, 1 is specified. In our sample scenario, the primary data set failed and
so a 1 representing the old primary data set is specified to set its disposition to SPARE.
Example 7-75 shows the format of the batch ADMIN DSCHANGE command.
Example 7-75 A batch ADMIN DSCHANGE command setting RDS1 to SPARE disposition
DSCHANGE REPOSITORY(REPO1) RDS((1) ACTION(SPARE))
Example 7-76 shows the format of the z/OS modify interface command.
Example 7-76 A z/OS modify interface command setting RDS1 to SPARE disposition
F REPOSVR1,ADMIN DSCHANGE (REPO1,S,1)
Access by using RM
A user repository can be accessed either through RM or directly. If going through RM, the
caller is considered either “authorized” or “non-authorized”. IMS is an authorized caller;
therefore, as long as RM is authorized, IMS has access to all repository contents by using
commands such as the following examples:
EXPORT TARGET(REPO)
IMPORT SOURCE(REPO)
DELETE DEFN
QUERY with SHOW(DEFN)
Important: Another layer of security can be put in place to prevent an unauthorized user
from issuing any new repository-specific commands. For information about how to restrict
access to these commands, see “Security considerations for commands” on page 240.
However, the RM utilities CSLURP10 (RDDS to Repository RM utility) and CSLURP20 (Repository
to RDDS RM utility) are non-authorized RM callers. In this case, the utilities do not
automatically have authorization for repository access just because RM is authorized, as is
the case with IMS. Therefore, the RM utilities require separate authorization to access the
repository.
Direct access
Access to an RS can be gained directly (without going through RM) through either the batch
ADMIN utility or the z/OS modify interface. As previously mentioned, the batch ADMIN utility
runs as a JCL job. Therefore, the user ID specified in the JCL can be used for authorization
checking to determine whether repository access is allowed. Security for commands entered
through the z/OS modify interface can be implemented using standard console security.
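As a sketch (the job name, accounting information, and user ID are placeholders for your own
values), the user ID that is checked can be supplied on the JOB statement of the batch ADMIN
utility job; depending on your installation's security settings, PASSWORD= or surrogate authority
might also be required for USER= to be honored:
//FRPADM JOB (ACCT),'BATCH ADMIN',CLASS=A,MSGCLASS=H,USER=USRADM1
//ADMIN EXEC PGM=FRPBATCH,PARM='XCFGROUP=IM12XREP'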
Connection security
Connection security is used when a caller attempts to connect to the user repository. This
concept applies to both authorized and non-authorized RM callers, that is, IMS and the RM
utilities, respectively. In either case, the caller must specify a user ID in the JCL, which is
checked in RACF to determine whether access to the repository is allowed. For the RM
utilities, this user ID is also used for SCI registration.
Member-level security
Security can also be implemented at the member level within a user repository, which only
applies to the RM utilities. In this case, you can restrict access to individual members, and
separately authorize the user ID associated with the RM utility (again, specified in the JCL)
being used to access these members.
Use member-level security if you want to restrict the RM utilities to accessing only certain
resources. After you protect the individual resources in a RACF class, permit the user ID
specified in the JCL of the utility to access these resources accordingly. For example, the
CSLURP20 utility reads repository resources and copies them to an RDDS. If you want to
limit which resources this utility can copy, permit its user ID READ access only to the
profiles for those resources.
First, choose a class in which your RS resources will be protected. Use either the FACILITY
class or your own user-defined class. If you plan to use member-level security, avoid the
FACILITY class because of its 39-character profile name length restriction (member profile
names can be longer than that).
Tip: To define a new user-defined class to RACF, add the new class to the RACF Class
Descriptor Table (ICHRRCDE) and then update the RACF Router Table (ICHRFR01) with
the new class.
The next steps are to protect your resources by defining general resource profiles, and then
to grant access to users that have been protected in those resource profiles.
To protect a user repository, define a profile using the format FRPREP.repositoryname. For
example, if you want to protect a user repository named REPO1, define a resource profile for
it using the format shown in Example 7-77.
Example 7-77 Restricting access to a user repository by defining a profile for it in RACF
RDEFINE XFACILIT FRPREP.REPO1 UACC(NONE)
Notice that we have specified that the user repository is protected in the XFACILIT class. Also,
by designating a universal access of none with UACC(NONE), explicit permission must be
granted in RACF for any user ID to access this user repository.
To protect all user repositories in your environment, define a profile using the format FRPREP.*
(see Example 7-78).
Example 7-78 Restricting access to all user repositories by defining a catch-all profile in RACF
RDEFINE XFACILIT FRPREP.* UACC(NONE)
To restrict access to the RS catalog repository, use the FRPREP.CATALOG format when defining
a resource profile for it. Keep in mind that a user can access or update the RS catalog
repository by using batch ADMIN commands, for example when adding a new user repository
to it during initial setup.
Example 7-79 Restricting access to the RS catalog repository by defining a profile for it in RACF
RDEFINE XFACILIT FRPREP.CATALOG UACC(NONE)
Individual members within a repository can also be restricted from unauthorized access by
the RM utilities CSLURP10 (RDDS to Repository RM utility) and CSLURP20 (Repository to RDDS
RM utility). To define a resource profile for an individual repository member, use the following
format:
FRPMEM.repositoryname.DFS.RSC.membername
The membername referenced in the required format must consist of the IMSplex name, followed
by the resource type and resource name. For example, a transaction named PART that exists
in an IMSplex named IMSPLEX1 can be defined in a profile such as the one shown in
Example 7-80.
Example 7-80 Restricting access to an individual member by defining a profile for it in RACF
RDEFINE XFACILIT FRPMEM.REPO1.DFS.RSC.IMSPLEX1.TRAN.PART UACC(NONE)
Lastly, you can restrict unauthorized users from modifying audit levels associated with an
individual repository. Define a resource profile in RACF using the following format:
FRPAUD.repositoryname.DFS.RSC.TYPE
Example 7-81 shows a RACF profile definition that restricts the audit level associated with a
user repository named REPO1. By designating a universal access of none with
UACC(NONE), explicit permission must be granted to a user ID in RACF before that user ID
can modify the audit level for this user repository.
Example 7-81 Restricting access to modifying audit levels by defining a profile for it in RACF
RDEFINE XFACILIT FRPAUD.REPO1.DFS.RSC.TYPE UACC(NONE)
To grant access to individual members that have been protected in RACF (such as in
Example 7-80), use a PERMIT statement such as the one shown in Example 7-84. In the
example, notice that we grant update access to a user ID named USRUTL10 so that it can
update a transaction named PART, which exists in the REPO1 user repository within the
IMSPLEX1 IMSplex.
Example 7-84 Granting update access to the USRUTL10 user ID for an individual resource
PERMIT FRPMEM.REPO1.DFS.RSC.IMSPLEX1.TRAN.PART CLASS(XFACILIT) ID(USRUTL10) ACCESS(UPDATE)
To grant access to all members that have been protected in RACF, use a catch-all PERMIT
statement such as the one shown in Example 7-85. In this example, we grant read access to
user ID USRUTL20 for all resources that have been protected in the XFACILIT class.
Example 7-85 Granting read access to the USRUTL20 user ID for all resources
PERMIT FRPMEM.*.*.*.*.*.** CLASS(XFACILIT) ID(USRUTL20) ACCESS(READ)
Finally, if you have restricted access to the audit level of your user repository (such as in
Example 7-81), you can grant access to specific user IDs to allow the user to modify the audit
level.
Important: If a user ID needs RACF UPDATE access for individual members that have
been restricted from unauthorized access, the user ID also needs RACF UPDATE access
for the actual user repository that these members are contained in. Therefore a separate
RACF PERMIT statement is required to ensure this access.
To do so, use a RACF PERMIT statement such as the one shown in Example 7-86. In the
example, access to modify the audit level of user repository REPO1 is granted to a user ID
named USRZOSMI.
Example 7-86 Granting update access to the USRZOSMI user ID to modify the audit level of REPO1
PERMIT FRPAUD.REPO1.DFS.RSC.TYPE CLASS(XFACILIT) ID(USRZOSMI) ACCESS(UPDATE)
Tip: You can group several user IDs together for higher efficiency when defining resource
profiles and granting access to them. In this case, the PERMIT statements reference a RACF
group rather than each individual user ID. The following example illustrates this concept in a
series of RACF statements that protect a user repository named REPO1 in the XFACILIT class
and subsequently grant access to it:
RDEFINE XFACILIT FRPREP.REPO1 UACC(NONE)
ADDGROUP FRPVIEW
ADDGROUP FRPEDIT
PERMIT FRPREP.REPO1 CLASS(XFACILIT) ID(FRPVIEW) ACCESS(READ)
PERMIT FRPREP.REPO1 CLASS(XFACILIT) ID(FRPEDIT) ACCESS(UPDATE)
CONNECT <VIEWER1> GROUP(FRPVIEW)
CONNECT <VIEWER2> GROUP(FRPVIEW)
CONNECT <VIEWER3> GROUP(FRPVIEW)
CONNECT <UPDATER4> GROUP(FRPEDIT)
CONNECT <UPDATER5> GROUP(FRPEDIT)
New panels were added to the DRD UI for the following commands:
EXPORT DEFN (in the initial TARGET panel)
EXPORT DEFN TARGET(REPO)
IMPORT DEFN (in the initial SOURCE panel)
IMPORT DEFN SOURCE(REPO)
DELETE DEFN
In addition, existing panels were enhanced for the following existing commands:
IMPORT DEFN SOURCE(RDDS)
QUERY DB
QUERY TRAN
QUERY TRAN DESC
QUERY DBDESC
QUERY PGM
QUERY PGMDESC
QUERY RTC
QUERY RTCDESC
This section shows several example panels for various commands in “list view” unless
otherwise noted. This view provides the greatest level of assistance to the user. The
alternative view is “syntax view” for more experienced IMS users who are familiar with
command format. This section includes only the new panels and panels that were enhanced
with additional parameter information. Panels that were not changed, but that now apply to
repository DRD (in addition to RDDS DRD), are not shown.
Figure 7-4 The IMS Application Menu
Figure 7-5 The IMS Manage Resources application menu
Several panels for the EXPORT command have been enhanced or added to accommodate the
IMS repository function.
Command ===>
* TARGET . . . . . . . . . . 2 1. RDDS
2. REPO
F1=Help F12=Cancel
Figure 7-6 Enhanced EXPORT panel in the MR application for target data set selection
Tip: When navigating through the MR application panels, ensure that your cursor is always
placed on the field in which data needs to be entered before pressing Enter. If your cursor
is elsewhere on the panel, when you press Enter you might receive an error message.
Command ===>
More: -
Target . . . . . . . . : REPO
Resource name . . . . .
Set IMSid . . . . . . .
Resource type
Enter "/" to select types
ALL All RSCs & DESCs
ALLRSC All resources ALLDESC All descriptors
DB Database resource DBDESC Database descriptor
PGM Program resource PGMDESC Program descriptor
RTC Routing code resource RTCDESC Routing code descriptor
TRAN Transaction resource TRANDESC Transaction descriptor
OPTION
Enter "/" to select options
ALLRSP Show all responses CHANGESONLY Export changes only to REPO
F1=Help F12=Cancel
Figure 7-7 New EXPORT panel in the MR application for setting parameters (list view)
Figure 7-7 shows the panel in list view, which provides the most assistance to the user. The
equivalent panel is shown in Figure 7-8 on page 267, but in syntax view. This view assumes
the user is already familiar with command format and parameter values.
Figure 7-8 New EXPORT panel in the MR application for setting parameters (syntax view)
Several panels for the IMPORT command have been enhanced or added to accommodate the
IMS repository function. Figure 7-9 shows the initial IMPORT panel that designates the
source data set for import.
In the IMS Import panel (Figure 7-9), select a source. In our test environment, we selected
option 2.
Command ===>
* SOURCE . . . . . . . . . . 1. RDDS
2. REPO
F1=Help F12=Cancel
Figure 7-9 Enhanced IMPORT panel in the MR application for source data set selection
Command ===>
More: -
Source . . . . . . . . : REPO
Resource name . . . . .
Resource type
Enter "/" to select types
ALL All RSCs & DESCs
ALLRSC All resources ALLDESC All descriptors
DB Database resource DBDESC Database descriptor
PGM Program resource PGMDESC Program descriptor
RTC Routing code resource RTCDESC Routing code descriptor
TRAN Transaction resource TRANDESC Transaction descriptor
OPTION
Enter "/" to select options
ABORT Abort import if error ALLRSP Show all responses
UPDATE Update runtime defs
SCOPE
Where to apply update . . . . . . 1. All
2. Active
F1=Help F12=Cancel
Figure 7-10 New IMPORT panel in the MR application for setting parameters (list view)
Figure 7-10 shows the panel in list view, which provides the most assistance to the user.
Figure 7-11 New IMPORT panel in the MR application for setting parameters (syntax view)
Command ===>
Source . . . . . . . . : RDDS
Resource type
Enter "/" to select types
ALL All RSCs & DESCs
ALLRSC All resources ALLDESC All descriptors
DB Database resource DBDESC Database descriptor
PGM Program resource PGMDESC Program descriptor
RTC Routing code resource RTCDESC Routing code descriptor
TRAN Transaction resource TRANDESC Transaction descriptor
OPTION
Enter "/" to select options
ABORT Abort import if error ALLRSP Show all responses
UPDATE Update runtime defs
F1=Help F12=Cancel
Figure 7-12 Enhanced IMPORT panel including the new OPTION(UPDATE) parameter
In the IMS Delete panel, select what to delete. In our test environment, we selected option 2
(DEFN).
Command ===>
* DELETE . . . . . . . . . . 1. RESOURCES
2. DEFN
F1=Help F12=Cancel
Figure 7-13 Enhanced DELETE panel in the MR application showing the new DEFN keyword
Command ===>
Target . . . . . . . . : REPO
Resource name . . . . .
For IMSID . . . . . . .
Resource type
DB Database resource DBDESC Database descriptor
PGM Program resource PGMDESC Program descriptor
RTC Routing code resource RTCDESC Routing code descriptor
TRAN Transaction resource TRANDESC Transaction descriptor
OPTION
Enter "/" to select options
ALLRSP Show all responses
F1=Help F12=Cancel
Figure 7-14 New DELETE DEFN panel in the MR application (list view)
Figure 7-15 New DELETE DEFN panel in the MR application (syntax view)
QUERY DB NAME( )
SHOW( ALL, ACCTYPE, DEFN, DEFNTYPE, GLOBAL, IMSID, LOCAL, MODEL
RESIDENT, STATUS, TIMESTAMP )
STATUS( ALLOCF, ALLOCS, BACKOUT, EEQE, LOCK, NOTINIT, NOTOPEN
OFR, OLR, OPEN, RECALL, RECOV, RNL, STOACC, STOSCHD
STOUPDS )
TYPE( DEDB, DLI, MSDB, PART, PHDAM, PHIDAM, PSINDEX )
Several new jobs and tasks, including the following examples, were added to the IVP for
repository DRD:
Creation of the RS catalog data sets and the user repository
Creation of the RS configuration file
Execution of the RS startup procedure
JCL to execute the actions:
– Start an RS
– Add a user repository to the RS catalog
– List user repository status information
– Populate a user repository
– Rename a user repository in the RS catalog
– List detailed information for a single user repository
– Modify and update user repository definitions
– Delete a user repository in the RS catalog
– Delete actual RS catalog and user repository data sets
When repository DRD is used, operational functionality and flexibility for managing resources
across an IMSplex are improved with the following functions:
Generic resource definition is available along with IMS-specific resource definitions.
The EXPORT process is a single unit of work for an entire IMSplex (all succeeds or all fails).
The EXPORT process is controlled by the user (no AUTOEXPORT).
You can select CHANGESONLY or by time periods.
Deleting stored resource definitions is controlled by the user.
You can update an existing runtime definition by using IMPORT.
The IMS repository is considered a strategic IMS architectural direction. Because all stored
resource definitions exist in a single, shared data set, consistency is ensured while
eliminating your need to maintain and coordinate multiple sets of RDDSs in a multiple-IMS
IMSplex. The architecture of the repository ensures consistency and integrity of data without
the need for additional controls.
If the CBPDO for IMS 12 is older than two weeks by the time you install the product materials,
contact the IBM Support Center or use S/390® SoftwareXcel to obtain the latest PSP bucket
information.
You can also obtain the latest PSP bucket information by going to the Preventive Service
Planning bucket page at:
http://www14.software.ibm.com/webapp/set2/psearch/search?domain=psp
Table 8-1 shows the Upgrade and Subset values for IMS 12.
IMS is now providing fix category data. Examples of IMS 12 categories include
IBM.Coexistence.IMS.V12 and IBM.TargetSystem-RequiredService.IMS.V12.
SMP/E uses a new type of ++HOLD statement (FIXCAT HOLDDATA) to identify APARs, their fix
categories, and the PTFs that resolve them.
The following list is an example of the sequence of events to determine what IMS 12
coexistence service has not been applied:
1. Download the current Enhanced Holddata.
2. Issue the SMP/E RECEIVE command on the current Enhanced Holddata.
3. Run the SMP/E REPORT MISSINGFIX command (Example 8-1).
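As a sketch of such a step (the global CSI data set name and the target zone name IMS12T are
assumptions; substitute your own), the SMP/E job might look like the following statements:
//REPORT EXEC PGM=GIMSMP
//SMPCSI DD DISP=SHR,DSN=IMS12Q.GLOBAL.CSI
//SMPOUT DD SYSOUT=*
//SMPRPT DD SYSOUT=*
//SMPCNTL DD *
 SET BOUNDARY(GLOBAL).
 REPORT MISSINGFIX ZONES(IMS12T)
   FIXCAT(IBM.Coexistence.IMS.V12).
/*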
Other products running on z/OS also include fix categories such as the following examples,
which we can use to resolve cross-product dependencies with IMS:
IBM.Coexistence.z/OS.V1R11
IBM.Function.SYSPLEXDataSharing
IBM.Device.Server.z9-EC-2094.zAAP
IBM.Device.Disk.DS8000-2107
Again, we can use the SMP/E REPORT MISSINGFIX command to identify IMS 12 service that is
not installed and that has a cross-product dependency with these categories.
For a complete list of FIXCATs, see the IBM Fix category values and descriptions page at:
http://www.ibm.com/systems/z/os/zos/smpe/fixcategory.html
Restriction: You cannot restart a failed application with extended restart across different
releases of IMS.
UPGRADE RECON
IMS 10 and IMS 11 RECONs can be upgraded to IMS 12 by executing the DBRC utility
(DSPURX00) and using the CHANGE.RECON UPGRADE command with an IMS 12 SDFSRESL
library.
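As a sketch (the RECON and library data set names are assumptions based on the naming used
elsewhere in this book), the upgrade can be run with the DBRC utility as follows; if your RECONs
are allocated dynamically, the RECONx DD statements are not needed:
//UPGRECON EXEC PGM=DSPURX00
//STEPLIB DD DISP=SHR,DSN=IMS12Q.SDFSRESL
//RECON1 DD DISP=SHR,DSN=IMS12Q.RECON1
//RECON2 DD DISP=SHR,DSN=IMS12Q.RECON2
//RECON3 DD DISP=SHR,DSN=IMS12Q.RECON3
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
 CHANGE.RECON UPGRADE
/*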
After the RECON data set has been upgraded, the SPE allows DBRC to convert records to
the appropriate release format depending on whether the record is being written or read. This
way, IMS 10 and IMS 11 can use the RECONs after they have been upgraded to IMS 12.
However, the SPE does not allow the downlevel DBRC to use the new function.
Restriction: After a RECON data set has been upgraded to the IMS 12 level, it is not
accessible to any IMS 10 or IMS 11 system that does not have the DBRC Coexistence
SPE applied.
The user exit enhancements in IMS 11 introduced version 6 of the list (SPXPLVER6). Users
migrating from IMS 11 to IMS 12 do not need to make any changes to their exits.
The IMS Connect exit parameter list (HWSEXPRM) was changed in IMS 11. Users migrating
to IMS 12 from IMS 10 must reassemble and rebind any IMS Connect user exits that use
HWSEXPRM to pick up the changes.
The TM and MSC Message Routing and Control user exit from IMS 10 works without
modification with IMS 12, but the routine must be reassembled.
The MODBLKSA, MODBLKSB and MODSTAT data sets are no longer used by FDBR. (The
information about MODBLKS is read from the IMS checkpoint log records.) Remove those
DD statements from the FDBR job control language (JCL).
Resource management
APARs/PTFs PM32951/UK68883 and PM19025/UK63960 are required on IMS 10 if CSL RM
version 1.3 is being used.
You can assemble DSECTs for IMS log records by using the ILOGREC macro.
Table 8-3 shows the log records that are new or changed with IMS 12.
Log record  New or changed  Description
x’22’       Changed         New subcodes: x’0D’ for the recoverable UPDATE POOL command and
                            x’0E’ for the recoverable UPDATE IMS command.
x’4507’     Changed         New logger statistics for OLDS and the write-ahead data set (WADS).
x’67D0’     Changed         Added subtype x’1B’ for long lock timeout, subtype x’15-01’ for internal
                            processing errors, and a DFSBCB section for subtype x’02’.
x’9904’     Changed         The RACF user ID is added when the records are produced by batch
                            (DLI or DBB) jobs.
The TM and MSC Message Routing and Control user exit routine (DFSMSCE0) from IMS 10
and IMS 11 works without modification with IMS 12, but the exit must be reassembled.
New function is added to the IMS 12 sample DFSMSCE0 exit for XCF/AOS.
The IMS 12 Database Recovery (DFSURDB0) utility accepts log, image copy, HISAM unload,
and change accumulation (CA) data sets from IMS 10, IMS 11, or IMS 12.
The IMS 12 Database Change Accumulation (DFSUCUM0) utility accepts log and CA data sets
from IMS 10, IMS 11, or IMS 12.
For complete information about these tools, including the IMS versions that they support, see
the DB2 and IMS Tools for System z page at:
http://www.ibm.com/software/data/db2imstools
In IMS 10 and later, the Variable Export utility can be directly accessed as an option from the
IVP Phase Selection panel. With this utility, you can build a data set on IMS 10 or IMS 11 to
import into the IVP for IMS 12.
To export IVP variables from IMS 11 and import them into IMS 12, complete the following
steps starting with the IVP for IMS 11:
1. In the IVP Environment Options panel (Figure 8-1), select option 3.
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP IVP Environment Options IMS 11.1
Command ===>
Option . . 3
IVP Environments
•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
• DFSIX023: DFSIXX01 - Prior session completed successfully for "DBT" •
•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
Figure 8-2 IMS 11 DFSAPPL menu
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP Sub-Option Selection - DBT IMS 11.1
Command ===>
Note: Your Sub-Option selection affects the user variables, jobs, and tasks
that will be presented. If you later change your selection, you must redo
the IVP Table Merge, Variable Gathering, File Tailoring, and Execution
processes. RACF is required when Java sub-option is selected.
4. In the Table Merge Request - DBT panel (Figure 8-4), choose option 2, Use existing
tables. Press Enter.
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP Table Merge Request - DBT IMS 11.1
Command ===>
The IVP Dialog is driven from a set of ISPF tables which contain information
about the variables, JOBs, TASKs and sequence of presentation you will need to
perform the verifications.
Since the tables will be updated by the dialog, working copies must be made
the first time you use the dialog.
If service is applied to your IMS system, or if you decide to use the IVP
dialog to build a different environment, then either the existing copies must
be updated or new copies created.
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP IVP Phase Selection - DBT IMS 11.1
Command ===>
Select the desired Phase and positioning option and press ENTER
A A. Variable Export Utility (Export variables to a data set)
6. In the IVP Variable Export Utility panel (Figure 8-6), complete these steps:
a. Choose the type of IVP to be exported.
b. Enter the HLQ value for the INSTATBL data set and the name of the data set that you
will use to hold the exported variables.
The export function writes the current value as XML. The export data set is a simple
flat file (or PDS/PDSE member) and can be edited, with care, to globally change values
as needed before import. For example, change any occurrences of a high level qualifier
that has IMS version as part of its name before running the import. This approach
saves time when you run the import and variable gathering phases in the IMS 12 IVP
dialog.
2. Specify the IVP High Level Qualifier(s) of the INSTATBL data set
___________________________________
3. Specify the Export data set. For a PDS, include the member name.
If the dataset does not exist, you will be prompted to create the dataset
________________________________________________________
Figure 8-6 Entering the INSTATBL and export data set names
To avoid the need to change JES to add the IMS 12 PROCLIB so that the IMS
cataloged procedures can be executed without JCL error, you can update these two
variables in the export data set (Example 8-3).
Updating the variables ensures that all JCL created by the file tailoring phase 3 (FT3)
process for IMS 12 includes the statements in Example 8-4. This way, all jobs submitted
on any z/OS in your sysplex can find cataloged procedures from your IMS 12 PROCLIB
data set.
Example 8-4 Additional JCL generated by the IVP file tailoring process
/*JOBPARM S=*
// JCLLIB ORDER=IMS12Q.PROCLIB
8. In the logo panel (Figure 8-8), which is displayed the first time you run the IMS 12 IVP,
press Enter to close it. On subsequent runs, you start at the environment selection panel
(Figure 8-10 on page 293).
IVP Dialog
for
IMS Version 12.1
Figure 8-9 IMS 12 IVP Notices pane (presented the first time only)
10.In the IMS 12 IVP Environment Options panel (Figure 8-10), choose the type of IMS
system that will be built by the IMS 12 IVP. Most likely your choice is the same type used to
start the IMS 11 IVP dialog (Figure 8-1 on page 287) or to export the IMS 11 IVP variables
shown in Figure 8-6 on page 291.
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP IVP Environment Options IMS 12.1
Command ===>
Option . .
IVP Environments
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP Sub-Option Selection - IMS 12.1
Command ===>
Note: Your Sub-Option selection affects the user variables, jobs, and tasks
that will be presented. If you later change your selection, you must redo
the IVP Table Merge, Variable Gathering, File Tailoring, and Execution
processes. RACF is required when Java sub-option is selected.
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
Sub-Option Change Verification - DBT
Command ===>
The Sub-Options you have just chosen are not the same as the Sub-Options
which were last active. If you change Sub-Options, Table Merge and the three
Dialog Phases must be re-run from the beginning.
From To
Y Y - IRLM - Use IRLM in IVP Applications (not available for DCCTL)
Y Y - FP - Use Fast Path in IVP Applications (not available for DCCTL)
Y Y - ETO - Use ETO (not available for Batch and DBCTL)
N Y - CQS - Add CQS Applications (not available for Batch and DBCTL)
N Y - RACF - Use RACF Security (not available for Batch)
N Y - JAVA - Use JAVA Applications and Open Database
N N - PRA - Use Parallel RECON Access (not available for Batch)
N Y - ICON - Use IMS Connect
N Y - REPO - Use IMS Repository
N Y - COUT - Use Callout Applications
13.In the Table Merge Request panel (Figure 8-13), accept the default selection of option 1,
and then press Enter to begin the table merging process.
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP Table Merge Request - DBT IMS 12.1
Command ===>
The IVP Dialog is driven from a set of ISPF tables which contain information
about the variables, JOBs, TASKs and sequence of presentation you will need to
perform the verifications.
Since the tables will be updated by the dialog, working copies must be made
the first time you use the dialog.
If service is applied to your IMS system, or if you decide to use the IVP
dialog to build a different environment, then either the existing copies must
be updated or new copies created.
While the table merge process runs, the panel shows the progress of the merge. Do not
interrupt the table merge process.
14.When table merge is complete and the completion panel (Figure 8-15) is displayed, press
Enter.
When the table merge is complete, the Phase Complete flags are reset, which forces you to
revisit the Variable Gathering phase 1 (VG1) and File Tailoring phase 3 (FT3) phases before
moving to the Execution phase 6 (EX6) phase. In phase VG1, you can import the variables
that were exported from IMS 11.
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP Table Merge has completed - DBT IMS 12.1
Command ===>
The Table Merge process has completed and the Phase Complete flags have
been turned off for all phases.
If Table Merge has just been performed for the first time for this option,
then the resetting of Phase Complete flags is of no special interest.
If Table Merge has been performed for some other reason, then the resetting
of Phase Complete flags will force you to revisit each of the phases in
sequence (Variable Gathering, File Tailoring, and Execution). Make use of
this opportunity to examine the tables for changes (the "!" indicator will
be set in the action field for items which have been added or changed by
service). Your position in each phase has been retained so that you may
return to your last position after you have browsed for changes.
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP IVP Phase Selection - DBT IMS 12.1
Command ===>
Select the desired Phase and positioning option and press ENTER
1 A. Variable Export Utility (Export variables to a data set)
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP Variable Gathering (LST Mode) - DBT .. Row 1 to 9 of 207
Command ===> Scroll ===>
Action Codes: Chg Doc eNt Rfr Imp Exp -- CHG is default if item modified
Variable = Value..................................................
Var-Title......................................................
IMP IXUIVPHQ = IMS12Q
IVP - High level DSNAME qualifier for IVP (IVP) data sets
! IXURLMHQ = IVPRLM11
IVP - High level DSNAME qualifier for IRLM (RLM) data sets
* IXUDLBHQ = IMS12Q
IVP - High level DSNAME qualifier for IMS DLIB (DLB) data sets
* IXUSYSHQ = IMS12Q
IVP - High level DSNAME qualifier for IMS System (SYS) data sets
! IXUEXEHQ = IVPEXE11
IVP - High level DSNAME qualifier for Execution (EXE) data sets
! IXUUTLHQ = IVPUTL11
IVP - High level DSNAME qualifier for Utility (UTL) data sets
! IXUVSMHQ = IVPVSM11
IVP - High level DSNAME qualifier for VSAM (VSM) data sets
! IXUSSCLS =
SMS - Storage Class
! IXUSMCLS =
SMS - Management Class
17.After starting the variable import function, in the IVP Export File Name panel
(Figure 8-18), enter the name of the XML data set exported from IMS 11.
Export Dataset:
____________________________________________
Figure 8-18 Entering the IVP Variables Export data set from IMS 11
While each variable is read and updated from the XML file being imported, a progress panel
(Figure 8-19) is displayed to show the status of the process.
Figure 8-19 All variables are now updated from the IMS 11 exported data
The variable gathering process has a progress indicator to show which variables have been
updated using the data from IMS 11. Do not interrupt this variable update process.
Now that the variable import process is complete, we return to variable gathering phase 1 but
with the values read from the IMS 11 system (Figure 8-20).
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IVP Variable Gathering (LST Mode) - DBT .. Row 1 to 9 of 207
Command ===> Scroll ===> PAGE
Action Codes: Chg Doc eNt Rfr Imp Exp -- CHG is default if item modified
Variable = Value..................................................
Var-Title......................................................
* IXUIVPHQ = IMS11B
IVP - High level DSNAME qualifier for IVP (IVP) data sets
! IXURLMHQ = IVPRLM11
IVP - High level DSNAME qualifier for IRLM (RLM) data sets
* IXUDLBHQ = IMS11B
IVP - High level DSNAME qualifier for IMS DLIB (DLB) data sets
* IXUSYSHQ = IMS11B
IVP - High level DSNAME qualifier for IMS System (SYS) data sets
* IXUEXEHQ = IMS11B.IMS11D
IVP - High level DSNAME qualifier for Execution (EXE) data sets
* IXUUTLHQ = IMS11B
IVP - High level DSNAME qualifier for Utility (UTL) data sets
* IXUVSMHQ = IMS11B.IVPVSM11
IVP - High level DSNAME qualifier for VSAM (VSM) data sets
! IXUSSCLS =
SMS ••••••••••••••••••••••••••••••••••••••••••••••
! IXUSMCLS • Import of Variables completed Successfully •
SMS ••••••••••••••••••••••••••••••••••••••••••••••
Figure 8-20 Changing the variables as needed from the IMS 11 values
You have now imported the variables from IMS 11. Review all the imported values and
change them as needed for the new IMS 12 system.
The last group listed, Steps Zx for index of additional PDS members, does not identify jobs or
tasks in the IVP process. It identifies the members of the DFSSLIB and DFSISRC libraries that
support the IVP process.
Figure 8-21 shows a sample of the initial jobs and tasks presented by the IVP dialog.
Help
______________________________________________________________________________
Execution (LST Mode) - DBT Row 1 to 20 of 282
Command ===> Scroll ===> PAGE
You can print additional documentation for the IVP jobs, tasks, and variables by using the
DOC action during the file-tailoring phase or the execution phase of the IVP dialog. Use the
IVP dialog to obtain current information regarding IVP jobs and tasks. In these lists, the jobs
and tasks are presented in the same sequence that is used by the IVP dialog.
The item types are as follows:
J (JOB) A PDS member with the same name is placed into INSTALIB during the
file-tailoring phase. Items of type J are intended to be submitted for execution.
T (Task) Tasks represent items of work that must be prepared by the user. For some
tasks, an example is provided in the INSTALIB data set. These examples are not
intended for execution.
N (Supporting materials) The INSTALIB data set can also contain members that
support other jobs, such as CLISTs and control statements.
To run the IVP jobs and tasks, complete the following steps:
1. In the IVP Phase Selection panel, select option 6, 7, or 8. Each selection within a phase
provides a different positioning option.
2. Open each job or task. To view the instructions for each job and task, use the ENT action
command.
For IVP jobs you can browse, edit, or submit the job. Some items are nonexecutable
examples, but the browse and edit actions are available to create an executable version of
nonexecutable items.
For IVP tasks, you are provided a scrollable description to assist you in performing the
task.
3. Press End or PF3 when you are done.
4. Press Enter again if you completed the execution of all jobs and tasks, or press End to
save your work if you want to complete the execution phase later.
Help
______________________________________________________________________________
Execution (LST Mode) - DBT Row 266 to 282 of 282
Command ===> Scroll ===> PAGE
Figure 8-22 IMS 12 IVP Phase U: IMS Repository Usage For DRD Resources
Three of these steps are familiar if you have run any of the IVP steps that use the CSL
functions. The RM cannot be started before the repository is running. Other new steps are
listed here:
IV_U101J scratches and reallocates the data sets needed to perform the Repository
usage for DRD resources.
IV_U104J starts the repository server address space.
IV_U105J adds an IMSRSC repository to the repository server's catalog and then starts it.
Syntax Checker saves the parameters to appropriate PROCLIB members in the correct
format. You can also use the Syntax Checker to migrate your configuration members to
IMS 12.
Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IMS Application Menu
Command ===> 5
••••••••••••••••••••••••••••••••••••••••••••••••••
• Copyright IBM Corp. 2003. All rights reserved. •
••••••••••••••••••••••••••••••••••••••••••••••••••
File Help
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IMS Parameter Syntax Checker
Command ===>
Enter the name of the IMS proclib dataset and press enter.
ISPF Library:
Project . .
Group . . .
Type . . . .
Member . . . (Blank for member list)
••••••••••••••••••••••••••••••••••••••••••••••••••
• Copyright IBM Corp. 2000. All rights reserved. •
••••••••••••••••••••••••••••••••••••••••••••••••••
Figure 8-24 Syntax Checker entry panel
When you press Enter from the main panel, Syntax Checker reads the input file and tries to
determine the IMS release and type of control region.
If Syntax Checker cannot determine this information, one of the following entry panels opens:
IMS Release and Control Region Type entry panel
IMS Release entry panel
If Syntax Checker can determine the information it requires from comments in the member,
the Syntax Checker Keyword Display panel is shown (Figure 8-26 on page 307).
File Help
______________________________________________________________________________
IMS Parameter Syntax Checker
Command ===>
Figure 8-25 IMS Release and Control Region Type entry panel
When Syntax Checker saves the member, it adds comment lines to the top of the member
to save this information. The next time that Syntax Checker processes the member, this panel
does not display.
After you type in the data and press Enter, the Syntax Checker Keyword Display panel opens.
••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••
IMS 12.1 Parameters for DB/DC
Command ===>
If you select Display new from the View menu, Syntax Checker lists only those parameters
that are new in the IMS version you have selected.
To display the default values for parameters, press the F6 function key; press the key again to
toggle between displaying and not displaying defaults. The default values are displayed in the
description field of the parameter.
After you modify the member, press Enter without making any other changes; syntax and
value checking is performed.
If errors exist, the first keyword with an error moves to the top of the display and an error
message is displayed. When all errors are resolved and you have modified the member as
required, you can save the member to the originally selected PROCLIB and member or to a
different PROCLIB and member.
Preparation tasks
Perform the following tasks before the migration:
Contact IBM Software Support for current installation, migration, and problem-resolution
information; ask for PSP for IMS.
Before you install IMS 12, check with your IBM Support Center or use Information/Access
or Service Link to determine whether additional PSP information is available that you must
be aware of.
The PSP upgrade name for IMS 12 is IMS1200.
An alternative to using PSP is to use fix category hold data (see 8.2, “Fix Category
HOLDDATA” on page 281).
Read IMS Version 12 Program Directory, GI10-8843, for the most current hardware
requirements, software requirements, prerequisites, and installation information.
Review the IMS installation overview in IMS Version 12 Installation, GC19-3017.
Review the service that has been applied to your current system. Determine if critical
service has been included in the new IMS release. If not, acquire the appropriate service
for the new IMS release. Again, the new fix category process can help you with this task.
Review the functions and enhancements in IMS Version 12 Release Planning,
GC19-3019. In particular, review the changes to the following areas:
– SMP/E, distribution, and system data sets
– System definition macros
– Log records
– RECON records
– Exit routines
– Cataloged procedures
– Control statement members in the PROCLIB data set
– Utilities
– Operator commands
– Operating procedures
– Messages and abend codes
Review the z/OS interface considerations in IMS Version 12 System Administration,
SC19-3020, which explains SVCs and the SYS1.PARMLIB updates.
Install prerequisite software and maintenance.
Determine the availability of updates to IBM IMS Tools, aids, and related products. You
can find the latest information on the Installation page at:
http://www.ibm.com/support/entry/portal/Installation/Software/Information_Management/IMS_Tools
Installation tasks
To install IMS 12, complete these tasks:
1. Install IMS 12 by using the SMP/E installation process.
2. Use CBPDO or ServerPac:
– The CBPDO product package consists of one logical tape (multiple volumes). A
CBPDO package that includes IMS can also include other products in the same
System Release (SREL). CBPDO also provides service for the products included with
the product order. The service includes all PTFs available within one week of order
fulfillment. All PTFs are identified by one or more SOURCEIDs, including PUTyymm,
RSUyymm, SMCREC, and SMCCOR.
– ServerPac is a software delivery package. It consists of products and service for which
IBM has performed the SMP/E installation steps and some of the post-SMP/E
installation steps. To install the package on your system and complete the installation
of the software it includes, use the CustomPac Installation Dialog, which is the same
dialog used for all CustomPac offerings, including IBM SystemPac® (dump-by-data-set
format), IBM ProductPac®, and RefreshPac.
For IMS, ServerPac allocates, catalogs, and loads all the data sets; sets up the SMP/E
environment; supplies a job to update PARMLIB (IEFSSNxx, PROGxx, IEASVCxx, and
SCHEDxx); and directs you to start the IVP.
3. Validate your system definition stage1 source. Consider merging some elements from the
IVP source with your source.
Run the IVP. The IMS IVP is used after the installation of a new IMS system; it is used to
verify the installation, and can be used sporadically afterwards.
The IVP Variable Export utility makes the migration of IVP variables between releases
easier.
Running the IVP is optional, but is a good practice. All required installation tasks are done
outside the IVP. The IVP verifies that the installation is correct.
4. Install the system prerequisites and your new IMS system (including the pre-generation
service).
The complete set of IMS 12 modules that are necessary for execution is built by a
combination of SMP/E processing and running a stage1 or stage2 ALL system definition.
Restriction: It is not possible to run IMS 12 with an ACB library built using IMS 10 or IMS
11. It is not possible to run IMS 10 or IMS 11 with an IMS 12 ACB library. The IMS system
will fail to load DBDs and PSBs because they are not compatible.
Validation tasks
Perform the following validation tasks:
1. Validate users’ cataloged procedures.
2. Validate user-created members of the PROCLIB data set.
Use IMS 12 Syntax Checker to convert members from IMS 10 or IMS 11 where
appropriate.
3. Validate, reassemble, and rebind exit routines and user modifications, especially
IMS Connect exit routines and code that uses IMS control blocks, such as database
randomizers.
Check your exit routines before reassembling.
4. Validate, reassemble, and rebind user programs that process log records.
Some log record formats have changed.
5. Validate and update operating procedures.
This refers to recovery, backup, and restart procedures.
6. Review the various execution parameters in the DFSPBxxx member of the PROCLIB data
set that can affect performance and migration.
Review and set the appropriate values for the AOIP, CMDP, DYNP, EMHB, FPWP, HIOP, LUMC, and
LUMP parameters to specify an upper limit on the amount of storage a pool can acquire.
You can also use Syntax Checker to validate the values for the DFSPBxxx parameters.
7. When using MSC to connect IMS systems with different releases, consider all message
types and the prefix sizes that accompany them. Such message types include Intersystem
IMS 10 was the last release of IMS to support the z/OS-based batch DLIModel utility. It is not
supported with IMS 11. If you are using this function, you should migrate to the DLIModel
utility Eclipse plug-in which is part of the IBM IMS Enterprise Suite.
The JCA 1.0 resource adapter, one of the Java connectors in the IMS DB distributed resource
adapter, is stabilized and is no longer being enhanced. You should switch to using the IMS
Universal DB resource adapter that is delivered in IMS 11.
IBM has discontinued support for IBM Enterprise Workload Manager™ (EWLM). Therefore,
IMS can no longer offer this support. IBM is providing a transition for EWLM 2.1 clients to an
IBM STG Lab Services offering. This new offering provides enhanced capabilities over the
EWLM 2.1 product.
IMS 10 was the final version of IMS in which the IMS information is delivered in the IBM
BookManager® format.
Consider the following steps when preparing your migration fallback plan. This information is
intended as a guide to understanding fallback inhibitors, and should not be considered
complete.
You can use the IBM IMS Queue Control Facility for z/OS (QCF) to requeue IMS 12
messages to IMS 10 or IMS 11 message queues.
You might have to revert either or both of the following exits to older versions if you have
changed them in a way that is not compatible with the old calling interface:
DSPCEXT0
DSPDCAX0
For more information about the DBRC commands, see IMS Version 12 Commands,
Volume 3: IMS Component and z/OS Commands, SC19-3011.
DBRC API applications that interrogate the output from Query TYPE=DB, TYPE=DBDS, and
TYPE=PART requests do not have to be modified if they do not need to access the fields
added with the IMS 11 database quiesce enhancement.
Applications that want to get the new output must map to the new output fields and ensure
that the DSPAPQHD block returned by DBRC has a minimum version of 3.0.
You can optionally migrate application programs that use the existing IMS DB resource
adapter, which supports the JCA 1.0 architecture, to the new type-2 interface IMS Universal
DB resource adapter, which supports the JCA 1.5 architecture.
WebSphere Application Server for z/OS applications that want to use the Open Database
enhancements must deploy the new type-2 interface IMS Universal DB resource adapter.
After you set up ODBM, you can start to use the type-4 interface of the IMS Universal DB
drivers.
APPC enhancements
Migrate all IMS systems that participate in a particular shared-queues environment to IMS 12
before attempting to use the APPC enhancements, even though the APPC local logical unit
(LU) functions do not have any explicit migration considerations for IMS 12.
In these cases, you can activate transaction expiration (but not message-level expiration) by
specifying the EXPRTIME parameter in the TRANSACT macro or by issuing either of the
following commands:
CREATE TRAN
UPDATE TRAN SET(EXPRTIME(seconds))
OTMA protocol messages regarding OTMA resource information are ignored by IMS Connect
and other OTMA clients that have not taken advantage of OTMA resource monitoring.
In IMS 10, the definition of the destination routing descriptors in DFSYDTx must be entered
from the most specific to the most generic destination routing descriptor name. Any
destination with a masked character of asterisk (*) has to occur after the group of names that
the asterisk is masking. For example, following the order from most specific to most generic,
you must list the contents in DFSYDTx as DEST1234, DEST12*, and DEST*.
In IMS 12, this restriction is lifted and the user creating the descriptor entries no longer must
be aware of the order. For example, the entries in DFSYDTx can be in any order, such as
DEST*, DEST1234, and DEST12*. OTMA automatically rearranges this internally from most
specific to most generic.
CQS migration
Migrate CQS and its control region (or regions) on the z/OS image at the same time. If this is
not possible, CQS must be migrated before any of the control regions are migrated.
If you are going to use your existing DBRC exit routines, there are no migration
considerations.
If you want your DBRC exit routines to use the functionality of BPE, you must create new exits
(which are based on current exits) that handle the BPE interface. The new exits must have
unique names and must be added to the BPE user exit PROCLIB member.
For information about RECON record types, see IMS Version 12 Diagnosis, GC19-3015.
For information about the RECON I/O exit routine (DSPCEXT0), see IMS Version 12 Exit
Routines, SC19-3016.
You might have to modify your procedures for maintaining the RECON data set because of
the CLEANUP.RECON command.
Consider modifying your automated programs or tools that issue DBRC commands.
The user exit enhancements in IMS 11 introduce version 6 of the list (SXPLVER6). Exit
routines that run in multiple versions of IMS must be sensitive to the version of the SXPL. The
version number was SXPLVER5 in IMS 10.
The IMS Connect exit parameter list (HWSEXPRM) is changed for IMS 11. You must
reassemble and rebind the IMS Connect exit routines that use HWSEXPRM to pick up the
changes.
Be aware of the changes to the file system paths during the migration process. Table 8-5 shows
how the file system paths differ between IMS 10 and IMS 12.
SDFSJRAR /usr/lpp/ims/ims12/imsjava/rar/IBM/
SDFSJCPI /usr/lpp/ims/ims12/imsjava/classic/IBM/
SDFSJCPS /usr/lpp/ims/ims12/imsjava/classic/ivp/IBM/
SDFSJXQY /usr/lpp/ims/imsjava10/IBM/
The following file system paths, which were created and used by previous releases of IMS,
are no longer used in IMS 12. You can delete these obsolete file system paths after you
delete the previous release from your system.
/usr/lpp/imsico/
/usr/lpp/IMSICO/
/usr/lpp/ims/imsjava91/IBM/
/usr/lpp/ims/ico91/IBM/
/usr/lpp/ims/imsjava91/samples/IBM/
/usr/lpp/ims/imsjava91/cics/IBM/
/usr/lpp/ims/imsjava91/lib/IBM/
/usr/lpp/ims/imsjava91/dlimodel/IBM/
Although running a mixed IMSplex is possible, it is desirable to upgrade all your CSL
components to IMS 12 if any control region is at IMS 12. Control regions are limited as to
If you are running multiple LPARs, migrate one LPAR at a time. If you are running multiple IMS
systems on one LPAR, migrate one IMS at a time.
In IMS 12, IMS Connect initialization fails if you specify multiple SSL ports. If you are
migrating from IMS 10, you must modify the HWSCFGxx member to specify only one SSL
port. An alternative option is to use application transparent-transaction layer security
(AT-TLS).
Several new specifications are in the HWS, TCPIP, DATASTORE, MSC, and RMTIMSCON
statements of the HWSCFGxx configuration file. These specifications appear in the display
output of commands such as VIEWHWS, VIEWPORT, and VIEWDS. Modify the automation programs
and master terminal operator (MTO) documentation to recognize these new fields.
Automation programs that read the output of IMS Connect displays or query the HWSP1410W
message must be aware of the new information and fields that have been added by the IMS
12 enhancements. Similarly, MTOs that issue IMS Connect commands should understand
that additional information is provided.
Message HWSX0908W is issued if the old exits HWSIMSO0 and HWSIMSO1 continue to be
specified in the IMS Connect configuration member. Those exits must be replaced with
HWSSMPL0 and HWSSMPL1.
If WARNSOC and WARNINC are specified in the TCPIP HWSCFGxx statement, new
messages are issued when the warning level is reached (HWSS0772W) and when the number of
sockets falls below the warning level (HWS0773I).
Because the HWSEXPRM macro has been expanded, IMS Connect exit routines that invoke
the macro must be reassembled. The XIBDS (exit interface block data store entry) has also
been expanded.
With the TCP/IP automatic reconnect capability, new code in the terminate port thread
process of IMS Connect automatically issues an internal OPENPORT command on a timer basis.
Operator commands or automation that are issued to ensure that IMS Connect reestablishes
connectivity with a TCP/IP network are no longer needed.
To benefit from the Generated Client ID function, the IMS TM resource adapter must be
replaced with the new version.
The new BPE trace capability is enabled only when the old function is disabled by removing
or commenting out the HWSRCORD DD statement in the IMS Connect startup procedure.
After the new function has been enabled, new RCTR entries in the BPE external trace data
sets will be introduced as variable length trace entries.
IMS 12 adds the following records for TCP/IP and XCF send and receive processing:
ICONTR – TCP/IP Receive
ICONTS – TCP/IP Send
ICONIR – IMS OTMA Receive
ICONIS – IMS OTMA Send
IMS 10 can process mixed-case passwords, but to enable this function, you must specify
PSWDC=M for IMS and PSWDMC=Y for IMS Connect. The default values for IMS 10 are PSWDC=U
(uppercase) and PSWDMC=N (not mixed case).
The PSWDC and PSWDMC parameters are enhanced in IMS 12 with the “R” specification, which
means that IMS and IMS Connect should handle passwords in the same manner as specified
in RACF.
The IMS 12 isolated log sender (ILS) function of the Transport Manager System (TMS) can
process logs created by earlier releases, with the following exceptions:
IMS 11 and IMS 10 tracking systems cannot accept logs produced by IMS 12.
IMS 11 and IMS 10 ILSs cannot accept logs produced by IMS 12.
Although you can migrate all of the RSR components at the same time, you are more likely to
migrate them in stages. The tracking system must be migrated before or at the same time as
the ILS at the active site. The ILS at the active site must be migrated before or at the same
time as the active IMS system.
The RECONs must be upgraded to IMS 12 before the systems that use them are migrated to
IMS 12.
Migration steps
Perform the following migration steps:
1. Upgrade the RSR tracking system RECONs to IMS 12.
2. Migrate the RSR tracking system to IMS 12.
3. Upgrade the active system RECONs to IMS 12.
4. Migrate the active TMS running the ILS to IMS 12.
5. Migrate the active IMS to IMS 12.
Serviceability enhancements
To benefit from serviceability enhancements when you migrate to IMS 12, you might have to
increase storage or modify automated tools or procedures.
Because the number of address spaces included in the system dump is limited by the amount
of storage specified by the MAXSPACE parameter of the z/OS CHNGDUMP command, you might
have to increase the amount of storage to accommodate the additional address spaces.
However, IMS does not exceed the MAXSPACE value.
Syntax Checker is a stand-alone, offline component, and it can fall back by installing the
previous release of Syntax Checker. Always maintain a copy of your previous-version
PROCLIB members to enable fallback.
For more information, see the IMS Enterprise Suite website at:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=swg-imsentersuite
This chapter explains the functions (components) that are part of the IMS Enterprise Suite
V2.1. It highlights the functionality and enhancements of each component from Enterprise
Suite V1.1 to V2.1.
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=swg-imsentersuite
Depending on the component, some parts require the IBM Rational Application Installation
Manager (at a minimum level of 1.4.4) to be installed in a new, or added to an existing,
Rational Workbench package.
IBM IMS Enterprise Suite for z/OS V2.01.00 consists of the following function modification
IDs (FMID):
HAHF210 (Base Services)
JAHF201 (SOAP Gateway)
JAHF202 (JMS API)
JAHF203 (Connect API Java)
Table 9-1 lists the IMS environments that each IBM IMS Enterprise Suite for z/OS V2.01.00
FMID supports.
The following sections describe each of the Enterprise Suite V2.1 components. Because
Base Services supports the other components, it is not addressed.
IMS Enterprise Suite Connect APIs for C
IMS Enterprise Suite Connect APIs for Java
IMS Enterprise Suite DLIModel utility plug-in
IMS Enterprise Suite Explorer for Development
IMS Enterprise Suite SOAP Gateway 2.1
IMS Enterprise Suite Java Message Service API
For more information, see IBM Information Management Software for z/OS Solutions
Information Center at the following address, and search for JMS API (in “IMS Enterprise Suite
Version 1.1”):
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
The Connect API for C simplifies the process of developing C and C++ applications that
interact with IMS through IMS Connect in the following ways:
It manages the communication between the C application and IMS Connect. The client
application provides property values that describe the type of connection to establish with
IMS Connect. The IMS Enterprise Suite API for C establishes a connection for the
applications and maintains that connection for as long as it is needed.
It helps you to set the interaction specification, which determines the details of the
exchange with IMS. The parameter values in the interaction specification determine the
fields that are set by the API in the IMS request message (IRM) header of messages that
are sent to IMS Connect.
It provides functions that encapsulate the creation of an input request message to be sent
to IMS Connect and the retrieval of the resulting response from IMS Connect. The creation
of the request message is based on properties that are specified in the application or
loaded from a reusable properties file. The properties determine the type of interaction to
be executed.
By completing these steps on behalf of the application, much of the complexity of the IMS
Connect headers and protocols can be shielded.
You can use the IMS Enterprise Suite Connect API for C to drive IMS transactions, Open
Transaction Manager Access (OTMA)-supported IMS commands, and IMS
Connect-supported commands (such as PING and Resource Access Control Facility (RACF)
Password Change).
The IMS Enterprise Suite Connect API for C/C++ supports the HWSSMPL0 and HWSSMPL1
IMS Connect user message exits. The default user exit is HWSSMPL1. In addition to the
message segment that the IMS program or the command processor expects, the input
message prepared by the API is prefixed with an IMS Connect TYPE 2 header, which is
also called the IRM header.
The IMS Enterprise Suite Connect API for C supports all existing user-supplied IMS Connect
client application functions except for the following functions:
Two-phase commit
Unicode
Synchronous callout
Secure Sockets Layer (SSL) support
All materials are downloadable and installable from the IMS Enterprise Suite download site.
The downloadable material also contains two C and two C++ sample programs. For compiling
and link-editing client programs, a C/C++ workbench is required, such as the Microsoft Visual Studio 2008 Professional
Edition.
For more information, go to the IBM Information Management Software for z/OS Solutions
Information Center at the following address, and search for Connect API for C:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
You can use the Connect API for Java to drive IMS transactions, OTMA-supported IMS
commands, Common Service Layer (CSL) Operations Manager (OM)-supported IMS type-2
commands, and IMS Connect-supported commands (such as PING and RACF Password
Change) from your Java client application. The Connect API for Java supports SSL for
securing TCP/IP communications between client applications and IMS Connect.
Callout interactions are initiated from an IMS application program rather than from the
Connect API for Java client application. In a synchronous callout interaction, the IMS
application program that sends the callout request remains scheduled in the IMS dependent
region while the request is processed by the Connect API for Java client application.
Figure 9-1 illustrates how it works.
Figure 9-1 Plain Java program accepting and responding to synchronous callout (message flow of a
synchronous ICAL request from an IMS application, through OTMA and IMS Connect, to a Java
program that uses the Connect API for Java, Enterprise Suite V2.1)
For more information, see the IBM Information Management Software for z/OS Solutions
Information Center at:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
See also OTMA destination descriptors in IMS Version 12 Communications and Connections,
SC19-3012.
When communicating with IMS 12 and later, client applications can submit IMS type-2
commands to the CSL OM with the Connect API for Java. The Connect API for Java can
deliver the response message as either an XML document in a string array or as name and
value pairs in a Java Properties array.
The IMS type-2 command interface is provided by the CSL OM component of IMS. Type-2
commands require an active CSL OM in the target IMSplex and return output in XML format.
The Connect API for Java can send IMS type-2 command input in a standard InputMessage
object. The InputMessage is parsed and sent to IMS, and the response message is returned
to the Connect API for Java client as an XML instance document that conforms to the
imsout.dtd XML definition document. Your client application can retrieve the XML directly, or
use a set of method calls provided by the Connect API for Java to retrieve the values of
individual fields from the response message.
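To make this flow concrete, the following minimal Java sketch submits an IMS type-2 command
through IMS Connect. The package, class, and response-retrieval names (ConnectionFactory,
TmInteraction, getOutputMessage, and so on) are assumptions about the Connect API for Java
rather than verbatim references, and the host, port, client ID, and data store values are
placeholders; consult the Connect API for Java documentation for the exact signatures.

// Hedged sketch of submitting an IMS type-2 command with the Connect API for Java.
// Class, package, and response-retrieval names are assumptions; host, port, client ID,
// and data store name are placeholders for your own IMS Connect configuration.
import com.ibm.ims.connect.*;

public class Type2CommandClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHostName("imshost.example.com");   // placeholder IMS Connect host
        factory.setPortNumber(9999);                   // placeholder IMS Connect port
        factory.setClientId("TYPE2CLI");
        Connection connection = factory.getConnection();
        try {
            TmInteraction interaction = connection.createInteraction();
            interaction.setImsDatastoreName("IMS1");   // placeholder data store name
            // A type-2 command is sent as ordinary input message data; it is routed to
            // the CSL OM, which must be active in the target IMSplex. An interaction-type
            // property (not shown here) selects type-2 command processing.
            interaction.setInputMessageData("QUERY TRAN NAME(*) SHOW(ALL)");
            interaction.execute();
            // The response is an XML document that conforms to imsout.dtd; the API also
            // provides method calls to retrieve individual fields from it.
            System.out.println(interaction.getOutputMessage().getDataAsString());
        } finally {
            connection.disconnect();
        }
    }
}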
To install the Connect API for Java, complete the following tasks:
Obtain the Connect API for Java component. You can obtain the component by ordering
the IMS Enterprise Suite product, or by downloading the component from the IMS
Enterprise Suite download website.
For the z/OS platform, order a Custom-Built Product Delivery Offering (CBPDO) for the
IMS Enterprise Suite product through ShopzSeries.
For other platforms, download the latest service level for the Connect API for Java from the
Connect API for Java download site.
Store the Connect API for Java archive (JAR) file in a directory that is in your class path.
JRE: The JRE must be installed with the extended character set library (installed in the
lib\charsets.jar file) that includes support for Cp037 and Cp1047 EBCDIC encoding.
This library is included with the Support for Additional Languages option in the J2SE
installer.
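Because the Cp037 and Cp1047 EBCDIC encodings are a prerequisite, you can verify up front
that the JRE on your workstation provides them. The following small Java check uses only the
standard java.nio.charset API and does not touch the Connect API itself:

import java.nio.charset.Charset;

public class CharsetCheck {
    public static void main(String[] args) {
        // The Connect API for Java needs the extended character set library
        // (lib\charsets.jar) so that the EBCDIC code pages Cp037 and Cp1047 exist.
        for (String name : new String[] { "Cp037", "Cp1047" }) {
            if (Charset.isSupported(name)) {
                System.out.println(name + " is available as " + Charset.forName(name));
            } else {
                System.out.println(name + " is missing - install the Support for "
                        + "Additional Languages option of the J2SE installer");
            }
        }
    }
}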
For more information, see IBM Information Management Software for z/OS Solutions
Information Center at the following address, and search for Connect API for Java:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
The DLIModel utility plug-in can be used on both Windows and Linux systems. For a Java
application to access an IMS database, this plug-in needs information about that database.
This database information is in IMS source files such as program specification blocks (PSBs)
and database descriptions (DBDs).
To access this information, you must first convert it into a form that you can use in the Java
application, a subclass of the com.ibm.ims.db.DLIDatabaseView class that is called the IMS
Java metadata class. The DLIModel utility generates this metadata from IMS source files.
In addition to creating metadata, the DLIModel utility plug-in generates a graphical UML
model that illustrates the IMS database structure in an interactive tree model. It also
provides the following functions:
It generates annotated XML schemas of IMS databases, which are used to retrieve XML
data from or store XML data in IMS databases.
It incorporates additional field information from COBOL or PL/I copybooks.
It incorporates additional PCB, segment, and field information, or overrides existing
information through a graphical view of the PCB.
It generates a DLIModel database report, which assists Java application programmers in
developing applications based on existing IMS database structures.
It generates an optional DLIModel trace log.
It provides a configuration editor as a launching point for generating IMS database web
services artifacts.
It generates XMI descriptions of the PSB and its databases.
DLIModel is a helpful tool for visualizing and documenting all details about DBDs and PSBs
sources. DLIModel is also required to produce the com.ibm.ims.db.DLIDatabaseView class for
accessing IMS databases from Java applications on remote and local locations by using the
type-4 and type-2 drivers. Both traditional DLI/SSA and Java Database Connectivity (JDBC)
access can be used.
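As an illustration of where the generated metadata class fits, the following hedged Java
sketch accesses an IMS database through JDBC with a type-4 Universal driver connection. The
driver class name, the URL format, the port, and the metadata class name are assumptions or
placeholders; the PCB, segment, and field names (AUTOLPCB, DEALER, DLRNO) are taken from the
AUTODB and AUTPSB11 excerpts shown later in this chapter.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ImsJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Assumption: driver class and URL format of the type-4 IMS Universal JDBC driver.
        // The last URL element names the DLIDatabaseView subclass that the DLIModel utility
        // generated (placeholder: com.example.ims.AUTPSB11DatabaseView).
        Class.forName("com.ibm.ims.jdbc.IMSDriver");
        String url = "jdbc:ims://imshost.example.com:5555/com.example.ims.AUTPSB11DatabaseView";

        Connection conn = DriverManager.getConnection(url, "IMSUSER1", "secret"); // placeholders
        try {
            Statement stmt = conn.createStatement();
            // Table names follow the PCBname.SegmentName convention of the metadata class;
            // AUTOLPCB, DEALER, and DLRNO come from the AUTPSB11 and AUTODB examples.
            ResultSet rs = stmt.executeQuery("SELECT DLRNO FROM AUTOLPCB.DEALER");
            while (rs.next()) {
                System.out.println("Dealer number: " + rs.getString("DLRNO"));
            }
            rs.close();
            stmt.close();
        } finally {
            conn.close();
        }
    }
}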
JRE: The JRE must be installed with the extended character set library (installed in the
lib\charsets.jar file) that includes support for Cp037 and Cp1047 EBCDIC encoding.
This library is included with the Support for Additional Languages option in the Java 2
Platform, Standard Edition (J2SE) installer.
For more information, see IBM Information Management Software for z/OS Solutions
Information Center at the following address, and search for IMS Enterprise Suite DLIModel
utility plug-in:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
3. In the New IMS Explorer Project window (Figure 9-5), choose a project name.
When you click Next, you can import PSBs and DBDs from a local or remote file system,
which we do later. The reason for postponing this task is that, in an existing project, the
import requires the selection of the appropriate “input source”. You must make this selection
explicitly. In this case, we click Finish.
4. In the Select an Import Source panel, select IMS Resources, and then click Next.
5. In the next window (Figure 9-8), select whether to input from a local or remote source. We
select Local file system in this example. Then click Next.
Figure 9-9 PSB selected and DBD found in same folder and preselected
7. If you receive a warning that some fields cannot be used with JDBC/SQL access
(Figure 9-10), click OK. This issue can be resolved later.
An XML representation of the source is used as meta information. You can look at this
information in two ways; the representation is different for DBDs and PSBs.
DBDs
The DBD metacode is available in XML format as shown in Example 9-1, which you can view
by using a text editor.
Example 9-1 Excerpt of the DBD metacode in XML format for DBD “AUTODB”
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ns2:dbd xmlns:ns2="http://www.ibm.com/ims/DBD" dbdName="AUTODB">
<access dbType="HDAM">
<hdam datxexit="N" password="N" osAccess="OSAM">
<rmName name="DFSHDC40">
<subOptions bytes="200" maxRBN="5" anchorPoints="1"/>
</rmName>
<dataSetContainer>
<dataSet searchA="0" scan="3" ddname="DFSDLR">
<block size="-1"/>
<size size="-1"/>
<frspc fspf="0" fbff="0"/>
</dataSet>
</dataSetContainer>
</hdam>
</access>
<segment imsName="DEALER">
<field imsDatatype="C" seqType="U" imsName="DLRNO">
<startPos>1</startPos>
<bytes>4</bytes>
<marshaller>
<typeConverter>CHAR</typeConverter>
The original DBD source has been interpreted and enriched by the IMS source reader, and
represented in XML. This is the base for editing by the Explorer.
The DBD metacode is also available in graphical format, with the IMS DBD editor (Figure 9-12).
Right-click a segment, for example ORDER, and select an option from the menu (Figure 9-14).
The Edit Field window (Figure 9-16) shows an example of editing the properties for a
segment field.
Example 9-2 Excerpt of the PSB metacode in XML format for PSB “AUTPSB11”
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ns2:psb xmlns:ns2="http://www.ibm.com/ims/PSB" compat="N" lang="JAVA"
psbName="AUTPSB11">
<dbPCB copies="1" procopt="AP" keyLength="100" dbdName="AUTOLDB"
name="AUTOLPCB">
<label>AUTOLPCB</label>
<senseg name="DEALER">
<indices/>
<senseg name="MODEL">
<indices/>
<senseg name="ORDER">
<indices/>
</senseg>
<senseg name="SALES">
<indices/>
</senseg>
<senseg name="STOCK">
<indices/>
<senseg name="STOCSALE">
<indices/>
</senseg>
</senseg>
</senseg>
<senseg name="SALESPER">
<indices/>
<senseg name="SALESINF">
<indices/>
</senseg>
<senseg name="EMPLINFO">
<indices/>
</senseg>
</senseg>
</senseg>
</dbPCB>
......
The last part of the editor shows general characteristics of the PSB (Figure 9-18).
After selecting the PCB, we have several options, including two edit options (Figure 9-19). We
start first with the “Edit” option. Then the Edit PCB Statement window (Figure 9-20) opens.
You recognize the options that otherwise are entered with the PCB macro during the
PSBGEN. The PCB shown in Figure 9-20 represents access by using a secondary index.
By hovering over a segment name or even over a field, hover text is displayed, indicating the
details of the sensitivity (Figure 9-22).
(Figure: SOAP Gateway call-in and call-out overview. A distributed web service requester
reaches SOAP Gateway over TCP/IP (SOAP), and SOAP Gateway communicates with IMS Connect
(IMSCON), OTMA, and IMS (DCCTL) over TCP/IP (XML), by using the converter, the correlator
file, and the connection bundle.)
When IMS applications are enabled as web service consumers, they can call out to any
external web services and receive the response in the same or a different transaction.
SOAP Gateway can be used on z/OS, Linux for System z, and Windows systems.
Figure 9-27 Comparing the SOAP frame, without and with message security
The security elements can be packed in different formats in the header. If the header is filled
in directly by the requester of the web service, it can contain the following token types:
UserName token (user ID/password)
Binary token (X.509v3 certificate; not in Enterprise Suite Version 2.1)
In 9.7.3, “SOAP Gateway V2.1 security implementation” on page 351, we explain what is
implemented in SOAP Gateway Version 2.1. For additional reading about WS-Security, see
WS-Security AppNotes at:
http://msdn.microsoft.com/en-us/library/ms951253.aspx
Figure 9-29 Web service flow from requester to IMS with UserNametoken
The UserName and Password are extracted from the UserName token in the SOAP Gateway
and passed to IMS Connect in the IRM request. IMS Connect verifies the
user ID and password with RACF.
The excerpt in Example 9-4 shows the format of the SOAP frame, with Header and Body.
Inside the header, you can recognize the wsse:UsernameToken, with Username and
Password. This information in the SOAP frame is in the clear in this case. The only way to
protect it is by using transport-level security. Otherwise, the information passes in the clear
over TCP/IP.
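To show what such a UserName token looks like when it is built programmatically, here is a
minimal Java sketch that uses the standard SAAJ (javax.xml.soap) API. The user ID, password,
and body element are placeholders, and this is only an illustration of the
wsse:UsernameToken structure, not the SOAP Gateway tooling itself:

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPHeader;
import javax.xml.soap.SOAPHeaderElement;
import javax.xml.soap.SOAPMessage;

public class UsernameTokenHeader {
    // OASIS WS-Security 1.0 secext namespace for the wsse elements.
    private static final String WSSE =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    public static void main(String[] args) throws Exception {
        SOAPMessage msg = MessageFactory.newInstance().createMessage();
        SOAPHeader header = msg.getSOAPHeader();

        // wsse:Security/wsse:UsernameToken with Username and Password in the clear;
        // as noted in the text, protect them with transport-level security (SSL/TLS).
        SOAPHeaderElement security = header.addHeaderElement(new QName(WSSE, "Security", "wsse"));
        SOAPElement token = security.addChildElement("UsernameToken", "wsse");
        token.addChildElement("Username", "wsse").addTextNode("IMSUSER1");  // placeholder
        token.addChildElement("Password", "wsse").addTextNode("secret");    // placeholder

        // Placeholder body element standing in for the web service input message.
        msg.getSOAPBody().addChildElement(new QName("http://example.org/ims", "InputMsg", "in"));
        msg.saveChanges();
        msg.writeTo(System.out);  // print the resulting SOAP envelope
    }
}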
With SAML, this information must be obtained beforehand from an identity provider (security
server). SAML is a specification designed to provide cross-vendor single-sign-on
interoperability. SAML was developed by a consortium of vendors (including IBM) under the
auspices of OASIS, through the OASIS SSTC (Security Services Technical Council). SAML
has two major components:
SAML assertions are used to transfer information within a single sign-on protocol.
SAML bindings and profiles are used for a single sign-on protocol.
– How SAML is employed in any given context is known as a SAML profile.
– How SAML assertions or protocol messages are conveyed in or over another protocol
is known as a SAML binding.
A SAML assertion is an XML-formatted token that is used to transfer user identity (and
attribute) information from a user's identity provider to trusted service providers as part of the
completion of a single sign-on request. A SAML assertion provides a vendor-neutral means of
transferring information between federation business partners. The identity provider asserts
the identity of the requester. In practice it happens by the requester contacting the identity
server and obtaining this proof of identity in the format of a SAML token.
This token can be used for several purposes. In the case of web services the SAML token,
which asserts the identity of the requester, is passed in the SOAP header, replacing the
UserName token or other identity tokens.
The SAML token is a richer alternative to standard WS-security identity tokens because in
addition to its identity/authentication content, it can also carry several additional attributes,
such as access control information, that are related to the requester.
As a protocol, SAML has three versions: SAML 1.0, SAML 1.1, and SAML 2.0, as explained
here:
SAML 1.0 and SAML 1.1 (collectively, SAML 1.x) focus on single sign-on functionality.
SAML 2.0 represents a major functional improvement over SAML 1.x. Approved in
March 2005, SAML 2.0 is based on SAML 1.x with significant input from the Liberty
Alliance ID-FF and Shibboleth specifications.
For more information about SAML, see the SAML Technical Overview at:
Technical Overview of the OASIS Security Assertion Markup Language (SAML) V1.1
(May 2004)
http://www.oasis-open.org/committees/download.php/6837/sstc-saml-tech-overview-1.1-cd.pdf
Security Assertion Markup Language (SAML) V2.0 Technical Overview (March 2008)
http://www.oasis-open.org/committees/download.php/27819/sstc-saml-tech-overview-2.0-cd-02.pdf
See also Federated Identity Management and Web Services Security with IBM Tivoli Security
Solutions, SG24-6394.
SOAP Gateway V1.1 already supports unsigned SAML 1.1 tokens. SOAP Gateway V2.1 supports
two additional SAML token types.
A signed SAML token further protects message integrity by enabling the recipient of the token
to validate the authenticity of the token and assert SAML token identity and attributes based
on the trust relationship with the token issuer. Figure 9-30 illustrates the flow.
Figure 9-30 Web service flow from requester to IMS with signed SAML 1.1 token
Example 9-5 shows an excerpt of a SOAP header containing a signed SAML 1.1 token that is
emitted to the requester by the security domain server.
In Example 9-5, you see the SOAP header with WS-Security included. The “wsse:..” section
contains a signed SAML 1.1 token. Within this <saml:Assertion> token, you can see the
following details:
<saml:Subject>
– <saml:NameIdentifier>: no credentials (password) need to be present; the user is
asserted by the security server.
– <saml:Attributes>
<ds:Signature>
– <ds:SignedInfo>:
<ds:CanonicalizationMethod>
<ds:SignatureMethod> (transform and digest algorithms)
– <ds:SignatureValue>
– <ds:KeyInfo>: ds:X509Data, ds:X509Certificate of the signer (if
SAML11SignedTokenTrustAny)
Figure 9-31 Web service flow from requester to IMS with SAML 2.0 token
Without the use of SAML tokens, two actors are involved in the Web Services traffic: the
requester (client) and the server hosting the web service (SOAP gateway). With SAML, there
is a third actor: the security service (identity provider). All three must be able to communicate
in a secure way, possibly with additional security mechanisms (encryption, digital signatures).
SSL and TLS protocol technology protects data exchanges between client and server
applications. SSL provides security for your interactions by securing the TCP/IP connection
between SOAP Gateway and IMS Connect. TLS is the successor to the SSL protocol.
TLS V1.0 was the first version, succeeding SSL V3.0. New TLS versions continue to be
defined by the Internet Engineering Task Force (IETF), and the TLS protocol maintains
compatibility modes for the earlier SSL protocol.
The SAML security server can also be called the “identity server”. This is the server that is
contacted by the requester to obtain a SAML token with user ID and attributes included, which
are asserted by this server. This SAML token is passed in the SOAP header between the
requester and the SOAP gateway.
Figure 9-32 Security infrastructure based on public and private key pairs
Tooling is available on Distributed and z/OS platforms to install the keystores or truststores
and their required contents. z/OS can use RACF for this.
In a test environment, to avoid the cost of signing by official authorities, it is useful to have
“self-signed” certificates for test CAs.
The following sections list and describe several of the many keytools that are available for
z/OS and distributed environments.
AT-TLS is governed by a policy file that is installed on a TCP/IP stack in z/OS. The policies in
this file determine whether SSL/TLS is used, depending in the simplest case on the IP address
and port combinations of the sessions. For example, all sessions connecting to SOAP Gateway
server port 8443 have SSL/TLS.
To configure the IBM z/OS Communications Server AT-TLS feature for SOAP Gateway, use
the IBM Configuration Assistant for z/OS Communications Server.
The high-level steps to configure the AT-TLS feature for SOAP Gateway are listed here:
1. In the Configuration Assistant, configure your z/OS image.
2. Enable the AT-TLS technology.
3. In the AT-TLS perspective:
a. Configure the port, IP address, cipher, trace level, and keyring information for server
authentication.
b. Configure other settings such as:
• Client authentication
• Certificate selection
• Certificate revocation list (CRL)
• Connection settings such as cipher reset timer and SSL session timeouts
• Additional authentication between SOAP Gateway and IMS Connect
4. Install the master AT-TLS policy configuration file.
After you activate AT-TLS, you must set up RACF as the store.
For information about SSL/TLS and AT/TLS, see IBM z/OS V1R12 Communications Server
TCP/IP Implementation: Volume 4 Security and Policy-Based Networking, SG24-7899.
Next, we describe the preparation of SSL/TLS on z/OS. Basically, you set up key databases
and build the public (certificate) and private key pairs.
If SOAP Gateway requires the keystores or truststores in a hierarchical file system
(HFS/zFS), the preparation work and all generations can still be performed from RACF. You
can then export the keys and certificates from RACF and import them (after FTP) into the
local and remote keystores or truststores to complete the setup.
RACF does not make a distinction between a keystore and a truststore; everything is in one
store. The keys are maintained in a key ring. Before you can do anything, the user ID must be
known to RACF and must have a key ring added. In practice, the name of the key ring is
qualified (username.keyringname). Example 9-7 shows what is required for the user
SOAPSVR1. SOAPSVR1 is signed with a self-signed CA certificate, which is generated first.
We also show the exports of the CA certificate and of the public and private information for
the user SOAPSVR1. Everything is done with one RACF command, RACDCERT.
For test purposes, the task must be repeated for the other users (the requesters and the
security server). The benefits of this solution are that you have a central place where all
security elements are kept, and that the RACDCERT command is easy to use and can be executed
by a batch job. When you have the files with the certificate and the P12 (public/private key)
material, you can easily distribute them and import them into the users’ keystores and
truststores.
Be aware, though, that if you are using SSL/TLS client authentication to map a digital
certificate to a RACF user ID, you must use the RACF RACDCERT command to store the client
certificate, and not the gskkyman command.
VANAERS @ SC63:/u/vanaers>gskkyman
Database Menu
0 - Exit program
In RACF, a key database is always present. In a gskkyman environment, the key databases,
keystores, trust databases, or truststores must be explicitly created as public/private key and
certificate repositories. In turn, those stores can be shared by any user IDs that have access
through the HFS.
For more information, see IBM z/OS V1R12 Communications Server TCP/IP Implementation:
Volume 4 Security and Policy-Based Networking, SG24-7899.
The iKeyman utility is a part of the IBM Java Security Socket Extension package. It is shipped
with the WebSphere Application Server and with many other Java-related packages.
Using keytool
The command line tool for Windows, keytool, is available in almost all jdk/bin and jre/bin
directories. Figure 9-34 shows the options for keytool.
C:\Program Files\IBM\SDP_RDZ_8\jdk\bin>keytool
keytool usage:
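The JKS keystores and truststores that keytool or iKeyman produce can also be inspected
programmatically, which is a quick way to confirm which certificates and keys a store actually
contains. A minimal Java sketch, reusing the SGWtruststore.ks name and password from
Example 9-8 as placeholders:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Enumeration;

public class ListStoreEntries {
    public static void main(String[] args) throws Exception {
        // Placeholders: point these at a store built with keytool or iKeyman
        // (or populated from RACF exports) and supply its password.
        String storeFile = "SGWtruststore.ks";
        char[] storePass = "password".toCharArray();

        KeyStore store = KeyStore.getInstance("JKS");
        FileInputStream in = new FileInputStream(storeFile);
        try {
            store.load(in, storePass);
        } finally {
            in.close();
        }

        // Each alias is either a trusted certificate (truststore use) or a
        // private key with its certificate chain (keystore use).
        Enumeration<String> aliases = store.aliases();
        while (aliases.hasMoreElements()) {
            String alias = aliases.nextElement();
            System.out.println(alias
                + (store.isKeyEntry(alias) ? "  [private key entry]" : "  [trusted certificate]"));
        }
    }
}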
In all scenarios, the SOAP Gateway server acts as a client to IMS Connect. The IMS
Enterprise Suite V2.1 SOAP gateway server can run on both z/OS and Windows
(Figure 9-35).
Figure 9-35 SOAP Gateway as the web service server and the SSL/TLS client
SOAP Gateway provides support for both server authentication and client authentication and
Web-services security (WS-Security) for the web service provider scenario regardless of the
platform that SOAP Gateway runs on.
SOAP Gateway clients can secure data exchanges with SOAP Gateway through HTTPS
requests by using the SSL/TLS security protocol. Similarly, SSL/TLS connections are
supported between SOAP Gateway and IMS Connect.
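The following small Java sketch shows the client side of such an HTTPS exchange: it points the
JVM at a truststore that holds the SOAP Gateway (or CA) certificate and opens a connection to
the SSL port. The host name is a placeholder, 8443 is the SSL port used elsewhere in this
chapter, and the store name and password reuse the values from Example 9-8; a real requester
would send a SOAP envelope over this connection rather than a plain request.

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class HttpsProbe {
    public static void main(String[] args) throws Exception {
        // Truststore holding the SOAP Gateway (or CA) certificate; names are placeholders.
        System.setProperty("javax.net.ssl.trustStore", "SGWtruststore.ks");
        System.setProperty("javax.net.ssl.trustStorePassword", "password");
        // If client authentication is enabled (clientAuth="true" in Example 9-8),
        // also set javax.net.ssl.keyStore and javax.net.ssl.keyStorePassword.

        URL url = new URL("https://soapgw.example.com:8443/");  // placeholder host
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        System.out.println("HTTP status:  " + conn.getResponseCode());
        System.out.println("Cipher suite: " + conn.getCipherSuite());
        conn.disconnect();
    }
}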
Tip: To configure the server to know where Java is located, issue the appropriate SOAP
Gateway command either from UNIX System Services or with sample job AEWIOGBP.
(Figure: SSL/TLS protection between the web service client, SOAP Gateway, and IMS Connect,
either with keystores and truststores or, on z/OS only, with AT-TLS, where each TCP/IP stack
has its own AT-TLS policy file and RACF SAF key ring.)
In addition to the inbound message connection rule for connections from the web service
client (service requester) to SOAP Gateway, you can configure an additional set of rules for
the connections between SOAP Gateway and IMS Connect.
Figure 9-36 shows the rules that are created to protect traffic in the following ways:
Inbound traffic in the SOAP Gateway from the requester (assuming not in z/OS)
Use the QoS feature in IBM z/OS Communications Server to specify traffic thresholds and
traffic priority to help manage traffic to the SOAP Gateway server. QoS refers to the overall
service that a user or application receives from a network, in terms of throughput and
delay. To configure the maximum connections, create a traffic descriptor, a traffic shaping
level, and a requirement map in the QoS perspective.
Outbound traffic from the SOAP Gateway to IMS Connect
Inbound traffic in IMS Connect from the SOAP Gateway
The policy rules in the z/OS TCP/IP stacks that are traversed can all be different. Moreover,
for each authentication (client or server), individual SAF key rings in RACF, and consequently
different sets of certificates, can be used.
For information about setting up AT-TLS, see the IBM Information Management Software for
z/OS Solutions Information Center at:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
You make the SSL/TLS setup active by pointing to the stores and then activating SSL in the
SOAP Gateway configuration for inbound and in the Connection bundles for outbound to IMS
Connect.
Attention: Leave these properties blank if you are using IBM z/OS Communications
Server AT-TLS feature to secure the connection between the SOAP Gateway and IMS
Connect.
Enablement on Windows
For Windows, you can only use keystores or truststores. The build of the stores and their
contents and the available tools are described in 9.7.4, “Security setup for Enterprise Suite
V2.1” on page 358.
You make the SSL/TLS setup active by pointing to the stores and activating SSL in the SOAP
Gateway configuration for inbound, and in the Connection bundles for outbound to IMS
Connect.
Example 9-8 Changes to the server.xml file for enabling SSL port 8443
<!-- Define a SSL HTTP/1.1 Connector on your port -->
<Connector port="8443"
maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
enableLookups="false" disableUploadTimeout="true"
acceptCount="100" debug="0" scheme="https" secure="true"
clientAuth="true" sslProtocol="SSL"
keystoreFile="C:\Program Files\IBM\IMS Enterprise Suite V2.1\SOAP
Gateway\server\conf\SGWkeystore.ks"
keystorePass="password"
truststoreFile="C:\Program Files\IBM\IMS Enterprise Suite V2.1\SOAP
Gateway\server\conf\SGWtruststore.ks"
truststorePass="password"
algorithm="IbmX509"/>
You can also make the changes shown in Example 9-8 by using the SOAP Gateway
management utility command option:
iogmgmt -prop...
Connection bundle
This is the same as for z/OS.
– SSL keystore name:
– SSL keystore password:
– SSL truststore name:
– SSL truststore password:
– SSL encryption level:
ES21SGB1 is simply a customized name for the start procedure distributed by IBM under the
name AEWIOGPR.
The JOB is associated with user IMSSOAP. This is a result of the RACF association with the
started task name.
This JOB is executed as a JZOS job; the required Open MVS environment is read from
member JDKSEPB1. The procedure is shown in Example 9-11.
JZOS Batch Toolkit for z/OS SDKs: The IBM JZOS Batch Toolkit for z/OS SDKs is a set
of tools that improves many of the functional and environmental characteristics of the
current Java batch capabilities on z/OS. It includes a native launcher for running Java
applications directly as batch jobs or started tasks, and a set of Java methods that make
access to traditional z/OS data and key system services directly available from Java
applications.
Figure 9-37 Start and stop options from the installation menu
To stop the SOAP Gateway server on z/OS, issue the following command:
/P es21sgb1
On Windows, you can stop the SOAP Gateway server from the installation menu (Figure 9-37
on page 369).
With the SOAP Gateway management utility, you can perform the following tasks:
Start and stop the SOAP Gateway server on Windows systems:
– For z/OS, use the START and STOP console commands.
– For Linux on System z, run the iogstart.sh and iogstop.sh scripts.
Configure SOAP Gateway server properties.
The SOAP Gateway management utility also contains the deployment function, which
previously was performed by the deployment utility.
An iogmgmt command consists of the iogmgmt statement followed by arguments to specify the
command and associated options. The management utility can be used in z/OS and
distributed environments.
The universal iogmgmt command has many options, which are explained with the specific
task. For details about commands, see the IBM Information Management Software for z/OS
Solutions Information Center at the following address, and search for SOAP Gateway
management utility reference:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
You can invoke the iogmgmt command in the following way; Example 9-15 lists the available
options.
SOAP Gateway management utility commands are case-sensitive and must be entered in all
lowercase. Alternatively, you can enter them as shown in the IBM Information Management
Software for z/OS Solutions Information Center at the following address (search for command
reference):
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
This opportunity already existed in IMS 10 and IMS 11. Documentation and examples are
available in the following documents:
IMS 11
IMS Phonebook sample
After successful completion of this preparation, Figure 9-38 shows the data sets that are
available.
COBOL Converter
Figure 9-38 Enterprise Suite Project in Rational Developer for System z Version 8
To use a SAML token, the correlator can require a manual update to indicate which
token is present in the SOAP header. This information must be set in extendedProperty1.
Example 9-18 shows the SAML11SignedTokenTrustAny correlator file.
Connection bundle
The connection bundle describes the characteristics of the SOAP Gateway connection to the
IMS Connect server. For TCP/IP, SOAP Gateway is always a client. The name of the
connection bundle was specified during the build of the web service; we create the bundle
by using the command shown in Example 9-19.
Many other parameters are available for the connection bundle. In addition to the options
shown in the example, the options listed here are also available. The command is used to
create, update, and delete the bundle.
Example 9-20 Create, delete, update options for the connection bundle
iogmgmt
> -conn -c -n--bundle_name
-d -n--bundle_name
-u -n--bundle_name
-h host_name
-p port_number (default 9999)
-d--datastore_name -i callout_tpipe_name
Most of the other parameters deal with the security references of the SOAP Gateway as a
client as shown in Example 9-21, where:
saf_user_ID, saf_password, and saf_group_name are the default values passed in the
extended IRM type-2 header, if no user information is received from ws_security
-r new_bundle_name to rename the connection bundle
-i callout_tpipe_name
- ....callout_targets... to call out with a client or server and basic authentication
The options in Example 9-22 describe SSL characteristics for the connection to IMS Connect,
when AT-TLS is not used.
Deploying the web service to SOAP Gateway enables the application and allows it to begin
processing client requests. Before you deploy the web service, you must have the WSDL file
name, correlator XML file name, and connection bundle entry name for the web service.
(Figure: SOAP requests with an empty header, a UserName token, or a SAML 1.1 token, each
passed through correlation and a connection bundle to IMS Connect.)
3. Specify whether the service requires SSL transport. As shown previously for z/OS, you can choose between an AT-TLS setup and a setup with keystores or truststores. Windows supports only keystores or truststores. SSL setup is required when the incoming client request uses SSL.
Message security
SSL deals with transport-level security. This section explains what happens to the message security, which is the security information carried in the SOAP header of the envelope (Figure 9-40 on page 379). The transport-level security protects the message-level security so that it cannot be read or altered in transit.
The SOAP Gateway passes the message security from the SOAP header into the header extensions of the IMS Connect request message (IRM type 2) as RACF-recognizable information. The type parameter that is specified during deployment tells the SOAP Gateway server which type of security is present in the SOAP header.
If the security information is not available from the SOAP header, the default USERID, GROUP, and PASSWORD specifications in the connection bundle are used.
Figure 9-40 Passing WS_security into the IRM request to IMS Connect
COBOL converter
The COBOL converter function did not change from previous releases. To handle the XML
data from the client, you can either modify the IMS application to accept the XML input
message and to return an XML output message, or use the IBM Rational Developer for
System z XML converter drivers to transform the XML data in IMS Connect. These converter
drivers are generated in the COBOL or PL/I language.
The IMS Connect XML adapter function runs the converter driver inside the IMS Connect address space to perform the XML transformation. To handle IMS transaction input and output messages in XML format from the SOAP client, the XML adapter converts the XML-tagged data into the format that the IMS application accepts, and converts the outgoing messages in the opposite direction.
If the IMS application is a multi-segment message processing program (MPP), you must use IBM Rational Developer for System z to generate the XML converter.
The COBOL XML converter driver that was generated by Rational Developer for System z as
part of the WSDL generation must be uploaded to your host IMS machine and compiled and
link-edited so it can be accessed by IMS Connect.
To configure IMS Connect to convert XML data from the client into COBOL or PL/I IMS
application program data, complete the following basic steps:
1. Specify the HWSSOAP1 user message exit in the EXIT= parameter of the TCP/IP
configuration statement (see 1 in Example 9-24).
2. Include the ADAPTER configuration statement, ADAPTER=(XML=Y), in the IMS Connect configuration member HWSCFGxx (see 2 in Example 9-24 and the sketch after this list).
3. Define the XML adapter as a BPE exit routine for IMS Connect by coding a BPE exit list in an IMS PROCLIB data set member (Example 9-25).
5. For COBOL application programs, the XML converter is based on the COBOL copybook of
the IMS COBOL application program that processes the message. For PL/I application
programs, the XML converter is based on the source of the PL/I application program. Each
IMS application that processes messages converted from XML must have its own unique
XML converter.
The XML converters run in an IBM Language Environment for z/OS enclave in the IMS
Connect region and use approximately 33 MB of storage. The IMS Connect region size
must be increased to accommodate this storage requirement. Compile and link the XML
converter into an APF-authorized data set that is concatenated to the STEPLIB in the IMS
Connect startup JCL. When linking the XML converter, specify an additional program
entry name for an internal service as an ALIAS in the link job.
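For orientation, a minimal HWSCFGxx sketch that reflects steps 1 and 2 might look like the following lines; the ID, host name, port, and data store values are placeholders, and the layout only approximates the statement format shown in Example 9-24:

HWS       (ID=HWS1,RACF=N,XIBAREA=20)
TCPIP     (HOSTNAME=TCPIP,PORTID=(9999),MAXSOC=50,TIMEOUT=8800,
           EXIT=(HWSSOAP1))
ADAPTER   (XML=Y)
DATASTORE (ID=IMS1,GROUP=IMSXCF,MEMBER=HWS1,TMEMBER=IMS1OTMA)

The EXIT=(HWSSOAP1) entry corresponds to step 1, and the ADAPTER statement corresponds to step 2.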
You can find details about this installation in the IBM Information Management Software for
z/OS Solutions Information Center at the following address. Search for configuring XML
conversion support for IMS Connect clients for IMS 12:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
Configure and remove the configuration for a web service or business event server.
Set up the connection and correlation properties for a web service.
Configure callout properties, and manage the callout threads and thread pool.
Example 9-27 Command to retrieve information from the file system configuration of a stopped server
iogmgmt -view -correlatorfile ALL
The web-based Administrative Console is not available if the server is stopped. However, this
command retrieves information from the master (file system) configuration of a stopped
server instead of the runtime configuration.
To start the SOAP Gateway Administrative Console, choose one of the following options
depending on your environment:
For Windows, from the Start menu, select Start → Programs → IBM IMS Enterprise Suite Vx.x → SOAP Gateway → Administrative Console.
For Linux on System z and z/OS, from a web browser, enter the address shown in
Example 9-28, where hostname is the host name and port is the port number where
SOAP Gateway is running. The default port number is 8080.
Example 9-28 Start the SOAP Gateway Administrative Console from browser
http://hostname:port/imssoap
The Administrative Console opens in a web browser. Click View Deployed web services.
The list of the currently deployed web services is displayed. Each item in the list is a link to the
web service's WSDL file.
For more information, see the IBM Information Management Software for z/OS Solutions
Information Center at the following address, and search for IMS Enterprise Suite SOAP
Gateway for IMS 12:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
JMS is a Java technology for accessing queue or topic resources. Any vendor that offers Java access to queues (the point-to-point (PTP) domain) or topics (the publish/subscribe (PUB/SUB) domain) must implement the core abstract JMS classes. An abstract class contains only the signatures of the methods; it includes no logic.
Implementation classes provide the actual interface to the resources. IBM provides its implementation classes for WebSphere MQ.
The DL/I ICAL call for IMS dependent regions allows a program to perform synchronous callouts by using DL/I calls from traditional languages such as PL/I, COBOL, C, and Java.
The IMS implementation of JMS supports only the PTP messaging domain. In addition, support is provided only for non-transacted QueueSession objects in Session.AUTO_ACKNOWLEDGE mode. If a JMP or JBP application attempts to call any JMS method that is not supported by IMS, or passes an unsupported argument to a JMS method call, a JMSException exception is thrown.
(Figure: synchronous callout flow. The IMS TM application issues the callout, which flows through OTMA and the OTMA destination descriptor to IMS Connect, where the HWSJAVA0 message exit and the adapter pass it over TCP/IP to a WebSphere Application Server SLSB application or SYNC MDB; the response returns along the same path.)
To send a message using the JMP and JBP support for synchronous callout and
synchronously receive a response:
1. Create a com.ibm.ims.jms.IMSQueueConnectionFactory object.
2. Create a JMS QueueConnection instance by calling the createQueueConnection method
on the IMSQueueConnectionFactory object.
3. Create a JMS QueueSession instance by calling the createQueueSession method on the
QueueConnection instance. In the method call, you must set the input parameter values to
false and Session.AUTO_ACKNOWLEDGE to specify that the generated QueueSession
instance is non-transacted and runs in AUTO_ACKNOWLEDGE mode.
4. Create a queue identity by calling the createQueue method on the QueueSession
instance. In the method call, you must set the input parameter value to the OTMA
descriptor name for the synchronous callout operation.
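The remaining steps are in the full procedure in the information center. As a minimal Java sketch of the sequence, assuming the later steps use the standard JMS QueueSender and QueueReceiver pattern and using a hypothetical OTMA destination descriptor name of OTMACL01, the code might look like this:

import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueReceiver;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import com.ibm.ims.jms.IMSQueueConnectionFactory;

public class SyncCalloutSketch {
    public String sendCallout(String payload) throws Exception {
        // Step 1: create the IMS-provided connection factory
        IMSQueueConnectionFactory factory = new IMSQueueConnectionFactory();
        // Step 2: create a QueueConnection
        QueueConnection connection = factory.createQueueConnection();
        // Step 3: non-transacted session in AUTO_ACKNOWLEDGE mode
        //         (the only combination that IMS supports)
        QueueSession session =
            connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        // Step 4: the queue identity is the OTMA destination descriptor
        //         name ("OTMACL01" is a placeholder)
        Queue queue = session.createQueue("OTMACL01");
        // Later steps (assumed here): send the callout request and wait
        // synchronously for the response on the same queue
        QueueSender sender = session.createSender(queue);
        QueueReceiver receiver = session.createReceiver(queue);
        TextMessage request = session.createTextMessage(payload);
        sender.send(request);
        TextMessage reply = (TextMessage) receiver.receive(10000); // 10-second wait
        String answer = (reply == null) ? null : reply.getText();
        session.close();
        connection.close();
        return answer;
    }
}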
For more information, see the IBM Information Management Software for z/OS Solutions
Information Center at the following address, and search for JMS API:
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
To meet the objectives of a common interface and cross-tool integration, IBM announced the following Solution Packs in 2010:
IBM Tools Base for z/OS (product number 5655-V93), which was called IMS Tools Base for z/OS in Version 1.1
IMS Fast Path Solution Pack for z/OS (5655-W14)
IMS Database Solution Pack for z/OS (5655-S77)
IMS Recovery Solution Pack for z/OS (5655-V86)
IMS Performance Solution Pack for z/OS (5655-S42)
You can still order some of the products included in these packs individually, but most of them
are now only available through those packs. Other products, which are not included in packs,
can be ordered as before.
This section provides an overview of those tools and the major updates made to some of
them to support IMS 12.
Documentation
For complete IMS Tools documentation, see the IMS Tools Product Page, which covers both the DB2 and IMS Tools, at:
https://www.ibm.com/support/docview.wss?uid=swg27020942
Select the product name at the top of the page to go directly to the information.
This page includes a table with the following information and more:
The product number
The General Availability (GA) date with a link to the announcement letter if you click the
GA date
The End of Marketing (EOM) date with a link to the withdrawal letter
The End of Support (EOS) date with a link to the service discontinuance announcement
The replacement product name and version if it is no longer supported
Maintenance
For a list of available fixes (program temporary fixes, or PTFs) and descriptions of the problems that they solve (APARs), see the IBM DB2 and IMS Tools PTF Listing page at:
https://www-304.ibm.com/support/docview.wss?rs=434&uid=swg27008646&context=SSZJXP
You can also save this information as a comma-separated value (CSV) file to import the list of
APARs into a spreadsheet or database for sorting or downloading. Keep a current level of
maintenance on the tools, and check this web page as soon as you experience a problem.
One APAR often solves more than one problem, which makes the time spent applying them worthwhile, especially before you start to use new functions or a new version of IMS.
Compatibility matrix
For information about the minimum maintenance level required for IMS Tools to support all
the current IMS versions, which are Versions 10, 11, and 12, see the IMS Information
Management Tools and IMS for z/OS V11.1 Compatibility page at:
https://www-304.ibm.com/support/docview.wss?rs=434&uid=swg21296180
Check this page before you start a new IMS migration to access the latest tools.
For more information about the tools, see the official documentation at the websites referenced in 10.1.1, “Useful links” on page 386.
(Figure: the IMS tools portfolio, showing the components of IBM Tools Base and of the IMS Database, Fast Path, Recovery, and Performance Solution Packs, together with other tools such as IMS Configuration Manager, IMS Sysplex Manager, HP Sysgen Tools, Queue Control Facility, IMS Workload Router, Command Control Facility, ETO Support, IMS Buffer Pool Analyzer, Batch Terminal Simulator, Transaction Analysis Workbench, Batch Backout Manager, Program Restart Facility, IMS Network Compression Facility, IMS Audit Management Expert, and IBM Data Encryption for IMS and DB2 Databases.)
The following sections describe the tools belonging to the Solutions Packs:
IBM Tools Base Pack for z/OS Version 1 (5655-V93)
IMS Fast Path Solution Pack for z/OS (5655-W14)
IMS Database Solution Pack for z/OS (5655-S77)
IMS Recovery Solution Pack for z/OS (5655-V86)
IMS Performance Solution Pack for z/OS (5655-S42)
Important: This pack is the foundation of the other packs, and must be installed as a
prerequisite to all the IMS solution packs and most of the IMS tools products.
The Tools Base pack is a no-charge product, but it is not included in the other packs. Therefore, you must order it separately.
IMS Tools Base for z/OS, V1.1 (5655-V93), now called IBM Tools Base for z/OS Version 1.1,
combines several IMS products and the entire collection of IMS Tools common infrastructure
components into a single, consolidated installation package.
The IMS Tools Base package provides the following features, functions, and common
services:
IMS Tools Knowledge Base
IMS Tools Common services
IMS Tools Distributed Access Infrastructure
IMS Hardware Compression Extended
This data is centralized in a common report repository and can be managed with its viewing interface across a complex sysplex environment. Historical report data can be retrieved through a powerful search and preserved for future decision making. ITKB is the single interface within a sysplex environment through which multiple IMS Tools products share report output.
Important: IMS Tools Knowledge Base, IMS Tools Common services, IMS Tools
Distributed Access Infrastructure, and IMS Hardware Compression Extended cannot be
ordered separately and are only available from IMS Tools base pack or IBM Tools base
pack.
Attention: IBM Tools Base for z/OS Version 1.2 supports IMS 12 without additional
maintenance.
Application: IMS Fast Path Solution Pack for z/OS applies to data entry databases (DEDBs). IMS Database Solution Pack for z/OS applies to full-function databases.
Important: IMS HP Fast Path Utilities (5655-W14) and IMS Database Repair Facility
(5655-E03) are only available from the IMS Fast Path Solution Pack and cannot be ordered
separately. The old versions of those products are not compatible with IMS 12 and are
withdrawn from support on 30 September 2011.
The pack also replaces the older Fast Path utility products, Fast Path Basic Tools (5655-E30) and Fast Path Online Tools (5655-F78), which have been at end of support (EOS) since 2006.
IMS Fast Path Solution Pack 110 (5655-W14, FMID HAHQ110)
  PM21939/UK62565  Preconditioning APAR for IMS 12
  PM31908/UK65466  DEDB secondary index enhancement
  PM36509/UK69078  Compatibility with LIU for IMS 12
  PM37894/UK69302  Compatibility with Recovery Solution Pack DRF function for IMS 12
IMS High Performance Image Copy 420 (5655-N45, FMID H1J0420)
  PM21942/UK62577  Preconditioning APAR for IMS 12
  PM34347/UK66329  EAV OSAM IMS 12 support for HPIO
IMS Library Integrity Utilities 210 (5655-U08, FMID H27P210)
  PM21961/UK62602  Preconditioning APAR for IMS 12
  PM46494/UK71799  New function to support indexed DEDB and Fast Path secondary index with IMS 12
Figure 10-3 IMS High Performance Fast Path Utilities components
IMS HP FP Utilities provides a complete set of high-performance utilities for unload, reload, reorganization, backup, and verification. It reports on DEDB areas and tunes the libraries. It also reduces CPU and elapsed time, processes multiple DEDB areas in parallel in a single step, and eliminates I/O to intermediate data sets by allowing the unload, reload, analyze, and backup tasks to run in a single step.
Additionally, IMS HP FP Utilities has been enhanced to support the IMS 12 Fast Path
Secondary Index (FPSI) new feature. For information about this support, see 10.2.1, “High
Performance Fast Path Utilities updated for IMS 12” on page 410.
This product is a replacement for IMS Image Copy standard solutions for both full function
(including high availability large databases (HALDB)) and fast path DEDB databases,
whether you are dealing with a batch or concurrent image copy, either standard or DFSMS
COPY image copy.
HPIC includes a high-performance I/O engine for read and write processing and can automatically restart failed image copies. It also automatically detects improved data recording capability (IDRC) for output tape.
LIU helps to efficiently maintain the numerous IMS data sets that contain definitions, such as DBDLIB, PSBLIB, ACBLIB, and MFS format libraries. For example, it ensures that the correct definition is in use, which helps prevent database corruption.
LIU has been enhanced to support the IMS 12 Fast Path Secondary Index new feature. For
more information about this topic, see 10.2.2, “Library Integrity Utilities updated for IMS 12”
on page 423.
The maintenance to support IMS 12 must be applied to the individual products of the Database Solution Pack (see Table 10-2). Other APARs are available; stay current with maintenance levels and check the maintenance website.
When you order this pack, you receive all the products included in it. In this case, however, all the products can also be ordered individually.
IMS High Performance Image Copy 420 (5655-N45, FMID H1J0420)
  PM21942/UK62577  Preconditioning APAR for IMS 12
  PM34347/UK66329  EAV OSAM IMS 12 support for HPIO
IMS Library Integrity Utilities 210 (5655-U08, FMID H27P210)
  PM21961/UK62602  Preconditioning APAR for IMS 12
  PM46494/UK71799  New function to support indexed DEDB and Fast Path secondary index with IMS 12
IMS High Performance Pointer Checker 310 (5655-U09, FMIDs HPC2310 and H22K310)
  PM21945/UK62559  Preconditioning APAR for IMS 12 for the HPPC part
  PM25552/UK62558  Preconditioning APAR for IMS 12 for the Data Repair Facility part
IMS High Performance Unload 120 (5655-E06, FMID H1IN120)
  PM22119/UK62576  Preconditioning APAR for IMS 12
IMS High Performance Load 210 (5655-M26, FMID H1IM210)
  PM22118/UK62579  Preconditioning APAR for IMS 12
IMS Index Builder 310 (5655-R01, FMID H220310)
  PM22120/UK62546  Preconditioning APAR for IMS 12
IMS High Performance Prefix Resolution 310 (5655-M27, FMID H1IP310)
  PM22121/UK62343  Preconditioning APAR for IMS 12
It does not support IMS Partition DB (5697-A06, 5697-D85) or any other products with an
equivalent function.
Online Reorganization Facility tool: IMS Database Reorganization Expert shortens the reorganization process by executing the various steps in parallel and is fully integrated with other tools. However, the databases are unavailable during this short process; it is not an online reorganization. Its strength is conditional reorganization based on sensor data.
Online Reorganization (OLR) is an IMS function, available since IMS 9, that provides a truly online reorganization, but only for HALDB. The Online Reorganization Facility tool, product number 5655-H97, provides a nearly online reorganization for full-function databases, including HALDB. At the end of the reorganization, it must make the database unavailable for a short time to finish applying the updates, renaming the data sets, and updating the RECON data set.
The Smart Reorg utility (with CRSS) evaluates full-function databases and collects sensor data. It can then determine whether the database needs to be reorganized, according to policies and statistics stored in the IMS Tools Knowledge Base repository. After this evaluation, it can start the reorganization automatically and evaluate the reorganized database to check the benefit of the reorganization. Finally, it provides a report on the database status and the changes made to it.
(Figure: the Smart Reorg driver works with the Conditional Reorganization Support Service, which includes Policy Services; its report service and sensor data feed the IMS Tools Knowledge Base repositories, and online database scanning and unload run against the online IMS subsystems.)
The IMS Parallel Reorganization utilities provide a high-performance engine that runs unload and load in parallel in a single step and performs related functions such as index building, image copy and hash pointer checking, and prefix resolution and update.
The JCL is easy to write, and you do not need deep IMS skills to build a reorganization batch job with this tool.
The heart of HP Unload, the HSSR Engine, can be used by two utility programs, FABHURG1 and FABHFSU. Among other functions, this engine provides the following capabilities:
Unloading of compressed segments without decompression
A user exit for additional selection, editing of unloaded segments, or both
Production of statistical reports
The ability to continue after segment sequence errors
The ability to read corrupted databases and bypass corrupted pointers
Forced access to HIDAM or PHIDAM roots by using an index
A diagnostic report about pointer errors
HP Unload has an API that is compatible with HSSR, which enables DL/I application programs to use HSSR to read a database with GN calls. There is no need to change the application to use the engine; eligibility is specified on the PCB.
IMS HP Load provides a set of high performance reorganization reload procedures for the
following database organizations:
HDAM
HIDAM
HISAM
SHISAM
PHDAM
PHIDAM
HP Load includes a Physical Sequence Sort for Reload (PSSR) engine that is used before reload to sort the unload data set, or that can be used before migrating to HALDB to sort the data into partition sequence. It can use disk or a data space to hold data during its processing.
With IMS Index Builder, you can build or rebuild the following indexes, for both full-function and partitioned (HALDB) databases:
IMS secondary indexes
Hierarchical Indexed Direct Access Method (HIDAM) primary indexes
Indirect list data sets (ILDS)
IMS Index Builder makes rebuilding indexes faster than taking an image copy and recovering them. It streamlines the process by creating multiple indexes in a single job step, and it uses both parallel sort and parallel load whenever more than one index is being built, thus reducing the time needed to build multiple indexes of a single physical database.
Index build can be integrated with the DRF recovery process and reorganization process,
which is available in the IMS Recovery Solution Pack. See 10.1.6, “IMS Recovery Solution
Pack for z/OS (5655-V86)” on page 399.
Important: IMS Database Recovery Facility for z/OS 310, IMS Database Recovery Facility
Extended Functions for z/OS 110, and IMS High Performance Change Accumulation Utility
for z/OS 140 are only available from this pack.
The old versions of these products are not compatible with IMS 12 and are withdrawn from
support on 30 September 2011.
When you order the IMS Recovery Solution Pack, you receive all five products.
IMS Recovery Expert for z/OS 110 has been renamed to IMS Database Recovery Facility Extended Functions for z/OS 110. It has been merged with IMS Database Recovery Facility for z/OS 310 into a single library, and they provide a single interface.
IMS Recovery Expert for z/OS Version 2 is another product available outside this pack. For an
overview of this product, see “IMS Recovery Expert for z/OS 210” on page 407.
Table 10-3 lists the APAR and PTF numbers. Other APARs are available; be sure to stay current with maintenance levels by checking the maintenance website. When you order this pack, you receive all the products included in it.
IMS Index Builder 310 (5655-R01, FMID H220310)
  PM22120/UK62546  Preconditioning APAR for IMS 12
If the Fast Path Solution Pack is not available, or if one or both APARs are not applied, the standard IMS utility is used.
Figure 10-5 Data Recovery Facility at a glance
It internally calls High Performance Change Accumulation to read and sort the change accumulation records.
IMS Problem Investigator (5655-R02, FMID H28T220)
  PM24662/UK65183  Preconditioning APAR for IMS 12
  PM34000/UK65462  Compatibility with CEX
  PM36967/UK68755  Support for new CEX journal records with IMS 12
IMS Performance Analyzer (5655-R03, FMID H23K420)
  PM24585/UK64657  Preconditioning APAR for IMS 12
IMS Connect Extension (5655-S56, FMID H28S220)
  PM32394/UK65339  Preconditioning APAR for IMS 12
  PM24860/UK22288
IMS PI is an easy tool to use, with minimum setup required. It provides an enhanced level of
problem determination services for IMS Transaction Manager (IMS TM), IMS DB, Common
Queue Server (CQS), IMS Connect (with IMS Connect Extension (CEX) tool) and others. It
can include log record information coming from IBM IMS, CEX, DB2, CQS, System
Management Facilities (SMF), and IBM OMEGAMON®.
IMS PI can dynamically select the right log data sets from the RECON information or from IMS Connect Extension information, and it formats them quickly so that you can easily navigate them.
IMS Problem Investigator is integrated with IMS Performance Analyzer and IMS Connect
Extension. It provides the most benefits to IMS Transaction Manager.
Another product, called IBM Transaction Analysis Workbench, extends the capabilities of IMS
PI to CICS and offers more z/OS logging information formatting. For an overview of this
product, see “IBM Transaction Analysis Workbench” on page 408.
(Figure: IMS PA inputs. SLDS or OLDS on disk or tape and IMS Connect Extensions journal data sets are read according to a report or extract request that is built through the ISPF dialog or JCL with input file ddnames, producing the requested reports.)
The reports provided by IMS PA can help to perform the following tasks:
Daily monitoring, using dashboards for a daily system health check, transit response time
reports to help identify problem transactions, or management exception reports, which
identify when target service levels are not being met
Long-term capacity planning and service levels, using the transaction history file, which
accumulates transaction performance information, or loading into DB2 to build a
performance database and to report on a host or workstation using your favorite SQL
reporting tool
Analysis of a performance problem and exploration with other reports, such as transit reports for poor response times, resource utilization reports for IMS resource constraints, or specialized traces that track specific events.
IMS Connect Extension provides event collection and instrumentation for IMS Connect, plus many features such as transaction routing and workload balancing, security features such as ACEE caching for IMS Connect security, dynamic addition, reload, deletion, disabling, or enabling of user exits, and even setting the transaction expiration time for OTMA transactions.
(Figure: the IMS Connect Extension system view shows the status of all systems and extensive details on individual sessions across pre-OTMA, OTMA, and post-OTMA processing, including the ability to cancel sessions.)
You can see the activity on all systems, cancel sessions, stop or start IMS Connect, data stores, or ports, and reload exits. Alternatively, you can view an individual IMS Connect system. It is also easy to issue IMS Connect commands or IMS commands.
IMS Connect Extension has been updated to support new IMS 12 enhancements, such as IMS Connect Multiple Systems Coupling links using TCP/IP and IMS Connect to IMS Connect communication. For more information, see 10.2.3, “IMS Connect Extension updated for IMS 12” on page 424.
IMS Connect Extension is integrated with IMS PI, IMS PA, and IBM Transaction Analysis
Workbench.
IMS Recovery Expert for z/OS 210 is a storage-aware backup and recovery solution. It
integrates storage processor fast-replication facilities with IMS backup and recovery
operations. It allows instantaneous backups with no application downtime, reduces recovery
time, and simplifies disaster recovery procedures while using fewer CPU, I/O, and storage resources.
IMS Recovery Expert for z/OS 210 offers the following major benefits:
Instantaneous backup of entire IMS systems with no application downtime.
Quick and easy recovery using intelligent recovery managers for local and remote
recovery support.
Effortless IMS backup and recovery management through an easy to use ISPF interface.
Lower CPU, I/O, and tape resource usage than image copies.
Automatic backup validation to achieve successful recoveries every time.
Use of one backup for multiple purposes.
Transformation of disaster recovery into a disaster restart process, reducing recovery time
objectives.
Proof of compliance with internal or federal regulations.
IMS Sysplex Manager offers real-time management of the IMS sysplex environment. It is not a performance monitor.
IMS Sysplex Manager provides a single point of control that gives a single-system image of your sysplex through a simplified ISPF user interface. Through this interface, you can display IMS resources and structures, such as the shared queues or the IRLM lock structure, and drill down to display and manage, for example, lock or message information. You can issue global type-1 commands or OM type-2 commands, issue z/OS commands such as SVC dump capture, obtain basic z/OS performance information, and get a dashboard of key system indicators with threshold monitoring.
IMS Sysplex Manager provides management functions that intercept system exceptions and generate console alerts. It can produce a real-time IRLM Long Lock Report; browse, delete, and recover messages on the shared queues; delete RM resource structure entries; and assign affinity for transactions in a shared queues environment.
The IBM Transaction Analysis Workbench tool is a performance tool for CICS and IMS Transaction Manager environments. It can provide an instantaneous view of z/OS logging information for a unit of work, whether it originates from CICS or IMS, including MQ and DB2 data. The tool is transaction-oriented and does not replace a DB2 log analysis tool such as the IBM product DB2 Log Analysis (product number 5655-T56).
Figure 10-11 Transaction Analysis Workbench reading all information in a z/OS environment
The IBM Transaction Analysis Workbench tool has an ISPF session environment, which allows you to share information with other users and to add annotation tags that explain details. The tool is fully compatible with IMS Performance Analyzer and CICS Performance Analyzer, so you can produce reports that combine all of this information.
Important: IBM Transaction Analysis Workbench does not include IMS Performance
Analyzer. It also does not include CICS Performance Analyzer or IMS Connect Extension,
but it is integrated with them.
You do not need IMS PI to run IBM Transaction Analysis Workbench. In IBM Transaction Analysis Workbench, the IMS PI functions are strengthened and expanded to support CICS transactions. IBM Transaction Analysis Workbench is better integrated with z/OS logging and provides a way to share information with other users. For example, it can run without IMS, in a CICS or DB2 environment.
Table 10-5 lists the utilities included in this pack and indicates whether they are available
online in Fast Path Online (FPO), or offline in Fast Path Basic (FPB) or Fast Path Advanced
(FPA) components.
Analyzing (integrity verification and analysis)
  FPA: ANALYZE (DEDB areas, secondary index DBs)
  FPB: DEDB Pointer Checker (DEDB areas)
  FPO: Online Pointer Checker (OPC) (DEDB areas)
Extracting (extract segments)
  FPA: EXTRACT
  FPB: Unload/Reload user exit routines
  FPO: Online Data Extract (ODE)
IMS and the FPA INDEXBLD function support two database structures for Fast Path secondary
indexes: HISAM and SHISAM.
Both secondary index database structures offer sequential key access to primary DEDB
databases. This function builds the specific secondary index databases in case of system
failures, rebuilds secondary index databases faster than initially loading data to a DEDB, and
reduces the amount of time that it takes to build multiple secondary index databases using
both parallel sorting and parallel loading.
Moreover, the FPA INDEXBLD function provides two new reports: a “Secondary Index Definition Report” and a “Secondary Index Processing Report.”
For more information about the parameters, see IBM Fast Path Solution Pack for z/OS V1R1,
IMS High Performance Fast Path Utilities User’s Guide, SC19-2914.
The INDEXBLD command is specified in the HFPSYSIN SYSIN data set, as shown in Example 10-1. Only the INDEXDBD parameter is new; the other parameters already exist for other functions and have the same meaning:
DBD identifies the DBD that contains the areas that are to be processed.
INDEXDBD specifies the DBD for building or processing secondary index databases.
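As a rough sketch only (the exact HFPSYSIN control statement syntax is described in the User's Guide, and the DBD names here are hypothetical), an INDEXBLD request takes this general form:

INDEXBLD
  DBD=DEDBDD01
  INDEXDBD=DEDBGS25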
(Figure: the HPFPU INDEXBLD function reads the DEDB areas, AREA1 through AREAn, and the ACBLIB, builds the secondary index database, and produces statistics reports.)
5. If step 3 was to change the access of the DEDB to read-only, take the DEDB offline.
6. Make an online change to use the new ACBLIB.
7. Start the DEDB and FPSI databases for IMS online access.
FPSI data set names are identified by the member in the DFSMDA library, or you can specify them on JCL DD statements.
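For illustration, a dynamic allocation member for one FPSI data set might be generated with DFSMDA macros similar to the following sketch (the database, DD, and data set names are placeholders):

         DFSMDA TYPE=INITIAL
         DFSMDA TYPE=DATABASE,DBNAME=DEDBGS25
         DFSMDA TYPE=DATASET,DDNAME=DEDBGS25,DSNAME=IMS12.FPSI.DEDBGS25
         DFSMDA TYPE=FINAL
         END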
(Figure: the HPFPU INDEXBLD function rebuilds a broken secondary index database from the DEDB areas, AREA1 through AREAn, and the ACBLIB, and produces statistics reports.)
6. Start the DEDB and FPSI databases for IMS online access.
FPSI data set names are identified by the member in the DFSMDA library or by JCL DD statements.
If the DEDB area that is needed to rebuild the broken FPSI databases has been identified, you can specify its name in the HFPSYSIN IAREA parameter.
DEFINITION OF SEC. INDEX (XDFLD) : XNAME7 (TARGET SEG.: ROOTSEG1, SOURCE SEG.: DD1 )
ATTRIBUTES:
- DB ORGANIZATION : HISAM
- POINTER SEG. NAME : DD1X
- NULLVAL : X'E8'
- MULTISEG : NO
- INDEX MAINTENANCE EXIT ROUTINE : INDEXXIT
- PARTITION SELECTION EXIT ROUTINE : DBFPSEYM
- PARTITION SELECTION OPTION : SNGL
RECORD LAYOUT:
OFFSET LENGTH FIELD
     0      4 DUP. KEY POINTER
     4      1 DELETE BYTE
     5     11 SEARCH
    16      0 SUBSEQ
    16      0 DDATA
    16     12 SYMBOLIC POINTER
   ...
    28      3 USER DATA

DEDB DBD (extract):
ROOTSEG  SEGM  NAME=ROOTSEG1,PARENT=0,BYTES=(50,20)
ROOTFLD1 FIELD NAME=(ROOTKEY1,SEQ,U),BYTES=12,START=3,TYPE=C
         ...
         LCHILD NAME=(DD1X,(DEDBGS25,DEDBGS26,DEDBGS27,DEDBGS28)),PTR=SYMB
         XDFLD NAME=XNAME7,SRCH=DD1F1,PSELRTN=DBFPSEYM,SEGMENT=DD1,PSELOPT=SNGL,NULLVAL=C'Y',EXTRTN=INDEXXIT
When errors are detected, the FPA ANALYZE function issues error messages and shows the
errors in a report. It provides two new reports:
A secondary index analysis report
The secondary index analysis report shows the number of verified pointer segments in
each secondary index database, as you can see in Figure 10-17.
Figure 10-17 Secondary Index Analysis Report
THE TARGET SEGMENTS THAT ARE POINTED BY THE FOLLOWING POINTER SEGMENTS ARE NOT FOUND:
------------ ---- ----------------------------------------------------------------------- ----------------------------------
SEARCH 0000 06781005 C8C6D7C1 40404040 *....HFPA *
SYMBOLIC PTR 000C 02000000 00010002 6A300108 5C5C5C40 C1C2C3C4 C5C6C7C8 407BF0F1 60F0F440 *............*** ABCDEFGH #01-04 *
------------ ---- ----------------------------------------------------------------------- ----------------------------------
You can use the FPA ANALYZE function in many scenarios. Three of them are presented in the following sections.
(Figure: the FPA ANALYZE function reads the DEDB areas, AREA1 through AREAn, the secondary index databases, and the ACBLIB, and compares all pointer segments built from the areas with the pointer segments read from the index databases, producing statistics reports.)
The FPA ANALYZE function, run with JCL, verifies the DEDB areas and the pointer segments of the FPSI databases, as shown in Figure 10-23.
Tip: If the affected areas have been identified, you can specify only those area names in the IAREA parameter.
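As a rough sketch only (again, the exact HFPSYSIN syntax is in the User's Guide, and the names are hypothetical), an ANALYZE request restricted to one identified area takes this general form:

ANALYZE
  DBD=DEDBDD01
  IAREA=AREA1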
IMS Connect Extension captures all the IMS Connect log records that are related to the new workloads supported by IMS 12. In this way, you can analyze the activity in IMS Performance Analyzer and in IMS Problem Investigator or IBM Transaction Analysis Workbench.
By merging these log data sources together, you can track and view MSC activity from across
multiple systems and identify latencies. For example, are they within the processing of a given
IMS, or are these latencies in the transmission of messages between IMS systems?
File Menu Edit Mode Navigate Filter Time Labels Options Help
----------------------------------------------------------------------------
BROWSE CEX000.QADATA.MSC.ICON.LOCAL.D110728 Record 00000235 More: < >
Command ===> Scroll ===> CSR
Forwards / Backwards . . 00.00.00.000100 Time of Day . . 08.28.20.388036
Code Description Date 2011-07-28 Thursday Time (Relative)
/ ---- ------------------------------------------------------ ----------------
1. The first IMS system begins processing the message (Origin IMS OLDS).
2. It is picked up by the local IMS Connect system (Origin CEX journal).
3. The local IMS Connect sends a message to the remote IMS Connect (Origin CEX journal).
4. The remote IMS Connect begins processing the message (Target CEX journal).
5. The message is sent to the remote IMS system (Target IMS OLDS).
Notice that each step described here is captured in a separate log file, but IMS Problem
Investigator provides a time-sequence merged view of this activity.
IMS Problem Investigator tracking can further connect the log records on the local and remote IMS systems. With tracking, you can filter out log records that are not part of a single logical unit of work.
When you have identified the locality of the problem, you can drill down to view individual log
records. Figure 10-27 shows that IMS Connect Extension journal events have been modified
to include new information related to MSC TCP/IP.
You can also drill down to individual fields, as shown in Figure 10-28.
- - - - - - - - - - - - - - - - - - - - - - Field Zoom - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
File Menu Help
----------------------------------------------------------------
BROWSE CEX000.QADATA.MSC.ICON.LOCAL.D110728 Line 00000000
Command ===> Scroll ===> CSR
******************************* Top of data *******************************
+0024 IRM_ARCH... 04 Architecture Level
Off IRM_ARCH0.......... 00
Pre Reroute support
Off IRM_ARCH1.......... 01
Support - IRM_REROUT_NM/
IRM_RT_ALTCID to be in the
transmitted IRM
Off IRM_ARCH2.......... 02 Support - IRM_REROUT_NM
- IRM_RT_ALTCID
- IRM_TAG_ADAPT
- IRM_TAG_MAP to be in the
transmitted IRM
Off IRM_ARCH3.......... 03 Support for SYNCH CALLOUT CORTKN -
includes all previous fields
On IRM_ARCH4.......... 04 ICON to ICON support including MSC -
includes all previous fields
******************************* End of data *******************************
Figure 10-29 shows a merged view of activity for a logical collection of IMS systems. With the
statistics provided for each message exit, you can view message rates and volumes for new
workload types. In addition to viewing summary statistics, you can view in-flight sessions,
including the communication between IMS Connect systems in a single aggregated view.
The standard IMS PA transit analysis reports that provide response time breakdown and
resource usage have not changed for IMS V12, but it is worthwhile recapping other recent
enhancements in case you are not aware of them.
Cumulative primary WADS write I/O time 9.893 0.001 Average per write
Cumulative secondary WADS write I/O time 0.000 0.000 Average per write
Cumulative primary OLDS write I/O time 35.037 0.005 Average per write
Cumulative secondary OLDS write I/O time 0.000 0.000 Average per write
List of transactions
The report in Figure 10-31 shows a list of transactions with their performance characteristics.
Notice the database I/O and locking times on the right side of the report. They provide insight
into how application database calls are performing.
Tran Start       Trancode   PST  CPU       InputQ    Process   OutputQ   Total IMS  DB Call  DB IO     DB Lock
                                 Time      Time      Time      Time      Time       Count    Time      Time
11.11.19.630968  ORDER1     ...
11.11.20.344759 INQUIRY 49 0.096589 0.001158 0.403111 0.000000 0.404269 12 0.138300 0.040908
11.11.20.944888 ORDER4 7 0.020045 0.026232 0.053847 0.001330 0.081409 34 0.002073 0.000000
11.11.19.556276 DEBIT 72 0.010482 0.002265 0.088212 0.000000 0.090477 8 0.028338 0.000170
11.11.19.638251 CREDIT 46 0.010484 0.007017 0.088466 0.000000 0.095483 6 0.003618 0.000000
11.11.20.259686 ORDER2 71 0.030846 0.015915 0.162369 - 0.178205 15 0.046730 0.000169
11.11.21.019302 STOCK 44 0.006333 0.001018 0.022178 0.000000 0.023196 6 0.005249 0.010670
11.11.19.298489 ORDER5 43 0.019248 0.015754 0.137182 0.000000 0.152936 10 0.045357 0.014709
Transaction summary
The summary report is based on the summary report form. The form is customized to display the information that you need to see in the report. Notice the statistical functions, in particular the new RANGE function, which reports the percentage of transactions that exceeded the specified threshold.
SUMM0001 Printed at 14:09:05 25Aug2011 Data from 12.00.02 12Aug2011 to 12.12.27 12Aug2011
Avg Avg >1.0 Max Avg >1.0 Max Avg Avg Max Avg Max
Tran CPU InputQ InputQ InputQ Process Process Process DB Call DB IO DB IO DB Lock DB Lock
Trancode Count Time Time Time Time Time Time Time Count Time Time Time Time
DEBIT 77 0.071474 0.142351 2.60% 1.926484 0.449301 3.90% 1.273803 12 0.143757 0.239690 0.115140 0.828762
CREDIT 76 0.064885 0.110116 1.32% 1.336226 0.488499 6.58% 2.067948 12 0.142218 0.253004 0.155761 0.897495
INQUIRY 61 0.068369 0.055915 0.00% 0.576881 0.416315 1.64% 1.000078 12 0.141126 0.307672 0.096347 0.401015
ORDER1 75 0.069100 0.090177 0.00% 0.935750 0.439957 2.67% 1.639714 12 0.141532 0.213003 0.122320 0.501924
ORDER2 81 0.067513 0.158074 3.70% 1.696349 0.521486 8.64% 2.439268 12 0.140062 0.217337 0.199445 1.145306
ORDER3 77 0.010437 0.004651 0.00% 0.057456 0.063084 0.00% 0.352568 9 0.006178 0.018066 0.007178 0.036744
ORDER4 73 0.014389 0.003081 0.00% 0.054293 0.066289 0.00% 0.267417 9 0.006174 0.023644 0.006984 0.054332
STOCK 56 0.013033 0.006077 0.00% 0.142264 0.066195 0.00% 0.308798 6 0.004719 0.043581 0.005047 0.058183
From the report we can determine that 6.5% of CREDIT transactions took longer than one
second to process. Also, some of the average and maximum database I/O times seem high,
especially because so few DLI DB calls were issued.
System SC63 had two general-purpose central processors (CPs), two System z9 Integrated Information Processors (IBM zIIPs), and two System z Application Assist Processors (zAAPs). SC64 had four general-purpose CPs, one zIIP, and one zAAP. Example A-1 shows the LPAR configuration on z/OS.
CPC ND = 002094.S18.IBM.02.00000002991E
CPC SI = 2094.710.IBM.02.000000000002991E
Model: S18
CPC ID = 00
CPC NAME = SCZP101
LP NAME = A04 LP ID = 4
CSS ID = 0
MIF ID = 4
We also used two internal coupling facility (ICF) LPARs on the same machine (Example A-2).
Both systems were running z/OS 1.12 with JES2 in a multiple access spool (MAS)
configuration. Both systems were also accessible from anywhere within the IBM intranet
using a TCP/IP connection.
(Figure: the test configuration. On WTSC63 (9.12.6.70), IMS Connect instances IM12AHW1 and IM12CHW1, each with SCI, front IMS systems I12A and I12C; on WTSC64 (9.12.6.9), IM12BHW1 and IM12DHW1 front I12B and I12D. All four systems share the coupling facility, the DL/I databases, DBRC, and the RECON, and each IMS Connect is reachable over TCP/IP.)
Our IMS configuration is an IMSplex composed of four control regions. Two of the systems use shared queues, and all four use shared databases and a single shared recovery control (RECON) data set. The systems are located on two LPARs, named WTSC63 and WTSC64.
Starting with a simple IMS built from the IMS 12 Install or the Installation Verification Program
(IVP), we updated the stage1 so that each system had a unique SUF=x and MSVID=nnn value.
That change was also reflected in the IMS.PROCLIB(DFSPBxxx) member.
Each IMS had Virtual Telecommunications Access Method (VTAM) Multiple Systems
Coupling (MSC) links defined to connect to the other three systems. (MSC is redundant with
shared queues but the links were included in the IMS stage1 system generation anyway.)
A single IRLM region runs on each LPAR to reduce complexity. An SCI runs on each LPAR, each system has its own local OM, and there is one shared RM (running without an RM coupling facility structure).
Example A-3, Example A-4, Example A-5, and Example A-6 show the definitions for each of
the four IMS systems. For each link the partner ID and the suffix on the MSPLINK name
match one another. As a convention, we defined a VTAM link with PARTNER=local/remote (for
example, PARTNER=AB for I12A connected to I12B) and a TCPIP link with
PARTNER=remote/local (for example, PARTNER=BA for I12A connected to I12B). The same
naming scheme was used for link names, partner IDs, and MSNAMEs.
The MSC system identifier (SYSID) values are defined as shown in the following list. The SYSID values that are used for TCP/IP simply have 100 added to the value that is assigned for the VTAM link.
As shown in Example A-3, IMS I12A uses 1 (VTAM) and 101 (TCP/IP).
As shown in Example A-4, IMS I12B uses 2 (VTAM) and 102 (TCP/IP).
As shown in Example A-5, IMS I12C uses 3 (VTAM) and 103 (TCP/IP).
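For illustration only (these are not the actual Example A-3 definitions, and the MSNAME labels are placeholders), the I12A MSNAME macros for the links to I12B would follow this convention:

MSNAB    MSNAME SYSID=(2,1)          VTAM link to I12B
MSNBA    MSNAME SYSID=(102,101)      TCP/IP link to I12B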
Example A-7 shows the complete MSC link network using the IMS MSC Verification Utility.
Example A-7 MSC links (output from the MSC Verification Utility)
| MS001 | MS002 | MS003 | MS004 |
-----------------------------------------------------------------
0001 | LOCAL | 001 AB | 001 AC | 001 AD |
0002 | 002 AB | LOCAL | 002 BC | 002 BD |
0003 | 003 AC | 003 BC | LOCAL | 003 CD |
0004 | 004 AD | 004 BD | 004 CD | LOCAL |
0101 | LOCAL | 001 BA | 001 CA | 001 DA |
0102 | 002 BA | LOCAL | 002 CB | 002 DB |
0103 | 003 CA | 003 CB | LOCAL | 003 DC |
0104 | 004 DA | 004 DB | 004 DC | LOCAL |
-----------------------------------------------------------------
VTAM | AB 001--------AB 001 | | |
VTAM | AC 003----------------------AC 001 | |
VTAM | AD 005------------------------------------AD 001 |
TCP | BA 002--------BA 002 | | |
TCP | CA 004----------------------CA 002 | |
TCP | DA 006------------------------------------DA 002 |
VTAM | | BC 003--------BC 003 | |
VTAM | | BD 005----------------------BD 003 |
TCP | | CB 004--------CB 004 | |
TCP | | DB 006----------------------DB 004 |
VTAM | | | CD 005--------CD 005 |
TCP | | | DC 006--------DC 006 |
-----------------------------------------------------------------
We also added a set of transactions (Example A-8) that were defined as local in one system
and as remote in each of the other three systems. The xMSC transaction was assigned to the
VTAM MSC links. The xMSC1 transaction was assigned to the TCP/IP MSC link.
The COBOL program that the transaction runs simply makes some IMS INQY calls and builds
a six-line reply with the data returned from those calls.
The DBRC initialization member is shown in Example A-10. The DBRC exit DSPSCIX0 was
installed so that the IMSPLEX and GROUPID parameters were set automatically. Using that
exit also meant that Automatic RECON Loss Notification (ARLN) was active. The RECON
was updated to have MINVERS(‘12.1’).
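For reference, that update is a single DBRC command, run through the DSPURX00 utility:

CHANGE.RECON MINVERS('12.1')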
The BPE configuration had minor differences for the recorder trace data set. IMS Connect
configuration members had definitions that were unique to each system.
The BPE configuration in Example A-13 includes the new recorder trace definition. The HWSRCORD DD statement was removed from the JCL.
Example A-14, Example A-15, Example A-16, and Example A-17 show the HWSCFG
members for each copy of IMS Connect. Example A-14 is shown with comments to document
each parameter. SSLPORT is not used.
STRUCTURE NAME(I12X_MSGQ)
SIZE(16000)
INITSIZE(8000)
MINSIZE(8000)
PREFLIST(CF2,CF1)
REBUILDPERCENT(1)
ALLOWAUTOALT(YES)
FULLTHRESHOLD(60)
STRUCTURE NAME(I12X_MSGQOFLW)
SIZE(8000)
MINSIZE(8000)
PREFLIST(CF2,CF1)
REBUILDPERCENT(1)
ALLOWAUTOALT(YES)
FULLTHRESHOLD(60)
STRUCTURE NAME(I12X_EMHQ)
SIZE(16000)
INITSIZE(10000)
MINSIZE(10000)
PREFLIST(CF2,CF1)
REBUILDPERCENT(1)
ALLOWAUTOALT(YES)
FULLTHRESHOLD(60)
STRUCTURE NAME(I12X_EMHQOFLW)
SIZE(8000)
MINSIZE(8000)
PREFLIST(CF2,CF1)
REBUILDPERCENT(1)
STRUCTURE NAME(I12X_LOGRMSGQ)
SIZE(16000)
INITSIZE(11000)
PREFLIST(CF2,CF1)
STRUCTURE NAME(I12X_LOGREMHQ)
SIZE(4000)
PREFLIST(CF2,CF1)
REBUILDPERCENT(1)
STRUCTURE NAME(I12X_RSC)
SIZE(16000)
INITSIZE(8000)
MINSIZE(8000)
ALLOWAUTOALT(YES)
FULLTHRESHOLD(60)
DUPLEX(ALLOWED)
PREFLIST(CF2,CF1)
A z/OS log stream is needed for IMS CQS as shown in Example A-20.
Example A-21 shows the JCL used for IMS CQS. The CQSINIT and SSN parameters were
suffixed with the IMS ID letter (for example CQSINIT=12A and SSN=C12A).
The following examples show the parameters for IMS CQS proclib members:
Example A-22 shows the parameters for CQSSGI2X.
The IMS startup member DFSPBxxx was also updated to point to the DFSSQxxx members
using the SHAREDQ=xxx parameter. The DFSSQxxx member identifies the local CQS and the
queue structures that IMS should use. Example A-27 shows the PROCLIB members for I12A.
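For illustration, the DFSPBxxx pointer is a single parameter; assuming the same suffix convention as the CQS members described earlier, it might look like this:

SHAREDQ=12A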
The checkpoint and structure recovery data sets were created by modifying the JCL from
phase O of the IMS 12 Install/IVP to use our data set names.
This appendix examines recent maintenance for IMS 12 that relates in a general way to new
functionality and critical corrections.
Keep in mind that these lists of APARs represent a snapshot of current maintenance at the
time of writing. As such, they may be incomplete or even incorrect by the time you read this
publication; they are presented here simply to help identify areas of functional improvements.
At the time of your installation, be sure to contact your IBM Service Representative for the
most current maintenance. Also check RETAIN to determine the applicability of these APARs
to your environment and to verify prerequisites and postrequisites.
Use the Consolidated Service Test (CST) as the base for service.
Table B-1 lists various APARs that provide functional enhancements to IMS 12. The list is not
exhaustive, so check RETAIN and the IMS website for current information.
PM19025 (PTF UK63960), Coexistence: Required for Common Service Layer (CSL) Resource Manager (RM) 1.3 on IMS 10
PM31420 (PTF UK70991), CICS: With the CCTL DRA Open Thread TCB enhancement, users of CCTL DRA (including CICS TS 4.2) are allowed DL/I processing on the application task control block (TCB)
PM32394 (PTF UK65339), IMS Connect: IMS Connect Extension (CEX) trace correction
PM32805 (PTF OPEN), Repository: Validate tran, routing code, and program attributes before writing to repository
SEVT  structure event trace table
TOSI  Tools Online System Interface
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Some publications referenced in this list might be available in softcopy only.
IBM CICS Scalability: New Features in V4.2, REDP-4787
Federated Identity Management and Web Services Security with IBM Tivoli Security
Solutions, SG24-6394
IMS 11 Open Database, SG24-7856
IMS Version 11 Technical Overview, SG24-7807
IBM z/OS V1R12 Communications Server TCP/IP Implementation: Volume 4 Security and
Policy-Based Networking, SG24-7899
You can search for, view, download, or order these documents and other Redbooks, Redpapers, Web Docs, drafts, and additional materials at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
IBM Fast Path Solution Pack for z/OS V1R1, IMS High Performance Fast Path Utilities
User’s Guide, SC19-2914
IMS and SOA Executive Overview, GC19-2516
IMS High Performance Fast Path Utilities for z/OS, SC18-9869-04
IMS Version 12 Application Programming, SC19-3007
IMS Version 12 Application Programming APIs, SC19-3008
IMS Version 12 Commands, Volume 1: IMS Commands A-M, SC19-3009
IMS Version 12 Commands, Volume 2: IMS Commands N-V, SC19-3010
IMS Version 12 Commands, Volume 3: IMS Component and z/OS Commands,
SC19-3011
IMS Version 12 Communications and Connections, SC19-3012
IMS Version 12 Database Administration, SC19-3013
IMS Version 12 Database Utilities, SC19-3014
IMS Version 12 Diagnosis, GC19-3015
IMS Version 12 Exit Routines, SC19-3016
IMS Version 12 Installation, GC19-3017
Online resources
These websites are also relevant as further information sources:
IBM Information Management Software for z/OS Solutions Information Center
http://publib.boulder.ibm.com/infocenter/dzichelp/v2r2/index.jsp
White paper: WebSphere MQ for z/OS and IMS Transaction Expiration
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105559
IMS Tools Product Page
https://www-304.ibm.com/support/docview.wss?uid=swg27020942
Product Lifecycle for DB2 and IMS Tools
https://www-304.ibm.com/support/docview.wss?rs=434&uid=swg27008621
Index 465
development IMS 252 DSPURX00 utility 138, 283, 316–317
DFS0730I message 7, 105 dynamic allocation 6, 47, 55, 73, 107–108, 315
DFS0798I message 65 dynamic resource definition (DRD) 15, 199
DFS0798W message 65 use 24, 251
DFS0842I message 105 considerations 24
DFS2082 message 130, 330 dynamically disable automatic export 233
DFS2169I message 178 DYNP parameter 310
DFS2291I message 6, 102–103
DFS2404A message 104
DFS2406I message 101 E
DFS2838I message 101 EAS (extended address space) 53
DFS2842I message 100 EAV (extended address volume) 11, 54
DFS3688I message 134 support 54, 57
DFS3770W message 78 EAV support 55
DFS993I message 105 ECB (event control block) 34
DFSAFMD0 283, 310 ECSA (extended common service area) 3, 41, 45–46,
DFSAFMX0 module 310 97–98
DFSAOE00 exit routine 317 EMH (Expedited Message Handler) 49
DFSCGxxx member 20, 208, 213 EMHB (Expedited Message Handler Block)
DFSDCxxx AOS parameter 110 parameter 310
DFSDCxxx member 38, 111, 184 end-of-memory (EOM) 64
DFSDFxxx member 20, 68–70, 212–213, 315, 318 EOT trace 65
CG section 26 end-of-task (EOT) 64
DIAGNOSTIC_STATISTICS section 103 trace 65
repository section 214 Enterprise Suite V2.1
DFSDFxxx member 208 contents 324
DFSERA10 41 environment 2, 17, 110, 455
DFSERA30 41 EOM (end-of-memory) 64
DFSLUEE0 5, 109, 133 EOT trace 65
DFSMDA member 107, 315 EOT (end-of-task) 64
DFSMS 45 trace 65
DFSMSCE0 109, 132, 285, 309 EPS (extended pointer set) 76
DFSPBxxx 111, 310, 436, 453 ETR (external throughput rate) 125
DFSPBxxx RRS parameter 110 event control block (ECB) 34
DFSPPUE0 exit routine 317 exit 5, 20, 41, 76–78, 109, 118, 193, 255, 284, 443
DFSRAS00 exit routine 317 calls 50, 77, 106, 133
DFSSQxxx 453 routine 78, 83, 91, 132, 284–285
DFSUSVC0 310 user 9, 50, 91, 132, 256, 284–285
DFSVSMxx 44, 69–70 Exit Interface Block Data Store (XIBDS) 319
DFSYDRU0 exit routine 317 exit routine 5, 50, 83, 92, 118, 132, 285, 308–309, 316
DFSYDTx 162–163, 315 IMS 12 119, 316
DFSYPRX0 exit routine 317 EXITDEF parameter 318
DL/I 58, 98, 107–108, 169, 315 Expedited Message Handler (EMH) 314
address space option 58 Expedited Message Handler Block (EMHB) 34
call 58, 107 parameter 310
DLIModel utility 9, 311 EXPORT command 21, 215, 221
DMB (data management block) 25, 34, 68 panels 264
number 108 export data 291
DMBL (data management block list) 36 EXPORT DEFN
DMCB (DEDB master control block) 33 command 21
DRA startup table 314 target 220
DRD (dynamic resource definition) 15, 199 exporting resource 253, 256
use 24, 251 extended address space (EAS) 53
considerations 24 extended address volume (EAV) 11, 54
DSME (data space mapping entry) 33 support 54, 57
DSP1235W message 160 extended common service area (ECSA) 3, 41, 45–46,
DSP1236E message 160 97–98
DSPCEXT0 exit routine 309, 312, 316 extended communication name table (ECNT) 34
DSPDCAX0 exit routine 312 extended pointer set (EPS) 76
DSPSCIX0 exit routine 443 extended recovery facility (XRF) 17, 30, 138
External throughput rate (ETR) 125
Index 467
   upgrade processing 157
   value 299
   work 284–285
IMS 11 IVP
   dialog 288
   phase selection menu 290
   variable 293
IMS 12 14, 21, 110, 139, 163, 324
   CBPDO 280
   changed log records 285
   coexistence 281
   command 14, 33, 123, 144, 230, 313
   communication 119, 330
   CQS enhancements 51
   DBRC 140
   diagnosis 316
   DRD enhancement 21
   EAV support 54
   Enterprise Suite V2.1 324
   exit routine 119, 316
   explicit migration considerations 314
   full function database buffer pools 68
   host z/OS system 310
   installation 308
   message 124
   new function 190
   new support 170
   object modules 309
   program directory 308, 318
   PSP upgrade name 308
   RACF return code 330
   release planning 308, 313
   RRS dependency 110
   SPOC enhancements 14
   system administration 207–208, 308, 310
   system definition 59, 62, 113, 209, 213–214, 221
   UPDATE option 12
IMS 12.01.00
   Database Manager 280
   DB Level Tracking 280
   Recovery Level Tracking 280
   Transaction Manager 280
IMS 12.1 312
IMS 8 110
IMS Application
   menu 12, 19, 263, 304
IMS application 130, 263, 324
   callout request 348, 364
   developer 324
   development task 324
   program 130–131, 134, 329
IMS Connect 2, 115, 126, 161–162, 284, 431, 435
   commands 5, 165, 175, 319
IMS Control Center 14
IMS Database Recovery Control facility 138
IMS Database Recovery Facility 95
IMS Database Solution Pack for z/OS 386
IMS Dump Formatter 51
IMS Enterprise Suite 9, 311, 323, 328
   reference information 328
   SOAP Gateway
      Project 374
      V2.1 324
IMS Explorer
   graphical editor 333
   project 333
IMS Fast Path Solution Pack for z/OS 386
IMS Hardware Compression Extended 390
IMS High Performance Fast Path Utilities 95
IMS Performance Solution Pack for z/OS 386
IMS Queue Control Facility for z/OS (QCF) 312
IMS Recovery Solution Pack for z/OS 386
IMS request message (IRM) 327
IMS resource
   definition
      repository 21
IMS Single Point 165
IMS SOAP Gateway 449
IMS Tools Common services 390
IMS Tools Distributed Access Infrastructure 390
IMS Tools Knowledge Base 390
IMS.PROCLIB 26, 162, 168
IMSID 26, 184, 187, 220
IMSPlex 12, 115, 171, 203–204, 314, 443
   name 196
IMSplex 3, 12, 111, 115, 200–202, 284, 318, 435
   environment 25, 27
   migration 318
IMSPLEX Name 240
IMSRSC repository 15, 199, 215, 287
   administrative functions 230
   non-system RDDS 228
inactive IMS
   system 240
Index Builder 398
Index Builder tool 76
INDEXBLD function 95
indirect list data set (ILDS) 76
indirect list key (ILK) 84
individual OLDS 43
indoubt work 255
INIT.OLC PHASE(COMMIT) 29
INIT.OLC PHASE(PREPARE) TYPE(ACBMBR) 28
INITIATE 24
INITIATE OLC
   command 25, 29
   command master 25
   phase 25
input message 116
installation
   IMS 12 308
Installation Verification Program (IVP) 115, 286–287, 436, 443
   application 275
   job, additional documentation 300
installing 280, 283
interactive development environment (IDE) 324
Interactive Problem Control System (IPCS) 64
   verb exit 64
Interactive System Productivity Facility (ISPF) 14, 190,
Message Format Service (MFS) 25
message processing program (MPP) 379
message queue data sets 57, 311
message-driven bean (MDB) 382
messages
   IMS 12 124
metadata 323
method call 330, 384
MFBP (message format buffer pool) 63
MFS (Message Format Service) 25
migration 5, 47, 74, 110–111, 137, 194, 200, 215, 279, 282
   consideration 113, 308, 313
   to repository 205
MINVERS 114, 156, 312, 443
MINVERS level 317
MODBLKS 17, 20, 199–200
MODBLKS data 171, 200, 221
MODBLKS resource 200
mode 44, 110, 130, 138, 316
movement between queue 109
MPORT 21
MPP 111
MPP (message processing program) 379
MQ
   message
      flow 135
      level 135
MR (Manage Resources) 263
   application 265
   DEFN panel 272
   QUERY command panels 274
MSC (Multiple Systems Coupling) 4, 132, 161, 170, 256, 284–285, 436
   generic link 185
   generic support 183
   link 170
      new status 178
   local IMS 183, 187
   TCP/IP 170
MSDB (main storage database) 34, 313
MSPLINK 4, 170
multiple field 82
multiple IMS
   address space 14
   subsystem 48
   system 202
Multiple Systems Coupling (MSC) 4, 132, 161, 170, 256, 284–285, 436
   generic link 185
   generic support 183
   link 170, 178
   local IMS 183, 187
   TCP/IP 170
multisystem 110, 112
MVS 37

N
NAK response 124, 329
NAME parameter 170
network connectivity 170
new or changed resource definitions 252
NOFULLSEG 77
non-shared queues 124
numeric character 60

O
ODBA (Open Database Access) 8, 314
ODBM (Open Database Manager) 195–197, 314
Offline Dump Formatter utility 64, 283
   module 64
OLC Phase 24
OLCSTAT data 26
OLDS
   individual 43
OLDS (online log data set) 3, 30–31, 38, 41, 43, 115, 285
OM (Operations Manager) 5, 12, 17, 24, 71, 152, 204, 208–210, 284, 329, 436
OM API 5, 14, 240
OM API command 71
online change 17, 26, 94, 97, 200, 213, 221
   background information 25
   description 26
online environment 29, 43, 255, 317
online log data set (OLDS) 3, 30–31, 38, 41, 43, 115, 285
Online Reorganization Facility tool 396
Open Database 2, 107, 314
Open Database Access (ODBA) 8, 314
Open Database Manager 314
Open Database Manager (ODBM) 195–197, 314
Open Transaction Manager Access (OTMA) 4, 31, 33, 110, 126–127, 156, 162, 189, 311, 314, 319
   client 124–125, 314
   help 129
   instance 129
   only limited OTMA transaction monitoring 314
   CM1
      SL0 transaction monitoring message 116, 121
      transaction 132
   customer 110
   descriptor 162–163, 314–315
   Destination Resolution
      exit routine member client application 127
   Destination Resolution exit routine 317
   member client instance 126
   support for asynchronous IMS to IMS communications 162
   transaction
      expiration 135
      instance block 38
      monitoring 314
      monitoring IMS 12 280
      processing 132
   transaction pipe 125
operations 12, 111, 138, 204, 230, 313, 321, 436
Operations Manager (OM) 5, 12, 17, 24, 71, 152, 204, 208–210, 284, 329, 436
OSAM 3, 43, 53, 69
   buffer pool change 70
OSAM (Overflow Sequential Access Method) 43, 69
QUERY IMSCON Type 167
QUERY IMSCON TYPE(RMTIMSCON) NAME(*) SHOW(ALL | showparm) 167
QUERY IMSCON TYPE(RMTIMSCON) NAME(rmtinm) SHOW(ALL | showparm) 167
QUERY IMSCON TYPE(SENDCLNT) NAME(*) SHOW(ALL) 168
QUERY MEMBER 28, 178
QUERY OTMADESC 163
QUERY RMTIMSCON NAME(*) 166
QUERY RMTIMSCON NAME(rmtinm) 166
QUERY TRAN 21
   DESC 263
QUERY TRAN command 255
Queue Control Facility for z/OS 312

R
RACF (Resource Access Control Facility) 104, 126–127, 161, 169, 230, 285
   return code in IMS 12 330
RBA (relative byte address) 84
RBP (request block prefix) 37
RC (return code) 133, 246
RDD (resource definition data) 15, 21, 202
RDDS 200
RDDS (resource definition data set) 15, 20, 200, 205
   DRD
      environment 251, 253
   export 254
   import 240
RDS (repository data set) 201–202, 244
   existing RDS 244
   types 202–203
RDS (restart data set) 54, 72
RECON 7, 9, 43, 73, 75–76, 114, 137–138, 148, 283, 310, 320, 436
   data 75, 108, 138, 283, 310
      backup copies 139
      CA group 146
      record keys 139
      records 316
      secondary extents 139
   data set 7, 108, 138, 283, 311, 313, 316
   record type 139, 316
RECON data
   subsystem records 312
recorder trace 161
   GDG base 189
RECOVPD value 140
Redbooks website 461
refresh runtime resource definitions 252
relative byte address (RBA) 84
release 303, 305, 308
   appropriate service 308
   planning for IMS 12 308, 313
   Syntax Checker 306
remote IMS 162
   connection 164
   datastore ID 164
Remote Site Recovery (RSR) 138, 320
remote system 162
   transaction definition 164
reorganization number 75–76
repository 200, 255–256, 302
   automatic import 214
   data 203
   migration to 205
repository data set (RDS) 201–202, 244
   existing RDS 244
   types 202–203
repository name 202, 205
Repository Server (RS) 51, 200–201, 244
   catalog repository 201
   user repository 203
request block 37
request block prefix (RBP) 37
request status message (RSM) 191, 330
RESLIB 66
resource 2–3, 12, 17, 31, 110, 139, 199–200, 280–281, 311
   access 12, 16, 201, 212
   adapter 311, 314
   creation 22
   definition 15, 17, 21, 200, 203, 235, 252
      in DRD-enabled systems 15
   list 237, 253
   name 4, 37, 202, 204, 231, 234, 240
   online 25
   profile 240, 260
   structure 26, 49, 204, 231
   type 20, 36, 261
Resource Access Control Facility (RACF) 104, 126–127, 161, 169, 230, 285
   return code in IMS 12 330
resource definition data (RDD) 15, 21, 202
resource definition data set (RDDS) 15, 20, 200, 205
   DRD
      environment 251, 253
   export 254
   import 240
Resource Manager (RM) 26, 200–201, 204, 209, 284, 436
   address space 211
   IMS Repository Server 227
   service 214
   utility 201, 204
Resource Recovery Services (RRS) 5, 110–111
   dependency 110
restart 15, 108, 162, 283
   descriptor definitions 17
   IMS 171
restart data set (RDS) 54, 72
return code (RC) 133, 246
REXX SPOC API 14
RM (Resource Manager) 26, 200–201, 204, 209, 284, 436
   address space 211
   IMS Repository Server 227
   service 214
   utility 201, 204
SYS1.PARMLIB 308
SYSIN DD 75
sysplex 24, 48, 106, 110, 115, 291, 432
system
   administration
      IMS 12 207–208, 308, 310
   definition 2, 17–18, 94, 184, 302, 309, 436
      IMS 12 59, 62, 113, 209, 213–214, 221
   generation 17–18, 170–171, 302, 310, 436
   information 94
   RDDS 219, 251
system log data set (SLDS) 3, 41, 43, 152
SystemPac 309

T
target segment 78–79
   concatenated key 82
task control block (TCB) 37
TCB (task control block) 37
TCP 4, 161, 164, 168, 285, 319, 435–436
TCP/IP 4, 162, 164, 169, 319, 436
   client 183, 193
   MSC 170
   network 194
TERMINATE 26
termination 73
test environment 206–207, 360
TEXT 455
threadsafe 100
TIB (transaction instance block) 38
timeout 6, 77, 102–103, 130, 285
timestamp
   recovery 75
   Index Builder tool 76
TLS (Transport Layer Security) 358
TM (Transaction Manager) 2, 103, 131, 319
TMEMBER 125–126
TMS (Transport Manager System) 320
Tools Base for z/OS Version 1.2 390
Tools Base Pack for z/OS Version 1 388
Tools Common services 389
Tools Distributed Access Infrastructure 389
Tools Knowledge Base 389
tpipe 125, 132, 162
tpipe name 132
TRANSACT 18, 314, 441
transaction 126, 130–131, 323
   expiration 132, 314
   expiration time 133
   input 379
   response mode 131
transaction instance block (TIB) 38
Transaction Manager (TM) 2, 103, 131, 280–281, 319
transaction pipe 125
Transport Layer Security (TLS) 358
Transport Manager System (TMS) 320
TRCLEV 51, 189
TSO 12, 71
TSO SPOC 12, 14–15, 71
   IMPORT UPDATE option 16
   IMS resources 15
TYPE=TCPIP 170
type-2 command 5, 13, 18, 26, 108, 167, 191, 195, 240, 315, 326

U
UDATA 140
UK47070 134
UK50901 134
UML (Unified Modeling Language) 324
Unicode 327
Unified Modeling Language (UML) 324
Unit of Recovery (UOR) 116, 196
unit of work (UOW) 38, 255
UOR (Unit of Recovery) 116, 196
UOW (unit of work) 38, 255
UPD MSPLINK Name 173
UPDATE 3, 14, 68–69, 163, 212, 285
   option 12, 15, 21, 252
      IMPORT command 21
UPDATE DB 101
UPDATE IMSCON Type 167
UPDATE IMSCON TYPE(RMTIMSCON) NAME(rmtinm) START(COMM) 167
UPDATE IMSCON TYPE(RMTIMSCON) NAME(rmtinm) STOP(COMM) 168
UPDATE OTMADESC command 163, 168
   IMS 12 162
UPDATE POOL command 70
UPDATE RMTIMSCON NAME(rmtinm) START(COMM) 166
UPDATE RMTIMSCON NAME(rmtinm) STOP(COMM) 166
UPDATE TRAN command 24
UPDATE TRAN SET() command 314
user
   access 257
   data 49, 83, 140
   exit 9, 14, 284, 316–317
   ID 190, 257, 259
   interface 263
   repository 200–201
      audit level 261
      detailed information 246
      general information 226
      List status information 225, 241
      member level 259
user-defined node 30, 123
utility
   DBRC 138, 316

V
variable gathering table 296
VIEWDS 195–196, 319
VIEWHWS 165, 319
VIEWPORT 193, 196, 319
VIEWRMT ALL 165
VIEWRMT rmtimscon 165
Virtual Storage Access Method (VSAM) 3, 44, 52, 69,
W
WADS (write-ahead data set) 41
wait for input (WFI) 110
web service 332, 374
   artifact 324
   connection bundle entry name 377
   consumer 348
   provider scenario 326
   request 348
   server 348
Web Services Description Language (WSDL) 348
WebSphere 2, 109, 126
WebSphere Application Server for z/OS 314
WFI (wait for input) 110
workload balancing 48
write to operator with reply (WTOR) 5, 13, 31, 165, 167, 175
write-ahead data set (WADS) 41
WSDL (Web Services Description Language) 348
WTOR (write to operator with reply) 5, 13, 31, 165, 167, 175

X
XCF (cross-system coupling facility) 4, 110, 156, 188, 201, 285, 319
   communication 119
   group 208
      name 208, 215
      name parameter value 216
   indicator 111
   input transaction message 119
XDFLD statement 82, 84, 86
XFACILIT class 260
   catalog repository 261
XIBDS (Exit Interface Block Data Store) 319
XML 71, 161, 192, 298–299
   converter 192, 379
   format 330
      DBD metacode 338
   return output 330
XRF (extended recovery facility) 17, 30, 138

Z
z/OS 2, 13, 49, 51, 126, 135, 141, 167–168, 201, 204, 227, 282, 291, 432
   1.12 106
   Enterprise Suite V2.1 324
Back cover

IBM IMS Version 12 Technical Overview

IBM Information Management System (IMS) provides leadership in performance, reliability, and security to help you implement the most strategic and critical enterprise applications. IMS also keeps pace with the IT industry. IMS, Enterprise Suite 2.1, and IMS Tools continue to evolve to provide value and meet the needs of enterprise customers.

With IMS 12, integration and open access improvements provide flexibility and support business growth requirements. Manageability enhancements help optimize system staff productivity by improving ease of use and autonomic computing facilities and by providing increased availability. Scalability improvements have been made to the well-known performance, efficiency, availability, and resilience of IMS by using 64-bit storage.

IBM IMS Enterprise Suite for z/OS V2.1 components enhance the use of IMS applications and data. In this release, components (either orderable or downloaded from the web) deliver innovative new capabilities for your IMS environment. They enhance connectivity, expand application development, extend standards and tools for a service-oriented architecture (SOA), ease installation, and provide simplified interfaces.

This IBM Redbooks publication explores the new features of IMS 12 and Enterprise Suite 2.1 and provides an overview of the IMS tools. In addition, this book highlights the major new functions and facilitates database administrators in their planning for installation and migration.

Explore the new features and functions of IMS 12
Understand advantages and applicability of IMS 12
Plan for installation of or migration to IMS 12

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.