IBM Db2 13 for z/OS and More
IBM Redbooks
May 2022
SG24-8527-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xx
Chapter 4. Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.1 Fast index traversal enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2 Index look-aside optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.3 INSERT enhancements for partition-by-growth table spaces . . . . . . . . . . . . . . . . . . . . 59
4.3.1 Partition retry process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
4.3.2 Improved cross-partition search in a descending partition number sequence . . . 61
4.4 SORTL enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.5 Reducing the CPU impact in a data-sharing environment when using PBR table spaces
with relative page numbers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.5.1 Considerations for function level and application compatibility level. . . . . . . . . . . 62
4.6 More performance enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.6.1 REORG INDEX performance improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.6.2 Group buffer pool residency time enhancements . . . . . . . . . . . . . . . . . . . . . . . . . 63
4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
8.1.4 Abends, restarts, and commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
8.1.5 Db2 catalog objects for utility history. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
8.1.6 Cases where utility history is not collected . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
8.1.7 Queries on utility history . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
8.1.8 Cleaning up the utility history information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
8.2 Enhanced inline statistics with page-level sampling (function level 500) . . . . . . . . . . 127
8.3 Enhanced space-level recovery (function level 100). . . . . . . . . . . . . . . . . . . . . . . . . . 128
8.4 REORG INDEX NOSYSUT1 as the default option (function level 500) . . . . . . . . . . . 129
8.5 REPAIR WRITELOG dictionary support (function level 100) . . . . . . . . . . . . . . . . . . . 130
8.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Chapter 12. IBM Db2 for z/OS tools support for Db2 13 . . . . . . . . . . . . . . . . . . . . . . . 175
12.1 IBM Db2 Administration Tool for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
12.2 IBM Db2 Automation Tool for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
12.3 IBM Db2 Log Analysis Tool for z/OS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
12.3.1 Application-level relationship discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
12.4 IBM Db2 Recovery Expert for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
12.5 IBM Db2 Query Monitor for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
12.6 IBM Db2 Query Management Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
12.7 IBM Db2 Sort for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
12.8 IBM Data Virtualization Manager for z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
12.8.1 Key features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
12.8.2 Db2 for z/OS usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
12.9 IBM OMEGAMON for IBM Db2 Performance Expert on z/OS . . . . . . . . . . . . . . . . . 184
12.9.1 Monitoring SQL Data Insights accounting metrics . . . . . . . . . . . . . . . . . . . . . . 185
12.9.2 Monitoring GBP residency times (IBM z16) . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
12.9.3 Monitoring SET CURRENT LOCK TIMEOUT related Db2 changes. . . . . . . . . 192
12.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
14.1.1 Software dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
14.1.2 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
14.1.3 Configuration overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
14.2 Administering Db2 Data Gate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
14.2.1 Selecting a target database type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
14.2.2 Creating a Db2 Data Gate instance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
14.2.3 Pairing a Db2 Data Gate instance with a Db2 for z/OS subsystem . . . . . . . . . 228
14.2.4 Managing tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
14.2.5 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
14.3 Using the most recent source data of an independent cloud replica of Db2 for z/OS by
using WAITFORDATA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
14.3.1 How WAITFORDATA works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
14.3.2 WAITFORDATA behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
14.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
CICS®, Db2®, DS8000®, FlashCopy®, GDPS®, IBM®, IBM Cloud®, IBM Cloud Pak®, IBM Research®, IBM Watson®, IBM Z®, IBM z14®, Informix®, InfoSphere®, OMEGAMON®, Parallel Sysplex®, Plex®, RACF®, Redbooks®, Redbooks (logo)®, SPSS®, UrbanCode®, WebSphere®, z/OS®, z15™
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
OpenShift and Red Hat are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the
United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
IBM® Db2® 13 for z/OS® delivers a host of major new advancements for using, maintaining,
and administering this enterprise-critical database platform. In parallel with the development
of the IBM z16 hardware and the z/OS 2.5 operating system, Db2 13 leverages the latest
transformative innovations on the IBM Z® platform to improve database performance,
availability, scalability, reliability, and security. Db2 13 also deepens its integration of database
artificial intelligence (AI) technology, which helps you simplify management, improve
performance, and easily extract analytical insights from your data by using the SQL API.
This IBM Redbooks® publication describes the many innovations in Db2 13 and the
companion products that comprise the Db2 for z/OS ecosystem. The contents of this book will
help application developers, database administrators, and solution architects understand the
newest capabilities of Db2 for z/OS and related tools and learn how to leverage them to get
the greatest value from your database.
In addition, many Db2 12 enhancements are provided in the continuous delivery stream. This
publication contains appendixes that describe the most important of the many new features
that were delivered after the initial Db2 12 release.
Authors
This book was produced by The IBM Db2 V13 Project Team.
Allan Lebovitz is a Software Engineer who has been working at IBM for over 34 years. He
has been a Db2 for z/OS developer for 22 years and is based at the IBM Silicon Valley
Laboratory. Allan's focus in Db2 is primarily related to Sort and Sparse Index, but he also has
experience in other areas of Db2, such as RDS Runtime and Data Manager Work File
Manager.
Bart Steegmans is a Consulting Support Specialist from IBM Belgium, currently working for
the IBM Silicon Valley Laboratory in San Jose to provide technical support for Db2 for z/OS
performance problems. He has over 34 years of experience in Db2. Before joining IBM in
1997, Bart worked as a Db2 system administrator at a banking and insurance group. His
areas of expertise include Db2 performance, database administration, and backup and
recovery. Bart is a co-author of numerous IBM Redbooks publications about Db2 for z/OS.
Bill Bireley is a long-time Db2 for z/OS developer and quality assurance engineer who is
based at the IBM Silicon Valley Laboratory. He has led development and test teams delivering
features in both the Db2 relational engine and in related ecosystem components. Bill currently
leads the System Test squad, and he is focused on IBM Db2 Analytics Accelerator stress
testing.
Binghui Zhong is an IBM Software Engineer who is based at the IBM Silicon Valley
Laboratory. He has 20 years of experience working with Db2 Relational Data System (RDS),
and has been involved in multiple projects in the area of availability and scalability.
Craig Friske is a Software Engineering Manager with Rocket Software who is responsible for
Db2 utilities development. He has 30 years of experience working with Db2 and leading
projects like Online Reorg and PBR/RPN table spaces. He co-authored Db2 for z/OS Utilities
in Practice, REDP-5503.
Dengfeng Gao is a Db2 for z/OS developer. She has 27 years of experience with general
database management systems and 14 years of experience in Db2 development. Her focus is
on the Db2 bind process with extended knowledge in RDS.
Dennis Lukas Mayer is a Product Manager for IBM Db2 Analytics Accelerator for z/OS and
IBM Db2 for z/OS Data Gate who is based at the Boeblingen Lab, Germany. He joined IBM in
2011 and has worked with IBM Z and Db2 for z/OS customers since 2014 in various roles by
supporting sales and deployment teams around the world. He has presented product
roadmaps and strategy at various technical conferences and user group meetings.
Derek Tempongko is a Db2 for z/OS developer in the Distributed Data Facility (DDF) team.
He has been working at IBM, specifically in the DDF team, for over 20 years. He also
co-authored IBM DB2 for z/OS: Configuring TLS/SSL for Secure Client/Server
Communications, REDP-4799.
Doug Dailey is an IBM Product Manager with over 10 years of experience in building
software and solutions for customers. Doug worked in technical support for Informix Software
(now an IBM company and known as IBM Informix®) and then transitioned to a Technical
Account Manager role, and then became a CSM Manager for the IBM Accelerated Value
Program. He specializes in data federation and data virtualization technologies.
Eric Radzinski is a technical writer and editor who is based at the IBM Silicon Valley Lab. He
has spent his career documenting IBM Z database technology, including Db2 for z/OS, Db2
Tools, and IMS. He is a co-author of The IBM Style Guide and Developing Quality Technical
Information: A Handbook for Writers and Editors. He is working on projects that are related to
transforming IBM Z applications for today's modern application development environments.
Frances Villafuerte is a Software Engineer in the Db2 core engine area. She has over 22
years of experience working with Db2, where she develops and leads projects in the key
functional areas of availability, scalability, and performance. She has wide technical
knowledge and is a technical lead in the Data Manager component of Db2.
Frank Chan is a Software Engineer in the Db2 core engine area and technical lead of the
Instrumentation facility. He has 15 years of experience working with Db2 and 20 years in the
database field. He holds a degree in Computer Science from San Jose State University. His
areas of expertise include parsing and compiling relational database programming
languages, and profile monitoring and system performance report analysis.
Guenter Georg Schoellmann is an IT Specialist for IBM Db2 Analytics Accelerator for z/OS
and IBM Db2 for z/OS Data Gate who is based at the Germany development lab as a
member of the Center of Excellence team. As part of the development organization, his
responsibilities include customer consultation, proof of concepts, deployment support,
migrations, resolution of critical situations, and education for IBM Db2 Analytics Accelerator
for z/OS. During his more than 37 years with IBM, Guenter has worked as a developer, team
leader, customer advocate, and PMI certified project manager.
Hanif Kassam is a Software Engineer working as a Client Success Representative in the Db2
for z/OS Development Lab. He holds a Bachelor of Science degree in Computer Science with a
minor in Management Information Systems. He has been working with Db2 for z/OS since
version 1.3 and has over 30 years of experience. His subject matter expertise is in Installation
and Migration, for which he has developed internal courses. He currently handles USAGE
cases in all components of Db2 for z/OS.
Harsha Dodda is a Software Engineer who is based at the IBM Silicon Valley Lab. He is a
full-stack developer working on user interface and API development for the Db2 for z/OS
administrator and developer experience transformation. He holds a degree in Computer
Science from the University of Illinois at Chicago, and he has over 4 years of experience
building software.
Hoang Luong is a Software Engineer in the Db2 for z/OS core engine area. He holds a
Master's Software Engineering degree from San Jose State University. He has
7 years of experience working with database commands, Database Exception Table (DBET),
and real-time statistics.
Jason Cu is a Software Engineer working on Db2 for z/OS at the IBM Silicon Valley Lab.
John R Lyle is a Software Engineer in Db2 for z/OS with more than 30 years of development
experience. He worked in the RDS area with a focus on Db2 system catalog, migration, and
fallback, and the Data Definition Language (DDL) and index manager areas. He has been the
lead designer and implementer of the catalog and fallback changes since Db2 5. He has
worked on numerous other development items over the years, including the Clone Table
function and all the Db2 8 through Db2 11 utility-related enabling new function mode (ENFM) catalog and
directory changes. John was the design and development lead for the Db2 12 continuous
development process and Online Migration capabilities. For Db2 13, John designed and
helped implement the revolutionary changes to the release migration process. John is
working in the Db2 Provisioning-Deployment-Migration team as the team leader. John speaks
regularly at IBM conferences worldwide (such as IDUG and SHARE), user conferences, and
regional user groups (RUGs) on migration and other topics.
Katherine Soohoo is the technical lead for IBM Db2 for z/OS Developer Extension on VS
Code. She has been working on the development team for the Db2 Developer Extension
since its first release in 2020. In her previous role, Katherine was a key member of the team
integrating automated Db2 for z/OS application deployments by using Git, Jenkins,
IBM UrbanCode® Deploy, and IBM Db2 Tools.
Keziah Knopp is a Db2 for z/OS specialist who is based at the IBM Z Washington Systems
Center. In addition to working with Db2 for z/OS customers, she is the host of the IBM Z
Customer Councils on the west coast, delivers the Db2 for z/OS: REST and Hybrid Cloud
Workshop, and has presented sessions at IDUG and SHARE.
Laura Kunioka-Weis is a Software Engineer with Rocket Software. She is the technical lead
for the Db2 backup and recovery utilities, and led the redirected recovery and utility history
projects.
Long Tu is a Software Engineer who is based at the IBM Silicon Valley Laboratory. He holds
a Master's degree in Software Engineering and has over 10 years of experience working with
Db2 RDS. He is involved in and leads projects about Db2 availability.
Maria Sueli Almeida is an IBM Certified Thought Leader and member of Db2 for z/OS
development who is based at the IBM Silicon Valley Lab in San Jose, California. She has
extensive experience with Db2, including database administration and systems programming.
She has worked on Db2 for z/OS advanced technology and solution integration. Currently,
she is engaged with Db2 for z/OS support for user experience transformation, including
DevOps adoption and cloud provisioning database-as-a-service (DBaaS) for DBAs,
application developers, and sysprogs. Sueli also consults with Db2 customers worldwide.
Mark Rader is a Db2 for z/OS specialist who is based at the IBM Z Washington Systems
Center. He has worked with many organizations that use Db2 for z/OS, starting with version
2.2. He has led many Db2 data sharing implementations and presented on many topics at
conferences, including at IDUG, SHARE, IBM Technical University, and at Db2 RUG
meetings. He has worked closely with Db2 development to deliver the Early Support
Programs for Db2 10, Db2 11, and Db2 12. He is an author of IBM Redbooks publications
about IBM Parallel Sysplex® application considerations, Db2 for z/OS data sharing, and Db2
for z/OS distributed function.
Mehmet Cüneyt Göksu is an Executive IT Specialist for IBM Z solutions in the context of
Data and AI who is based at the IBM Germany development lab as a member of the Center of
Excellence team. He holds a Bachelor's degree and a PhD in Computer Science, and an MBA.
He has worked with IBM Db2 for z/OS and IBM Z for over 30 years. Cuneyt focuses on
IBM Db2 Analytics Accelerator and Data Gate to help a diverse set of customers with
proof-of-concepts, deployments, and migrations. Before joining IBM, Cuneyt was a member of
the IBM Db2 Gold Consultant Team; was named an IBM Champion; became a member of the
BOD in the IDUG community; and was a regular instructor for IBM Education Programs. He is
an L3 Certified IT Specialist, and he actively participates in certification board reviews and
serves as the Db2 for z/OS liaison at IDUG EMEA.
Mike Shadduck has been a Software Engineer working at IBM for over 30 years. He has
been a Db2 for z/OS developer for 23 years, and he is based at the IBM Silicon Valley
Laboratory. Mike's focus in Db2 has been in the Index Manager component, where he helps
to deliver and support variable length index keys, index key versioning, index key
compression, and index Fast Traversal Blocks (FTBs).
Paul Bartak has been working with IBM databases since 1985, including the first version of
Db2 on MVS. As an IBM customer for 14 years, Paul worked in application development and
database administration, including as a DBA. Currently, as a Distinguished Engineer with Rocket
Software, Paul focuses on the tools that are used to maximize z/OS platform value with
emphasis in database administration, cloning, and DevOps processes. He supports multiple
Db2 User Groups; has been a speaker at several IDUG, IBM Z Day, and Insight/Think
conferences; is an IBM Redbooks and IBM Redpaper publications author; and is an
IBM Champion.
Randy Nakagawa is a Software Engineer who is based at the IBM Silicon Valley Laboratory.
He has over 35 years of experience at IBM working on Db2 for z/OS and related technologies.
He has extensive experience in several areas within the Db2 for z/OS product, and is focused
on SQL statement preparation and dynamic statement caching.
Regina Liu is a Software Engineer who graduated from the University of California at
Berkeley with a Bachelor's degree in Computer Science. She has over 24 years of experience
in Db2 for z/OS development. As the technical lead for the DDL area, she leads and drives
many availability and DBA-related enhancements across the RDS, core engine, and Utility
areas in Db2. Her innovative approach for implementing DDL alterations as pending definition
changes spearheaded online schema evolution, and she is continually driving improvements
to DDL operations and concurrency.
Robert Lu is a Software Engineer currently working in the Agent Services area of Db2 for
z/OS. He previously worked on the Buffer Manager component for Db2 for z/OS.
Sarbinder Kallar is a Software Engineer working in the DDF area of Db2 for z/OS. He is
based at the IBM Silicon Valley Laboratory.
Sharon Roeder is a Software Engineer who is based at the IBM Silicon Valley Laboratory in
the Db2 core engine area with a focus on log manager. She also has experience in the
service controller, command processor, executive, optimizer, and Instrumentation Facility
Component (IFC) components.
Shirley Zhou is a Software Engineer who is based at the IBM Silicon Valley Laboratory. She
has been a developer for over 20 years in the Db2 for z/OS core engine area.
Sowmya Kameswaran is the technical lead for the administration and development
experience transformation for the Db2 for z/OS engine, tools, and IBM Db2 Analytics
Accelerator who is based at the IBM Silicon Valley Lab. She holds a Master's degree in
Software Engineering from San Jose State University. She speaks at various events and
conferences, and writes blogs on various topics that are related to the system and application
architecture transformation.
Steven Ou is a Software Engineer who is based at the IBM Silicon Valley Laboratory with
6 years of experience working on Db2 for z/OS. His current focus is on the IFC. Steven attended
San Jose State University where he earned a Bachelor of Science degree in Computer
Science.
Sydney Villafuerte is a Software Engineer who joined the IBM Db2 core engine area after
graduating from the University of California at San Diego with a Bachelor of Science degree in
Computer Science with a focus in Bioinformatics. Since then, she has worked in the XML,
IFC, and Storage Management areas. Over the last year, she focused on Storage Manager
contraction and contributed to the development effort that is described in this publication.
Tammie Dang is a Software Engineer working on Db2 for z/OS who is based at the IBM
Silicon Valley Lab. She has over 30 years of experience in relational databases, and she is
co-author of IBM DB2 12 for z/OS Technical Overview, SG24-8383. One of her current roles
is leading the customer requirement assessment effort in the new application area.
Tim Hogan is a technical writer who is based at the IBM Silicon Valley Lab. He leads the Db2
AI for z/OS content development team.
Tom Toomire is a Software Engineer who is based at the IBM Silicon Valley Laboratory. He
has over 33 years of experience at IBM working on Db2 for z/OS and related technologies. He
has extensive experience in several areas within the Db2 for z/OS product, and is working in
the DDF area as the lead developer for the Db2 DDF native RESTful services function.
Tracee Tao is a Software Engineer in Db2 for z/OS development who is based at the IBM
Silicon Valley Lab. Her focus is primarily in the Authorization area of Db2.
Tran Lam is a Software Engineer who is based at the IBM Silicon Valley Laboratory. She has
been working on Db2 for z/OS for 7 years after graduating from San Jose State University
with a Bachelor of Science degree in Software Engineering and a minor in Mathematics. Her
areas of expertise include installation, ZPARM, samples, and Db2.
Ute Baumbach is a software developer who is based at the IBM Research® & Development
Lab at Boeblingen, Germany. During her more than 30 years with IBM, Ute worked as a
developer and team leader for various IBM software products, mostly related to Db2 for Linux,
UNIX, and Windows (LUW) and Db2 for z/OS topics and tools. Currently, she works as a
member of the Analytics on IBM Z Center of Excellence team, which is part of the Db2
Analytics Accelerator development organization, and she focuses on Db2 Analytics
Accelerator proofs of concepts, customer deployment support, and education. She has
co-authored various IBM Redbooks publications, focusing on Db2 for LUW topics and Db2
Analytics Accelerator.
Vassil Dimov is a Technical Lead in the IBM Db2 for z/OS Data Gate development team who
is based at the IBM Research & Development Lab at Boeblingen, Germany. He initially
worked as a Software Engineer for IBM Db2 Analytics Accelerator 5 years ago with an
emphasis on data replication technologies. Later, he became the development focal point for
the L3 support team for all customer-related data replication issues. Two years ago, he moved
to the Db2 for z/OS Data Gate team to provide a modern alternative to IBM Db2 Analytics
Accelerator in the cloud.
Vicky Li is a Software Engineer who is based at the IBM Silicon Valley Laboratory. She has
over 19 years of experience working with Db2 RDS, and she is involved in and leads DDL and
database descriptor management (DBDM) projects for Db2 availability.
Xiao Wei Zhang is a Software Engineer who is based at the IBM China Laboratory. He has
over 15 years of experience with database development and performance at IBM.
Yong Kwon is a Software Engineer who has been working at IBM for over 17 years. He has been a Db2 for
z/OS developer for all 17 years, and he is based at the IBM Silicon Valley Laboratory. Yong's
focus in Db2 has been primarily related to Data Manager and Authorization.
Zijin Guo is a Software Engineer working on Db2 for z/OS who is based at the IBM Silicon
Valley Lab. After joining IBM, he worked on RDS for 3 years and then focused on the Buffer
Manager component in the Db2 core engine area.
Thanks to the following people for their contributions to this project:
Ada La, Akiko Hoshikawa, Ben Budiman, Bituin Vizconde, Bob Tokumaru, Chris Leung,
David Nguyen, Fen-Ling Lin, Haakon Roberts, Jim Pickel, Julie Chen, Ka-Chun Ng,
Kate Wheat, Ke Wei Wei, Kim Lyle, Koshy John, Laura Klappenbach, Leilei Li, Limin Yang,
Maggie Lin, Manvendra Mishra, Mary Lin, MaryBeth White, Matthias Tschaffler, Meg Bernal,
Mo Townsend, Nicholas Marion, Nina Bronnikova, Patrick Malone, Paul McWilliams,
Ping Wang, Ralf Marks, Randy Nakagawa, Sueli Almeida, Teresa Leamon, Thomas Eng,
Tom Majithia, Xiao Feng Meng, Xiaohong Fu, Xu Qin Zhao, Wei Li
This publication was produced by the IBM Redbooks publication team with valuable
contributions from Martin Keen, IBM Master Inventor and IBM Redbooks Project Lead, and
Wade Wallace, Senior Content Specialist and IBM Redbooks Team Lead.
Now you can become a published author, too!
Find out more about the residency program, browse the residency index, and apply online at:
https://www.redbooks.ibm.com/residencies
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
https://www.redbooks.ibm.com/contact
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Db2 13 was released approximately 5 years after the introduction of Db2 12 for z/OS. Db2 12
introduced a continuous delivery model that enables the controlled addition of functional
enhancements through regularly delivered function level upgrades. Over the past 5 years, this
model delivered over 100 small and medium enhancements across 10 individual function
levels, including several that require simple Db2 catalog changes. More importantly, with the
continuous delivery model, you can access new features sooner and selectively activate
features as needed.
For more information about these Db2 12 function level enhancements, see the following
appendixes:
Appendix A, “IBM Db2 12 continuous delivery features and enhancements” on page 233
Appendix B, “Application development” on page 253
Appendix C, “Database administration” on page 263
Appendix D, “System administration and system management” on page 289
However, the continuous delivery model was never intended to eliminate the need to create
an entirely new version of Db2. The decision to deliver a new version was based on the scope
of new enhancements, infrastructure changes, advances in related technologies, and
business value. The convergence of all of these factors is here in the form of Db2 13 for z/OS.
The remainder of this chapter introduces the primary themes, highlights, and ecosystem
components that make up the Db2 13 experience. Subsequent chapters explain the major
changes by functional area and describe the value of each one. For more information about
Db2 13 and its related products, see the relevant product documentation.
Db2 13 handles more concurrent user activity with efficient use of compute and memory
resources. Here are the most impactful enhancements:
Online conversion from a partition-by-growth (PBG) to a partition-by-range (PBR)
partitioning scheme.
By using this enhancement, you can leverage PBR with minimal impact to an application.
The ability to delete an active log data set from the bootstrap data set (BSDS) while Db2 is
running.
As a DBA, you can encrypt active log data sets, increase their size, or change the key
label periodically without bringing down Db2.
Extended common service area (ECSA) constraint relief.
ECSA storage constraints previously restricted workload growth and caused outages due
to insufficient storage. Db2 13 better handles peak workloads and supports new types of
workloads by improving storage conditions and reducing unexpected system contention
while running with many concurrent threads.
1.4 Performance
A cornerstone of every new Db2 version is the continued pursuit of improved performance for
queries, transactional workloads, utilities, and connected applications. Faster turnaround,
greater throughput, and reduced computing resource usage are delivered in each new
version and in continuous delivery enhancements. Chapter 4, “Performance” on page 57
describes the following major enhancements:
Index Look-aside Optimization
Enjoy the query performance benefits of more indexes on tables while minimizing the
index maintenance costs during INSERT, UPDATE, and DELETE operations. With Index
Look-aside Optimization in Db2 13, root-to-leaf index traversals are further minimized for
all INSERT, UPDATE, and DELETE operations regardless of the index cluster ratio.
A larger maximum key size for Fast Index Traversal
This index traversal technique, also known as Fast Traversal Blocks (FTBs), now can be
used by more tables that contain large index keys.
Insert performance for PBG table spaces
Db2 13 improves the cross-partition search logic, along with the partition-locking strategy
that is used on INSERT operations to improve the performance and the efficient use of
space within existing partitions.
1.5 Security
Security and governance of data and processes in Db2 for z/OS are of utmost importance. As
the application landscape and network connectivity have grown more open and complex, Db2
for z/OS kept ahead by implementing advances in encryption, rules processing, and definition
of roles while allowing flexibility for using Db2 -based security or external security managers.
However, increased security that comes at the cost of performance is not an acceptable
tradeoff. Db2 continues to innovate with a focus on performance in its authorization checking.
In Db2 13, these trends continue with the following key enhancements:
Continuous compliance.
As security regulations become more complex and stringent, organizations often struggle
to interpret regulations, implement controls, and collect evidence of continuous
compliance. Db2 13 is enhanced to provide the evidence that is needed for continuous
compliance. This capability can work in concert with an external security compliance
reporting tool, such as IBM Z Security and Compliance Center. Db2 for z/OS listens to the
ENF 86 signal that is generated by the z/OS Compliance Agent services. Then, Db2
generates System Management Facilities (SMF) 1154 trace records for the recommended
system security parameter settings.
Improve productivity and ease of use for administrators deploying or authorizing
packages.
Db2 13 provides the flexibility for DBAs to control the ownership of plans, packages, and
SQL routine packages by using the DBA role without depending on the security
administrator. New capability is added to identify the type of owner for plans and
packages.
Reduce contention and decrease downtime for concurrent IBM RACF® changes.
Db2 13 introduces a feature to cache successful plan authorization checks when the access
control authorization exit (DSNX@XAC) is used.
Before Db2 13, this feature was available only for Db2 internal security checks. Removing
this limitation improves performance for external security users by taking advantage of the
caching in Db2. It also provides consistent behavior between Db2 native and external
security controls.
For example, the functional upgrade and catalog upgrade are controlled separately. Specific
feature activation is governed by the function level that you set at the Db2 subsystem level or
at the application or package level. After a small initial catalog migration to Db2 13 (with no
structural changes), you can defer further catalog migrations until you need a specific function
that depends on that catalog level.
Additionally, because the initial catalog migration makes no structural changes, the contention
with other concurrent activity is minimal, enabling true online version migration.
Migration to Db2 13 requires that your Db2 12 system is brought to function level 510 (FL510)
and catalog level 509 (CL509), and it must have the fallback SPE APAR PH37108 applied.
For more information about both the migration and the installation processes, see Chapter 2,
“Installing and migrating” on page 13.
Application management
Managing application behavior for optimizing serialization and performance often requires
individual editing or complex processes for using the same application in different contexts.
Enhancements in Db2 13 help in several areas:
More granular lock control is available at the application level through a new lock timeout
special register and a new deadlock weighting global variable (see the SQL sketch after this
list). These items can be used to improve DDL break-in while reducing the impact to applications.
Through expanded support for application profiles, you can set some special registers and
global variables without updating the applications themselves.
A new mechanism is introduced to the application profile tables that you can use to
dynamically control the package RELEASE option. This mechanism increases your
likelihood for successful DDL completion when Data Manipulation Language (DML)
applications are running concurrently.
An easy process is now available for allowing certain application packages to use new
functions immediately while restricting others to an earlier level. Although introduced as a
Db2 12 continuous delivery feature, changes in the Data Server Driver connectivity
packages let you control new function level usage without performing package rebinds
after each function level upgrade.
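To make the lock-control item concrete, here is a minimal SQL sketch. The special register and
global variable names reflect our reading of the Db2 13 documentation, and the value range
noted in the comment is an assumption, so verify both against the SQL reference:

   -- Wait at most 30 seconds for locks in this application process.
   SET CURRENT LOCK TIMEOUT = 30;

   -- Weight this process so that it is less likely to be chosen as the
   -- deadlock victim (assumed range 0 - 255; higher means more weight).
   SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY = 200;

Because both settings can also be injected through the application profile tables, you can
apply them to existing applications without changing their source.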
For more information about these application management changes, see Chapter 5,
“Application management and SQL changes” on page 65.
A new Instrumentation Facility Component Identifier (IFCID) 396 record and three new
catalog fields were added to record more detailed information about index splits. IFCID 396 is
always enabled by default when Statistics Class 3 or Performance Class 6 is turned on.
When an index split is considered an abnormal split process (for example, when the total
elapsed time of an index split is greater than 1 second), a new IFCID 396 is generated, which
includes detailed information about the index split. This information supports deeper analysis
and should help both you and IBM identify the root cause of INSERT performance issues.
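IFCID 396 needs no separate trace setup beyond the classes named above. As a hedged
sketch of the commands that carry it (verify the class assignments against the trace
documentation and your site's trace standards):

   -START TRACE(STAT) CLASS(3)
   -START TRACE(PERFM) CLASS(6)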
1.7 Utilities
The list of utilities enhancements in both Db2 13 and in the Db2 12 functional service stream
is impressive. The Db2 12 enhancements are listed in detail in Appendix A, “IBM Db2 12
continuous delivery features and enhancements” on page 233.
Utilities history
Db2 can now record real-time and historical information about utility executions. This
historical information can help you optimize your utility jobs and your overall utility strategy by
providing the following capabilities (a sample query follows the list):
Check daily utility executions for failures and take immediate corrective actions.
Ensure adherence to best practices or site standards for utilities.
Analyze and compare utility information from one execution to another one.
Analyze comparable utility executions.
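As a sketch of what such a query might look like, assuming the SYSIBM.SYSUTILITIES
history table and the column names shown here (verify them in 8.1.7, “Queries on utility
history” on page 117):

   SELECT NAME, UTILID, JOBNAME, STARTTS,
          ELAPSEDTIME, CPUTIME, RETURNCODE
   FROM SYSIBM.SYSUTILITIES
   WHERE RETURNCODE >= 8                       -- failed executions
     AND STARTTS > CURRENT TIMESTAMP - 1 DAY   -- in the last day
   ORDER BY STARTTS DESC;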
IBM performance measurements showed significant elapsed time and CPU time improvements.
The RECOVER syntax for specifying recovery of the entire table space can now successfully use
any space-level or partition-level image copies that are found when the RECOVER utility is
running. This enhancement applies to recovery from sequential image copies,
IBM FlashCopy® image copies that are created at the partition or piece level, or inline
sequential or FlashCopy image copies taken by the LOAD or REORG TABLESPACE utilities.
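As a minimal sketch with hypothetical object names, a single space-level statement is now
enough; the RECOVER utility selects the best available space-level or partition-level copies
on its own:

   RECOVER TABLESPACE MYDB.MYTS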
Here are a few recent high-value enhancements that extend and improve your acceleration
capability:
IBM Integrated Synchronization, which is an improved incremental update technology that
is more tightly integrated with the Db2 subsystem with improved data coherency.
Run accelerator queries that contain even more pass-through-only analytical built-in
functions that are recognized by the accelerator but not by Db2 for z/OS native SQL.
Run accelerator queries that require pagination of result sets.
Use the accelerator for data-sensitive use cases that require masked columns.
Ensure that certain queries are always routed or not routed to the accelerator by using
dynamic plan stability.
Extend accelerator-only tables (AOTs) to be defined on multiple accelerators for high
availability (HA).
For more information about Db2 Analytics Accelerator, see Chapter 13, “IBM Db2
Analytics Accelerator for z/OS” on page 201.
The ability to perform an online migration has been at the top of Db2 administrators’ wish lists
for the past several releases, which means that it also has been a key focus point for the Db2
development team. Although great strides were made in this area over the past couple of
releases, Db2 13 introduces some important enhancements that finally allow most, if not all,
migrations to be accomplished online. The innovative differences that began in Db2 12 with
function level V12R1M510 (FL 510) continue with trailblazing changes to the CATMAINT
(DSNTIJTC) job and process in Db2 13, which can drastically improve your migration
experience.
After a successful migration to Db2 13, the initial function level is V13R1M100 (FL 100), and
the catalog level is also V13R1M100 (CL 100).
After FL 100, Db2 13 subsystems and data-sharing groups progress through FL 500, catalog
level V13R1M501 (CL 501), and finally to FL 501. These three function levels are available at
general availability of Db2 13.
A new function that does not depend on new-release catalog changes becomes available
when you activate FL 500. Then, after you update the catalog level to CL 501, you can
activate FL 501 to start using the full complement of Db2 13 GA-level new functions.
Function-level and catalog-level changes are group-wide events in data sharing. As such,
each member in a data-sharing group has the same function and catalog levels. When one
member changes a function or catalog level, that change takes effect for all active
members.
The most significant improvement to the Db2 13 migration process is the timing of the new
release catalog changes. In earlier Db2 releases, new release catalog changes were made
during the initial release migration process by catalog migration (CATMAINT) (job DSNTIJTC)
processing.
Db2 13 still uses an initial release migration CATMAINT, but this CATMAINT job no longer
changes the structure of the Db2 catalog, that is, it does not create tables, indexes, or
columns in existing catalog tables.
You might wonder why a CATMAINT job is still needed if the structure of the catalog is not
changing. The reason is so that you can control the timing of when the migration completes
and when a subsystem is available for use. This initial CATMAINT process also sets internal
information to indicate that the catalog level is now V13R1M100 (CL 100) and that the
function level is V13R1M100 (FL 100).
With the elimination of package and plan invalidations, SQL contention, and deadlock issues,
the final roadblocks are removed for achieving true online migrations.
Although Db2 13 still allows the usage of applications that are bound with APPLCOMPAT
values of V10R1 and higher, it is a best practice to resolve incompatibilities and bind or rebind
packages with more current APPLCOMPAT levels.
The prerequisite catalog level for V12R1M510 is V12R1M509. Another requirement is that no
packages that were used in the previous 18 months were last bound or rebound in any
release earlier than Db2 11.
Figure 2-1 SELECT statement to determine whether the V12R1M510 activation will complete
If the DSNTIJPE sample job returns any rows for this sample query, it creates a data set
that contains the REBIND statements that, when run, allow V12R1M510 to be activated.
As a best practice, run this sample query in advance of a scheduled function level change to
V12R1M510 to ensure that the system catalog is in a good condition for the coming changes.
During activation of V12R1M510, the sample query runs. If it returns any rows, activation of
V12R1M510 fails. A REBIND of all displayed packages allows V12R1M510 to be activated.
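The sample query itself is not reproduced here, but its logic can be sketched roughly as
follows. The RELBOUND release codes are an assumption on our part ('M' taken to mark
packages last bound in Db2 11), so rely on the actual DSNTIJPE job rather than this sketch:

   SELECT COLLID, NAME, VERSION, LASTUSED, RELBOUND
   FROM SYSIBM.SYSPACKAGE
   WHERE LASTUSED >= CURRENT DATE - 18 MONTHS  -- used in the last 18 months
     AND RELBOUND < 'M'                        -- last bound before Db2 11 (assumed code)
   ORDER BY COLLID, NAME;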
Db2 12 subsystems and members must be started with the PH37108 code applied before
attempting to migrate to Db2 13. If the fallback SPE APAR (PH37108) is not on all active
members of a data-sharing group or a non-data-sharing subsystem before a Db2 13
migration, the migration to Db2 13 fails.
However, the major Db2 13 migration process enhancement is that the structure of the Db2
catalog is not changed during the initial CATMAINT (DSNTIJTC) processing for catalog level
V13R1M100 (CL 100). Running a CATMAINT job is still part of the migration process, but
unlike in previous new Db2 versions, it does not add or change any existing catalog objects
during processing. It functions as a switch that allows you to determine when migration
processing completes.
If no catalog changes are required for a function level to be activated, the code level
requirement might be the only activation prerequisite.
Db2 13 provides the code for FL 100, FL 500, and FL 501, so you do not need to apply any
program temporary fixes (PTFs) to activate any of these function levels.
You can use the DISPLAY GROUP command to determine the code level of a subsystem and, for
a data-sharing group, the code levels of all members and their current function and catalog
levels. If the DISPLAY GROUP command shows that the data-sharing group has any active Db2
12 members, the entire group is not eligible for FL 500 activation.
You can also use the ACTIVATE command with the TEST option to determine whether a
subsystem or data-sharing group is eligible for the activation of a particular function level. The
output of the ACTIVATE command provides information similar to the output of the DISPLAY
GROUP command.
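For example, before a planned activation you might run both commands in sequence. The
syntax shown here is a sketch; check the command reference for the full set of options:

   -DISPLAY GROUP DETAIL
   -ACTIVATE FUNCTION LEVEL (V13R1M500) TEST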
You can also use a sample job that the installation CLIST generates to run this command,
which is described in 2.6, “Using the installation CLIST to migrate to Db2 13 FL 100, FL 500,
or FL 501” on page 19.
Returning to an earlier function level can be useful when you want to prevent the usage of a
new function from a higher function level. A star function level indicates that the system or
data-sharing group was previously at a higher function level.
You can query the SYSLEVELUPDATES catalog table to see the history of catalog changes
and function level activations. You can also use the DISPLAY GROUP command to determine the
highest activated function level.
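For example, a simple query such as the following lists the level history. The timestamp
column name is an assumption on our part, so verify it in the catalog reference:

   SELECT *
   FROM SYSIBM.SYSLEVELUPDATES
   ORDER BY EFFECTIVE_TIME DESC;  -- column name assumed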
You can make these CL 501 catalog changes in advance of activating FL 501. There is no
requirement that the CL 501 changes are made immediately before FL 501 activation.
In continuous delivery, the input to CATMAINT can be either the desired function level or
catalog level, and CATMAINT processing determines the catalog level that is needed for the
specified value. At Db2 13 GA, you can specify only V13R1M501 because no other catalog or
function level options are valid.
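In job DSNTIJTC, that level is carried on the CATMAINT control statement. A hedged sketch
of the statement as we understand the syntax:

   CATMAINT UPDATE LEVEL(V13R1M501)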
Function level V13R1M501 can be activated by using the following ACTIVATE command:
ACTIVATE FUNCTION LEVEL (V13R1M501)
You can also use the installation CLIST to generate a sample job for this command, which is
described later in 2.6, “Using the installation CLIST to migrate to Db2 13 FL 100, FL 500, or
FL 501” on page 19.
The following sections document both the traditional approach to migration and using
z/OSMF to migrate a Db2 subsystem.
Next, on panel DSNTIP00, as shown in Figure 2-3, the installation CLIST initializes the
CURRENT FUNCTION LEVEL to V12R1M510 and TARGET FUNCTION LEVEL to V13R1M100. These two
fields are not updatable, and they are used by the installation CLIST to generate the jobs that
migrate a Db2 subsystem to function level V13R1M100.
The Db2 13 migration process requires the CATMAINT (DSNTIJTC) job to be run with a level
of V13R1M100. This Db2 13 CATMAINT job does not change the structure of the Db2
catalog, but the Db2 13 migration process is not considered complete until this CATMAINT
job is run. After it completes successfully, the function level becomes V13R1M100 (FL 100),
and the catalog level changes to V13R1M100 (CL 100). Function and catalog levels are
group-wide values in data sharing. Therefore, each member is always at the same level for
both.
Next, on panel DSNTIP00, you specify the function level to be activated for TARGET FUNCTION
LEVEL, as shown in Figure 2-5.
Depending on the TARGET FUNCTION LEVEL that you specify, the installation CLIST generates
the jobs that are required to complete the target function level activation from V13R1M100.
Next, on panel DSNTIP00, you specify the function level to be activated for TARGET FUNCTION
LEVEL, as shown in Figure 2-7 on page 23.
The installation CLIST generates the two z/OSMF workflows, which contain the steps for
completing the migration from Db2 12 to the Db2 13 function level that you specified in the
TARGET FUNCTION LEVEL field. The step numbers that are mentioned in these workflows do not
correspond to the step numbers that are listed in Migrating your Db2 subsystem to Db2 13.
To install a Db2 13 subsystem, you invoke the installation CLIST. The first panel, DSNTIPA1,
shown in Figure 2-8, displays a list of operations to choose from. For a new Db2 13
installation, you specify INSTALL for INSTALL TYPE.
For a traditional Db2 installation, you specify NO for USE Z/OSMF WORKFLOW; for a Db2
installation that uses z/OSMF, you specify YES for USE Z/OSMF WORKFLOW as shown in
Figure 2-8.
Next, on panel DSNTIP00, the installation CLIST initializes the TARGET FUNCTION LEVEL field
as V13R1M501, as shown in Figure 2-9 on page 25. This field is not updatable, and the
installation CLIST always generates the jobs to install a Db2 subsystem at function level
V13R1M501.
For new Db2 13 installations, you must run a CATMAINT job (DSNTIJTC) during installation
processing. This required CATMAINT job determines whether any CCSID-related updates
must be made. The encoding of some special characters can vary between different
Extended Binary Coded Decimal Interchange Code (EBCDIC) coded character set identifiers
(CCSIDs). This required CATMAINT job identifies these encoding differences in the system
catalog information and changes any characters that must be updated based on the source
and target CCSIDs.
Table 2-1 Default values of Db2 subsystem parameters that are changed in Db2 13
Subsystem parameter Old default value New default value
DSN6FAC.DDF NO AUTO
DSN6FAC.MAXCONQN OFF ON
DSN6FAC.MAXCONQW OFF ON
DSN6SPRM.FTB_NON_UNIQUE_INDEX NO YES
DSN6SYSP.STATIME_MAIN 60 10
In addition to these changes to subsystem parameter default values, the behaviors of the
following subsystem parameters also changed:
DSN6SPRM.AUTHEXIT_CACHEREFRESH: If the subsystem parameter value is set to ALL and the
z/OS release is 2.5 or later, Db2 refreshes the entries in the plan authorization cache
when a resource access on the plan object profile is changed in Resource Access Control
Facility (RACF) and when the access control authorization exit (DSNX@XAC) is active.
DSN6SPRM.DSMAX: The maximum value is increased from 200000 to 400000.
Important: For best results, check the subsystem parameter settings that are used in
your Db2 12 environment, especially in data-sharing environments. If the current setting
does not match the setting that is listed in Table 2-2, evaluate whether any other changes
are needed to accept the new-behavior settings before migrating to Db2 13.
The subsystem parameters that are shown in Table 2-2 are removed in Db2 13, and their
respective values can no longer be changed.
Table 2-2 Subsystem parameters that are removed in Db2 13
Subsystem parameter Value used in Db2 13
AUTHCACH 4K
DDF_COMPATIBILITY NULL
HONOR_KEEPDICTIONARY NO
DSVCI YES
EXTRAREQ 100
EXTRSRV 100
IMMEDWRI NO
IX_TB_PART_CONV_EXCLUDE YES
MAXARCH 10000
MAXTYPE1 0
OPT1ROWBLOCKSORT DISABLE
PARA_EFF 50
PLANMGMTSCOPE STATIC
REALSTORAGE_MANAGEMENT AUTO
RESYNC 2
SUBQ_MIDX ENABLE
TRACSTR NO
The DSNG014I messages indicate the exact time a function level or catalog change was
made. This message is issued on each member of a data-sharing group for each catalog or
function level change. You can also query the SYSLEVELUPDATES catalog table to
determine when catalog and function levels changed. This table maintains a history of all
such activities for a subsystem or data-sharing group.
Activation of lower function levels can be used to prevent a new function from being used, for
example to allow a new function to be implemented in a staged approach or to avoid some
unexpected behavior. Some new functions use application compatibility (APPLCOMPAT)
levels instead of function levels to determine usage, so issuing a REBIND command with an
earlier APPLCOMPAT value might be required to prevent new function use.
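For example, the following REBIND pins a package to the Db2 12 FL 510 SQL behavior; the
collection and package names are hypothetical:

   REBIND PACKAGE(MYCOLL.MYPKG) APPLCOMPAT(V12R1M510)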
You can use the DISPLAY GROUP command to determine the various function and catalog
levels. The DISPLAY GROUP example in Figure 2-11 on page 29 shows that the current
function level is V13R1M100* (FL100*).
In this case, you can tell that the previous function level of the data-sharing group was FL 500
because the value in the HIGHEST ACTIVATED FUNCTION LEVEL field is V13R1M500. If a higher
function level was previously activated, you can query the SYSLEVELUPDATES catalog table
to determine what the previous function level was before the return to FL 100*.
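For example, the following query (a minimal sketch; the EFFECTIVE_TIME column name is an
assumption based on the catalog documentation) lists the recorded catalog and function
level changes in reverse chronological order:

SELECT *
FROM SYSIBM.SYSLEVELUPDATES
ORDER BY EFFECTIVE_TIME DESC;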
In Figure 2-12, the current function level is V13R1M100*, and you want to know whether you
can activate FL V13R1M501.
2.10 Summary
The ability to perform an online migration has been at the top of Db2 administrators’ wish lists
since data sharing was introduced in Db2 4. The Db2 13 migration process was redesigned
and greatly improved with online migration requirements in mind. These migration
improvements ensure that Db2 13 migrations can be accomplished online in data-sharing
environments and improve your migration experience.
To support your business growth, Db2 13 enables concurrent user activities at a level higher
than before through improved efficiency and scalability by using compute, memory, and other
system resources.
Starting with function level V13R1M500, you can delete active logs from the BSDS by using
the new -SET LOG REMOVELOG command while Db2 is running. This new command
complements the existing -SET LOG NEWLOG command for adding active logs while Db2
remains running.
By using both online removal and addition, you can replace existing active log data sets
without shutting down Db2 or incurring any application impact. This new feature helps you
avoid Db2 outages and increases high availability support, particularly in the following
situations:
- When the number of active log data sets for a copy reaches the maximum limit of 93,
  existing log data sets must be deleted before replacing them with larger log data sets.
- To take advantage of Db2 12 support for active log data sets greater than 4 GB, existing
  active log data sets can be replaced with active log data sets that are greater than 4 GB.
- To leverage z/OS DFSMS data set encryption for log data sets, which was introduced in
  Db2 11, unencrypted log data sets must be replaced with new encrypted log data sets.
- Encrypted log data sets must be redefined periodically to replace encryption keys with
  new keys.
- Moving active log data sets to a different storage system might require them to be deleted
  and added.
The -SET LOG REMOVELOG command behaves like DSNJU003 in that it removes only the log
from the BSDS. The physical data set can be deleted as a separate step after the log is
removed from the BSDS.
The -SET LOG REMOVELOG command generates one of the following outcomes:
- A log that is not in use is successfully deleted from the BSDS.
- A log that is in use is marked REMOVAL PENDING.
- The command fails due to an error (for example, a log was not offloaded or is active).
You can monitor the status of a log that is marked for REMOVAL PENDING by using the
following methods:
- To identify log data sets that are in the REMOVAL PENDING status, you can use the
  -DISPLAY LOG command with the new DETAIL option. Alternatively, you can use DSNJU004
  to show information about the REMOVAL PENDING status for local active log data sets.
- For data-sharing environments, you can use the D GRS,RES=(*,<data-set-name>)
  command to determine whether peer members still have the active log data set allocated
  or in use.
Example 3-1 through Example 3-4 on page 34 illustrate some common scenarios of online
deletion of active logs in Db2 13.
Example 3-5 shows the usage of the new DETAIL option for -DISPLAY LOG.
Note: The -SET LOG REMOVELOG command is not supported in IBM GDPS® active-active
environments.
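For illustration, a removal sequence might look like the following sketch. The data set
name is hypothetical, and the exact operand syntax should be verified in the Db2 command
reference:

-SET LOG REMOVELOG(DB2A.LOGCOPY1.DS03)
-DISPLAY LOG DETAIL

If the data set is still in use, the -DISPLAY LOG DETAIL output shows it in the REMOVAL
PENDING status until it can be removed.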
If the data sets of the table space for the altered table are already defined, the conversion
results in a pending change that is materialized by a subsequent REORG. Otherwise, the
conversion is immediate. The containing table space is converted to use relative page
numbers (RPNs). The conversion process handles any existing indexes on the table;
however, Db2 does not change any aspects or attributes of those indexes. Therefore,
consider creating partitioned indexes on the table after completing the conversion.
If you encounter these issues for tables in PBG table spaces, consider converting them to
PBR.
In Db2, the DATA CAPTURE attribute of a table dictates whether a full or partial before image is
written out for updates to the table’s rows. By default, when a table is created, its DATA
CAPTURE attribute is defined as NONE, which means that Db2 writes out only partial before
images in log records for updates to the table’s rows.
If a table is not created with the DATA CAPTURE CHANGES attribute, you must alter the table by
issuing an ALTER TABLE statement with the DATA CAPTURE CHANGES clause to enable the writing
of full log records.
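For example, the following statements enable and later disable the writing of full before
images (the table name is hypothetical):

ALTER TABLE MYSCHEMA.ORDERS DATA CAPTURE CHANGES;
ALTER TABLE MYSCHEMA.ORDERS DATA CAPTURE NONE;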
Before Db2 13, Db2 performed the following quiescing as part of the DATA CAPTURE
alteration processing:
- Quiescing static packages that depend on the altered table
- Quiescing cached dynamic statements that depend on the altered table
In HA environments, making time available to run the alteration can be challenging. Repeated
executions of ALTER TABLE DATA CAPTURE might time out and fail due to continuous concurrent
activities on the table. As a result, you might not be able to enable or disable data replication
in a timely manner, or you might need to incur an application outage to complete the
alteration.
The ALTER TABLE DATA CAPTURE enhancement provides the following benefits:
- Static packages that depend on the altered table are no longer quiesced.
- Cached dynamic statements that depend on the altered table are no longer quiesced.
The APPLCOMPAT level must be at V13R1M500 or later for the ALTER TABLE enhancement to
be effective.
Without the concurrency enhancement, OBD updates are visible to other concurrent
activities only after the alteration is committed. For DML statements that are run in the same
transaction (unit of work), all log records for the statements are either full or partial log
records, and a mixture of full and partial log records for DML statements within the same
transaction is not possible.
The timing of when catalog updates are visible to other threads is unchanged. This aspect
depends on a reader's isolation level.
Another significant aspect that is associated with the concurrency enhancement is related to
the rollback behavior of the DATA CAPTURE alteration. Rollback can occur if the ALTER
statement encounters an error and fails, or if a ROLLBACK statement is issued for the ALTER
transaction. If the DATA CAPTURE alteration is rolled back, the OBD update is also rolled back
and immediately visible to concurrent DMLs against the table on the same Db2 member.
3.3.3 Considerations for data replication and other tools that read Db2 log
records
Introducing a mixture of partial and full log records for DMLs in a concurrent transaction might
impact data replication products or other tools that consume Db2 log records.
To prevent this potential impact, adhere to the following processes for enabling and disabling
replication. Otherwise, assess whether the replication tools that you use are impacted and
prepare for the changes.
If this prescribed order of operations is followed, the changes that are introduced by this
concurrency enhancement have no impact on data replication tools.
For tools that consume Db2 log records for purposes other than data replication, you must
assess the potential impact.
Additionally, as a result of the agent storage and Database Access Thread (DBAT)
availability improvements in Db2 13, the amount of ECSA storage that is required for running
distributed applications changed.
3.4.1 Calculating the ECSA storage for the Distributed Data Facility
component
To determine the amount of required ECSA storage for Distributed Data Facility (DDF)
processing, you must first determine the network protocol that your Db2 for z/OS server
supports. Db2 for z/OS supports two types of network protocols: TCP/IP and SNA.
TCP/IP connections
If you operate with z/OS 2.2 or later, TCP/IP connections use zero bytes of ECSA storage.
If you operate with z/OS 2.1 or earlier, add 1 KB for each TCP/IP connection up to the
maximum limit of your MAX REMOTE CONNECTED field (CONDBAT subsystem parameter).
For example, if CONDBAT is set to 10000, set aside 10,000 KB (1 * 10000) of ECSA storage for
TCP/IP connections.
SNA connections
Allow 1 KB per SNA connection up to the maximum limit of your MAX REMOTE
CONNECTED field (CONDBAT subsystem parameter). For example, if CONDBAT is set to 10000,
set aside 10,000 KB (1 * 10000) of ECSA storage for SNA connections.
After you calculate the amount of ECSA storage for DDF processing according to the type of
network protocols that your Db2 server supports, add 1 KB for each server thread (DBAT) to
obtain the total ECSA storage requirement.
Example 3-6 shows an example of calculating the ECSA storage requirements for DDF
processing.
Example 3-6 Stand-alone Db2 for z/OS server that uses only TCP/IP connections to service remote
clients and access other remote sites
OS level = z/OS 2.4
Db2 SSID = DB2A
MAXDBAT = 500
CONDBAT = 10000
SYSIBM.IPNAMES
IPADDR = 198.168.1.100
IPADDR = 198.168.1.200
Total ECSA storage requirement =
2.5 MB + (1KB * 500 MAXDBAT) + (1KB * 2 downstream sites) = 3,135,488 bytes
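Assuming binary units (1 KB = 1,024 bytes, so 2.5 MB = 2,621,440 bytes for the DDF base),
the arithmetic works out as follows:

2,621,440 + (1,024 x 500 DBATs) + (1,024 x 2 downstream sites)
= 2,621,440 + 512,000 + 2,048
= 3,135,488 bytes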
Before Db2 13, dynamic SQL can be a heavy user of agent local BTB storage. The SQL
statement text can be as large as 2 MB, and multiple dynamic SQL statements can be nested
due to triggers, UDFs, and stored procedures. Db2 keeps a copy of the dynamic SQL input
statement text and the attribute string in agent local BTB storage during the PREPARE and
EXECUTE IMMEDIATE executions.
In Db2 13, the statement text and attribute string for PREPARE and EXECUTE IMMEDIATE are
stored in agent local ATB storage instead. The benefit of this enhancement can be greater
than anticipated because of the way that Db2 handles dynamic SQL in native SQL routines.
Consider a table T1 with an insert trigger that calls a native SQL stored procedure whose
input parameter is a 2 MB SQL text variable. When the application runs a simple statement
like INSERT INTO T1 VALUES(1), the trigger calls the stored procedure, which prepares and
runs an UPDATE against T2.
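The following sketch outlines that scenario. All object names are hypothetical, and
statement terminator handling for the SQL PL bodies is omitted for brevity:

CREATE TABLE T1 (C1 INT);
CREATE TABLE T2 (C1 INT);

-- Native SQL procedure with a 2 MB SQL text input variable
CREATE PROCEDURE SP1 (IN STMTTXT CLOB(2M))
LANGUAGE SQL
BEGIN
  PREPARE S1 FROM STMTTXT; -- before Db2 13, BTB storage is sized to the 2 MB declared length
  EXECUTE S1;
END

-- Trigger on T1 that calls the procedure
CREATE TRIGGER TR1 AFTER INSERT ON T1
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
  CALL SP1('UPDATE T2 SET C1 = C1 + 1');
END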
Before Db2 13, the initial application call to Db2 to prepare the INSERT statement acquires
enough agent local pool BTB storage to fit the SQL text length plus other control structures.
When INSERT runs, the trigger invokes the native SQL stored procedure. When this stored
procedure prepares UPDATE, Db2 allocates agent local BTB storage based on the defined
length of the input SQL variable, which is 2 MB.
In Db2 13, the statement text and attribute string for PREPARE and EXECUTE IMMEDIATE for both
the application and the stored procedure are allocated in agent local ATB storage instead of
BTB storage, and the amount of storage that is allocated for the SQL statement text is
based on the actual statement length.
In addition, Db2 continues to optimize the interface between the DBM1 and DIST address
spaces. Some control blocks that are used during SQL execution, which previously were in
agent local pool BTB storage in DIST, are now allocated in shared ATB storage. These
changes not only reduce the BTB storage consumption, but also improve performance by
avoiding cross-memory copy operations of the data.
Instead of requiring manual assessment and setting of CONTSTOR as an on/off switch, Db2 13
introduces a smarter storage manager BTB and ECSA contraction scheme that detects the
best time to contract precious storage based on workload consumption that exceeds normal
levels. This capability allows you to create workloads or grow existing ones by supporting
more concurrent threads.
With the removal of subsystem parameter CONTSTOR, the IFCID 001 serviceability field
QSST_CONSTOR_NUM in DSNDQSST is no longer relevant. This field is renamed to
QSST_CNTRCT_NUM to represent the number of 31-bit agent local pools that are
contracted by storage manager.
After storage consumption in the ECSA falls below the 85% threshold, contraction is no
longer triggered.
In addition, Db2 13 adds logic to a Db2 daemon to monitor storage consumption above the
2G bar. If the number of available free frames falls below the calculated threshold that is
provided by z/OS, memory object contraction is triggered.
Common causes of many concurrent DBAT terminations include a short-term spike in new
work and new connections that are created to Db2. One example of a new connection spike is
when an application or application server starts and immediately creates and populates a
local connection pool with many connections. A quick spike in new connections results in a
corresponding spike in the number of DBATs that are needed to service the work. When the
new connection spike is over, the additional DBATs that might have been created to service
the spike in new connection requests can end with similar expiration times that are specified
by the POOL THREAD TIMEOUT (POOLINAC) subsystem parameter. The Db2 DDF error monitor
service, which is responsible for detecting and terminating DBATs that exceed their expiration
time, currently runs by using a 2-minute interval cycle. A DBAT concurrent termination spike
can result when many DBATs are detected as expired within a single 2-minute DDF error
monitor cycle.
Most of the changes that are described in this section are applicable only when Db2 DDF
inactive thread support is enabled by setting the DDF THREADS (CMTSTAT) subsystem
parameter to INACTIVE.
When a DBAT is not processing a client request, it does not need to be attached to the client
connection that is awaiting the next request from a client. This capability allows Db2 to service
many more client connections with a smaller population of DBAT server threads. The focus of
the gradual DBAT contraction enhancement is related to the disposition and treatment of
DBATs that are no longer actively processing a client request.
The three general categories of Db2 DBATs that are affected by the changes in this
enhancement are described in the following sections:
- Normal disconnected pooled DBATs
- KeepDynamicRefresh DBATs
- High-performance DBATs
Setting the POOLINAC subsystem parameter to 0 prevents Db2 from terminating pooled
DBATs, but it does not prevent Db2 from terminating a DBAT that was used for over 200
requests.
KeepDynamicRefresh DBATs
For some distributed application environments, most notably the ones that use Db2 packages
that are bound with the KEEPDYNAMIC option, Db2 does extra processing beyond the basic
inactive connection processing to maintain performance. With the KEEPDYNAMIC(YES) option, a
dynamic SQL statement can be prepared, run, and rerun in the next unit of work without
being reprepared.
For long-running applications that use KEEPDYNAMIC packages, Db2 periodically recycles the
DBAT (and associated connection) to clean up persistent storage and package resources that
are tied to the DBAT due to the KEEPDYNAMIC usage. The Db2 function that allows a DBAT with
KEEPDYNAMIC packages to periodically recycle is called KeepDynamicRefresh (KDR)
processing. KDR processing involves changes in both the client drivers and Db2. When a
client connection indicates to Db2 that it both supports KDR processing and is performing
either sysplex workload balancing or seamless failover, the Db2 server takes one of the
following actions:
- At a clean transaction boundary, if the KDR DBAT has existed for over 1 hour, the entire
  DBAT and connection are terminated.
- At a clean transaction boundary, if the KDR DBAT has not been used for a new transaction
  request for over 20 minutes, the entire DBAT and connection are terminated.
When the client driver detects a KDR connection termination, the driver obtains a new
connection transport either to the same member (seamless failover) or to a different member
(Sysplex WLB), and all the current statements that are related to the application connection
are marked as needing to be reprepared.
High-performance DBATs
Although KDR provides significant performance benefits for certain applications, it is not a
suitable solution for most applications. Therefore, an alternative approach for improving
performance was created that is called high-performance (HIPERF) DBATs. You can use the
following methods to activate HIPERF DBAT processing:
- By issuing the -MODIFY DDF PKGREL(BNDOPT|BNDPOOL) command
- For packages that are used by an application connection, by binding at least one of the
  packages with the RELEASE(DEALLOCATE) option
After the DBAT has been used for 200 requests from its client connection, it is disconnected
and terminated like a pooled DBAT. Also, if the DBAT is not used for the amount of time that is
specified on the POOLINAC subsystem parameter since the last request that it processed, the
DBAT is disconnected and terminated. A POOLINAC value of 0 is not acknowledged for
HIPERF DBATs. Instead, when a value of 0 is detected on the POOLINAC subsystem parameter,
HIPERF DBATs are terminated after 120 seconds of not being used.
The changes that are introduced in Db2 13 to improve DBAT termination behavior have two
main objectives:
- To reduce the overall frequency and number of DBAT terminations
- To reduce the number of concurrent DBAT terminations that are caused by a short-term
  DBAT usage spike
Note: Db2 12 APAR PH36114 made a change that limits the maximum number of
pooled and disconnected DBAT terminations to 50 DBATs for each error monitor
cycle.
– Other situations, such as DDF SUSPEND or RESUME, can cause normal disconnected
pooled DBATs to terminate.
HIPERF DBATs are terminated in the following situations:
– When a DBAT that is clean at a commit point with RELEASE(DEALLOCATE) packages has
  been used 200 times by its client connection.
– When its client connection terminates (BNDOPT).
– When a DBAT does not receive a new transaction request within the amount of time
  that is specified by the POOLINAC subsystem parameter.
Before Db2 13, the rationale for terminating DBATs after 200 transactional uses was based on
product experiences where thread storage was acquired within the DBM1 address space
below the bar. Because that storage area was limited and could easily be fragmented,
terminating DBATs after 200 uses was considered an optimal solution to this issue. In Db2 13,
most thread storage is ATB and is shared, which means that terminating DBATs after 200
uses might be detrimental to performance in some cases.
These changes take effect only when the CMTSTAT subsystem parameter is set to INACTIVE.
Table 3-1 and Table 3-2 on page 50 show a summary of the expanded columns. To improve
availability, the LOCKMAX of the RTS table spaces SYSTSTSS and SYSTSISS and of the
associated history table spaces SYSTSTSH and SYSTSISH is changed from -1
(LOCKMAX SYSTEM is in effect) to 0 to avoid lock escalation during RTS externalization.
Table 3-1 Expanded columns (table space statistics)

REORGINSERTS (BIGINT): The number of rows or LOBs that were inserted into the table
space or partition, or loaded into the table space or partition by using the LOAD utility
without the REPLACE option, since the last time the REORG or LOAD REPLACE utilities ran or
since the object was created. A null value indicates that the number of inserted rows or
LOBs is unknown.

REORGDELETES (BIGINT): The number of rows or LOBs that were deleted from the table
space or partition since the last time the REORG or LOAD REPLACE utilities ran or since the
object was created. A null value indicates that the number of deleted rows or LOBs is
unknown.

REORGUPDATES (BIGINT): The number of rows that were updated in the table space or
partition since the last time the REORG or LOAD REPLACE utilities ran or since the object was
created. A null value indicates that the number of updated rows is unknown.

STATSINSERTS (BIGINT): The number of rows or LOBs that were inserted into the table
space or partition, or loaded into the table space or partition by using the LOAD utility
without the REPLACE option, since the last time that the RUNSTATS utility ran or since the
object was created. A null value indicates that the number of inserted rows or LOBs is
unknown.

STATSDELETES (BIGINT): The number of rows or LOBs that were deleted from the table
space or partition since the last time that the RUNSTATS utility ran or since the object was
created. A null value indicates that the number of deleted rows or LOBs is unknown.

STATSUPDATES (BIGINT): The number of rows that were updated in the table space or
partition since the last time that the RUNSTATS utility ran or since the object was created.
A null value indicates that the number of updated rows is unknown.

COPYCHANGES (BIGINT): If the COPY utility ran with a SHRLEVEL value other than CHANGE,
this value is the number of INSERT, UPDATE, and DELETE operations, or the number of rows
loaded, since the last time that the COPY utility ran. If the COPY utility ran with SHRLEVEL
CHANGE, this value is the total number of INSERT, UPDATE, and DELETE operations, or the
number of rows loaded, while the last COPY utility ran and since the last time that the COPY
utility ran. This value does not include operations that result in no change to the data,
such as an update that sets the value of a column to its existing value. A null value
indicates that the number of INSERT, UPDATE, and DELETE operations or the number of rows
loaded is unknown.
Table 3-2 Expanded columns (index space statistics)

REORGINSERTS (BIGINT): The number of index entries that were inserted into the index
space or partition since the last time the REORG, REBUILD INDEX, or LOAD REPLACE utilities
ran, or since the object was created. A null value indicates that the number of inserted
index entries is unknown.

REORGDELETES (BIGINT): The number of index entries that were deleted from the index
space or partition since the last time the REORG, REBUILD INDEX, or LOAD REPLACE utilities
ran, or since the object was created. A null value indicates that the number of deleted
index entries is unknown.

REORGAPPENDINSERT (BIGINT): The number of index entries with a key value that is
greater than the maximum key value in the index or partition that were inserted into the
index space or partition since the last time the REORG, REBUILD INDEX, or LOAD REPLACE
utilities ran, or since the object was created. A null value indicates that the number of
inserted index entries is unknown.

STATSINSERTS (BIGINT): The number of index entries that were inserted into the index
space or partition since the last time that the RUNSTATS utility ran or since the object was
created. A null value indicates that the number of inserted index entries is unknown.

STATSDELETES (BIGINT): The number of index entries that were deleted from the index
space or partition since the last time that the RUNSTATS utility ran or since the object was
created. A null value indicates that the number of deleted index entries is unknown.

COPYCHANGES (BIGINT): If the COPY utility ran with a SHRLEVEL value other than CHANGE,
this value is the number of INSERT, UPDATE, and DELETE operations since the last time that
the COPY utility ran. If the COPY utility ran with SHRLEVEL CHANGE, this value is the total
number of INSERT, UPDATE, and DELETE operations or the number of rows loaded while the
last COPY utility ran, and since the last time that the COPY utility ran. A null value
indicates that the number of INSERT, UPDATE, and DELETE operations is unknown.
z/OS 2.5 provides an enhancement that moves a portion of the control blocks for open data
sets ATB. Much of the improvement is transparent to Db2, and you can take advantage of it
by migrating to z/OS 2.5. In addition, z/OS 2.5 dynamic allocation processing supports
scheduler work blocks (SWBs) for data sets in 64-bit storage, which helps reduce BTB
storage usage for address spaces with many data sets. Moving SWBs above the bar requires
exploitation by Db2. Db2 13 exploits this feature to give you extra room to increase the
number of concurrently open data sets or other Db2 activities, such as more concurrent
threads. Db2 performance might also improve when many data sets are opened or closed
concurrently.
To use this feature, make sure that you use z/OS 2.5 or later and complete one of the
following actions:
- Update the ALLOCxx parmlib member to set the SYSTEM SWBSTORAGE value to ATB:
  SYSTEM SWBSTORAGE(ATB)
  The default value is SWA, which means that SWBs are in 24-bit or 31-bit storage. ATB
  indicates that SWBs can be in 64-bit storage if the application enables it.
- Issue the SETALLOC SYSTEM,SWBSTORAGE=ATB system command.
Updating the ALLOCxx parmlib member is the best practice because the setting remains
effective across initial program loads (IPLs). If the SETALLOC command is used to enable
SYSTEM SWBSTORAGE, you must restart Db2 for the change to take effect.
The following new DSNB280I message is issued during Db2 restart if the new dynamic
allocation function that supports SWB blocks for data sets in 64-bit storage is enabled
successfully:
DSNB280I DYNAMIC ALLOCATION SUPPORT FOR 64-BIT RESIDENT SWB BLOCKS IS ENABLED
Db2 13 increases the upper bound of DSMAX to 400000, but it is a best practice that you migrate
to z/OS 2.5 and enable the new feature before increasing DSMAX.
The DSNDB01.SYSLGRNX directory table space contains the RBA ranges when data sets
are opened or closed for updates. Due to the increasing workloads and the increasing
number of table spaces, the SYSLGRNX table space size can approach the 64 GB limit.
When migrating from Db2 12, the first run of REORG TABLESPACE with SHRLEVEL CHANGE or
REFERENCE on SPT01 and SYSLGRNX at function level V13R1M500 or higher converts the
DSSIZE of the table spaces to 256 GB. The DSSIZE column in SYSTABLESPACE and
SYSTABLEPART for the table spaces is updated to 256 GB. If the function level is reverted
to V13R1M100*, a table space that is already converted to 256 GB remains unchanged. For a
table space that is not converted yet, any run of REORG in V13R1M100* does not convert it to
256 GB.
Recovery to a point in time (PIT) before a REORG is supported. If SPT01 or SYSLGRNX is
recovered to a PIT before the REORG that converted it to 256 GB, the table space DSSIZE is
reverted to 64 GB. The next REORG on the table space converts it back to 256 GB if the
function level is V13R1M500 or higher.
If any one of the catalog or directory objects is recovered to an earlier PIT, all the catalog and
directory objects should be recovered to the same PIT.
Under heavy workloads, z/OS structure utilization monitoring automatically expands the
active CF lock structure storage. However, that expansion does not help when a spike in
locking activity causes lock requests to be rejected due to insufficient record list entries
(RLEs) before z/OS has a chance to start the CF lock structure alteration. Because IRLM
can determine the needed storage at a finer granularity, it adds a new internal monitoring
mechanism that can dynamically expand the CF lock structure size and process the lock
requests instead of rejecting them.
Message DXR189I
The syntax of this message is as follows:
DXR189I <irlmname> ALTERING LOCK STRUCTURE SIZE
When an IRLM member detects a full condition for storage in the CF lock structure, IRLM
alters the lock structure storage size if the allocated size is less than the maximum size and
the CFRM policy permits altering the lock structure. This message is displayed on the IRLM
member that initiated the storage alteration.
IRLM initiates an internal function to alter the CF lock structure size while remaining online.
Message DXR190I
The syntax of this message is as follows:
DXR190I <irlmname> ALTER LOCK STRUCTURE COMPLETED
The ALTER function for CF lock structure storage that is initiated by IRLM completed. If the CF
structure size alteration was successful, the increased record list space is available for use.
3.13 Summary
Db2 13 provides many new and changed features that greatly enhance its availability and
scalability. Some of these important features focus on PBR table spaces that use RPN
because this table space attribute provides unmatched scalability and availability. You can
take advantage of PBR RPN without incurring higher CPU usage or detrimental performance
impact.
Db2 13 also provides essential optimization and expansion of database and system
resources, including storage usage, database connections, real-time statistics, concurrent
opened data sets, directory table spaces, and locking capacity.
Together, these enhancements, which are available at function level V13R1M500, aim to
significantly increase the availability and scalability of your Db2 system.
Chapter 4. Performance
As in previous versions, Db2 13 focuses on providing important performance enhancements
and benefits, especially in the areas of CPU cost reduction, scalability, reliability, and
consistency. These enhancements help to optimize the performance of your Db2 system and
reduce the need for unplanned performance tuning.
At the time of writing, Db2 13 performance evaluations are not yet finalized. IBM expects
Db2 13 performance to be equivalent to or better than that of Db2 12 for the same workloads.
This chapter describes the most notable performance enhancements in Db2 13, many of
which are available by migrating to Db2 13 (function level V13R1M100).
When FTB was initially introduced, you had to define indexes as UNIQUE to qualify for FTB
access. The feature supported unique indexes with INCLUDE columns, but the length of the
index entry, including the key and any additional columns, was capped at a maximum size of
64 bytes.
Function level 508 (V12R1M508) removed some of the restrictions. Only the key size of the
ordering columns had to be 64 bytes or less, and columns on the INCLUDE list no longer
counted toward the size limit for the index key. However, fast index traversal was not used for
the columns in the INCLUDE list.
APAR PH30978, when applied on top of FL 508, strengthened FTB support for non-unique
indexes. After you reset the value of the FTB_NON_UNIQUE_INDEX subsystem parameter from the
default NO to YES, all non-unique indexes became eligible for fast traversal processing.
However, as with unique indexes, a size cap applied: the key size for the columns of
non-unique indexes had to be 56 bytes or less.
With FL500 (V13R1M500), Db2 13 continues to enhance FTB and extends the maximum key
length for FTB-eligible indexes as follows:
- For unique indexes, the key length is limited to 128 bytes.
- For unique indexes with INCLUDE columns, the unique part of the index must not exceed
  128 bytes.
- For non-unique indexes, the key length is limited to 120 bytes.
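To observe which indexes currently use FTB structures, you can use the -DISPLAY STATS
command with the INDEXMEMORYUSAGE (IMU) option (output details vary by release and
maintenance level):

-DISPLAY STATS(INDEXMEMORYUSAGE)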
As described in 4.1, “Fast index traversal enhancements” on page 58, Db2 12 introduced an
in-memory technique called fast index traversal (FTB). However, when FTB is not used for a
particular index, Db2 must traverse the index tree from the root page to access the leaf
pages. This traversal occurs during INSERT and DELETE operations for non-clustering indexes
with a low cluster ratio, and during UPDATE operations for all indexes. The process is
CPU-intensive and can significantly degrade performance.
In Db2 12, index look-aside is used only by SQL INSERT and SQL DELETE operations on the
clustering index or non-clustering index with a high cluster ratio that is based on catalog
statistics.
Db2 13 enhances the usage of index look-aside by storing the last accessed index non-leaf
and leaf page information for SQL INSERT, DELETE, and UPDATE statements when the thread
does more than three INSERT, DELETE, and UPDATE operations in the same commit scope.
Index look-aside is now enabled for all indexes during SQL INSERT, DELETE, and UPDATE
operations, regardless of the cluster ratio. You do not need to make any application or index
changes to leverage this enhancement.
This index look-aside enhancement reduces index maintenance cost for workloads that are
INSERT, DELETE, and UPDATE intensive by reducing the number of index GETPAGE requests. The
same benefits are also evident when FTB is used because it reduces the index GETPAGE cost
for the INSERT, DELETE, and UPDATE operations.
In addition, workloads can benefit from index look-aside even when the catalog statistics on
the index are not accurate because RUNSTATS has not run recently. This enhancement
dynamically adjusts the usage of index look-aside to avoid any impact when Db2 detects that
the INSERT, DELETE, and UPDATE pattern is random and index look-aside does not provide a
benefit.
Db2 internal measurements show performance improvements with no CPU or elapsed time
regression for the INSERT, DELETE, and UPDATE workloads that were evaluated in both
data-sharing and non-data-sharing environments. You can expect more benefits when fast
index traversal is not used for these indexes. A greater reduction in GETPAGE activity is
expected for non-clustering indexes, especially when they have a high cluster ratio.
By reducing index maintenance cost during SQL INSERT, DELETE, and UPDATE operations, you
can create and maintain more indexes on a table to provide faster access to the data and
index-only access to data while avoiding sort processing for certain SQL statements. With
these CPU improvements, your Db2 installation can handle new workloads while
accommodating existing workload growth.
4.3.1 Partition retry process
When inserting into a PBG table space, the INSERT transaction goes through the Db2 locking
hierarchy protocol by first acquiring a partition-level lock and then lower-level locks, such as
page or row locks. To achieve better performance in a high-concurrency environment, the
transaction requests the partition lock conditionally when the PBG table space contains more
than one partition. If the Internal Resource Lock Manager (IRLM) rejects the lock request for
this partition, the INSERT operation skips the partition and the space search algorithm moves
on to the next partition. The same operation is performed for the next available partition until
the INSERT operation either completes successfully or fails after an exhaustive search through
all existing partitions.
In this case, the INSERT operation fails because it cannot acquire a conditional partition
lock. The application receives an SQL return code -904 (resource not available). The reason
code that is associated with this SQL code can be either “partition full” (00C9009C) or
“conditional lock failure” (00C90090). Either way, there is not enough information for you to
diagnose the problem.
If the lock request is conditional, there is no lock wait time when IRLM detects contention for
an incompatible lock; IRLM informs Db2 that the lock is not available. Because the INSERT
operation does not retry after cycling through all available partitions, the INSERT statement is
terminated.
Db2 13 improves the cross-partition search algorithm for INSERT transactions by retrying the
partition after the initial conditional lock failure. Db2 maintains an in-memory list of up to
10 partitions that previously experienced a partition lock request failure. This list typically
represents the higher partition numbers within the table space. However, even though up to
10 partitions are tracked, only five or fewer partitions are retried. The retry process can
acquire the partition lock conditionally or unconditionally. When an unconditional partition lock
is requested, the lock timeout interval is adjusted to approximately 1 second regardless of the
setting of the IRLMRWT subsystem parameter (ZPARM). This adjusted interval helps prevent the
long lock wait times that degrade system performance.
An application can still fail after the cross-partition retry process if the INSERT operation
cannot successfully acquire the unconditional partition lock. The DSNT376I timeout message
and the DSNT501I resource unavailable message, as shown in the following output, provide
information about the lock contention, timeout interval, and resource consumption, which you
can use to take corrective actions:
DSNT376I -DB2A PLAN=DSNTEP3 WITH 691
CORRELATION-ID=INSERTA4
CONNECTION-ID=BATCH
LUW-ID=DSNCAT.SYEC1DB2.DAE53CAF57D2=9
THREAD-INFO=SYSADM:BATCH:SYSADM:INSERTA4:DYNAMIC:3:*:*
IS TIMED OUT. ONE HOLDER OF THE RESOURCE IS PLAN=DSNTEP3
WITH
CORRELATION-ID=TQI01005
CONNECTION-ID=BATCH
LUW-ID=DSNCAT.SYEC1DB2.DAE53CA276CB=8
THREAD-INFO=SYSADM:BATCH:SYSADM:TQI01005:DYNAMIC:1:*:*
ON MEMBER DB2A
REQUESTER USING TIMEOUT VALUE=1 FROM IRLMRWT
In earlier Db2 versions, when a PBG table space has many empty partitions at the end, the
descending cross-partition search algorithm often uses the last physical partitions and can
leave many empty partitions unused in between. When Db2 reaches the first physical
partition during a descending partition search, it wraps around and looks at the last physical
partition next.
Db2 13 improves the descending partition search algorithm by reusing other trailing empty
partitions more efficiently before the last physical partition is used. To do this task, Db2 now
tracks the highest non-empty partition within the table space at run time.
During the execution of an INSERT statement, Db2 caches in memory the highest non-empty
partition that is accessed. Tracking starts when a data set is opened and continues until it
is closed.
When performing a descending cross-partition search and after reaching partition 1, the
cached highest non-empty partition is used as the next partition to search. This information is
tracked separately by each data-sharing member without any cross-communication or
validation between the different members of a data-sharing group. The accuracy of the last
non-empty partition that was accessed depends on the activities within each data-sharing
member. The workload balance between each data-sharing member can be an important
factor for reusing trailing empty partitions more efficiently.
With APAR PH31684, Db2 introduced the use of the SORTL instruction for sorting data during
the execution of SQL statements by using the z15 sort accelerator. Initially, several rules and
restrictions were added to the Db2 sort component to determine when and whether the SORTL
instruction can be used.
Here is a list of some conditions that must be met for SORTL to be used by the Db2 sort
component:
- A z15 processor must be used.
- No plans or packages that were bound before Db2 12 are involved.
- The key size must be greater than 0 but equal to or smaller than 136 bytes, and the data
  size must be equal to or smaller than 256 bytes.
Db2 13 maximizes the use of SORTL by analyzing data from previous executions, such as
SRTPOOL size, key size, and data size. This enhancement helps reduce the workfile usage and
storage consumption, which results in the reduction of CPU usage and elapsed time for
running SQL statements.
To address this issue, Db2 must generate more unique hash values for lock resources, which
are used in IRLM and z/OS cross-system extended services (XES) for locking. Db2 13
introduces internal changes to the resource hash values of page P-locks. The new hash
algorithm provides a more balanced distribution of resource hash values, which reduces false
locking contentions and CPU processing impacts. Db2 13 also modifies data page and index
page header definitions. Large object (LOB) and XML pages are not affected by this change.
If you activate V13R1M500, run REORG, and start using the new lock hash value for PBR RPN,
you can activate a lower function level V13R1M100*. In this situation, Db2 continues to use
the new hashing algorithm for page P-locks because V13R1M100* does not allow
co-existence with or fall back to Db2 12. A new flag that is called LOGRPNH2 in page set open
and checkpoint log records indicates whether the table spaces were converted.
This capability was added as a Db2 12 continuous delivery enhancement and then simplified
with the externalization of an online-changeable subsystem parameter (ZPARM) that is called
REORG_INDEX_NOSYSUT1. IBM performance measurements showed that this enhancement
resulted in a significant reduction of elapsed time and CPU time. For partitioned indexes, the
tests showed up to 86% reduction of elapsed time, and the CPU reduction was even better.
Db2 13 further simplifies the execution of the REORG INDEX utility. With the activation of
function level 500 (V13R1M500), you no longer need to rely on the REORG_INDEX_NOSYSUT1
subsystem parameter. The NOSYSUT1 behavior that is described is used whenever possible for
REORG INDEX with SHRLEVEL REFERENCE or CHANGE.
For more information, see Chapter 8, “IBM Db2 for z/OS utilities” on page 111.
Both metrics are accessible through more fields in Instrumentation Facility Component
Identifier (IFCID) 230 and 254 trace records and by using the -DISPLAY GROUPBUFFERPOOL
command. For more information, see Chapter 9, “Instrumentation and serviceability” on
page 131.
4.7 Summary
Db2 13 continues the never-ending pursuit of minimizing processing cost and reducing
elapsed time for all types of workloads in Db2. Through a combination of internal
enhancements, monitoring improvements, and IBM Z feature exploitation, Db2 13 helps you
run new and existing workloads faster while minimizing cost and optimizing performance.
You can now use Db2 13 to set application-granularity lock controls such as timeout interval
and deadlock resolution to match the individual application’s need. You can do this task
without changing the application’s source code, regardless of whether the application is local
on the z/OS system or originates from a remote system.
Db2 13 introduces a mechanism to optimize the success of DDL break-in without needing to
duplicate versions of the application packages and without impacting non-dependent
applications.
Another challenge for Db2 administrators is managing new function permission for remote
applications that connect through Db2 Data Server Drivers. As an administrator, you can
configure two or more collections of packages for these drivers to allow new functions for one
set of applications while blocking new functions from another set. The assignment to the
appropriate collection can now be managed by using the DSN_PROFILE tables.
Db2 13 added support for SQL Data Insights with several new built-in functions. For more
information about this feature, see Chapter 6, “SQL Data Insights” on page 81.
The first version of Db2 for z/OS incorporated the Internal Resource Lock Manager (IRLM) to
handle locking, and since that first Db2 version, all lock requests are susceptible to contention
resolution, granting, waiting, and resuming. A key variable in lock-handling behavior is the
timeout interval, which specifies the maximum amount of time a lock request can wait in
contention before the request is denied. Nearly all logical lock requests from a Db2 system to
IRLM used a common timeout interval that is set in the IRLMRWT subsystem parameter. This
parameter, which is specified in seconds, is used as the system-wide timeout interval for lock
requests regardless of where they come from. In Db2 12 and previous versions, this
subsystem parameter was not online changeable, which means that you had to recycle the
Db2 subsystem whenever you had to change this value.
However, not all applications have the exact same characteristics. Some applications can
tolerate a lock wait time interval that is longer or shorter than the IRLMRWT value. Splitting an
application off into another Db2 subsystem with a more suitable timeout interval according to
application affinities increases management cost and decreases availability and resiliency.
To help manage applications that have different characteristics in the same Db2 subsystem or
data-sharing group, Db2 13 provides the new CURRENT LOCK TIMEOUT special register.
You can set this new special register at an application level to control the number of seconds
that the application waits for a lock before timing out. The Db2 for z/OS support is compatible
with Db2 for Linux, UNIX, and Windows (LUW) support of this special register. To use this
feature, Db2 must be at function level V13R1M500 or later and the application must be set to
use application compatibility (APPLCOMPAT) (V13R1M500) or later. For more information,
see 5.1.1, “CURRENT LOCK TIMEOUT” on page 66.
Similarly, when different lock requests from multiple threads form a deadlock cycle, IRLM
resolves the deadlock by choosing a victim whose request is then denied. Previously, your
application could do little to influence which thread is chosen as the deadlock victim. Batch
data definition jobs frequently fail due to deadlocks. Attempting to address this issue through
restart or retry logic and careful scheduling is increasingly difficult in continuous availability
environments. Therefore, when an SQL Data Definition Language (DDL) statement runs,
failing that statement instead of other statements might not be optimal.
The SET CURRENT LOCK TIMEOUT statement accepts the following values:
NULL The IRLMRWT value is used to determine how long a lock request waits.
This NULL value is the default value for the special register. A query of
CURRENT LOCK TIMEOUT returns the IRLMRWT value.
0 or NOT WAIT The lock request is conditional. If a lock cannot be obtained, an error is
returned immediately. In this case, the DSNT376I message and
Instrumentation Facility Component Identifier (IFCID) 196 are
not written.
-1 or WAIT A timeout cannot occur. The application waits indefinitely until the lock
is released by the holder.
WAIT n or n n is an integer or variable value 1 - 32767 that indicates the number of
seconds to wait for a lock.
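The following statements illustrate the accepted forms:

SET CURRENT LOCK TIMEOUT = 10;       -- wait up to 10 seconds for a lock
SET CURRENT LOCK TIMEOUT NOT WAIT;   -- conditional lock requests that fail immediately
SET CURRENT LOCK TIMEOUT WAIT;       -- wait indefinitely
SET CURRENT LOCK TIMEOUT NULL;       -- revert to the IRLMRWT value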
Important: Be careful when using a high value or -1 for the CURRENT LOCK TIMEOUT
special register because the application thread might be holding many resources for a long
time, which might impact other concurrent application threads that need those resources.
To aid in diagnosing such situations, you can use IFCID 437 to monitor the value that is
specified on the SET CURRENT LOCK TIMEOUT SQL statement. The Db2 accounting trace
record includes a new counter for successful executions of this statement.
Additionally, you can use the new SPREG_LOCK_TIMEOUT_MAX subsystem parameter to control the
maximum value that can be specified for the SET CURRENT LOCK TIMEOUT statement. The
default value for SPREG_LOCK_TIMEOUT_MAX is -1, which allows all valid values to be set for the
CURRENT LOCK TIMEOUT special register. After you migrate to Db2 13, the subsystem
parameter load module must be rebuilt. If it is not rebuilt, the SPREG_LOCK_TIMEOUT_MAX value
might be set to 0 unintentionally, which can cause a SET CURRENT LOCK TIMEOUT statement
to fail.
Because no corresponding bind option exists for this special register, CURRENT LOCK
TIMEOUT is applicable to locks for both static and dynamic SQL statements in an application.
New fields in the trace record IFCID 196 and new text in the DSNT376I message are added
to help identify when a lock request uses the IRLMRWT or CURRENT LOCK TIMEOUT value.
For example:
DSNT376I PLAN=APPL1 WITH CORRELATION-ID=correlation-id1
CONNECTION-ID=connection-id1 LUW-ID=luw-id1 THREAD-INFO=thread-information1 IS
TIMED OUT.
ONE HOLDER OF THE RESOURCE IS PLAN=APPL2 WITH
CORRELATION-ID=correlation-id2
CONNECTION-ID=connection-id2 LUW-ID=luw-id2 THREAD-INFO=thread-information2 ON
MEMBER member-name.
REQUESTER USING TIMEOUT VALUE 10 FROM Special Register. HOLDER USING
TIMEOUT VALUE 30 FROM IRLMRWT.
You can use the following IRLM commands to obtain information or change the setting of the
IRLM deadlock value:
- F irlmproc,STATUS displays the IRLM deadlock interval.
- F irlmproc,SET,DEADLOCK=nnnn (where nnnn is the number of milliseconds) changes the
  IRLM deadlock interval.
Lock requests from a DDL statement, such as CREATE, ALTER, or DROP, wait longer than the
IRLMRWT value when the DDLTOX subsystem parameter is set to a value greater than 1.
However, the DDLTOX setting affects all DDL statements. A database administrator (DBA) who
performs an object schema change might consider setting the CURRENT LOCK TIMEOUT to
a specific value that is suitable for the specific DDL operation being performed.
Additionally, the CURRENT LOCK TIMEOUT special register value is also acknowledged by
certain Db2 internal synchronization processes that are similar to lock requests. Those
processes include claims and drains, and when a DDL statement waits for cached dynamic
SQL statements to be quiesced. Conversely, certain Db2 internal processes might use a
different value than the CURRENT LOCK TIMEOUT value in effect. For example, a P-lock
request does not acknowledge the CURRENT LOCK TIMEOUT value because P-locks are
owned by the Db2 subsystem and not by the application. The CURRENT LOCK TIMEOUT
special register affects logical locks only in an application.
5.1.2 DEADLOCK_RESOLUTION_PRIORITY
Before Db2 13, it is not easy to influence which requester Db2 chooses to cancel when
multiple processes form a deadlock cycle with lock requests such that no process can
proceed. Some techniques are available, but most have drawbacks or are hard to control. For
example, a process that performs many database update activities with many log records is less likely to
process that performs many database update activities with many log records is less likely to
be denied than a process that makes fewer updates. Also, setting the BMPTOUT subsystem
parameter for Information Management System (IMS) batch messaging processing (BMP)
transactions and the DLITOUT subsystem parameter for IMS data language interface (DLI)
transactions can give those transactions a heavier weighting factor in the deadlock resolution
process. However, not all IMS BMP or DLI applications have the same characteristics. Setting
a shared high priority for all these types of applications might not be the best decision,
especially when they are run with important, non-IMS applications. An IBM Customer
Information Control System (IBM CICS®) or IMS transaction can specify its own weighting
factor on the attachment facility Create Thread API, but this method is allowed only for those
types of applications.
Like other global variables, the WRITE privilege is required on this global variable to run the
SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY statement, and the READ privilege is
required for an application to query the value of this global variable. Therefore, the DBA has a
way to control which application uses this global variable and can monitor its setting because
the usage of this global variable can affect other applications.
A DBA can choose a deadlock resolution priority value for a scheduled data definition process
that helps ensure that the process is not denied in an eventual deadlock scenario, thus
optimizing the likelihood of success.
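For example, a DBA might run the following statement at the start of a scheduled DDL job so
that it is the least likely to be chosen as the deadlock victim (the documented value range
is 0 - 255; the value that is shown is illustrative):

SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY = 255;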
Other factors are involved in the deadlock resolution process. For example, a lock request for
a no-logged table is unlikely to be denied. A higher DEADLOCK_RESOLUTION_PRIORITY
value does not ensure that an application is never denied. Therefore, an application that
sets this global variable should still be ready to handle SQLCODE -911 or -913 errors.
The trace record IFCID 172 reports the worth values of the holders and waiters that are
involved in the deadlock cycle and whether the worth values are from the
DEADLOCK_RESOLUTION_PRIORITY global variable.
In conclusion, Db2 13 provides two new special registers and a system built-in global variable
to enable an application to control its own lock timeout and deadlock behaviors that suit the
application. Furthermore, the DBA can set the CURRENT LOCK TIMEOUT and
DEADLOCK_RESOLUTION_PRIORITY values for certain applications without having to
incur the cost of changing the applications, as described in 5.2, “Profile table enhancements”
on page 69.
Until now, these features were applicable to remote applications only. They are
useful because remote applications do not need to be changed to take advantage of certain
special registers or global variables. Making application changes can be a resource-intensive
task that involves understanding the applications and thoroughly testing the changes before
they are promoted to the production system. Using a profile to add the setting of a special
register or global variable eliminates the need for multiple versions of an application,
especially when these settings are temporary.
To further reduce the costs that are associated with the application change process and
simplify the application management task, Db2 13 extends the profile table function to qualify
both remote and local application processes and adds support for the CURRENT LOCK
TIMEOUT special register and the DEADLOCK_RESOLUTION_PRIORITY global variable.
For example, you can specify that when an application package with the client application
name ABCD runs or when the primary authorization ID UTHRED runs any application
package, the CURRENT LOCK TIMEOUT is set to 5. This behavior means that lock requests
for SQL statements in these local or remote applications can wait for a maximum of
5 seconds in a case of contention.
The Db2 DDF address space must be started to use the system monitoring function of the
profile tables, which includes setting special registers and global variables. Even if you use
the profile tables for local applications only, Db2 must be started with the DDF subsystem
parameter set to AUTO or COMMAND. Db2 issues the message DSNT761I if DDF is not started,
as shown in Figure 5-1 on page 71.
In general, after issuing the -START PROFILE command, you should query the
DSN_PROFILE_HISTORY and DSN_PROFILE_ATTRIBUTES_HISTORY tables for any
rejected rows. An error message “DDF NOT LOADED” might be issued in a
DSN_PROFILE_ATTRIBUTES_HISTORY row if the Db2 DDF address space is not active.
Function level V13R1M500 is required for the SET CURRENT LOCK TIMEOUT statement, and
function level V13R1M501 is required for the SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY
statement when these statements are specified in the DSN_PROFILE_ATTRIBUTES table.
The row is rejected if the current function level requirement is not met when the -START
PROFILE command is issued.
When setting special registers or global variables for local applications, you can identify the
local applications by using only the following filtering criteria in the columns of the
DSN_PROFILE_TABLE table:
- AUTHID, ROLE, or both
- COLLID, PKGNAME, or both
- One of CLIENT_APPLNAME, CLIENT_USERID, or CLIENT_WORKSTNNAME
Table 5-1 and Table 5-2 on page 72 show how you might specify the new special register
and global variable in profile tables.
Table 5-1 (DSN_PROFILE_TABLE rows):
PROFILEID 1: COLLID COLLECTION1, PROFILE_TIMESTAMP 2022-03-26-22.46.45.78344
PROFILEID 2: ROLE DB2DEV, PROFILE_TIMESTAMP 2022-03-26-22.46.47.78344
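A minimal sketch of the inserts that might produce these rows follows. The PROFILE_ENABLED
column and the SPECIAL_REGISTER and GLOBAL_VARIABLE keyword values follow the standard
profile table layout, but verify them against your catalog before use:

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, COLLID, PROFILE_ENABLED)
VALUES (1, 'COLLECTION1', 'Y');

INSERT INTO SYSIBM.DSN_PROFILE_TABLE
       (PROFILEID, ROLE, PROFILE_ENABLED)
VALUES (2, 'DB2DEV', 'Y');

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1)
VALUES (1, 'SPECIAL_REGISTER', 'SET CURRENT LOCK TIMEOUT = 10');

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1)
VALUES (2, 'SPECIAL_REGISTER', 'SET CURRENT LOCK TIMEOUT = 15');

INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
       (PROFILEID, KEYWORDS, ATTRIBUTE1)
VALUES (2, 'GLOBAL_VARIABLE', 'SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY = 5');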
In this example, after the profile is started by issuing the -START PROFILE command, the
following actions occur:
- A remote package that is bound in collection ID COLLECTION1 uses the CURRENT
  LOCK TIMEOUT value of 10 seconds.
- A local package that is run in a trusted context with a role name of DB2DEV uses the
  CURRENT LOCK TIMEOUT value of 15 seconds.
- Both remote and local packages that run under the trusted context with the role name of
  DB2DEV use the DEADLOCK_RESOLUTION_PRIORITY value of 5.
Db2 looks up the profile tables to set the special register, global variable, or both when each
local package is loaded for execution. A package is loaded on the first SQL statement
execution in that package. If the package is released at COMMIT or ROLLBACK due to the
RELEASE(COMMIT) bind option behavior, the next SQL statement that runs after COMMIT or
ROLLBACK loads the package again. A package for a stored procedure or user-defined function
(UDF), even if invoked from a remote SQL statement, is considered a local package. At
package load time, if the application has not already run an explicit SQL SET statement for
the special register or global variable, Db2 runs the SET statement from the profile table. An
explicit SQL SET statement in the application takes higher precedence than the SET statement
in the profile table.
Special register values are not saved and restored at package switching time, which is when
a package uses the language CALL statement to invoke another package (not the SQL CALL
statement to run a Db2 stored procedure). Global variables are not saved and restored, even
when a stored procedure or UDF is invoked. To avoid confusion about which special register
and global variable values are used in an application that includes several packages, a best
practice is to not mix the SQL SET statements in both the user application and the Db2 profile
table.
In this example, PackageA and PackageB contain the statements that are shown
in Figure 5-2.
Applications that contain static Data Manipulation Language (DML) statements, such as
SELECT, INSERT, UPDATE, and DELETE, depend on the referenced database objects. To protect
data integrity, these applications must be serialized against the DDL process through a
package lock. Any threads that run the application request the package lock in the share state,
and any DDL processes request the package lock in the exclusive state. The amount of time
that the application threads hold the package lock is controlled by the RELEASE bind option.
The RELEASE(COMMIT) option allows the package to be released at the end of a transaction
when there are no open held cursors and the KEEPDYNAMIC(NO) bind option is in effect. The
same package might need to be loaded again on subsequent transactions.
Conversely, packages that are bound with the RELEASE(DEALLOCATE) option remain with the
running thread until the thread is deallocated. Threads that run with packages that use the
RELEASE(DEALLOCATE) bind option tend to exist longer, including high-performance Database
Access Threads (DBATs). As a result, although the applications perform better by using this
option, they can hold the package locks for a long time, which can prevent a DDL process
from obtaining the same lock. Therefore, when a database administrator must run a DDL
statement, the high-performance DBATs must be stopped or deallocated.
Db2 13 further increases the likelihood of DDL success by providing the new
RELEASE_PACKAGE keyword in the profile tables. This keyword provides you more control over
managing packages that are bound with the RELEASE(DEALLOCATE) bind option before you
attempt to make DDL changes.
For example, you can build a profile where either a local application package or any
application that originates from the 191.3.5.1 IP address runs. If the package is bound with
RELEASE(DEALLOCATE), Db2 changes to RELEASE(COMMIT) behavior as soon as possible. This
behavior is controlled by the new RELEASE_PACKAGE value in the KEYWORDS column of
the DSN_PROFILE_ATTRIBUTES table.
If a profile match exists, Db2 changes from the RELEASE(DEALLOCATE) behavior to the
RELEASE(COMMIT) behavior at these points.
Note: Because the package is loaded for execution on the first SQL statement in the
package, COMMIT statements should be issued frequently so that the profile tables can
take effect.
The Db2 DDF address space must be started to use the system monitoring function of the
profile tables, which includes setting special registers and global variables. Even if you use
the profile tables for local applications only, you must start Db2 with the DDF subsystem
parameter set to AUTO or COMMAND. Db2 issues the message DSNT761I if DDF is not started,
as shown in Figure 5-3.
In general, after issuing the -START PROFILE command, you should query the
DSN_PROFILE_HISTORY and DSN_PROFILE_ATTRIBUTES_HISTORY tables to ensure
that all inserted rows are accepted and that no rows were rejected. An error message DDF NOT
LOADED might be issued in a DSN_PROFILE_ATTRIBUTES_HISTORY row if the Db2 DDF
address space is not active.
Function level V13R1M500 is required when the ‘RELEASE_PACKAGE’ value is specified in the
DSN_PROFILE_ATTRIBUTES table. The row is rejected if the current function level
requirement is not met when the -START PROFILE command is issued.
Table 5-3 and Table 5-4 show how you might specify the new keyword in profile tables.
Table 5-3 (DSN_PROFILE_TABLE): PROFILEID 99, COLLID COLLECTIONA,
PROFILE_TIMESTAMP 2021-02-28-14.55.56.780277
Table 5-4 (DSN_PROFILE_ATTRIBUTES): PROFILEID 99, KEYWORDS RELEASE_PACKAGE,
ATTRIBUTE1 COMMIT, ATTRIBUTE2 1
Assume that two packages are involved in this example, both of which are run in local plans
only:
COLLECTIONA.P1: Bound with the RELEASE(COMMIT) option.
COLLECTIONA.P2: Bound with the RELEASE(DEALLOCATE) option.
Assume that thread THREAD1 is running package COLLECTIONA.P2, which was loaded
with the RELEASE(DEALLOCATE) option. After the profile starts through the -START PROFILE
command, the following actions occur:
When new threads run package COLLECTIONA.P2, this package is loaded with the
RELEASE(COMMIT) option.
When the existing thread THREAD1, which loaded the package COLLECTIONA.P2
before the profiles were started, goes through a COMMIT or ROLLBACK, this package can be
released if there are no open held cursors.
DSN_PROFILE_TABLE rows (PROFILEID, COLLID, PKGNAME):
1 COL1 P1
2 COL2 P2
DSN_PROFILE_ATTRIBUTES rows (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2):
1 RELEASE_PACKAGE COMMIT 1
2 RELEASE_PACKAGE COMMIT 2
3. Run the -STA PROFILE command, which loads the profile table rows into the in-memory
profile table.
As a result of these steps, the following behavior changes occur:
– Existing threads THREAD1 and THREAD2 release the packages COL1.P1 and
COL2.P2 when they reach a COMMIT or ROLLBACK even though they are bound with the
RELEASE(DEALLOCATE) option, assuming that these packages have no open WITH HOLD
cursors and are not bound with the KEEPDYNAMIC(YES) option.
– New threads load the packages COL1.P1 and COL2.P2 with the RELEASE(COMMIT)
option even though they are bound with RELEASE(DEALLOCATE).
4. Run a DDL statement against the COUNTY.VACCINATION table with the wanted
CURRENT LOCK TIMEOUT and DEADLOCK_RESOLUTION_PRIORITY values.
5. If the DDL statement is successful, disable or delete the rows in the
DSN_PROFILE_TABLE and DSN_PROFILE_ATTRIBUTES tables and restart the profiles.
Optionally, you can also issue the -STOP PROFILE command. Then, the packages COL1.P1
and COL2.P2 are loaded for execution with the RELEASE(DEALLOCATE) option (a consolidated
sketch of these steps follows).
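Putting steps 1 - 5 together, a hedged sketch of the SQL and commands might look like the
following. The ATTRIBUTE2 values simply mirror the example rows above; verify their exact
semantics in the Db2 documentation before using this pattern.
-- Steps 1 and 2: insert the profile and attribute rows
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
  (PROFILEID, COLLID, PKGNAME, PROFILE_ENABLED)
  VALUES (1, 'COL1', 'P1', 'Y');
INSERT INTO SYSIBM.DSN_PROFILE_TABLE
  (PROFILEID, COLLID, PKGNAME, PROFILE_ENABLED)
  VALUES (2, 'COL2', 'P2', 'Y');
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
  (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
  VALUES (1, 'RELEASE_PACKAGE', 'COMMIT', 1);
INSERT INTO SYSIBM.DSN_PROFILE_ATTRIBUTES
  (PROFILEID, KEYWORDS, ATTRIBUTE1, ATTRIBUTE2)
  VALUES (2, 'RELEASE_PACKAGE', 'COMMIT', 2);
-- Step 3: load the rows into the in-memory profile table
-STA PROFILE
-- Step 5 (after the DDL in step 4 succeeds): disable the profiles and restart them
UPDATE SYSIBM.DSN_PROFILE_TABLE
  SET PROFILE_ENABLED = 'N'
  WHERE PROFILEID IN (1, 2);
-STA PROFILE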
Even though dependent packages run with the RELEASE(COMMIT) option, they can still hold
package locks, and they are quiesced only at the end of a transaction that concludes with no
open held cursors. Therefore, you can retry the DDL statement until it completes successfully.
With the profile started to temporarily override the RELEASE(DEALLOCATE) behavior, the chance
that DDL can break in improves without having to bind a separate copy of the packages with
RELEASE(COMMIT).
In a test environment, you can use the trace record IFCID 177 to monitor the
RELEASE(COMMIT) and RELEASE(DEALLOCATE) behavior. This trace record is written when a
package is successfully loaded for execution. If a package runs with the RELEASE(DEALLOCATE)
behavior, even if several COMMIT or ROLLBACK statements run, only one IFCID 177 trace record
is written.
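For example, in a test system you might activate the record with a command along these
lines (a sketch; the performance trace class and destination are illustrative choices):
-START TRACE(PERFM) CLASS(30) IFCID(177) DEST(SMF)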
The same behaviors that make high-performance DBATs advantageous for performance can
make package and DDL management more difficult because packages remain allocated and
locks remain held for a longer duration. To allow DDL or a BIND operation to break in,
high-performance DBATs are temporarily disabled by issuing the -MODIFY DDF
PKGREL(COMMIT) command, which causes Db2 to follow the rules of the RELEASE(COMMIT) bind
option, which are applied the next time that the package is loaded. However, the results of the
-MODIFY DDF PKGREL command are applied at the Db2 subsystem level.
You can use the profile RELEASE_PACKAGE value to selectively disable high-performance
DBATs when DDL or a BIND operation must break into objects that are held by only a few
packages. This approach is an alternative to using the -MODIFY DDF PKGREL(COMMIT)
command, which affects the entire Db2 subsystem.
You can define profiles with the appropriate filtering criteria by using the
RELEASE_PACKAGE value to change packages that are bound with the
RELEASE(DEALLOCATE) bind option. The RELEASE_PACKAGE profile is evaluated and applied
during package load, commit, and rollback. At the next commit or rollback for the server
thread, if all packages that are allocated by that thread no longer have the
RELEASE(DEALLOCATE) behavior (either through the original BIND option or by a profile
downgrade), then the connection goes inactive and that thread is pooled. The result is that
high-performance DBAT behavior is selectively disabled for that thread.
A common approach for controlling the introduction of new Db2 features but still allowing
applications to use new features is to create two sets of Data Server Driver packages: One
set that locks down the minimum set of features in use by applications, and another set that is
used by applications that require new SQL or application-related functions that are introduced
in function levels.
The APPLCOMPAT level is not known until the first package is loaded to process the initial
SQL statement. During the processing of the SQL statement, Db2 ensures that the Data
Server Driver level supports the APPLCOMPAT setting; otherwise, SQLCODE -30025 is
returned.
When an application requires an SQL feature that is delivered at a Db2 APPLCOMPAT level
later than V12R1M500, first you must ensure that the Data Server Driver level that is used by
the application supports the required APPLCOMPAT level.
The following example uses the Db2Binder command to create the IBM Data Server Driver for
JDBC and SQLJ packages under the collection NULLID_NFM with the bind option
APPLCOMPAT V13R1M500:
java com.ibm.db2.jcc.DB2Binder -url jdbc:db2://sys1.svl.ibm.com:5021/STLEC1 -user
sysadm -password XXXXXXXX -bindoptions "APPLCOMPAT V13R1M500" -action REPLACE
Alternatively, you can use the DSNTIJLC job to copy the driver packages from the NULLID
collection to the NULLID_NFM collection, and set APPLCOMPAT V13R1M500.
Notes:
You can use the PKGNAME column value as a filtering category for profile tables
with the SPECIAL_REGISTER value. However, using only PKGNAME is not
recommended as a filtering category for profile tables for IBM Data Server packages
because special register values are set through the profile table only once at the
beginning of a connection (along with the first non-SET SQL statement in the
application). The package that is used for the first non-SET SQL statement can vary
depending on the statement, sections in use, access type, and so on. Therefore,
PKGNAME as a filtering category cannot provide consistent behavior.
When you use Data Server Driver 11.1 Modification 2 Fix Pack 2 or later, the
application does not need to set the clientApplcompat property regardless of the
APPLCOMPAT bind option value of the driver packages.
After you install the SQL DI GUI component that runs on z/OS, you can browse and select
from a set of Db2 tables and views and enable them for AI query. Enabling AI query on a table
or view trains an unsupervised neural network machine learning model that is stored in Db2.
After a Db2 table or view is enabled for AI query, you can run AI semantic queries against the
table or view by using SQL. You can run the SQL through the built-in SQL editor, an
application program, or an SQL tool such as SPUFI or QMF. AI semantic queries use the
semantic similarity, clustering, and analogy functions that are built into the Db2 engine.
You can use similarity queries to find groups of similar entities to decide on market
segmentation or find a group of customers that behaves similarly to other groups, which have
applications in the retail, finance, and insurance industries.
You can use dissimilarity queries to find outliers from the norm, which has applications in
financial anomaly detection and fraud detection.
You can use semantic clustering queries to form a cluster of entities to test whether an extra
entity belongs in the cluster. You can use semantic clustering in many contexts where
similarity or dissimilarity queries are used as a broader test of similarity or dissimilarity to
multiple entities.
You can use analogy queries to determine whether a relationship between a pair of entities
applies to a second pair of entities, which has applications in retail, such as to determine
whether a customer has a preference for a product and whether other customers have the
same degree of preference for other products.
IBM Z Deep Neural Network library (zDNN) is a component of z/OS that provides the
interface to use IBM Z acceleration. Both the SQL DI machine learning model training and AI
SQL functions use zDNN.
On IBM z14® or newer models, the enhanced mathematical library OpenBLAS can
accelerate the SQL DI AI computations. The z/OS component IBM Z AI Optimization (zAIO)
library provides the interface to determine whether OpenBLAS acceleration can be used.
SQL DI is supported on any hardware and software level that is supported by Db2 13.
However, OpenBLAS is not available on IBM Z with processors earlier than the ones that are
in z14. So, while z14, IBM z15, or IBM z16 are not requirements to run SQL DI, only systems
with those processors can provide the AI acceleration.
zDNN, zAIO, and the IBM Z AI Data Embedding Library, which provides the services to train
the SQL DI machine learning models, are available as PTFs for z/OS 2.4 and 2.5. You do not
need to configure these libraries after they are installed. SQL DI training and SQL semantic
functions automatically use them when they are available.
Portions of the machine learning model training process and the Db2 SQL queries that use
the semantic functions are eligible to be run on the IBM Z Integrated Information Processor
(zIIP).
The z/OS requirements must be installed by your system programmer, but require no specific
configuration. SQL DI must be installed by your system programmer and then configured.
Configuration of SQL DI is described at a high level in 6.2.3, “Configuring SQL Data Insights”
on page 85.
SQL DI and the Apache Spark cluster communicate with Db2 through Type 4 Java Database
Connectivity (JDBC). As a result, you must configure the Distributed Data Facility (DDF) in
Db2 and bind the Java packages for JDBC with application compatibility (APPLCOMPAT)
V13R1M500 or higher. This configuration enables you to run queries by using the semantic
functions through the GUI.
SQL DI uses DRDA fast load (zLOAD) and the DSNUTILU stored procedure. You must also
bind packages DSNUT121 and DSNUTILU in Db2.
Before submitting the job, you must customize it to run in your environment by following
instructions in the job itself. You are prompted to choose storage groups and
default buffer pool assignments for the objects that are listed above. The model database can
grow large if large tables are enabled for AI query.
For discussion and illustration in the rest of this chapter, assume that a Db2 administrator with
ID SYSADM submits the DSNTIJAI job, and the administrator has SYSADM privileges in Db2. Also,
user SQLDIUSR is a nonadministrative user but is granted all necessary permissions to use
SQL DI after it is set up.
Before you begin, make sure that you review the planning considerations and write down the
following information that you need for configuration:
The zFS directory that contains your SQL DI instance. Make sure that sufficient space is
allocated.
Network ports that are allocated for SQL DI and the embedded Apache Spark cluster.
Keystore and key ring information.
Also, collect the following JDBC connection properties information. You need this information
to connect SQL DI to Db2.
TCP/IP hostname or IP address where Db2 runs
Db2 location name
Db2 DDF TCP/IP port
Your Db2 user ID and password
This command invokes the interactive sqldi.sh script and prompts you for the information
that you collected earlier. Enter the information, as shown in Figure 6-1.
After this script completes successfully, SQL DI is configured, started, and ready for use. Note
the URL that you will use to access the SQL DI user interface (UI).
Connecting to Db2
To create a connection to your Db2, open a web browser and go to the URL that is shown in
the sqldi.sh script. You must log in to SQL DI by using your ID and password. Your ID must
belong to the RACF group SQLDIGRP on the LPAR on which SQL DI is running.
When you log in for the first time, your only option is to add a connection to your Db2. Choose
this option and enter the requested information, as shown in Figure 6-2 on page 87.
The relationship between SQL DI and Db2 is one-to-many. You can create more connections
to other Db2 instances, either stand-alone systems or members of a data-sharing group.
Each Db2 must be properly configured as described in 6.2.2, “Configuring Db2 for SQL Data
Insights” on page 84 and have all the required pseudo-catalog, model database, and stored
procedures.
You can change the settings for tuning and resource allocation. Review the Db2 load utility
control statement that will be used later when you enable AI query on a table. SQL DI runs the
load utility to load the contents of the model table. Ensure that the data sets are sized and
defined correctly for your system. Customize the control statements if needed.
The examples in this chapter demonstrate how to enable AI query on the CHURN sample
table. You can follow the same process to enable your own tables and views for AI query.
You can select the CHURN table by choosing its schema DSNAIDB from the drop-down list
and then choosing CHURN by name. Check the box next to CHURN and click Add object to
add DSNAIDB.CHURN to the list of AI objects, as shown in Figure 6-5.
Your ID must have the SELECT privilege on the table that you are choosing. The privilege is
not needed when you add the table to the list of AI objects, but it is required when you
eventually enable the object for AI query. The privilege is needed to read the data for model
training. The SELECT privilege for DSNAIDB.CHURN is granted to PUBLIC in the DSNTIJAI batch
job, so it is not a concern for the sample table.
From the list of columns, choose the ones that you want to include in the object model. When
you run AI queries against the object, only the selected columns are used in the queries.
However, if you know that some columns will never be used in a query and their values will
not influence the kinds of semantic questions that will be asked of the data, omit those
columns, which can result in a smaller model, less disk storage, and faster training time.
You also must choose an SQL DI data type for each column. By default, columns with a
numeric SQL type (such as integers, decimals, floats, or decfloats) are assigned the SQL DI
numeric type, and nonnumeric columns are assigned the SQL DI categorical type.
Data of the SQL DI categorical type are discrete values that are treated separately if they are
not exactly equal. A column with a numeric SQL type can be given the SQL DI categorical
type if the values are intended to be treated individually. A decimal column that holds an
account ID or a social security number is a good example of when you might want to choose
an SQL DI categorical type for a column of a numeric SQL type.
Data of the SQL DI numeric type are continuous values where values that are “close” to one
another are considered together. Only columns that have numeric SQL types can be chosen
to have a numeric SQL DI type. A numeric column that holds the price of a good is an
example of a numeric SQL DI type, where close values such as $9.99 and $10.00 might be
considered together.
A third SQL DI data type is key. Assign the SQL DI key type to columns that represent the
entire row in an AI query. If your table or view has a single-column unique key, it can be
chosen as an SQL DI type key. For the CHURN data, assign the key SQL DI data type to the
CustomerId column and keep the default data type assignments for all others, as shown in
Figure 6-8. Later, this chapter shows how you can use the CustomerId column to compare
one customer to another one, that is, how to compare one entire row against another entire
row.
You can specify filter values for a particular column or for the entire table. In the CHURN
example, no filter values are specified, so you don’t need to specify anything on this window,
as shown in Figure 6-9.
In the Enabling phase, SQL DI reads from DSNAIDB.CHURN into the embedded Apache Spark
cluster. The connection to Db2 uses the authority of user SQLDIUSR that has the SELECT
privilege on the table.
SQL DI analyzes the data from the table and decomposes it into a “vocabulary”, which is a list
of all the unique words that appear in the table. The relationships between each pair of words
in the vocabulary are built into the model that is a collection of numerical vectors that are
associated with the words in the vocabulary.
When the AI query enablement process completes successfully, the status becomes
Enabled, and you are now ready to run AI queries against the table, as shown in Figure 6-10.
To enable and use these new functions, you must activate function level V13R1M500 or
higher. You must also bind the packages for the program that issues the semantic query SQL
statement (that is, the packages for SPUFI) with APPLCOMPAT V13R1M500 or higher.
You can use the three functions in any SQL context that allows scalar functions. For example,
you can include them in a SELECT statement or as part of a WHERE clause predicate. You can
also make them the source of a SET clause of an UPDATE statement or one of the values in an INSERT
statement. However, because the new functions are nondeterministic, they are not allowed in
SQL contexts where nondeterministic functions are disallowed, such as in the definition of an
index-on-expression or a materialized query table.
The context of the similarity comparison for both instances is the column that is named
“FRUIT” that is specified in the USING MODEL COLUMN FRUIT clause. The words ‘APPLE’,
‘RASPBERRY’, and ‘BLACKBERRY’ are words in the FRUIT column, which implies that
relationships among those words within that column are represented in the model. These
words can appear in other columns in the same table, but in those other contexts (columns),
the words might mean something different.
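The statements that this passage describes are not reproduced in this excerpt. A hedged
sketch of such a comparison, assuming a hypothetical AI-enabled table
MYSCHEMA.FRUIT_TABLE with a FRUIT column (qualify the model column name if needed),
might look like this:
SELECT AI_SIMILARITY('BLACKBERRY' USING MODEL COLUMN FRUIT,
                     'RASPBERRY' USING MODEL COLUMN FRUIT) AS BERRY_SCORE,
       AI_SIMILARITY('APPLE' USING MODEL COLUMN FRUIT,
                     'RASPBERRY' USING MODEL COLUMN FRUIT) AS APPLE_SCORE
FROM MYSCHEMA.FRUIT_TABLE
FETCH FIRST 1 ROW ONLY;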
The result of AI_SIMILARITY is a score, which is a floating-point number in the range -1.0 - 1.0. A score of
1.0 means that the two entities are similar (or the same), and a score of -1.0 means that the
two entities are dissimilar. The function that compares ‘BLACKBERRY’ and ‘RASPBERRY’ returns
a similarity score of 0.86 because both are berries and close in size. The function that
compares ‘APPLE’ and ‘RASPBERRY’ returns a similarity score of 0.45, which indicates that
there is some similarity, but the two are not as similar as in the other example. Even though
apples and raspberries are fruits, they are not both berries, and they are not similar in size.
Assume that the customer who left has ID ‘3668-QPYBK’. Example 6-2 shows the SQL that
the analyst can issue to find other similar customers.
Example 6-2 Finding customers similar to a customer who closed their account
SELECT
AI_SIMILARITY(X.customerID,'3668-QPYBK') AS SimilarityScore, X.*
FROM DSNAIDB.CHURN X
WHERE X.customerID <> '3668-QPYBK'
ORDER BY SimilarityScore DESC
FETCH FIRST 10 ROWS ONLY
In the result set that is shown in Figure 6-11 on page 95, you can see that most of the results
are similar to but not the same as ‘3668-QPYBK’. The values for the categorical columns are
mostly the same or similar. The values for the numeric columns TENURE, MONTHLYCHARGES, and
TOTALCHARGES are all numerically close to the ones for the ‘3668-QPYBK’ row.
If you enabled AI query on the DSNAIDB.CHURN table in your own Db2 system and tried this
query, your similarity scores and the exact ordering of the results might be slightly different
from the results in this book. The reason for that is that the model training algorithm employs
randomization. As a result, you can get slightly different sets of vectors each time the model is
trained. When AI queries are run, the vectors are slightly different, and the results are slightly
different too. That said, it is to be expected that even though the precise ordering of results
can differ across model trainings, entities that are similar to one another continue to be similar
to one another across trainings. Likewise, entities that are dissimilar continue to be dissimilar
even though the exact similarity scores might change.
Example 6-2 on page 94 showed a use of AI_SIMILARITY that compared the customerID
column, which was given a key SQL DI type when AI was enabled for DSNAIDB.CHURN. As a
result, when the similarity score is computed, the entire row for ‘3668-QPYBK’ is
compared to the values in the entire rows for the rest of the table. This behavior is useful
because for this purpose it is important to consider the customerID to be representative of the
entire customer record. “An example of AI_SIMILARITY that examines specific attributes” on
page 97 examines the similarity of a nonkey column.
Example 6-3 Finding customers who are the most loyal by determining which are least similar to a
customer who closed their account
SELECT
AI_SIMILARITY (X.customerID, '3668-QPYBK') AS SimilarityScore, X.*
FROM DSNAIDB.CHURN X
WHERE X.customerID <>'3668-QPYBK'
ORDER BY SimilarityScore ASC
FETCH FIRST 10 ROWS ONLY
The result set for this SQL query is shown in Figure 6-12.
The SQL for this query is nearly identical to the SQL in Example 6-2 on page 94. The only
difference is in the ordering that is specified by the ORDER BY SimilarityScore ASC clause.
Example 6-3 orders by ascending SimilarityScore, from smallest to largest, and Example 6-2
on page 94 orders by descending SimilarityScore from largest to smallest. The ordering in
Example 6-2 on page 94 puts the most similar scores at the front of the result, and the
ordering in Example 6-3 puts the most dissimilar scores, or those items with the lowest
similarity scores, at the front of the result.
This query that is shown in Example 6-4 compares ‘YES’ in the context of the CHURN column
to each value of PAYMENTMETHOD in the context of the PAYMENTMETHOD column. DISTINCT is
added to the SELECT statement to avoid receiving multiple rows with the same values.
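Example 6-4 is not reproduced in this excerpt; a query that is consistent with this description
might look like the following sketch:
SELECT DISTINCT
       AI_SIMILARITY('YES' USING MODEL COLUMN CHURN,
                     X.PAYMENTMETHOD) AS SIMILARITYSCORE,
       X.PAYMENTMETHOD
FROM DSNAIDB.CHURN X
ORDER BY SIMILARITYSCORE DESC;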
The results, which are shown in Figure 6-13, show that the payment method that is most
similar to CHURN ‘YES’ is ‘Electronic check’. The least similar payment method is ‘Mailed
check’. However, looking across the scores, the similarity score for ‘Electronic check’ is
about 0.43, which is not very different from the similarity score of 0.38 for ‘Mailed check’.
The scores indicate that despite an ordered similarity ranking among the different payment
methods from the top to the bottom, the differences are not significant, and payment methods
probably do not affect whether a customer leaves.
The function in Example 6-5 creates a cluster of the last three arguments: ‘RASPBERRY’,
‘BLACKBERRY’, and ‘BLUEBERRY’. The first argument, ‘APPLE’, is a fruit, but it might not fit well
in that cluster because it is not a berry, and it is a different size than other members of the
cluster. The score that is computed for this function is 0.23.
The function in Example 6-6 creates the same cluster with its last three arguments:
‘RASPBERRY’, ‘BLACKBERRY’, and ‘BLUEBERRY’. The first argument ‘STRAWBERRY’ is tested for
inclusion in that group, and its score is 0.82, which is much higher than the score for ‘APPLE’.
‘STRAWBERRY’ is much more strongly related to the group than ‘APPLE’ because it is a berry
and closer in size.
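Examples 6-5 and 6-6 are not reproduced in this excerpt; sketches of such calls, again
assuming the hypothetical MYSCHEMA.FRUIT_TABLE with a FRUIT column, might look like
this:
SELECT AI_SEMANTIC_CLUSTER('APPLE' USING MODEL COLUMN FRUIT,
                           'RASPBERRY', 'BLACKBERRY', 'BLUEBERRY') AS APPLE_SCORE,
       AI_SEMANTIC_CLUSTER('STRAWBERRY' USING MODEL COLUMN FRUIT,
                           'RASPBERRY', 'BLACKBERRY', 'BLUEBERRY') AS STRAWBERRY_SCORE
FROM MYSCHEMA.FRUIT_TABLE
FETCH FIRST 1 ROW ONLY;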
Example 6-7 Using AI_SEMANTIC_CLUSTER with a cluster that is formed from a customer who
closed their account
SELECT AI_SEMANTIC_CLUSTER
(X.CUSTOMERID,
'3668-QPYBK', '2207-OBZNX', '2108-XWMPY')
CLUSTERSCORE, X.*
FROM DSNAIDB.CHURN X
ORDER BY CLUSTERSCORE DESC
FETCH FIRST 10 ROWS ONLY
It should be no surprise that the first three results that fit best in this semantic cluster are the
three customers that form the group, as shown in Figure 6-14. You can see that several of the
subsequent results also have high clustering scores, which indicate a strong fit in this group.
The first pair of arguments to the AI_ANALOGY function in Example 6-8 specifies the two
entities to establish the relationship. The relationship is applied to the second pair of
arguments to determine whether the analogy is good. In this case, the analogy is good, so it
has a higher analogy score of 0.87.
The conceptual example in Example 6-9 represents a poor analogy. The first two arguments
are the same as in Example 6-8, so the relationship between them is the same, which is the
relationship that is based on color. When the color-based relationship is applied to the second
pair of arguments, ‘BLUEBERRY’ and ‘ORANGE’, it does not hold. Blueberries are not colored
orange, so it is a poor analogy, and it has a much lower analogy score of -0.24. By comparing
the two analogy scores, you can see that the score that is produced by the AI_ANALOGY
function in Example 6-8 indicates a better analogy than the score that is produced by the
same function in Example 6-9.
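Examples 6-8 and 6-9 are not reproduced in this excerpt. A sketch of the poor analogy that
the text describes, assuming hypothetical FRUIT and COLOR columns and a
strawberry-to-red first pair (the actual first pair is not shown in this excerpt), might look like
this:
SELECT AI_ANALOGY('STRAWBERRY' USING MODEL COLUMN FRUIT,
                  'RED' USING MODEL COLUMN COLOR,
                  'BLUEBERRY' USING MODEL COLUMN FRUIT,
                  'ORANGE' USING MODEL COLUMN COLOR) AS ANALOGY_SCORE
FROM MYSCHEMA.FRUIT_TABLE
FETCH FIRST 1 ROW ONLY;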
Jane can see from the query results that DSL is a good analogy, and no internet service is not
as good of an analogy, as shown in Figure 6-15. The query does not specify the relationship,
which happens to be that fiber optic internet service is the most common internet service that
is subscribed by those customers with month-to-month contracts. The relationship is inferred
through the model and tested against other pairs of entities.
To see the data analysis, go back to the list of AI objects and choose the Analyze data
option from the action menu, as shown in Figure 6-16.
The influence score is a calculation of data presence. A column with high prevalence of null or
missing data has a low influence score, which means that such a column cannot influence the
result of a query as much as a column that has relatively few or no null or missing values. The
CHURN table does not have any null values, so the influence score is 1.0 for every column,
which means that each of the columns can influence the results to the same degree. If you
had specified filter values from the CHURN table, those filtered values are also treated as
missing when the influence score is computed.
Because influence scores are not interesting for the CHURN table, select the Discriminator
checkbox to display only the discriminator scores, as shown in Figure 6-17 on page 103.
The discriminator score indicates how varied the values in each column are. Columns with
very few distinct values, such as the gender column that contains only the values “Female”
and “Male”, have a low discriminator score. A low discriminator score diminishes the ability to
distinguish between co-occurrences of entities within rows and therefore reduces that
column’s ability to contribute to the similarity calculation. Conversely, the Tenure and
TotalCharges columns have very high discriminator scores because they have many distinct
values and can contribute more to the similarity calculation.
6.5 Summary
SQL DI brings AI capability directly to your Db2 data in place by extending Db2 SQL with new
built-in functions that you can apply to use cases across multiple industries. These functions
are custom-built for IBM Z servers and can use multiple forms of hardware and software AI
acceleration. After you install SQL DI and all its prerequisites, you can use the UI to create AI
objects from Db2 tables or views, and then enable the objects for AI query, which trains an
unsupervised machine learning model. You can then run AI queries against the objects either
through the SQL editor or your own application programs.
Depending on the objective of your queries, you can select the AI_SIMILARITY,
AI_SEMANTIC_CLUSTER, or AI_ANALOGY function. The AI_SIMILARITY function produces a
similarity score describing how similar two entities are. The AI_SEMANTIC_CLUSTER function
produces a score describing how closely an entity belongs within a group of up to three other
entities. The AI_ANALOGY function examines a relationship between a pair of entities and
determines the strength of the same relationship between a second pair of entities.
By using the power of these new SQL functions with SQL DI, you can now easily glean new
insights from your Db2 data without having to move the data off-platform or acquire a large
set of machine learning and AI skills.
Chapter 7. Security
The key security enhancements in Db2 13 for z/OS focus on reducing processing costs for
RACF or other external security products; better performance for Db2 access checks,
increased flexibility, and improved productivity for administrators deploying packages; and
automated compliance data collection to support the IBM Z Security and Compliance Center
solution.
An enhancement that was introduced in Db2 13 improves performance for external security
users by leveraging the caching in Db2. Because authorization checks that are done with Db2
native security already leverage this cache, this feature provides consistent behavior between
Db2 native and external security controls.
The plan authorization cache was also enhanced to use a smarter algorithm for better
management of the authorization IDs and roles that can be cached per plan.
The plan authorization cache is enabled when the existing AUTHEXIT_CACHEREFRESH system
parameter is set to ALL and the z/OS release is version 2.5 or later.
Before Db2 13, when the AUTHEXIT_CACHEREFRESH subsystem parameter is set to ALL and the
access control authorization exit is active, Db2 listens to the type 62, type 71, and type 79
ENF signals from RACF for user profile or resource access changes and refreshes the Db2
cache entries as needed. Db2 13 is enhanced to also refresh the plan authorization cache
entries for a particular plan when the user profile or resource access is changed in RACF and
SETROPTS RACLIST REFRESH is issued for the RACF resource class for the plan, such as
MDSNPN.
You can still specify the CACHESIZE bind option on the BIND PLAN subcommand to control the
plan authorization cache size at the plan level.
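For example, a sketch of such a subcommand (the plan name and package list are
illustrative):
BIND PLAN(PLANA) PKLIST(*.COLLA.*) CACHESIZE(3072) ACTION(REPLACE)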
The global authentication cache is enhanced to handle the cache entries based on the
timestamp for all connections, when the AUTHEXIT_CACHEREFRESH subsystem parameter is set
to ALL. The specific cache entry is invalidated when the user permission is changed in RACF.
Db2 13 provides the flexibility for a DBA to control the ownership of plans, packages, and SQL
routine packages by using the DBA role without depending on the security administrator. New
capability was added to identify the type of owner for plans and packages.
7.2.1 Owner type support for packages of SQL procedures and functions
With application compatibility (APPLCOMPAT) level V13R1M500, the type of package owner
can be specified for the following SQL statements:
CREATE/ALTER PROCEDURE (native SQL)
CREATE/ALTER FUNCTION (compiled SQL scalar)
The AS ROLE parameter specifies that <authorization-name> is a role that exists on the server.
The AS USER parameter specifies that <authorization-name> is an authorization ID. The
default is AS ROLE if the process is running in a trusted context and the ROLE AS OBJECT OWNER
AND QUALIFIER option is in effect. Otherwise, the default is the SQL authorization ID of the
process.
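For example, a native SQL procedure might specify an explicit owner and owner type, as in
this minimal sketch (the procedure name and the empty body are illustrative):
CREATE PROCEDURE MYSCHEMA.MYPROCEDURE()
  LANGUAGE SQL
  PACKAGE OWNER USER01 AS USER
BEGIN
  RETURN;
END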
The package owner is USER01, which is an authorization ID. Later, if the DBA must change the
package owner to a role, they can specify the new owner by using PACKAGE OWNER with AS
ROLE in the ALTER statement:
ALTER PROCEDURE MYSCHEMA.MYPROCEDURE
PACKAGE OWNER DEVELOPER_ROLE AS ROLE
Conversely, if a DBA wants to create a procedure without a trusted context, they can specify
either a role or an authorization ID as the owner by using AS ROLE or AS USER in the PACKAGE
OWNER option.
When the command is issued in a trusted context with a role as an object owner, the default
value of OWNERTYPE is ROLE. Otherwise, the default value of OWNERTYPE is USER.
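For example, a sketch of such a BIND subcommand (the collection and member names
mirror the REBIND example that follows; OWNERTYPE is the Db2 13 addition):
BIND PACKAGE(MYCOLLID) MEMBER(MYPKG) OWNER(USER01) OWNERTYPE(USER)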
The package owner is USER01, which is an authorization ID. Later, if the DBA must change
the package owner, they can rebind the package by specifying the new owner. The following
command changes the owner to a role that is named DEVROLE:
REBIND PACKAGE(MYCOLLID.MYPKG) OWNER(DEVROLE) OWNERTYPE(ROLE)
Similarly, if a DBA wants to bind a package without a trusted context or without the role as the
object owner, they can specify either USER or ROLE as the OWNERTYPE.
Db2 listens to the ENF 86 signal that is generated by the z/OS Compliance Agent services.
Db2 generates System Management Facilities (SMF) 1154 trace records for the
recommended system security parameter settings and Db2 Admin access configuration for
Db2 for z/OS native controls.
The SMF 1154 records can be used independently of the IBM Z Security and Compliance
Center solution.
For more information about SMF 1154 records, see the Db2 13 library.
For more information about the ENF 86 signal, see the z/OS 2.5.0 library.
Db2 13 adds support for the decrypt-only archived key when a key label is specified by using
the Db2 interfaces for data set encryption. If the key label that is specified refers to a
decrypt-only archived key, Db2 fails the key label specification if the key is used for creating or
encrypting new data. If this key is used only for decrypting purposes, Db2 allows the use of
the key. The intent is to allow archived keys to decrypt existing ciphertext, which enables
re-encryption with a new key but not allow that same archived key to generate new ciphertext.
This enhancement helps users with key rotations, which are an important aspect in
maintaining data security.
You can specify a key label by using the ENCRYPTION_KEYLABEL subsystem parameter or by
issuing the following DDL statements:
ALTER STOGROUP
ALTER TABLE
CREATE STOGROUP
CREATE TABLE
If the key label that is specified refers to a decrypt-only archived key, Db2 fails the key label
specification and issues SQLCODE -20223 for the DDL and message DSNX242I for the
subsystem parameter:
DSNT408I SQLCODE = -20223, ERROR: THE OPERATION FAILED. ENCRYPTION FACILITY NOT
AVAILABLE 00000000 00000D5F
DSNX242I -DB2A ENCRYPTION KEY LABEL CHANGE 655 UNSUCCESSFUL, KEY LABEL
SYSTEM.KEY01. REASON CODE 00E73006
Support for z/OS 2.5 decrypt-only archived keys is available in function level V13R1M100 and
higher.
For more information about decrypt-only archived keys, see the z/OS 2.5.0 library.
7.5 Summary
Db2 13 includes many significant enhancements that enable you to keep your data secure
and compliant while increasing your ability to work with that data efficiently.
Reducing the contention for security manager resources mitigates any availability issues
during remote connection authentication and performance issues during authorization checks
when plans run. Changes to the CREATE and ALTER statements and to the BIND command
eliminate certain types of restrictions that are associated with managing plans, packages, and
SQL routines. The new SMF record helps streamline the compliance process. Lastly, support
for decrypt-only archived keys is added for use in decrypt-only operations. These
enhancements demonstrate a commitment to enabling you to work with your data easily and
efficiently without sacrificing security.
This chapter describes several key utility enhancements in the IBM Db2 Utilities Suite for
z/OS 13.1 and the core utilities.
For a complete list of all utility enhancements, see Db2 Utilities in Db2 13.
Before Db2 13, it was challenging to gather historical information about utility executions,
which also made utility management and tuning difficult and time-consuming. Checking for
any utility failures in the past 24 hours could not be done quickly or easily. To analyze
performance statistics or other information about utilities that ran on a Db2 subsystem, you
had to gather sufficient information from each utility output or from Db2 traces, but it was
nearly impossible to easily obtain a concise history of utility executions, which also would be
useful for balancing and managing utility workloads.
Db2 13 introduces a new utility history function that collects essential and useful execution
information across all the utilities. With function level 501 (V13R1M501), you can configure
Db2 to gather the history of Db2 utilities in real time. When requested, Db2 collects and
stores the historical information in the Db2 catalog. Then, you can use this information to
better manage your utility strategy. For example, you can check daily utility executions for
failures and take immediate corrective actions, or use the historical trends to balance the
utility workload, such as moving jobs from one execution window to another one or moving
objects from job to job.
The following Db2 utilities are enhanced to support the utility history function:
BACKUP SYSTEM
CATMAINT
CHECK DATA
CHECK INDEX
CHECK LOB
COPY
COPYTOCOPY
LOAD
MERGECOPY
MODIFY RECOVERY
MODIFY STATISTICS
QUIESCE
REBUILD INDEX
RECOVER
REORG
REPAIR
REPORT RECOVERY
REPORT TABLESPACESET
RUNSTATS
STOSPACE
UNLOAD
A new subsystem parameter, UTILITY_HISTORY, was introduced to collect utility history, which
can be updated online. By default, the parameter is set to NONE, and no utility history is
collected. To activate utility history collection, set UTILITY_HISTORY to UTILITY.
Db2 stores the utility history in the new SYSIBM.SYSUTILITIES catalog table. A new row is
inserted into SYSUTILITIES at the beginning of each utility execution. This row includes a
unique event ID in the EVENTID column to identify the utility execution. If the utility execution
inserts one or more rows into SYSIBM.SYSCOPY, the EVENTID value is also recorded in the
new EVENTID column in SYSCOPY. The SYSUTILITIES row also contains details about the
utility execution, such as the job name, utility name, number of objects, starting and ending
timestamp, elapsed time, CPU time, zIIP time, final return code, and special conditions, such
as restart and termination with the TERM UTIL command. Information is collected when the
utility is invoked by the DSNUPROC JCL procedure, the DSNUTILB program, or the DSNUTILU or
DSNUTILV stored procedure. Information in the SYSUTILITIES row is updated as the utility
progresses.
To retrieve the utility history, you can query SYSUTILITIES, SYSCOPY, or both catalog tables.
The output of the REPORT RECOVERY utility includes any relevant event IDs from SYSCOPY, and
the output of the DSNJU004 (print log map) utility includes any relevant event IDs for
system-level backups created by BACKUP SYSTEM and recorded in the bootstrap data set
(BSDS).
For more information about how utility history collection works, including restrictions and
processing, see Monitoring Utility History.
If a LISTDEF list is specified in a utility statement, the number of SYSUTILITIES rows that are
inserted into the catalog table varies depending on the utility, the specified utility options, and
the specified LISTDEF options. One row is inserted for each separate invocation of a utility. If
a list of objects is split into separate utility runs by Db2, each run results in a separate row.
The following examples show the number of SYSUTILITIES rows that are inserted by
different utilities or utility runs:
COPY, RECOVER, COPYTOCOPY, and QUIESCE are invoked once to process all the objects on a
list and only one SYSUTILITIES row is inserted.
REORG TABLESPACE on a list of partitions is invoked once for each group of related partitions
(the ones that belong to the same table space). One SYSUTILITIES row is inserted for
each group of partitions that is processed.
CHECK INDEX or REBUILD INDEX on a list of index spaces is invoked once for each group of
related index spaces (the ones that are associated with the same table space). One
SYSUTILITIES row is inserted for each group of related index spaces that are processed.
MODIFY RECOVERY on a list of table spaces is invoked once for each table space. One
SYSUTILITIES row is inserted for each table space that is processed.
When utility history collection is active, the utility output includes a message that displays the event ID.
In a data-sharing group, the event IDs may not be assigned sequentially according to the
order of utility executions because each Db2 member has a cache of assigned sequence
values to use. Gaps might exist in the utility event IDs because unused sequence values in
the cache are not kept when Db2 is restarted, which affects both data-sharing and
non-data-sharing environments. You can use the STARTTS (start timestamp) column to sort
the SYSUTILITIES rows to see the order of utility executions.
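For example, a simple query along these lines lists the executions in start order:
SELECT EVENTID, NAME, JOBNAME, UTILID, STARTTS
FROM SYSIBM.SYSUTILITIES
ORDER BY STARTTS;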
The NUMOBJECTS column is the number of objects to process. In general, these objects
are placed in a utility-in-progress state (UTUT, UTRO, or UTRW) by the utility. For
partitioned objects, each partition is counted as one object. For non-partitioned objects,
each object counts as one object except for data-set-level requests (DSNUM option) for
backup and recovery utilities.
The STARTLOGPOINT column value is the current relative byte address (RBA) for
non-data-sharing or log record sequence number (LRSN) for data-sharing at the start of
the utility, which can be used for diagnostic purposes.
Another 14 sets of columns that collect utility phase information are reserved for future use.
For now, these columns have a default value of NULL: PHASEnNAME, PHASEnET,
PHASEnCPUT, PHASEnZIIPT, and PHASEnDATA, where n is a number 1 - 14 that identifies
the set.
When a stopped utility is restarted, the RESTART column value is set to ‘Y’ at the beginning
of execution. Upon completion, the ELAPSEDTIME column value includes the time when the
utility was stopped. The CPUTIME, ZIIPTIME, SORTCPUTIME, and SORTZIIPTIME column
values reflect the resources that are used in the restart execution only. A restarted utility
uses the UTILITY_HISTORY subsystem parameter setting from the first, original execution,
which avoids incomplete information in SYSUTILITIES rows.
A TERM UTIL command that is issued on an active utility notifies the utility to terminate. The
ENDTS, ELAPSEDTIME, and CONDITION columns are updated by the utility during the
termination processing.
You can run utilities on the utility history catalog objects if needed. Do not include other
objects in the utility statement or list, and if possible, avoid running utilities on other objects
concurrently. When a utility needs exclusive use of or allows limited access to the utility
history catalog objects, it is best to avoid running other utilities because the insert or update of
the SYSUTILITIES row for utility history processing might not be successful.
The SYSTSUTL table space must be copied and recovered by itself, in utility statements that
are separate from those for other table spaces. Otherwise, an error is issued. If the indexes DSNULX01
and DSNULX02 are altered with the COPY YES attribute, they can be included in the same
utility statement along with any user-defined indexes on SYSUTILITIES.
The recovery order of catalog and directory objects is modified to include table space
SYSTSUTL and the indexes over SYSUTILITIES. For more information, see Recovering
catalog and directory objects.
The sample SQL statements that are shown in Example 8-1 are for creating global variables
and assigning values for retrieving utility history. You can tailor them to fit your needs or
modify the queries to use specific values instead of these global variables.
If any of these global variables already exist with different attributes, choose a different name
and modify the SQL statements accordingly. Change the SET CURRENT PATH statement to
specify the schema for the global variables.
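Example 8-1 is not reproduced in this excerpt. Based on the variables that the later queries
reference, the CREATE statements might look like the following sketch; the data types are
inferred from how the variables are used and are not authoritative:
CREATE VARIABLE SC000231.UH_STARTTS TIMESTAMP;
CREATE VARIABLE SC000231.UH_ENDTS TIMESTAMP;
CREATE VARIABLE SC000231.UH_NO_DAYS INTEGER;
CREATE VARIABLE SC000231.UH_NAME VARCHAR(20);
CREATE VARIABLE SC000231.UH_DB VARCHAR(24);
CREATE VARIABLE SC000231.UH_PS VARCHAR(24);
CREATE VARIABLE SC000231.UH_DELETE_PREVIEW CHAR(1);
CREATE VARIABLE SC000231.UH_DELETE_ROWS CHAR(1);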
Example 8-2 Sample query for displaying utilities that run between midnight and early morning
SET CURRENT PATH = 'SC000231'; -- SCHEMA FOR GLOBAL VARIABLES
SET UH_STARTTS = '2022-04-04-00.00.00.000000'; -- START TIMESTAMP
SET UH_ENDTS = '2022-04-04-08.00.00.000000'; -- END TIMESTAMP
SELECT EVENTID,NAME,JOBNAME,UTILID,STARTTS,ENDTS,RETURNCODE,CONDITION
FROM SYSIBM.SYSUTILITIES
WHERE STARTTS >= UH_STARTTS
AND STARTTS <= UH_ENDTS;
The sample query that is shown in Example 8-3 shows all active or stopped utilities. Issue the
DIS UTIL(utilid) command by using the UTILID column value of each SYSUTILITIES row to
check whether the utility is active or stopped.
Example 8-3 Sample query for displaying all active or stopped utilities
SELECT EVENTID,NAME,JOBNAME,UTILID,STARTTS,RETURNCODE,CONDITION
FROM SYSIBM.SYSUTILITIES
WHERE RETURNCODE IS NULL
AND CONDITION=' ';
The sample query that is shown in Example 8-4 displays utilities with return code 8 or higher
in the last 24 hours.
Example 8-4 Sample query for displaying utility executions with errors in the last 24 hours
SET CURRENT PATH = 'SC000231'; -- SCHEMA FOR GLOBAL VARIABLES
SET UH_NO_DAYS = 1; -- NUMBER OF DAYS
SELECT EVENTID,NAME,JOBNAME,UTILID,STARTTS,RETURNCODE,CONDITION
FROM SYSIBM.SYSUTILITIES
WHERE STARTTS >= CURRENT TIMESTAMP - UH_NO_DAYS DAYS
AND RETURNCODE >= 8;
You can use the sample query that is shown in Example 8-5 to check on restarted utilities that
are in an active or stopped state.
Example 8-5 Sample query for showing restarted utilities in an active or stopped state
SELECT
EVENTID,NAME,JOBNAME,UTILID,STARTTS,RESTART,RETURNCODE,CONDITION
FROM SYSIBM.SYSUTILITIES
WHERE RESTART = 'Y'
AND RETURNCODE IS NULL
AND CONDITION=' ';
Example 8-6 Sample query for displaying REORG TABLESPACE and REORG INDEX executions that
are sorted by descending elapsed time in the last 7 days
SET CURRENT PATH = 'SC000231'; -- SCHEMA FOR GLOBAL VARIABLES
SET UH_NAME = 'REORG'; -- UTILITY NAME
SET UH_NO_DAYS = 7; -- NUMBER OF DAYS
SELECT EVENTID,NAME,JOBNAME,UTILID,STARTTS,ELAPSEDTIME,RETURNCODE,CONDITION
FROM SYSIBM.SYSUTILITIES
WHERE NAME = UH_NAME
AND STARTTS >= CURRENT TIMESTAMP - UH_NO_DAYS DAYS
ORDER BY ELAPSEDTIME DESC;
Example 8-7 Sample query for COPY executions for job T6E11028 sorted by descending elapsed time in the last 7 days
SET CURRENT PATH = 'SC000231'; -- SCHEMA FOR GLOBAL VARIABLES
SET UH_NAME = 'COPY '; -- UTILITY NAME
SET UH_NO_DAYS = 7; -- NUMBER OF DAYS
SELECT
EVENTID,NAME,JOBNAME,UTILID,STARTTS,ELAPSEDTIME,RETURNCODE,CONDITION
FROM SYSIBM.SYSUTILITIES
WHERE NAME = UH_NAME
AND JOBNAME='T6E11028'
AND STARTTS >= CURRENT TIMESTAMP - UH_NO_DAYS DAYS
ORDER BY ELAPSEDTIME DESC;
Example 8-8 Sample query for displaying LOAD executions that are sorted by descending CPU time in
the last 7 days
SET CURRENT PATH = 'SC000231'; -- SCHEMA FOR GLOBAL VARIABLES
SET UH_NAME = 'LOAD'; -- UTILITY NAME
SET UH_NO_DAYS = 7; -- NUMBER OF DAYS
SELECT
EVENTID,NAME,JOBNAME,UTILID,STARTTS,CPUTIME,RETURNCODE,CONDITION
FROM SYSIBM.SYSUTILITIES
WHERE NAME= UH_NAME
AND STARTTS >= CURRENT TIMESTAMP - UH_NO_DAYS DAYS
ORDER BY CPUTIME DESC;
The sample queries that are shown in Example 8-9 to Example 8-12 on page 120 show the
counts of executions by utility name.
Example 8-9 Sample query for displaying the total count of executions by utility name
SELECT A.NAME AS UTILITY_TYPE, COUNT(*) AS UTILITY_COUNT
FROM SYSIBM.SYSUTILITIES A
GROUP BY A.NAME;
Example 8-11 Sample query for showing the count of executions by utility name in the past 7 days
SET UH_NO_DAYS = 7; -- NUMBER OF DAYS
SELECT A.NAME AS UTILITY_TYPE, COUNT(*) AS UTILITY_COUNT
FROM SYSIBM.SYSUTILITIES A
WHERE A.STARTTS >= CURRENT TIMESTAMP - UH_NO_DAYS DAYS
GROUP BY A.NAME;
Example 8-12 Sample query for displaying the count of different utilities with SYSCOPY rows in the
past 7 days
SET UH_NO_DAYS = 7; -- NUMBER OF DAYS
SELECT A.NAME AS UTILITY_TYPE, COUNT(*) AS UTILITY_COUNT
FROM SYSIBM.SYSUTILITIES A
WHERE A.STARTTS >= CURRENT TIMESTAMP - UH_NO_DAYS DAYS
AND
EXISTS (SELECT * FROM SYSIBM.SYSCOPY B WHERE B.EVENTID = A.EVENTID)
GROUP BY A.NAME;
The first example query (Example 8-13 on page 121) shows the most recent successful
execution of REORG TABLESPACE for a specific table space or REORG INDEX for a specific index
space (with the COPY YES attribute). Set global variables UH_DB to the database name and
UH_PS to the name of the table space or the index space. For partitioned objects, the REORG
information is displayed for each partition, even when space-level processing occurred.
Data in SYSCOPY and SYSUTILITIES might be deleted independently of each other, by
MODIFY RECOVERY and SQL DELETE respectively, so information might not exist for the most
recent REORG execution. The SYSCOPY row contains the object information; if that row does
not exist, the query displays ‘NO CORRESPONDING SYSCOPY ROW FOUND’. If the SYSUTILITIES
row does not exist, the query displays ‘NO CORRESPONDING SYSUTILITIES ROW FOUND’. When
DSNTEP3 is used to run this query and the ‘NO CORRESPONDING …’ text appears, it might set a
return code of 4, but this condition does not affect the query results.
You can set global variable UH_PS to ‘1’ to show the most recent REORG for all table spaces
and index spaces (with the COPY YES attribute) in the database.
This sample query returns and displays results that are similar to Figure 8-1 and Figure 8-2
on page 123.
In Figure 8-1, the results include a row for each partition of table space DB000231.TP000231.
Partitions 1 and 3 were processed by the REORG TABLESPACE execution with event ID 1001.
Partitions 2 and 4 were processed by the REORG TABLESPACE execution with event ID 1004. A
REORG TABLESPACE execution was not recorded in SYSCOPY and SYSUTILITIES for partitions
5 - 10, so ‘NO CORRESPONDING …' is displayed in the SC_TIMESTAMP (SYSCOPY
TIMESTAMP) and SU_ENDTS (SYSUTILITIES ENDTS) result columns for these partitions.
The next example query (Example 8-14) shows the most recent successful execution of
REORG TABLESPACE or REORG INDEX on objects in a database when the SYSCOPY information
for the REORG exists. The results are like the results for Example 8-13 on page 121, but nothing
is displayed when the SYSCOPY information does not exist.
Example 8-14 Query for the most recent successful REORG for table space DB000231.TP000231 when the
corresponding SYSCOPY information exists
SET CURRENT PATH = 'SC000231'; -- SCHEMA FOR GLOBAL VARIABLES
SET UH_DB = 'DB000231'; -- DATABASE NAME
SET UH_PS = 'TP000231'; -- TABLE SPACE OR INDEX SPACE (COPY YES)
--SET UH_PS = '1'; -- ALL TABLE SPACES OR INDEX SPACES (COPY YES)
--THE OUTER MOST PART OF THIS QUERY ACTS ON THE LIST OF THE LATEST
-- REORGS PER DATABASE, PAGE SET, AND PARTITION, ALONG WITH OPTIONAL
--SYSUTILITIES INFORMATION. IF NO SYSUTILITIES INFORMATION IS FOUND, A
--“NO CORRESPONDING...” MESSAGE IS DISPLAYED.
SELECT I.PAGESET_TYPE, I.DBNAME, I.NAME, I.PARTITION, I.EVENTID
,I.SC_TIMESTAMP
,COALESCE(CAST(F.ENDTS AS VARCHAR(30)),
'NO CORRESPONDING SYSUTILITIES ROW FOUND'
) AS SU_ENDTS
FROM
(
--THIS SELECT LIST EXPRESSION ADDS THE SYSCOPY EVENTID INTO THE
--RESULT LIST OF THE LATEST REORG PER DATABASE, PAGE SET, AND
--PARTITION.
SELECT G.PAGESET_TYPE, G.DBNAME, G.NAME, G.PARTITION
,(SELECT E1.EVENTID
FROM SYSIBM.SYSCOPY E1
WHERE E1.DBNAME = G.DBNAME
AND E1.TSNAME = G.NAME
AND (E1.DSNUM = G.PARTITION OR E1.DSNUM = 0)
AND E1.TIMESTAMP = G.SC_TIMESTAMP
) AS EVENTID
,G.SC_TIMESTAMP
FROM
(
--THIS PRIMARY BLOCK CREATES A LIST OF PAGE SETS BASED ON THE GLOBAL
--VARIABLE INPUT PROVIDED. THIS QUERY ONLY RETURNS PAGE SETS
--WHERE A SYSCOPY ROW EXISTS, THEREFORE SYSCOPY INNER JOIN
--RETURNS THE TIMESTAMP COLUMN AND SELECTS THE LATEST REORG
This sample query returns and displays results that are similar to Figure 8-3, Figure 8-4 on
page 125, and Figure 8-5 on page 125.
In Figure 8-3, the results include a row for four (out of the ten) partitions of table space
DB000231.TP000231 with a REORG TABLESPACE execution that is recorded in SYSCOPY.
Partitions 1 and 3 were processed by the REORG execution with event ID 1001. Partitions 2 and
4 were processed by the REORG execution with event ID 1004. No rows are displayed for
partitions 5 - 10 because a REORG execution is not recorded in SYSCOPY.
In this example, the global variable UH_PS was set to ‘1’. The results in Figure 8-5 show the
most recent REORG TABLESPACE execution for each table space partition in database
DB000231. The partitions without a row in the result table indicate that a REORG TABLESPACE
was not recorded in SYSCOPY. Either the partition was never reorganized or the SYSCOPY
information about the REORG was deleted. The results do not include information for index
spaces because either REORG INDEX was not executed, the SYSCOPY information about the
REORG was deleted, or the index spaces were not set with the COPY YES attribute.
Figure 8-5 Query output for table spaces and index spaces in database DB000231
Before you start the cleanup, determine how long you want to keep useful utility history. You
might want to keep historical data for certain utilities longer than for others. Assess the size of
the SYSTSUTL table space regularly and clean out rows that you no longer need. You can
then issue SQL SELECT and DELETE statements as shown in Example 8-15 on page 126 -
Example 8-17 on page 127 to evaluate the rows in SYSUTILITIES and delete the ones that
are older than a specified age.
You can modify the following sample SQL statements to meet your needs and remove only
those SYSUTILITIES rows that you no longer need. If you use the sample SQL statements to
clean up your utility history information, run the statements together in the specified order.
Example 8-15 Query for a summary report of SYSUTILITIES rows and rows that meet the deletion
criteria (older than 365 days)
SET CURRENT PATH = 'SC000231'; -- SCHEMA FOR GLOBAL VARIABLES
SET UH_NO_DAYS = 365; -- OLDER THAN 365 DAYS
SELECT B.C1 AS #_QUALIFYING_SYSUTILITIES_ROWS
,B.C2 AS #_TOTAL_SYSUTILITIES_ROWS
,CASE
WHEN B.C2 > 0
THEN
(CAST(ROUND(
DECIMAL(B.C1) / DECIMAL(B.C2) * 100, 1) AS DECIMAL(4,1)))
||'%'
ELSE 'N/A'
END
AS "%_OF_QUALIFYING_ROWS"
FROM
(
SELECT COUNT(*) AS C1
,(SELECT COUNT(*) FROM SYSIBM.SYSUTILITIES) AS C2
FROM SYSIBM.SYSUTILITIES A
WHERE
STARTTS < CURRENT TIMESTAMP - UH_NO_DAYS DAYS
) B;
The next query (Example 8-16) produces a detailed report of the SYSUTILITIES rows that
meet the deletion criteria (older than the specified number of days) when global
variable UH_DELETE_PREVIEW is set to ‘1’. The report also includes the number of
SYSCOPY rows that correspond to the SYSUTILITIES rows.
Example 8-16 Query for detailed report of SYSUTILITIES rows that meet the deletion criteria (older
than 365 days) and the count of associated SYSCOPY rows
SET CURRENT PATH = 'SC000231'; -- SCHEMA FOR GLOBAL VARIABLES
SET UH_DELETE_PREVIEW = '1'; -- PRODUCE DETAILED REPORT OF QUALIFIED ROWS
SET UH_NO_DAYS = 365; -- OLDER THAN 365 DAYS
SELECT A.EVENTID
,A.NAME
,A.JOBNAME
,A.UTILID
,A.USERID
,CAST(A.STARTTS AS VARCHAR(26)) AS STARTTS
,CAST(A.ENDTS AS VARCHAR(26)) AS ENDTS
,A.NUMOBJECTS
,(SELECT COUNT(*)
FROM SYSIBM.SYSCOPY B
WHERE B.EVENTID = A.EVENTID
) AS #_SYSCOPY_ROWS
FROM SYSIBM.SYSUTILITIES A
WHERE UH_DELETE_PREVIEW = '1'
AND STARTTS < CURRENT TIMESTAMP - UH_NO_DAYS DAYS
UNION ALL
SELECT 0 AS EVENTID
To delete the SYSUTILITIES rows that match the deletion criteria (older than the specified
number of days), run the following DELETE statement (Example 8-17). Use caution when
setting the UH_NO_DAYS global variable because SYSUTILITIES rows older than the
specified number of days will be deleted. The count of deleted SYSUTILITIES rows is
reported.
Example 8-17 Deleting SYSUTILITIES rows that meet deletion criteria (older than 365 days)
SET CURRENT PATH = 'SC000231'; -- SCHEMA FOR GLOBAL VARIABLES
SET UH_DELETE_ROWS = '1'; -- DELETE SYSUTILITIES ROWS
SET UH_NO_DAYS = 365; -- OLDER THAN 365 DAYS
SELECT COUNT(*)
FROM OLD TABLE
(
DELETE FROM SYSIBM.SYSUTILITIES
WHERE UH_DELETE_ROWS = '1'
AND STARTTS < CURRENT TIMESTAMP - UH_NO_DAYS DAYS
) ;
Rerun the first query to generate the summary report to verify that the expected rows were
deleted.
Db2 13 further extends the page sampling support. With function level 500 (V13R1M500),
you can now use page sampling for gathering inline statistics when running the REORG
TABLESPACE or LOAD REPLACE utilities for table spaces. Specify the STATISTICS option for
REORG TABLESPACE or LOAD REPLACE if you want to collect inline statistics, and make sure that
you also specify the TABLESAMPLE SYSTEM keyword.
With page sampling enabled, the target pages, which are already in memory, are chosen for
inline statistics collection as REORG or LOAD loads the data during the RELOAD phase. This
behavior optimizes the execution performance of the utilities and reduces both the elapsed
time and the CPU time.
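For example, a REORG TABLESPACE control statement of the following general form requests
inline statistics with page-level sampling (a minimal sketch; the table space name is
hypothetical, and TABLESAMPLE SYSTEM AUTO lets Db2 determine the sampling rate):
REORG TABLESPACE DB000231.TP000231
  SHRLEVEL CHANGE
  STATISTICS TABLE(ALL) TABLESAMPLE SYSTEM AUTO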
For more information about TABLESAMPLE SYSTEM for inline statistics, see the
IBM Documentation topics Syntax and options of the REORG TABLESPACE control statement
and Syntax and options of the LOAD control statement.
Before Db2 13, when RECOVER was invoked with DSNUM ALL for space-level recovery of table
spaces, index spaces, or indexes, and the image copies were created at the partition or piece
level, Db2 returned error message DSNU512I (DATASET LEVEL RECOVERY IS REQUIRED),
indicating that the recovery failed at the space level. Objects with these errors were left
unrecovered, and the RECOVER operation ended with RC8. As a result, you had to retry the
recovery of individual partitions or pieces, or specify a LISTDEF list with the PARTLEVEL option.
In Db2 13, when you specify RECOVER for recovering the entire table space or index space, the
utility can use any space-level, partition-level, or piece-level image copies for the recovery.
These copies include sequential image copies or FlashCopy image copies that are created at
the partition or piece level. They also include inline sequential or FlashCopy image copies
that are taken by the LOAD or REORG TABLESPACE utility. This feature is supported for universal
table spaces (UTS), large object (LOB) table spaces, and partitioned indexes.
As shown in Example 8-18, Db2 issues a new message, DSNU1576I, when a space-level
recovery uses partition-level or piece-level image copies. Each individual image copy that is
used as a base for recovery is reported with a DSNU515I message.
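For example, in Db2 13 a space-level recovery of the following general form (a sketch with a
hypothetical object name) can complete even when only partition-level or piece-level image
copies exist:
RECOVER TABLESPACE DB000231.TS000231 DSNUM ALL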
The NOSYSUT1 keyword and the REORG_INDEX_NOSYSUT1 subsystem parameter were introduced
in Db2 12 with APAR PH25217. Starting in Db2 13 and with function level 500 (V13R1M500),
the NOSYSUT1 keyword is set as the default and only behavior for REORG INDEX when you
specify the SHRLEVEL REFERENCE or CHANGE keywords. Also, the default value of the
REORG_INDEX_NOSYSUT1 subsystem parameter is changed to YES, and the user specification of
this subsystem value is ignored.
The current dictionary is always available in the active source table space. However, if the
dictionary is being rebuilt by the REORG TABLESPACE or LOAD utility, the old dictionary is written
to the log where it can be used in IFCID 306 processing for SQL INSERT, UPDATE, and DELETE
operations that occurred before the new dictionary was built.
Independent software vendor (ISV) applications that create a compression dictionary might
need to write decompression dictionary log records, so Db2 13 introduces dictionary support
for REPAIR WRITELOG. With function level 100 (V13R1M100), REPAIR WRITELOG was extended to
accept a new value for the TYPE and SUBTYPE options. The new value allows a
decompression dictionary log record of up to the maximum log record size that is supported
by Db2. After the decompression dictionary is successfully written, REPAIR issues message
DSNU3335I (Example 8-19) with the RBA or LRSN of the log record. ISV applications can use
this information to insert an appropriate SYSIBM.SYSCOPY record for the dictionary.
8.6 Summary
With new and enhanced functions, Db2 for z/OS utilities continue to play a vital role in helping
you manage your Db2 data. For more information about Db2 utilities, see the Db2 utilities
documentation and Db2 for z/OS Utilities in Practice, REDP-5503.
Furthermore, IFCID 359 is disabled by default, which allows critical abnormalities during the
index split process, such as splits whose total elapsed time exceeds 1 second, to go
unnoticed. Db2 13 addresses this concern by adding an IFCID 396 trace facility and three
new catalog fields, all of which trace and capture more pertinent information about index page
splits. The recorded information helps you identify and analyze the root cause of an INSERT
performance issue.
9.1.1 New IFCID 396 trace facility for recording abnormal index splitting
The new IFCID 396 trace facility records detailed information that is specific to index page
splits, and it is enabled by default under statistics trace class 3 and performance trace class 6.
An IFCID 396 record is automatically written when the total time of an index split event
exceeds the threshold of 1000 ms (1 second), which is considered “abnormal.”
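If you want to collect IFCID 396 explicitly on a monitor class, a command of the following
general form works (a sketch; the class and destination are illustrative):
-START TRACE(PERFM) CLASS(30) IFCID(396) DEST(SMF)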
To provide more flexibility, Db2 13 enables you to run IFCID 306 in non-PROXY mode and to
turn on or off the EDITPROC support function within IFCID 306.
Db2 13 also adds flexibility to IFCID 306 by enabling you to filter on a range of partitions,
which speeds up the log read process and increases throughput for replication operations.
This enhancement benefits replication products that run multiple capture programs in which
each capture processes different parts of the same table space object in parallel for better
end-to-end performance.
A new WQLSPTRG flag is defined in the qualification area fields of WQLS. When you turn on this
flag, the mapping of WQLSDBPS in WQLS is replaced with the new mapping of WQLSDBPP.
The WQLSDBPP mapping contains two new fields that are called WQLSPPART_LOW and
WQLSPPART_HIGH, which can be specified only once for each table space.
Table 9-3 lists and describes the qualification area fields of WQLSDBPP. This area, which is
mapped by the assembly language mapping macro DSNDWQAL, is used to filter IFCID 0306
records when WQLSTYPE is set to 'DBPP'.
WQLSPPART_LOW (offset 4, hex, 2 bytes): The low partition number of the table space partition range.
WQLSPPART_HIGH (offset 6, hex, 2 bytes): The high partition number of the table space partition range.
For a partitioned table space, you can set WQLSPPART_LOW and WQLSPPART_HIGH to
any valid low and high partition numbers. For example, setting WQLSPPART_LOW to '0001'x
and WQLSPPART_HIGH to '000A'x qualifies partitions 1 - 10. If WQLSPPART_LOW is
'0000'x and WQLSPPART_HIGH is '0000'x or 'FFFF'x, the entire partitioned table space
qualifies for the IFCID 306 process.
In Db2 13, IFCID 369 was added to START TRACE(STAT) CLASS(1) so that IFCID 369 is started
automatically during Db2 startup. The result is that the collection and aggregation of
accounting statistics also occur automatically. To maintain compatibility, START TRACE(STAT)
CLASS(9) remains unchanged.
9.6.1 New IFCID 411 trace facility for recording DDF application statistics
IFCID 411 records detailed statistics about applications that access the Db2 subsystem by
using the DRDA protocol.
The information can help identify applications that are using resources inefficiently in the
following types of situations:
An application is not releasing connections in a timely manner.
An application is causing unexpected thread termination.
An application is monitored by a profile but thresholds must be adjusted (too high or low).
An application is monopolizing resources (such as opening multiple threads) and should
be monitored.
QLAPAPPN CHAR(16) The name of the application that is running at the remote site.
QLAPPRID CHAR(8) The product ID of the remote location from which the remote application
connects.
QLAPCOMR BIGINT The number of commit requests that are received from the requester
(single-phase commit protocol) and committed requests that are received from
the coordinator (two-phase commit protocol).
QLAPABRR BIGINT The number of abort requests that are received from the requester (single-phase
commit protocol) and backout requests that are received from the coordinator
(two-phase commit protocol).
QLAPNREST BIGINT The number of times that the application reported a connection or application
condition from a REST service request.
QLAPNSSR BIGINT The number of times that the application reported a connection or application
condition from setting a special register through a profile.
QLAPNSGV BIGINT The number of times that the application reported a connection or application
condition from setting a global variable through a profile.
QLAPHCRSR BIGINT The number of times that the application used a cursor that was defined as WITH
HOLD and was not closed. That condition prevented Db2 from pooling Database
Access Threads (DBATs).
QLAPDGTT BIGINT The number of times that the application did not drop a declared temporary table.
That condition prevented Db2 from pooling DBATs.
QLAPKPDYN BIGINT The number of times that the application used a KEEPDYNAMIC package. That
condition prevented Db2 from pooling DBATs.
QLAPHIPRF BIGINT The number of times that the application used a high-performance DBAT. That
condition prevented Db2 from pooling DBATs.
QLAPHLOBLOC BIGINT The number of times that the application had a held large object (LOB) locater.
That condition prevented Db2 from pooling DBATs.
QLAPSPCMT BIGINT The number of times that a COMMIT was issued in a stored procedure. That
condition prevented Db2 from pooling DBATs.
QLAPNTHDPQ BIGINT The number of times that a thread that was used by a connection from the
application was queued because a profile exception threshold was exceeded.
QLAPNTHDPT BIGINT The number of times that a thread that was used by a connection from the
application was terminated because a profile exception threshold was exceeded.
QLAPNTHDA BIGINT The number of times that a thread that was used by a connection from the
application abended.
QLAPNTHDC BIGINT The number of times that a thread that was used by a connection from the
application was canceled.
QLAPNTHD INTEGER The current number of active threads for the application.
QLAPHTHD INTEGER For a statistics trace, this number is the highest number of active threads during
the current statistics interval; for a READS request, this number is the highest
number of active threads since Distributed Data Facility (DDF) started.
QLAPTHDTM BIGINT The number of threads that were queued because the MAXDBAT subsystem
parameter value was exceeded.
QLAPTHDTI BIGINT The number of threads that were terminated because the IDTHTOIN subsystem
parameter value was exceeded.
QLAPTHDTC BIGINT The number of threads that were terminated because the CANCEL THREAD
command was issued.
QLAPTHDTR BIGINT The number of threads that were terminated because a profile exception
condition for idle threads was exceeded.
QLAPTHDTK BIGINT The number of threads that were terminated because the threads were running
under KEEPDYNAMIC refresh rules, and the idle time exceeded
KEEPDYNAMICREFRESH idle time limit (20 minutes).
QLAPTHDTF BIGINT The number of threads that were terminated because the threads were running
under KEEPDYNAMIC refresh rules, and the time that the threads were in use
exceeded the KEEPDYNAMICREFRESH in-use time limit.
QLAPTHDTN BIGINT The number of threads that were terminated due to network termination.
9.6.2 New IFCID 412 trace facility for recording DDF user statistics
IFCID 412 records detailed statistics about users who access the Db2 subsystem by using
the Distributed Relational Database Architecture (DRDA) protocol. The information can help
identify users who are using resources inefficiently in the following types of situations:
A user is not releasing a connection in a timely manner.
A user is causing unexpected thread termination.
A user is monitored by a profile but thresholds must be adjusted (too high or low).
A user is monopolizing resources (such as opening multiple threads) and should be
monitored.
QLAUUSRI CHAR(16) The name of the client user ID under which the connection from the remote
application to the local site is established.
QLAUPRID CHAR(8) The product ID of the remote application from which the application connects.
QLAUCOMR BIGINT The number of commit requests that are received from the requester
(single-phase commit protocol) and committed requests that are received from
the coordinator (two-phase commit protocol).
QLAUABRR BIGINT The number of abort requests that are received from the requester (single-phase
commit protocol) and the number of backout requests that are received from the
coordinator (two-phase commit protocol).
QLAUNREST BIGINT The number of times that an application that was run by the specified client user
ID reported a connection or application condition from a REST service request.
QLAUNSSR BIGINT The number of times that an application that was run by the specified client user
ID reported a connection or application condition from setting a special register
through a profile.
QLAUNSGV BIGINT The number of times that an application that was run by the specified client user
ID reported a connection or application condition from setting a global variable
through a profile.
QLAUHCRSR BIGINT The number of times that an application that was run by the specified client user
ID used a cursor that was defined as WITH HOLD and was not closed. That
condition prevented Db2 from pooling DBATs.
QLAUDGTT BIGINT The number of times that an application that was run by the specified client user
ID did not drop a declared temporary table. That condition prevented Db2 from
pooling DBATs.
QLAUKPDYN BIGINT The number of times that an application that was run by the specified client user
ID used KEEPDYNAMIC packages. That condition prevented Db2 from pooling
DBATs.
QLAUHIPRF BIGINT The number of times that an application that was run by the specified client user
ID used a high-performance DBAT. That condition prevented Db2 from pooling
DBATs.
QLAUHLOBLOC BIGINT The number of times that an application that was run by the specified client user
ID used a held LOB locator. That condition prevented Db2 from pooling DBATs.
QLAUSPCMT BIGINT The number of times that a COMMIT was issued in a stored procedure that was
called by the specified client user ID. That condition prevented Db2 from pooling
DBATs.
QLAUNTHDPQ BIGINT The number of times that a thread that was used by a connection from the
application that was run by the specified client user ID was queued because a
profile exception threshold was exceeded.
QLAUNTHDPT BIGINT The number of times that a thread that was used by a connection from the
application that was run by the specified client user ID terminated because a
profile exception threshold was exceeded.
QLAUNTHDA BIGINT The number of times that a thread that was used by a connection from an
application that was run by the specified client user ID abended.
QLAUNTHDC BIGINT The number of times that a thread that was used by a connection from an
application that was run by the specified client user ID was canceled.
QLAUNTHD INTEGER The current number of active threads for the application that were run by the
specified client user ID.
QLAUHTHD INTEGER For a statistics trace, this number is the highest number of active threads during
the current statistics interval; for a READS request, this number is the highest
number of active threads since DDF was started.
QLAUTHDTM BIGINT The number of threads that are associated with the specified client user ID that
were queued because the MAXDBAT subsystem parameter value was exceeded.
QLAUTHDTI BIGINT The number of threads that are associated with the specified client user ID that
were terminated because the IDTHTOIN subsystem parameter value was
exceeded.
QLAUTHDTC BIGINT The number of threads that were associated with the specified client user ID that
were terminated because the CANCEL THREAD command was issued.
QLAUTHDTR BIGINT The number of threads that were associated with the specified client user ID that
were terminated because a profile exception condition for idle threads was
exceeded.
QLAUTHDTK BIGINT The number of threads that were associated with the specified client user ID that
were terminated because the threads were running under KEEPDYNAMIC
refresh rules, and the idle time exceeded the KEEPDYNAMICREFRESH idle
time limit (20 minutes).
QLAUTHDTF BIGINT The number of threads that were associated with the specified client user ID that
were terminated because the threads were running under KEEPDYNAMIC
refresh rules, and the time that they were in use exceeded the
KEEPDYNAMICREFRESH in-use time limit.
QLAUTHDTN BIGINT The number of threads that were associated with the specified client user ID that
were terminated due to network termination.
9.6.3 New fields for the IFCID 365 trace facility for recording DDF location
statistics
New fields were added for counting Database Access Threads (DBATs) from remote
locations that were terminated. This new information can help identify locations that are using
resources suboptimally.
QLSTNTPLH BIGINT The number of high-performance DBATs that were terminated after remaining in
a pool longer than POOLINAC.
QLSTNTILS BIGINT The number of DBATs that were terminated when the TCP socket was closed
due to connection loss.
9.6.4 New fields added to the IFCID 1 trace facility for recording global DDF
New fields were added for counting Database Access Threads that remained active or were
terminated at the server. This new information can help analyze resource utilization at the
server.
QDSTNAKD INTEGER The current number of DBATs that are active due to usage of packages that are
bound with KEEPDYNAMIC yes.
QDSTMAKD INTEGER The maximum number of DBATs that are active due to usage of packages that
are bound with KEEPDYNAMIC yes.
QDSTNDBT INTEGER The number of DBATs that were terminated since DDF was started.
QDSTNTPL INTEGER The number of DBATs that were terminated after remaining in a pool longer than
POOLINAC.
QDSTNTRU INTEGER The number of DBATs that were terminated after being reused more times than
the limit.
Two new fields were added to the relevant GBP statistics storage areas:
The average time in milliseconds that a data area stays in a GBP before it is removed.
The average time in milliseconds that a directory entry stays in a GBP before it is
removed.
Both metrics are accessible through more fields in IFCID 230 and 254 trace records and by
using the -DISPLAY GROUPBUFFERPOOL command.
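For example, the following command (a sketch; the group buffer pool name is illustrative)
displays the detailed group buffer pool statistics, which include the new residency times:
-DISPLAY GROUPBUFFERPOOL(GBP1) GDETAIL(*)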
Table 9-8 and Table 9-9 show the new fields that were added to IFCID 230 and IFCID 254.
QBGBART CHAR(8) “Data Area Residency Time” is the weighted average in microseconds of the
elapsed time that a data area stays in a GBP before it is reclaimed.
QBGBERT CHAR(8) “Directory Entry Residency Time” is the weighted average in microseconds of
the elapsed time that a directory entry stays in a GBP before it is reclaimed.
QW0254AR CHAR(8) “Data Area Residency Time” is the weighted average in microseconds of the
elapsed time that a data area stays in a GBP before it is reclaimed.
QW0254ER CHAR(8) “Directory Entry Residency Time” is the weighted average in microseconds of
the elapsed time that a directory entry stays in a GBP before it is reclaimed.
The improved diagnostic log record is written when one of the following conditions is met:
A space search of an insert transaction visits more than 120 data pages but fails to insert
a row.
An insert transaction fails to find space for inserting a row, and the insert also performs a
cross-partition retry on a partition but fails with a conditional partition lock.
An insert transaction causes lock escalation.
To support the Db2 13 strategy for First Failure Data Capture (FFDC), a more granular default
setting of 10 seconds was implemented for the STATIME_MAIN subsystem parameter. This new
default value causes data to be collected more frequently instead of asking customers to
re-create their problem. This serviceability enhancement can help to speed up problem
diagnosis by having more data readily available.
IBM Watson® Machine Learning for z/OS (WMLz) and IBM Db2 AI for z/OS (Db2ZAI) are two
of the solutions that integrate seamlessly with your Db2 for z/OS system. This chapter
introduces WMLz and Db2ZAI, highlights their key features and capabilities, and describes
the use cases or opportunities for using them in your Db2 for z/OS environment. It shows that
together with Db2 for z/OS, WMLz and Db2ZAI can transform the way you harness the power
of your data and gain actionable insights, which ultimately help you achieve greater speed to
market and higher return on investment (ROI).
WMLz consists of a required base component (the WMLz base) and an optional integrated
development environment (IDE) component (the WMLz IDE). As Figure 10-1 shows, the
WMLz base runs on z/OS and leverages IBM machine learning capabilities on IBM Z,
including IBM Open Data Analytics for z/OS (IzODA). IzODA serves as the data processing
cluster for WMLz and delivers advanced data analytics through z/OS Spark (Spark), z/OS
Anaconda (Anaconda), and Mainframe Data Service (MDS).
Spark provides a best-of-class analytics engine for large-scale data processing, and
Anaconda supplies a wide range of popular Python packages for model training, while MDS
connects those data processing engines to your enterprise data sources on IBM Z, such as
Db2 for z/OS, Information Management System (IMS), Virtual Storage Access Method
(VSAM), and System Management Facilities (SMF). In addition to Db2 being one of the
primary data sources, the WMLz base also uses Db2 for its repository service.
The optional IDE runs on Red Hat OpenShift Container Platform (RHOCP) for s390x and x86
64-bit servers. Built on Red Hat Enterprise Linux and Kubernetes, RHOCP provides a secure,
scalable, and multi-tenant operating system and a range of integrated application run times
and libraries. The IDE also leverages IBM machine learning capabilities in Watson Studio
and IBM SPSS® Modeler.
Both WMLz base and IDE feature a web user interface (UI) and various APIs. The base also
includes a configuration tool and an administration dashboard, as shown in Figure 10-2 on
page 145. Together, the interfaces provide a powerful suite of easy-to-use tools for model
development and deployment, user management, and system administration.
The various tools, run times, and libraries simplify the machine learning operations for your
key personas, including system administrators, machine learning administrators, data
scientists, and application developers. For example, system administrators can use the
administration dashboard to keep the services on the base running, and machine learning
administrators can deploy machine learning models as services to IBM Customer Information
Control System (CICS), Java, and other applications on IBM Z. The machine learning
administrators can also import models that are trained in the IDE into the base for deployment
and management. Application developers can use the deployments and services in their
applications to make real-time predictions.
Secure IBM Z platform for running machine learning with data in place
WMLz provides state-of-the-art machine learning technology behind the firewall on IBM Z, the
world's most secure computing platform, and addresses the challenge of data gravity and
large-scale data movement. Most cognitive solutions require that data be moved off the
on-premises system or private cloud, exposing the data to potential breaches and introducing
significant operational costs. IBM recognizes the imperative of data gravity and provides
solutions that bring cognitive capabilities to the data. WMLz offers the unique ability for you to
work directly with data securely in place on IBM Z. By keeping your mission-critical data at the
source, WMLz delivers in-transaction predictive analytics with real-time data and minimizes
the latency, cost, and complexity of data movement.
Figure 10-3 The WMLz base cluster configuration for high availability
Taking full advantage of the IBM Z sysplex, dynamic virtual IP address (VIPA), and Shareport
technologies, WMLz ensures HA of a stand-alone scoring service or a scoring service cluster
by running the services natively on a single logical partition (LPAR) or across multiple LPARs.
WMLz also runs its online scoring service in CICS efficiently to meet service level
agreements (SLAs) for transaction processing. Transactions that are running in CICS call the
scoring service directly through CICS LINK, which eliminates the latency of network
communication.
Real-time prediction with minimal impact to transaction processing is key to machine
learning operations on IBM Z. WMLz addresses this requirement in the architecture and
implementation of its online and batch-scoring services by leveraging IBM Z technologies. For
example, with WMLz, you can configure the online scoring service in CICS, where most data
transactions and processing happen on IBM Z. This flexibility ensures fast processing of
scoring requests to meet SLAs for transaction processing. To ensure HA, WMLz leverages
TCP/IP port sharing if the scoring service cluster is on a single LPAR, or the sysplex
distributor if the cluster spans multiple LPARs.
WMLz can help satisfy this requirement and optimize your loan decision-making. Instead of
relying exclusively on explicit rules, WMLz can use the historical data of past loans and
approvals as a base and a range of algorithms to uncover implicit patterns in a vast amount of
application data and predict the risk of default before approving a new loan.
Built on top of Watson Machine Learning for z/OS, Db2ZAI enables the Db2 for z/OS
optimizer to leverage machine learning capabilities to reduce CPU consumption and IT costs.
Db2ZAI provides three main use cases for improving your Db2 for z/OS applications: SQL
optimization, system assessment, and distributed connection control.
Db2ZAI predicts the likely behavior of your SQL (for example, host variable values and the
number of rows that are fetched by the query) and uses this information to create better
access paths.
By learning the values that are used by your host variables, Db2ZAI helps to achieve better
filter factor estimation. With better filter factor and application behavior, the optimizer can
choose better access paths. Another source of information for SQL optimization is
screen-scrolling application behavior.
How it works
The Db2ZAI system assessment feature learns from your system’s historical data to
understand the specific workload characteristics of the system.
It automatically collects the key system performance and workload metrics, and then it
provides an exception analysis that is based on the learned baseline thresholds. Then, it
generates a set of recommended actions to take (including current related zparm or buffer
pool settings) and considerations for making updates.
The DCC feature addresses many of the challenges that are faced by Db2 system
administrators in managing their inbound network traffic.
In addition, Db2 administrators sometimes lack knowledge about how to set up the Db2
mechanisms that are used to control connections. Setting connection controls too low can
cause an application outage, but setting connection controls too high can reduce their
effectiveness.
How it works
The DCC feature uses the Instrumentation Facility Component Identifier (IFCID) 365, 402,
411, and 412 data that is collected by Db2. When you initiate it, DCC trains itself by using the
IFCID 365 data to generate recommended connection profiles.
When training is complete, you review training data and the recommended Db2 profiles, and
make any necessary edits. Finally, you activate the recommended profiles to enable
protection.
When profile exceptions occur, alerts are raised in the Db2ZAI user interface and email alerts
are sent to notify you about the situation. Then, you can review the historical statistical data to
help you determine the correct course of action to resolve the exception.
For more information about WMLz and Db2ZAI, see the following resources:
WMLz product details
WMLz technical documentation
Db2ZAI product details
Db2ZAI technical documentation
Although they are available independently, the APIs are also integrated directly within Db2
Developer Extension to enable application developers to tune their code and within Db2
Administration Foundation for z/OS (Admin Foundation) to enable administrators to tune SQL
that is identified through the catalog, statement cache, and other query monitoring
integrations. Specifically, the APIs are provided for the following tasks:
Retrieve SQL queries from several sources.
Generate a visual representation of access plans for an SQL statement by using Visual
Explain to enable you to more easily analyze the access path that was chosen for queries.
Generate RUNSTATS recommendations to improve the information that is available to the
optimizer so that the cost of running queries can be evaluated more accurately based on
different access path choices.
Capture the environment in which an SQL statement runs for diagnostic purposes.
Perform administrative functions, such as managing EXPLAIN tables and jobs.
SQL Tuning Services is a component of the IBM Database Services Expansion Pack feature,
which is included with Db2 Accessories Suite for z/OS.
You can use the Visual Explain API to gain the following insights about a query:
Determine whether an index was used to access a table. If an index was not used, Visual
Explain can help you determine which columns might benefit from being indexed.
Easily examine table join sequence and methods and determine whether a sort is used.
View the effects of performing various tuning techniques by comparing the before and
after versions of the access plan graph for a query.
Obtain information about each operation in the access plan, including the total estimated
cost and the number of rows that are retrieved (cardinality).
View the statistics that were used at the time of optimization. Then, you can compare
these statistics to the current catalog statistics to help you determine whether rebinding
the package might improve performance.
IBM Db2 Developer Extension provides language support for Db2 for z/OS SQL, including
embedded statements in non-SQL file types, such as COBOL, Python, Java, and Node.js, so
that application developers can use familiar programming languages and their preferred
development environment to interface with Db2 for z/OS data.
Additionally, if you also use the Zowe Explorer extension for Visual Studio Code, you can use
the Db2 Developer Extension capabilities on SQL that exists directly on z/OS.
Writing SQL
Db2 Developer Extension offers a host of features that simplify the writing of SQL that is easy
to parse and syntactically correct:
Syntax checking and highlighting ensure that the SQL that you write is syntactically
correct. Syntax errors are highlighted and, whenever possible, you are provided with
potential fixes for the errors.
SQL formatting makes it easy to parse large blocks of code and to understand the
relationship between different blocks of SQL elements and clauses. SQL formatting is
supported in all SQL file types, including .sql, .ddl, .spsql, and many others.
Code completion enables you to type a few characters of the SQL element that you are
looking for before you are presented with a selectable list of options, while signature help
dynamically presents the parameters that are available as you code your SQL elements.
As Figure 11-1 shows, these features are available for Db2-supplied stored procedures
and built-in functions.
Customizable code snippets enable you to define commonly used blocks of code and
easily insert them into your file.
Integrated SQL support links you directly to Db2 for z/OS reference information for the
SQL element that you are coding so that you do not have to search for syntax details.
Select multiple SQL elements on different lines and run those elements as a complete
statement.
Display consolidated results from running multiple SQL statements in a single view.
Run SQL with or without parameter markers and host variables.
Run SQL that includes parameters and variables from within a native stored procedure
(.spsql file).
Customize commit and rollback behavior that is based on whether an SQL statement fails
or is successful.
Store the results of every SQL execution for historical purposes.
Sort the query history by the timestamp of the execution so that you can quickly identify
and display failing SQL statements.
Db2 Developer Extension includes the following capabilities for working with native stored
procedures:
Easily create native stored procedures by using many of the ease-of-use features that are
described earlier in this section.
Define reusable deployment options that are separate from the code itself, which makes it
easier to iteratively run your code and simplifies the ability to push your code into a source
code manager.
Set conditional hit-count breakpoints, which are helpful for debugging stored procedures
that contain loops, as shown in Figure 11-3 and in the sketch that follows this list.
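The following minimal sketch shows the kind of native stored procedure (.spsql) that you
might create and debug with these features; the names are hypothetical, and the loop is a
natural candidate for a conditional hit-count breakpoint:
CREATE PROCEDURE SC000231.SUM_TO_N
      (IN  N     INTEGER,
       OUT TOTAL INTEGER)
    LANGUAGE SQL
BEGIN
  DECLARE I INTEGER DEFAULT 1;
  SET TOTAL = 0;
  WHILE I <= N DO               -- set a hit-count breakpoint on this loop
    SET TOTAL = TOTAL + I;
    SET I = I + 1;
  END WHILE;
END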
Generate advice for collecting statistics for the data objects that are involved in an SQL
statement and update statistics profiles by using the Statistics Advisor.
Generate and download all the artifacts that are needed to re-create access path issues,
such as DDL and statistics, by using the Capture Query Environment feature.
Create, migrate, and drop EXPLAIN tables.
Configure tuning options.
Save a history of your tuning activities.
All the features and capabilities that are described in the previous sections, and more, are
documented in detail in the IBM Db2 Developer Extension documentation repository on
GitHub, along with tips and tricks for getting the most out of Db2 Developer Extension.
Admin Foundation is available as a component of IBM Unified Management Server for z/OS.
The framework of Admin Foundation is provided by the open-source Zowe Virtual Desktop,
which is a modern and intuitive alternative to traditional IBM Z interfaces. It provides z/OS
administrators and developers with a simple, open, and familiar tool platform.
You can run multiple SQL statements simultaneously from the SQL editor, and the results of
each run are displayed for easy review, as shown in Figure 11-5 on page 163.
Tuning profiles
Tuning profiles reference a particular Db2 for z/OS subsystem that is registered in Admin
Foundation and on which tuning actions are run. With Admin Foundation, you can create,
manage, view, share, and delete tuning profiles. By using these tuning profiles, you can
submit tuning jobs and then view the results and recommended actions for each submitted
job.
Figure 11-6 on page 165 shows the Create tuning profile page.
Supported sources
You can capture results from SQL statements from the following sources:
Statement cache
Input file
Packages
Plans
Stabilized SQL
By using the captured results from the various sources, you can determine which SQL
statements are running efficiently, which statements must be tuned or modified, which
statements are using too many resources, and which statements can be improved.
Figure 11-7 Filtering and ordering options for columns of SQL statements
For example, the filtering criteria that you can specify for an SQL statement display the load
on the CPU from that statement, the length of time that the statement took to run, and the
amount of resources that the statement is using, as shown in Figure 11-8. These statistics
help you identify statements that must be tuned or modified and identify weaknesses in a
system.
Note: You can also specify the first characters of the object name, its schema, and a
specific subsystem to search, but for this use case, all tables are being displayed.
2. Select the table that you want to explore to display details about that table. The details are
organized by the following tabs:
– The Overview tab provides a high-level view of several key characteristics of the table,
as shown in Figure 11-10. You can display more detailed information about the object
hierarchy, referential integrity (RI), and the structure of the table by clicking the Details
link in the respective views. You can also display these details by clicking the tabs at
the top of the dashboard.
– The Structure tab shows you all the columns in the table, their data types, length, and
other characteristics of the columns. It also shows information about indexes that are
defined for the table.
– The DDL tab displays the SQL that was used to create the table. From here, you can
run the SQL directly, or you can copy it and edit it in the SQL editor or use it as a
template to create a similar table elsewhere in your system.
2. Select the stored procedure that you want to work with to display a summary of its
characteristics and structure.
3. Click the DDL tab to generate the DDL for the stored procedure.
A typical use case involves running both Visual Explain and Statistics Advisor simultaneously,
and then switching back and forth between the output of these two jobs to evaluate and
understand any performance issues, as shown in Figure 11-17.
3. Click the Bind tab to display the BIND command for the package or plan that you selected
in an editor, as shown in Figure 11-19 on page 173. From here, you can bind, rebind, and
free packages and plans. You can also use the Diff view to review any changes that you
make before you apply them.
4. Click the SQL tab to display all the SQL statements that are bound to that package, and
then click the three vertical dots at the right side of each SQL statement to initiate tuning or
to view the query.
This process can also be initiated by selecting plans or packages from the Select source
menu in the Explore SQL dashboard.
Admin Foundation makes it easy to stabilize SQL statements and to free statements that are
already stabilized.
11.4 Summary
SQL Tuning Services, Db2 Developer Extension, and Admin Foundation represent the next
generation of tools for administering Db2 for z/OS and developing Db2 for z/OS applications.
They are designed and developed specifically to increase the efficiency and productivity of
Db2 for z/OS users of all skill levels by using robust APIs, intuitive user interfaces, and familiar
related tools. They are also designed to evolve alongside Db2 for z/OS to support the latest
Db2 13 enhancements and beyond.
When you migrate to Db2 13, you can continue to use your favorite Db2 tools. You can use
any of these tools with Db2 13 starting on day one of Db2 13 general availability (GA). More
importantly, you can use these tools to leverage new features of Db2 13.
The following IBM tools have a new version or release to support Db2 13:
Db2 Administration Tool for z/OS 13.1
Db2 Query Management Facility 13.1
IBM OMEGAMON® for Db2 Performance Expert on z/OS 5.5
For the following IBM tools, you can enable support for Db2 13 by applying a PTF to the latest
version:
Db2 Analytics Accelerator Loader for z/OS 2.1
Db2 Automation Tool for z/OS 4.3
Db2 Cloning Tool for z/OS 3.2
Db2 Log Analysis Tool for z/OS 3.5
Db2 Recovery Expert for z/OS 3.2
Db2 SQL Performance Analyzer for z/OS 5.1
Db2 Query Monitor for z/OS 3.3
Data Virtualization Manager for z/OS 1.1
Db2 Admin Tool released a new version (13.1) that includes comprehensive support for new
Db2 13 features and usability improvements. You can run Db2 Admin Tool 13.1 on both
Db2 13 subsystems and Db2 12 subsystems.
Db2 13 features are supported by the following enhancements in Db2 Admin Tool 13.1:
The ability to easily convert partition-by-growth (PBG) table spaces to partition-by-range
(PBR) table spaces by using the new Db2 13 ALTER syntax, which minimizes outages and
maximizes concurrency.
The ability to view Db2 utility history information for better insight into utility usage and
optimization, including the ability to search and delete SYSUTILITIES rows and to
navigate from SYSCOPY rows and DISPLAY UTIL output to related SYSUTILITIES rows.
Support for deleting an active log data set simply by selecting the data set from a list.
Support for specifying a package owner type (role or user), which increases the flexibility
for package ownership.
Page sampling support for inline statistics for the REORG and LOAD utilities, which has the
potential to reduce both CPU time and elapsed time.
A simple panel interface for managing Db2 profiles, including support for the profile
enhancements in Db2 13, which make profiles increasingly important in helping automate
certain aspects of monitoring and controlling a Db2 subsystem.
Support for longer column names (up to 128 characters).
Support for all changes to the real-time statistics (RTS) tables and Db2 catalog tables in
function levels 500 and 501.
Removal of the NOSYSUT1 option from the REORG INDEX utility options panel and generated
jobs to correspond to the REORG INDEX changes in Db2 13.
Db2 Admin Tool 13.1 also includes the following usability enhancements:
The ability to generate commands on a list of objects, which can save extra steps and
time.
The visibility of the catalog level and remote subsystem location (if applicable) on the main
menu.
The ability to navigate directly from a list of table spaces or indexes to the RTS tables for a
selected object.
The option to suppress the copyright statement when invoking Db2 Admin Tool, which
allows external tools to seamlessly invoke the catalog navigation feature.
Note: For examples of some of these key enhancements, see the following blog posts in
the IBM Community:
Db2 Administration Tool 13.1: Converting partition-by-growth (PBG) table spaces to
partition-by-range (PBR) table spaces online
Db2 Administration Tool 13.1: Generating commands for a list of objects
Db2 Administration Tool 13.1: Managing utility history
Db2 Administration Tool 13.1: Managing Db2 profile tables
For a list of all the enhancements in Db2 Admin Tool 13.1, see What’s new in Db2 Admin Tool
13.1.
Db2 Automation Tool 4.3 supports new features in Db2 13 by providing the following
capabilities:
You can view utility history information in an easy-to-read format on an ISPF panel, with
the ability to delete history table entries.
If you convert a table space from PBG to PBR, Db2 Automation Tool can detect the
resulting pending status and trigger a REORG operation or other appropriate action.
All new Db2 commands, parameters, new column types in RTS tables, and utility syntax
changes are supported by Db2 Automation Tool.
For more information about these enhancements and other new features in Db2 Automation
Tool, see the IBM Db2 Automation Tool for z/OS 4.3 library.
12.3 IBM Db2 Log Analysis Tool for z/OS
IBM Db2 Log Analysis Tool for z/OS helps to ensure high availability (HA) and complete
control over data integrity. It allows you to monitor data changes by automatically building
reports of changes that are made to database tables.
Db2 Log Analysis Tool 3.5 supports Db2 13 by offering the following new features:
Support for 128-character column names in Db2 tables
Improved handling of inline image copies that are created during the online conversion of
a PBG table space to a PBR table space
The ability to generate an Application-Relationships report based on intelligent analysis of
the Db2 logs
This new Db2 Log Analysis feature discovers the application-level relationship of Db2 objects
and allows you to accomplish the following tasks:
Optimize an environment to group objects and perform backup, recovery, and DevOps
tasks more efficiently.
Apply a relational database model to prevent anomalies in update, insert, and delete
operations.
Enhance transaction processing to ensure high performance, reliability, and consistency.
When you generate activity reports, a new option enables data collection on objects and units
of work for relationship discovery. Application-level relationship discovery is available in the
Utilities menu.
For more information about these enhancements and other new features in Db2 Log Analysis
Tool, see the IBM Db2 Log Analysis Tool for z/OS 3.5 library.
Db2 Recovery Expert 3.2 supports Db2 13 and introduces many usability enhancements, the
most significant of which is Recover Health Check.
Recover Health Check can determine the recoverability of a set of objects and their resources
within a certain timeframe. Then, the Intelligent Suggestions for Recovery Health component
provides suggestions for improving recovery.
When you use the Recover Health Check feature, Db2 Recovery Expert reports timeframes
when selected objects are unrecoverable and the reason why they are unrecoverable.
By using the report, you can identify any limitations of your current recovery policy. For
example:
Recover Health Check identifies that the log data set cannot be found, so the object is
reported as unrecoverable for a certain timeframe.
Recover Health Check identifies when there are two LOG NO events and no image copy
or system-level backup in between, so the object is reported as unrecoverable for a certain
timeframe.
Then, Intelligent Suggestions for Recovery Health provides suggestions for improving the
recovery run time or the tool run time. These suggestions allow you to improve your recovery
routine.
In the following scenarios, Db2 Recovery Expert offers these suggestions:
If Recover Health Check identifies a deprecated table space within the selected objects,
Intelligent Suggestions for Recovery Health suggests converting the table space.
If Recover Health Check identifies an outdated SYSCOPY record, Intelligent Suggestions
for Recovery Health recommends running MODIFY RECOVERY.
For more information about these enhancements and other new features in Db2 Recovery
Expert, see the IBM Db2 Recovery Expert for z/OS 3.2 library.
Db2 Query Monitor for z/OS 3.3.0 supports Db2 13 and introduces many other
enhancements:
The ability to perform exception and alert reporting by using adaptive thresholds
Improved support for application development by enabling positive SQLCODEs to be
specified in monitoring profiles
Improved performance and serviceability:
– Reduced CPU consumption by enabling detailed object collection by workload.
– Smarter reorganization recommendations, which offer the ability to leverage SQL
performance data that is collected about application objects to determine whether a
reorganization is necessary.
– Loop detection in applications running on a thread or Db2 subsystem.
– Updated security protocols.
Improved usability:
– Quicker navigation to problematic queries.
– Reorg Recommendation Monitor, which shows long name and offload column names
for threshold metrics.
– New OPTKEY QUERYNO added for static SQL.
For more information about these enhancements and other new features in Db2 Query
Monitor, see the IBM Db2 Query Monitor for z/OS 3.3 library.
QMF 13.1 is a new release that offers hybrid cloud deployment, support for cloud-native data
sources, improved task scheduling capability, and ease of use for configuration management.
Key enhancements include the following ones:
Simplification of workstation configuration rollout
Improved user awareness of QMF features with context-specific tips
Hybrid cloud deployment on IBM Cloud® and Microsoft Azure
Support for the cloud-native data sources Athena and Redshift
Support for MongoDB
Improved scheduling capability with time zone selection
QMF for WebSphere deployment on IBM z/OS Container Extensions (IBM zCX)
Enhanced Global Variable support for the QMF IBM Z client
Enhanced QMF object organization with QMF for Time Sharing Option (TSO)
For more information about these enhancements and other new features in QMF 13.1, see
the IBM Db2 Query Management Facility 13.1 library.
With the application of APAR PH32374, Db2 Sort leverages IBM z15 Integrated Accelerator
for Z Sort while running Db2 Utilities. Db2 LOAD, RUNSTATS, REBUILD INDEX, and REORG
TABLESPACE can use the on-chip sort acceleration facilities with Db2 Sort, which can help
reduce CPU usage and elapsed time for sorting various workloads.
This performance enhancement can greatly improve the throughput of your IBM Db2 Utilities
without requiring any extra memory, and it can be used by all the Db2 Utilities that invoke the
sort operation, not just REORG TABLESPACE. As a result, Db2 Sort has a larger range of IBM Z
Sort candidates for Db2 Utilities than DFSORT. In addition to the performance advantages,
Db2 Sort uses resources more intelligently.
Developers can readily combine IBM Z data with other enterprise data sources to gain
real-time insight, accelerate deployment of traditional mainframe and new web and mobile
applications, modernize the enterprise, and take advantage of the API economy. DVM also
supports data access and movement.
Its ability to provision virtual tables and virtual views on and over Db2 and non-Db2 data
sources, both on and off the IBM Z platform, makes DVM an important piece of the Db2 for
z/OS data ecosystem.
The DVM server runs on IBM-supported hardware ranging from the IBM z196 to the latest
IBM Z models running IBM z/OS 1.13 or later. The technology supports traditional databases,
such as Db2 for z/OS, Information Management System (IMS), Integrated Database
Management System (IDMS), and ADABAS, and traditional mainframe file systems, such as
sequential files, ZFS, Virtual Storage Access Method (VSAM) files, log streams, and System
Management Facilities (SMF).
DVM reduces overall mainframe processing usage and costs by redirecting processing that is
otherwise meant for general-purpose processors (GPPs) to zIIPs. DVM provides
comprehensive, consumable data that can be readily accessible by any application or
business intelligence tools to address consumer demand and stay ahead of the competition.
12.8.2 Db2 for z/OS usage
DVM provides unique benefits to organizations that center on Db2 for z/OS as their primary
management system for business-critical data. DVM leverages Db2 native stored
procedures and eliminates the requirement to write separate programs to access non-Db2
data sources by using SQL. DVM can query non-Db2 data sources and simplifies the use of
batch COBOL programs and EXEC SQL to access mainframe data sources that are
provisioned by DVM.
The DRDA method allows a higher percentage of the Db2 workload to run in Service Request
Block (SRB) mode and to be offloaded to a zIIP specialty engine. Running workloads in SRB
mode lowers the total cost of ownership when compared to RRSAF by reducing the
dependency on z/OS GPP.
Figure 12-1 Db2 Direct provides direct access without using Db2 resources
By using user-defined table functions (UDTFs), both local and distributed DVM clients can
use Db2 common interfaces through SQL and through external and sourced functions. DVM
is enhanced to create UDTFs on virtualized data that is defined to the metadata catalog.
When a UDTF is defined on a virtual object, it can be accessed by using Db2 SQL and
supported attachment facilities.
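For example, after a UDTF is created on a virtual object, a Db2 application can query it with
standard table-function syntax (a sketch; the schema, function, and column names are
hypothetical):
SELECT T.*
FROM TABLE(DVMUDTF.CUSTOMER_VSAM()) AS T
WHERE T.ACCOUNT_STATUS = 'ACTIVE';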
Support for this feature is based on the usage of three-part names within Db2 stored
procedures and applications, Db2 RESTful services, QMF for TSO, and the SQL Processor
Using File Input (SPUFI) facility within the Db2 Interactive (Db2I) primary option menu of
ISPF.
Some organizations choose to create a single Db2 table for each data set with an identifier
column for use as a search key, and a long varchar column in which to insert the complete
sequential or VSAM record as is.
DVM can transform the individual data fields within a single Db2 column into a single virtual
table by creating a COBOL metadata layout that maps and links a relational structure to the
fields in the Db2 multi-field column. SQL queries can then be run against the individual data
fields in the Db2 column by using the virtual table as though it were a fully normalized
multi-column table.
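For example, if a Db2 table stores each record in a single VARCHAR column, DVM can
expose a virtual table over that column so that the mapped fields can be queried directly
(a sketch; the table and column names are hypothetical):
SELECT CUST_ID, CUST_NAME, CUST_BALANCE
FROM DVMSQL.CUST_RAW_VT
WHERE CUST_BALANCE > 1000;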
Virtualized access from IBM Data Fabric and IBM Cloud Pak for Data
IBM Cloud Pak for Data provides a data virtualization service with integration to DVM. This
service facilitates connectivity and access to data sources that are configured by DVM and
supports business rules and policies that can be applied at an exposure point within
IBM Cloud Pak for Data for downstream traditional and modern mainframe applications.
OMEGAMON for Db2 PE introduces support for the following Db2 13 features:
Db2 13 introduces the SQL Data Insights feature, which provides three key built-in
cognitive functions (BIFs) that allow for injecting deep learning AI capability within Db2
SQL. From a monitoring and reporting perspective, SQL Data Insights exposes elapsed
and CPU timers that are included in Class 2 elapsed and CPU times. These timers represent
only the portions that are spent running the built-in scalar functions. With OMPE, you can
monitor this new accounting information from both real-time monitoring interfaces and
from the best-in-breed batch reporting (ACCOUNTING).
BATCH reporting support
Extensions to the batch ACCOUNTING and RECTRACE report sets were made to track the
respective SQL Data Insights metrics.
Using the SYSIN DD statement that is shown in Example 12-1 in the batch reporting job
creates an ACCOUNTING REPORT and a RECTRACE report. ACCOUNTING TRACE
support is also available (not shown).
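A minimal SYSIN of this kind might contain statements of the following general form (a
sketch, assuming OMPE defaults; Example 12-1 shows the actual statements):
ACCOUNTING REPORT LAYOUT(LONG)
RECTRACE TRACE
EXEC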
Example 12-2 shows the SQL Data Insights metrics in the ACCOUNTING report (only
CLASS 1 / 2 times block are shown here).
Example 12-2 Sample SQL Data Insights metrics in the ACCOUNTING report
AVERAGE APPL(CL.1) Db2 (CL.2)
----------- ---------- ----------
ELAPSED TIME 1.340014 1.337055
NONNESTED 1.340014 1.337055
STORED PROC 0.000000 0.000000
UDF 0.000000 0.000000
TRIGGER 0.000000 0.000000
CP CPU TIME 2.633505 2.632807
AGENT 0.698717 0.698019
NONNESTED 0.698717 0.698019
STORED PRC 0.000000 0.000000
UDF 0.000000 0.000000
TRIGGER 0.000000 0.000000
SQL DI N/A 0.000000
PAR.TASKS 1.934789 1.934789
SQL DI N/A 1.648177
SE CPU TIME 1.920307 1.920295
NONNESTED 0.000000 0.000000
STORED PROC 0.000000 0.000000
UDF 0.000000 0.000000
TRIGGER 0.000000 0.000000
SQL DI N/A 0.000000
PAR.TASKS 1.920307 1.920295
SQL DI N/A 1.569330
SUSPEND TIME 0.000000 0.667829
AGENT N/A 0.638205
PAR.TASKS N/A 0.029625
STORED PROC 0.000000 N/A
UDF 0.000000 N/A
NOT ACCOUNT. N/A N/C
Db2 ENT/EXIT N/A 46.00
EN/EX-STPROC N/A 0.00
EN/EX-UDF N/A 0.00
EN/EX-SQL DI N/A 1371968.00
The report reflects an aggregation of the metrics by the respective OMPE identifiers, such as
PLANNAME or AUTHID. Special attention can be given to the parallel task processing metric
for the Specialty Engine (SE), which is computed by the reporting function. The fields show
the CPU and elapsed time that were spent running the AI built-in functions and the number of
invocations of AI functions.
In addition to the BATCH reporting function, the Performance Database is extended to include
the new SQL Data Insights metrics. You can process these metrics by using the Spreadsheet
Utility to generate CSV files from SMF or GTF data.
For the RECTRACE report, OMPE shows the details, as shown in Example 12-3.
These metrics show the amount of CPU or SE time and SQL Data Insights Events that occur
when the respective thread detail snapshot is taken. Continuing to refresh the details panel
shows whether activity that is triggered by SQL Data Insights is performed by the respective
thread.
Figure 12-3 shows how these metrics are displayed in the E3270 Thread Detail Accounting
workspace (note the highlighted fields).
Besides the real-time zoom-in capability from Thread Summary, the Thread History in E3270
is extended to include the SQL Data Insights metrics.
Support for the PE Client is similar. Note the Data Insight labels in Figure 12-4.
The Db2 instrumentation changes are in IFCID 230 (Group Buffer Pool Attributes) and IFCID
254 (Coupling Facility Cache Structure Statistics).
Real-time monitoring support
Support for the data area and directory entry residency times is provided in both the
Performance Expert Client interface and the E3270 host interface.
For the Performance Expert Client, support is added to the GBP details section (IFCID 254) in
the Statistics Details pages (see the Data area Residency time and Directory entry Residency
time fields), as shown in Figure 12-5.
Figure 12-5 Example Global GBP details in the Performance Expert Client
Furthermore, the GBP attributes changes are visible when you drill down into the Group
Buffer Pools section of the System Parameter page, as shown in Figure 12-6.
From an E3270 perspective, you can monitor the data from the System Statistics main
workspace by navigating to the Buffer Pool workspace and then selecting the Group Buffer
Pool tab. Zoom in to the respective GBPs and select the Performance Counters subtab to
display the residency times, as shown in Figure 12-7 on page 191.
For the GBP attributes display, also start from the System Statistics main workspace by
navigating to the Buffer Pool workspace and then selecting the Global Buffer Pool tab. Zoom
into the respective Global Buffer Pool to display the residency times, as shown in Figure 12-8.
12.9.3 Monitoring SET CURRENT LOCK TIMEOUT related Db2 changes
OMPE supports several instrumentation changes that were introduced with the Db2 13 SET
CURRENT LOCK TIMEOUT feature. The changes are in data section QXST (IFCIDs 2, 3, and
148) and in the Lock timeout and Deadlock IFCIDs (172 and 196).
In addition to these changes, a new IFCID 437 is written when a SET CURRENT LOCK TIMEOUT
statement is issued from an application or a profile.
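For reference, the following minimal sketch shows how an application might set the special
register (the 30-second value is illustrative):
-- Wait up to 30 seconds for a lock before timing out
SET CURRENT LOCK TIMEOUT = 30;
-- Fail immediately if a requested lock cannot be obtained
SET CURRENT LOCK TIMEOUT NOT WAIT;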
E3270 workspace changes for System Statistics
These changes are available from both the single Db2 subsystem and the Db2Plex view. Go
to the System Statistics main workspace and then select the SQL Counts tab (SQLC for
IBM Db2 Plex® view, SQL for single Db2 view). On the SQL Counts workspace, select the
DCL tab to view the new metrics, which are highlighted in Figure 12-9 and Figure 12-10.
The following workflow uses the E3270 interface to illustrate the enhancements, but the
Performance Expert Client changes are identical in nature.
To access and collect Db2 Events, the following tasks must be done:
Db2 Event trace collection must be turned on.
Timeout and deadlock data must be collected.
On the main single Db2 panel, navigate to the Events option (option E) and select a
timeframe and event filter, as shown in Figure 12-12.
Click OK. The deadlock and timeout events are shown in the Events Summary workspace
(Figure 12-13).
By zooming into the list of events for deadlock and timeout, you can see more metrics that
support the enhanced timeout management function that was introduced by Db2 13:
Timeout Detail
Deadlock Details
Timeout Detail
The Interval Source field indicates what setting triggered the timeout condition (IRLM, special
register, or subsystem parameter), as shown in Figure 12-14 on page 197.
Zoom into the Blocker Details to display the Holder Timeout Interval and the Holder Interval
Source fields (Figure 12-15).
Deadlock Details
Select the details of a deadlock event. The workspace that is shown in Figure 12-16 opens.
Drill down into the Resource details and locate the Waiter Source of the Worth value, which
can be Global Variable or Other (see Figure 12-17). For Global Variable, Db2 uses the
DEADLOCK_RESOLUTION_PRIORITY global variable to specify a deadlock resolution
priority value that is used in resolving deadlocks with other threads.
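For example, an application can influence its own deadlock survivability by setting the built-in
global variable before it runs its unit of work; a minimal sketch (the value 200 is illustrative;
higher values make the thread more likely to survive a deadlock):
SET SYSIBMADM.DEADLOCK_RESOLUTION_PRIORITY = 200;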
12.10 Summary
Although Db2 13 itself provides a wealth of critical new features in areas such as
performance, usability, availability, security, simplification, and application management,
access to tools that seamlessly support these new features is important. The day-one support
that is provided by the Db2 Tools that are described in this chapter helps ensure that you and
your organization can take advantage of what this latest Db2 version offers.
Chapter 13. Db2 Analytics Accelerator for z/OS
This chapter provides an overview of key functions and the table types that are involved. It describes the
key function enhancements to Db2 Analytics Accelerator for z/OS 7.5 and the subsequent
maintenance levels.
Db2 Analytics Accelerator for z/OS 7 originally offered the following deployment options:
Accelerator on IBM Integrated Analytics System (IAS), which consists of a pre-configured
hardware and software appliance for easy deployment, management, and high
performance.
Accelerator on IBM Z, which can be deployed in the following ways:
– Within an IBM Secure Service Container (SSC) logical partition (LPAR) within IBM Z by
using Integrated Facility for Linux (IFL) processors without needing to attach a separate
appliance.
– Within an IBM SSC LPAR, but leveraging IFL processors on IBM LinuxONE, an
enterprise-grade Linux server.
The IAS appliance was withdrawn from marketing, so the preferred deployment option for new
installations is IBM Z or IBM LinuxONE within an SSC LPAR.
The Db2 Analytics Accelerator on IBM Z provides a hyper-protected runtime environment with
IBM Z levels of service. The deep integration with the IBM Z environment leverages system
management services that integrate into existing IBM Parallel Sysplex and IBM Globally
Dispersed Parallel Sysplex (IBM GDPS) based high availability and disaster recovery (HADR)
implementations. This architecture provides resource sharing and the flexibility to adjust
resources to support growth and changing requirements.
For more information about these features, see Db2 Analytics Accelerator for z/OS 7.5.
The tables on the accelerator are core to the key capabilities. Db2 Analytics Accelerator uses
several distinct types of tables. Understanding the role that each table type plays is critical to
getting the most out of your Db2 Analytics Accelerator environment.
Figure 13-1 shows the characteristics of these table types. Details about each type are
explained in the following sections.
After the accelerator-shadow table is defined on the accelerator, you can load data into the
table by copying data from the original Db2 table. Alternatively, you can use IBM Integrated
Synchronization, which is a replication method that is built into Db2 Analytics Accelerator, to
replicate changed data continually from the Db2 table to the accelerator-shadow table in
near-real time with low latency.
You can also use the Db2 Analytics Accelerator Loader to load external data, for example,
from VSAM files, into both Db2 for z/OS and the accelerator concurrently. Then, the same data
exists in the Db2 table and the accelerator-shadow table. With the HALOAD utility, you can load
Db2 data or external data into multiple accelerators concurrently while reducing CPU
consumption and ensuring data consistency.
Read-only queries are routed to the accelerator to run on accelerator-shadow tables. Data
Manipulation Language (DML) statements cannot be run on accelerator-shadow tables. If you
use INSERT from SELECT statements, the SELECT part can be routed to the accelerator and can
run on accelerator-shadow tables if the QUERY_ACCEL_OPTIONS subsystem parameter includes option 2. Otherwise,
the SELECT part is run in Db2. The INSERT part is always run in Db2.
When the data is archived on the accelerator, the original Db2 partitions are emptied to save
storage space on IBM Z. The table spaces of Db2 table partitions that are associated with an
accelerator-archived table are set to a persistent read-only state so that the original Db2 table
can no longer be changed. An image copy of the data is also created as part of the archive
process to enable restoring the data in Db2. Archiving to multiple accelerators is supported.
To create an AOT, issue the CREATE TABLE statement in Db2 with the IN ACCELERATOR <name>
clause, where <name> determines the name of the accelerator.
To drop an AOT, issue the DROP TABLE statement. To create or drop an AOT, the accelerator
must be started, but the QUERY ACCELERATION setting can have any value.
The QUERY ACCELERATION setting must be set to ENABLE, ELIGIBLE, or ALL to enable execution
of queries or DML statements on AOTs.
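The following minimal sketch shows this flow, assuming an accelerator that is named ACCEL1
and a hypothetical AOT named MYAOT.SALES:
-- Create the AOT (the QUERY ACCELERATION setting can have any value here)
CREATE TABLE MYAOT.SALES (ORDER_ID INTEGER, AMOUNT DECIMAL(11,2))
IN ACCELERATOR ACCEL1;
-- Enable acceleration before querying or changing the AOT
SET CURRENT QUERY ACCELERATION ENABLE;
SELECT COUNT(*) FROM MYAOT.SALES;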
However, a query that references an AOT cannot also reference (join) a non-accelerator Db2 table. If a query
that references an AOT is not eligible for query acceleration, or if the QUERY ACCELERATION
setting is not set correctly, an SQL error -4742 is issued.
You can use DML statements to change the contents of an AOT. A DML statement, for
example, an INSERT from a SELECT statement, can reference other tables on the accelerator:
either accelerator-shadow tables, accelerator-archived tables, or other AOTs. However, this
DML statement cannot reference a non-accelerator Db2 table.
You also can use the Db2 Analytics Accelerator Loader to load external data, for example,
from a VSAM file, into an AOT.
AOTs are different from temporary tables. For example, the Db2 catalog does not store a
description of a declared temporary table. Therefore, the description and the instance of the
table are not persistent. Multiple application processes can refer to the same declared
temporary table by name, but they do not share a description or instance of the table.
About query acceleration: Db2 for z/OS considers routing a query to the accelerator if
the QUERY ACCELERATION setting is set to any value other than NONE. In addition to the QUERY
ACCELERATION setting, Db2 for z/OS also considers heuristics such as table size and
estimated cost. Although most Db2 for z/OS queries are eligible for query acceleration, the
following limitations cause a query to be ineligible for acceleration:
A query that uses columns with unsupported data types such as XML or large object
(LOB).
A query that uses a user-defined function (UDF) other than inline SQL scalar UDF or
compiled SQL scalar UDF.
A query that joins tables with different encoding schemes.
A query that uses an unsupported built-in function, for example, ACOS, ASIN, CLOB, or
BLOB.
A query that is not read-only.
The accelerator’s database engine (IBM Db2 Warehouse) offers a range of built-in analytic
functions that are not supported natively by Db2 for z/OS, including aggregate functions,
online analytical processing (OLAP) functions, and scalar functions.
A subset of these types of functions can be used in SQL queries that are routed to the
accelerator with built-in-function pass-through only expression support. Db2 for z/OS is
“aware” of the accelerator when it parses the SELECT statement. If the SELECT statement
references a built-in function that is available only on the accelerator, the Db2 for z/OS parser
validates the signature and allows its invocation within the rewritten SQL.
This capability enhances the analytic SQL capabilities of Db2 for z/OS with an accelerator
that is connected by allowing you to run more analytical workloads on your Db2 for z/OS data.
For example:
You can use regression functions for statistical analysis of the relationship among
variables.
You can use OLAP functions to return ranking, row numbering, and existing aggregate
function information as a scalar value in a query result.
Db2 12 FL504 introduced pass-through support for the following built-in functions:
OLAP and aggregate functions: CUME_DIST, FIRST_VALUE, LAG, LAST_VALUE, LEAD,
NTH_VALUE, NTILE, PERCENT_RANK, and RATIO_TO_REPORT
Scalar functions: REGEXP_COUNT, REGEXP_INSTR, REGEXP_LIKE, REGEXP_REPLACE, and
REGEXP_SUBSTR
Db2 12 FL507 introduced pass-through support for the following built-in functions:
Aggregate functions (all regression functions): REGR_SLOPE, REGR_INTERCEPT, REGR_ICPT,
REGR_R2, REGR_COUNT, REGR_AVGX, REGR_AVGY, REGR_SXX, REGR_SYY, and REGR_SXY
Scalar functions: ADD_DAYS, DAYS_BETWEEN, NEXT_MONTH, BTRIM, and ROUND_TIMESTAMP (if
invoked with a date expression)
Queries with pagination, such as queries that include an OFFSET N ROWS clause, are eligible
to run on the accelerator. For example:
SELECT * FROM T1
ORDER BY C1
OFFSET 10 ROWS;
Routing support for a query that references a column with a column mask is provided with
Db2 12 for z/OS APAR PH33061 or Db2 13 and Db2 Analytics Accelerator for z/OS 7.5.6.
Tables that have masked columns are now fully added, loaded, or replicated to the
accelerator.
This support requires Db2 Analytics Accelerator for z/OS 7.5.6. Before that level, the masked
columns were omitted and only the non-masked columns were added, loaded, or replicated
to the accelerator.
Queries that reference the masked columns are routed to the accelerator if the masked
columns are not part of the outermost result SELECT list that is being returned to the
application. In this case, the column mask does not need to be applied.
In the following example, two tables are defined (T1 and T2). Both tables have column C2
defined as a masked column, and columns C1 and C3 as non-masked columns.
TABLE T1(C1, C2, C3), with a column mask on C2
TABLE T2(C1, C2, C3), with a column mask on C2
The following query can be routed to the accelerator because the masked column C2 is used
only to join the tables:
SELECT A.C1, B.C3
FROM T1 A,
T2 B
WHERE A.C2 = B.C2;
The following query cannot be routed to the accelerator because the masked column C2 is
returned as part of the outermost result set:
SELECT A.C1, B.C2
FROM T1 A,
T2 B
WHERE A.C2 = B.C2;
The query fails with SQLCODE -4742 (reason code 15) if CURRENT QUERY ACCELERATION ALL is set;
otherwise, it runs in Db2 for z/OS.
The dynamic plan stability feature can be useful if you use CURRENT QUERY ACCELERATION ENABLE
but want to ensure that specific queries are always routed to the accelerator. ENABLE means that the Db2
optimizer considers cost estimates and other heuristics to decide whether to route a query to
the accelerator. For example, cost estimates might change if the table statistics are not up to
date. As a result, the Db2 optimizer could select a different access path at preparation time,
which causes the query to run in Db2 for z/OS instead of on the accelerator.
Conversely, you can use the dynamic plan stability feature to stabilize the decision by the Db2
for z/OS optimizer to not route a query to the accelerator, but always run it on Db2 for z/OS.
This feature protects the accelerator against unexpected Db2 for z/OS cost estimation
changes (due to upgrading Db2 for z/OS maintenance), which might cause queries that had
been run on Db2 before to be routed to the accelerator instead.
The following steps summarize the process of implementing the “never accelerate” behavior
by using the dynamic plan stability feature:
1. Monitor the dynamic statement cache by using EXPLAIN and other tools.
2. Identify queries that have the wanted access paths and acceleration behaviors (for
example, ACCELERATED = NEVER) and that must remain stable across Db2 for z/OS PTF
maintenance.
3. Stabilize identified queries into SYSDYNQRY while they are in the dynamic statement
cache.
4. Apply Db2 for z/OS PTF maintenance as usual.
After Db2 for z/OS is restarted, stabilized queries are loaded from SYSDYNQRY into the
dynamic statement cache and are run from there. Loading from SYSDYNQRY does not result
in a full prepare, so access paths and acceleration behaviors are preserved even if Db2 for
z/OS PTFs change the access paths for these queries.
Before Db2 12 FL509, AOTs did not support a HA setup with an accelerator group that
consisted of more than one accelerator. For example, if multiple accelerators were connected
to the same Db2 subsystem for HA or workload balancing purposes, an AOT could be
created on one accelerator only. This restriction meant that an AOT did not take part in HA or
workload balancing scenarios. Db2 for z/OS always routed queries on an AOT to the same
accelerator. If this accelerator was down for any reason, then the AOT could not be accessed
unless a backup was taken and restored on an available accelerator.
Db2 12 FL509 APAR PH30574 and Db2 13 removed this restriction. You can now create
AOTs on multiple accelerators that are connected to the same Db2 subsystem to support HA
and workload balancing.
To use multiple AOTs with the same name, you define an accelerator group that consists of
one or more accelerators, and then you create the AOT in the accelerator group.
Usage notes:
INSERT, UPDATE, and DELETE operations on the AOTs must be run on a single accelerator
of the group by using the CURRENT ACCELERATOR special register (and then repeated on
the others).
Queries that reference the AOTs are routed to one of the accelerators within the group
according to the existing workload balancing algorithm.
Db2 Analytics Accelerator Loader supports High Availability Load (HALOAD) into all
AOTs within the accelerator group.
To create an accelerator group (ACCLGRP) that contains two accelerators (ACCEL1 and
ACCEL2), issue the following example INSERT statement against the SYSIBM.LOCATIONS table:
INSERT INTO SYSIBM.LOCATIONS (LOCATION, LINKNAME, DBALIAS)
VALUES ('ACCLGRP', 'DSNACCELERATORALIAS', 'ACCEL1 ACCEL2')
To create an AOT in both accelerators ACCEL1 and ACCEL2, specify the accelerator group
(ACCLGRP) in the IN ACCELERATOR clause, for example:
CREATE TABLE MYAOT.T1 (<columnspec>) IN ACCELERATOR ACCLGRP
Queries that reference MYAOT.T1 are routed to an accelerator that is defined in ACCLGRP
according to existing workload balancing algorithms.
Figure 13-2 Db2 z/OS catalog entries for an AOT in multiple accelerators
The AOT with name T1 is created on the accelerators ACCEL1 and ACCEL2. This
information persists in tables SYSACCEL.SYSACCELERATEDTABLES and SYSIBM.SYSTABLES.
ACCEL1 and ACCEL2 are defined in accelerator group ACCLGRP. This information persists
in the SYSIBM.LOCATIONS table.
Alternatively, you can use the Db2 Analytics Accelerator Loader to load data into AOTs. By
using its HALOAD function, you can load data into the AOTs on multiple accelerators in
parallel.
Distribution keys are now available for AOTs. Distribution keys can help to improve query
performance for queries that reference AOTs. This feature requires Db2 12 for z/OS APAR
PH30659, or Db2 13.
To create an AOT with a distribution key from IBM Db2 Data Studio by using a CREATE TABLE
statement, you need Data Studio 4.1.4 with the Data Studio APAR IT39577 applied.
Otherwise, Data Studio ignores the distribution key setting within the SQL comment.
For example, after you run the following statements from a Data Studio SQL Editor, the AOT
and its distribution key are displayed in the accelerator tables view within Data Studio, as
shown in Figure 13-3.
SET CURRENT QUERY ACCELERATION ENABLE;
--#SET SQLFORMAT SQLCOMNT
CREATE TABLE AOTWITHKEY (COL01 CHAR(8), COL02 CHAR(8), COL03 INTEGER) IN
ACCELERATOR ACCEL1
-- <ACCOPTS> <distributionkey> (COL01) </distributionkey>
</ACCOPTS>;
Figure 13-3 Accelerator tables view in Data Studio displaying an AOT with distribution key
The following list summarizes the steps to request statistic collection as part of a batch
process:
1. Insert rows into AOT A.
2. Collect statistics on AOT A by calling the ACCEL_COLLECT_TABLE_STATISTICS stored
procedure.
3. Run a second INSERT statement that selects rows in AOT A and inserts these rows into
AOT B.
The previous technology for replicating data between Db2 for z/OS and Db2 Analytics
Accelerator was IBM Change Data Capture (CDC). CDC was a separate product that
required individual installation and setup. CDC is deprecated and is no longer used with
accelerator deployments.
The following sections describe the features, enhancements, and capabilities of Integrated
Synchronization in the context of Db2 for z/OS, which are added to Integrated
Synchronization with Db2 12 APARs. However, they are all natively available with Db2 13.
For more information about Integrated Synchronization, see IBM Integrated Synchronization:
Incremental Updates Unleashed, REDP-5616. Although this Redpaper refers to multiple Db2
for z/OS prerequisites, such as FL500 and certain PTFs, all these prerequisites are part of
Db2 13 natively, so no additional PTFs are required.
With this enhancement, schema changes are supported in the sense that you do not need to
remove, redefine, re-enable replication for, and reload an accelerator-shadow table after a
schema change to an original Db2 for z/OS table. The schema change is detected by the
IBM Integrated Synchronization function and, if necessary, a modification of the target
accelerator-shadow table is initiated so that incremental updates can continue.
Add/Alter column operations on Db2 for z/OS tables that are enabled for replication are
automatically synchronized to the accelerator. You no longer need to remove, re-add, and
reload a table to the accelerator to make the schema change available.
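For example, a schema change such as the following one (the table and column names are
hypothetical) is propagated to the accelerator-shadow table automatically:
ALTER TABLE MYSCHEMA.ORDERS ADD COLUMN ORDER_NOTE VARCHAR(100);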
Schema change (Add/Alter column) support for non-replicated tables was initially introduced
with Db2 Analytics Accelerator 7.5.7 as a technology preview.
With this enhancement, Add/Alter column operations on Db2 for z/OS tables that are not
enabled for replication are now automatically synchronized to the accelerator if Integrated
Synchronization is enabled for the Db2 system. You no longer need to remove, re-add, and
reload a table to the accelerator to make the schema change available.
The supported schema changes for non-replicated tables are the same as for replicated
tables.
Regarding the advisory REORG pending (AREO) state and Integrated Synchronization:
If replication is enabled for a table and the table is set to the AREO state, replication
continues.
If replication is disabled for a table and the table is set to the AREO state, perform the
REORG first and then enable it for replication.
Certain cases are not supported by online schema change, such as with ALTER
MAXPARTITIONS.
For example, if you have a simple table space with replication enabled for its table or tables
and you issue ALTER TABLESPACE with MAXPARTITIONS for converting to universal table space
(UTS), replication continues and the table space goes into the AREO state. However, after a
reorganization, replication is disabled and requires a full reload again.
Because in this scenario you changed the table from a non-partitioned to a partitioned table
space, you must reload the table to the accelerator.
With ALTER TABLE ROTATE PARTITION support, rotate operations on a partitioned Db2 for z/OS
table no longer require a reload of the corresponding replication-enabled accelerator
shadow-table.
The table changes that are run as part of the ALTER TABLE ROTATE PARTITION operation are
replicated to the accelerator by Integrated Synchronization processing.
The ALTER ROTATE support was initially introduced in Db2 Analytics Accelerator 7.5.8.
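The following minimal sketch shows such a rotate operation, assuming a hypothetical
date-partitioned table:
ALTER TABLE MYSCHEMA.SALES_HIST
ROTATE PARTITION FIRST TO LAST
ENDING AT ('2023-12-31') RESET;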
Dropping a Db2 for z/OS table triggers a run of the SYSPROC.ACCEL_REMOVE_TABLE stored
procedure, which removes the accelerator-shadow table from the connected accelerators.
You are notified of a removal by a corresponding DSNX881I message in the z/OS syslog or
by a similar message in the event viewer of your chosen administration client (Db2 Analytics
Accelerator Studio or IBM Data Server Manager).
The DROP TABLE support was initially introduced in Db2 Analytics Accelerator 7.5.5.
Figure 13-4 on page 217 shows how the WAITFORDATA feature works together with Integrated
Synchronization.
Row-based Db2 for z/OS table changes are transformed into Db2 Warehouse columnar
tables through log-based replication. This step is the Integrated Synchronization processing.
Transactional WRITEs and READs are run in Db2 for z/OS. But if a data-intensive complex
READ query comes in and the Db2 optimizer routes this query to the accelerator, the
accelerator checks to determine whether the most recent committed data is needed based on
the WAITFORDATA setting.
If you do not care about the most recent committed data, then the query runs on the
accelerator without further processing. But if the most recent committed data is needed, the
accelerator checks whether the most recent committed data already was applied from the
staging area into the columnar tables. In many cases, it already is applied, in which case the
query runs.
If the data is not applied for the tables that are involved in the query, the accelerator initiates
an apply (writes the data from the staging area into the tables) and waits for a period for this
process to happen. When the apply completes, the query runs.
The challenge that is associated with achieving true HTAP is waiting for this period, which
ultimately depends on how fast the data can be replicated and applied to the columnar tables.
Integrated Synchronization, which is lightweight, integrated, and fast, drastically reduces this
period. Coupled with the WAITFORDATA feature, data redundancy and latency are transparent
to the application, and queries on the accelerator run consistently on the most recent
committed data. This process is how “true HTAP” processing works.
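For reference, a session can request this behavior through the WAITFORDATA extension of the
special register; a minimal sketch (the 5-second maximum wait is illustrative):
SET CURRENT QUERY ACCELERATION WAITFORDATA = 5.0;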
If data is deleted from a synchronized Db2 for z/OS table or table partition by running REORG
DISCARD or LOAD REPLACE DD DUMMY, the effect on the related accelerator-shadow table (either
synchronization or table suspension) varies depending on the type of the Db2 for z/OS table:
Synchronization: Only partition-by-range (PBR) or classic partitioned table spaces are
supported. The deleted rows are also deleted from the related accelerator-shadow table,
and replication remains enabled. A reload is not required.
Table suspension: If a REORG DISCARD or LOAD REPLACE DUMMY is run on a
partition-by-growth (PBG) table space, the table is suspended from Integrated
Synchronization, query acceleration is disabled, and the table goes into an error state. This
behavior is a safety measure. A PBG table does not have fixed partition boundaries;
therefore, a REORG DISCARD or LOAD REPLACE DUMMY operation on partitions of tables in this
state does not make sense because the user does not know which rows will be affected.
Most likely, not all rows of a partition or table will be deleted by this operation.
Partial discards are not supported by REORG DISCARD. Tables with partially emptied partitions
will be set to the “Replication Error” status. Tables and partitions in this state must be
reloaded. Starting with Db2 Analytics Accelerator 7.5.2 or later, query acceleration is disabled
for tables in the Replication Error state.
Most users are familiar with managing and administering Db2 Analytics Accelerator by using
products such as the IBM Data Studio thick client or IBM Data Server Manager. Chapter 11,
“Development and administration tools” on page 155 introduced the on-going work in the Db2
user experience transformation space. With Db2 Analytics Accelerator a key part of this
ecosystem, the management and administration experience is being reimagined.
The goal is to extend the Db2 for z/OS transformation model to API-enable the business logic
layer, which enables communication with the Db2 Analytics Accelerator server and decouples
it from the user interface implementation. This approach allows for the flexibility of consuming
the APIs to perform end-to-end workflows by using a user interface or to build workflows
directly by stitching together the APIs into a CI/CD pipeline as needed.
The goal of Db2 Analytics Accelerator Administration Services is to API-enable unit of work
operations with a fail-proof implementation. For example, to report the accurate status of an
accelerator, you can call the REST API to fetch the status. The API checks the connectivity
status and fetches the status of various accelerator components by calling multiple stored
procedures and Db2 commands, and compiling them into a standard JSON output for easy
consumption.
API-enabling the functional capabilities provides more flexibility because the APIs can now be
invoked from other user interface products to streamline administration tasks or they can be
stitched together for a DevOps pipeline to automate management of the accelerator.
For more information about installing and configuring these services, see the IBM Db2
Analytics Accelerator Administration Services documentation. These services were
introduced initially with Db2 Analytics Accelerator 7.5.8.
Chapter 11, “Development and administration tools” on page 155 introduced Admin
Foundation as the strategic platform for the administration of Db2 for z/OS and its ecosystem.
The next-generation user experience for Db2 Analytics Accelerator administration is built on
Admin Foundation.
Admin Foundation makes it easy to discover and display all the accelerators that are paired to
one or more registered Db2 subsystems. Selecting any one of the accelerators from the list
displays the accelerator dashboard in the context of the Db2 subsystem with which it is
paired, as shown in Figure 13-6 on page 221.
To manage accelerated tables and data, click the Tables tab, as shown in Figure 13-7. You
can add tables to the accelerators, load table data, enable and disable acceleration and
replication, alter distribution and organizing keys, and remove tables from the accelerator.
Figure 13-7 Db2 Analytics Accelerator tables dashboard on Db2 Administration Foundation for z/OS
Figure 13-8 Db2 Analytics Accelerator queries dashboard on Db2 Administration Foundation for z/OS
The following Db2 Analytics Accelerator administration capabilities are available through
Admin Foundation. User interface support for other capabilities, such as archiving and
federation, will be made available in a future release.
Query or browse accelerators by using Db2 Catalog Explorer.
Manage accelerators by:
– Starting and stopping acceleration
– Starting and stopping replication
– Configuring trace levels and saving traces
– Removing accelerators from a Db2 subsystem
– Monitoring system utilization
– Viewing replication events
Manage tables on the accelerator by:
– Viewing all tables in the accelerator
– Adding tables
– Loading table data
– Removing tables
– Enabling and disabling acceleration
– Enabling and disabling replication
Manage accelerated queries by:
– Viewing Full SQL text
– Capturing query environment and performance data
13.6 Summary
Db2 Analytics Accelerator is a high-performance component that is tightly integrated with Db2
for z/OS. It provides day-one support for Db2 13, which means that you can use Db2
Analytics Accelerator with Db2 13 and benefit from all provided features.
Because Db2 Analytics Accelerator is based on a continuous delivery model, the serviceability,
performance, and function features are enhanced regularly. In this chapter, various
accelerator enhancements were introduced that were provided by a recent accelerator
maintenance level, a Db2 12 PTF, or the combination of both. Db2 13 contains all Db2
enhancements for the accelerator natively.
You can explore the described enhancements to grow your existing accelerator use cases or
discover new ones.
About IBM Cloud Pak for Data: IBM Cloud Pak for Data is a fully integrated data and
artificial intelligence (AI) platform that modernizes how you collect, organize, and analyze
data to infuse data-powered AI throughout your organization. It is a modular platform for
running integrated data and AI services. IBM Cloud Pak for Data is composed of integrated
microservices that run on a multi-node Red Hat OpenShift cluster, which manages
resources elastically and runs with minimal downtime.
IBM Cloud Pak for Data runs on Red Hat OpenShift, which enables a wide range of target
platforms. You can run IBM Cloud Pak for Data on the following types of cloud
environments:
An on-premises, private cloud cluster on Linux on IBM Z or x86 64-bit hardware
Any public cloud infrastructure that supports Red Hat OpenShift, including IBM Cloud
Db2 Data Gate provides an end-to-end solution to ensure that data is available and
synchronized from sources on IBM Z to targets that are optimized on IBM Cloud Pak for Data.
Db2 Data Gate is powered by IBM Integrated Synchronization, which is the same technology
that is used in the IBM Db2 Analytics Accelerator for z/OS. This technology includes a log
data provider that captures log changes from Db2 for z/OS and sends consolidated,
encrypted changes to a log data processor on the target side. The fully zIIP-enabled
synchronization protocol is lightweight, with high throughput and low latency. It enables near
real-time access to your data without degrading the performance of your core transaction
engine.
Compared to any custom-built solution, the Db2 Data Gate service reduces the cost and
complexity of your application development. By using this service to synchronize and access
your enterprise data on IBM Cloud Pak for Data, you can reduce your operational costs while
accelerating your journey to the cloud and AI.
This chapter describes some functions of Db2 Data Gate and provides links to other key
technologies within Db2 for z/OS.
For more information about these steps, see Installing Db2 Data Gate on z/OS.
Tip: For optimal performance, deploy the database on a dedicated Red Hat OpenShift
worker node.
When creating a Db2 Data Gate instance, you must specify the appropriate amount of
resources (virtual CPU and memory) for your workload type and priority. For more
information, see Creating a Db2 Data Gate instance.
14.2.3 Pairing a Db2 Data Gate instance with a Db2 for z/OS subsystem
After you create a Db2 Data Gate instance, you must pair it with a single Db2 for z/OS
subsystem. The process of pairing exchanges connection information between a Db2 Data
Gate instance and a Db2 for z/OS subsystem. When the pairing completes, the Db2 Data
Gate instance is ready to manage tables and data synchronization.
A single Db2 for z/OS subsystem can be paired with multiple Db2 Data Gate instances. You
can edit the existing pair of a Db2 Data Gate instance to pair it with another Db2 for z/OS
subsystem. However, a Db2 Data Gate instance can be paired with only one Db2 for z/OS
subsystem at a time.
The process of pairing a Db2 Data Gate instance with a Db2 for z/OS subsystem is
documented in Creating a Db2 Data Gate instance.
You also can use the web UI to stop and start synchronization, remove tables, and load
tables. For more information, see Administering Db2 Data Gate.
14.2.5 Monitoring
You can monitor the Db2 Data Gate status and statistics on both Db2 for z/OS and IBM Cloud
Pak for Data. The following mechanisms provide you with different types of information:
DISPLAY ACCEL command: When the system is configured, a heartbeat connection is
established between the Db2 Data Gate instance and the connected Db2 subsystem. Every
30 seconds, the heartbeat connection provides the Db2 subsystem with status information
about the connected Db2 Data Gate instance. You can view most of this information by using
the DISPLAY ACCEL Db2 command.
SYSLOG messages: Because detailed information about specific events cannot be viewed by
using the DISPLAY ACCEL command, this type of information is written to the z/OS system log
(SYSLOG) in the form of DSNX881I messages. This information includes messages that are
related to the synchronization function.
Db2 Data Gate dashboard: Db2 Data Gate provides a browser-based dashboard, which is
shown in Figure 14-2, to display the status and statistics on the IBM Cloud Pak for Data side.
For more information about the data that is displayed in the dashboard and how to use it, see
Monitoring a Db2 Data Gate instance.
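For a quick status check from the Db2 for z/OS side, you can issue the following command,
which displays detailed status information for all connected accelerators and Db2 Data Gate
instances:
-DISPLAY ACCEL(*) DETAIL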
Although Db2 Data Gate usually operates with low synchronization latency in the subsecond
to low-seconds range, this level of performance might not satisfy the requirements for
applications that must work with the most recent source data.
A good example of this requirement is a mobile banking application that both transfers data
that is written in a source Db2 for z/OS database and that also displays the current account
balance by querying the replicated target Db2 Warehouse. Due to the synchronization
latency, if users of this mobile banking app refresh their balance immediately after they make
a transfer, the transaction that is related to the transfer might not be reflected in their balance
because the transfer has not yet been synchronized to the target database. This scenario
represents a typical read-after-write inconsistency issue.
To mitigate such issues, Db2 Data Gate offers a feature that is called WAITFORDATA, which
allows queries on synchronized tables that were submitted on the target database to be
delayed until the most recent data from the source database is synchronized. The feature is
comparable to the WAITFORDATA feature that is offered by Db2 Analytics Accelerator for z/OS,
as described in 13.4.4, “True HTAP support (the WAITFORDATA feature)” on page 216.
Note: The main difference between the two implementations of WAITFORDATA is that for
Db2 Data Gate the WAITFORDATA feature applies to queries that are directly run on the Db2
target database, and for Db2 Analytics Accelerator, the WAITFORDATA feature applies to
queries that are routed from Db2 for z/OS to Db2 Analytics Accelerator and run there.
In Figure 14-3:
1. A transaction runs on the source database that persists data changes in a
source-synchronized table.
2. An application requests to read data from the target-synchronized table and uses the
WAITFORDATA feature. At this point, the data from the transaction in step 1 has not yet been
synchronized to the target database.
3. WAITFORDATA delays the transaction until the most recent source data is synchronized to
the target database.
4. When the data is synchronized, the submitted query runs on the target database and
accesses all the required data.
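The following minimal sketch shows how an application might request this behavior by setting
the register (assuming that it accepts a decimal value in seconds; the 10-second maximum
wait is illustrative):
SET CURRENT QUERY WAITFORDATA = 10.0;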
In this example, a query waits up to 10 seconds for the most recent source data before the
query either runs or fails with a TIMEOUT exception. The CURRENT QUERY WAITFORDATA register
value specifies the maximum delay for each query; that is, if the change set is replicated from
the source to the target database before the timeout is reached, the query runs immediately
without further delay.
WAITFORDATA delays are valid for synchronized tables only, but a query also can include other
types of tables. WAITFORDATA behaves differently depending on the type or types of tables that
are referenced in a query:
Only synchronized tables are referenced:
Query processing is delayed until all referenced tables are
synchronized.
Synchronized and non-synchronized tables are referenced:
Non-synchronized tables are ignored. Query processing starts
when synchronized tables are updated within the WAITFORDATA
delay period.
Only non-synchronized tables are referenced:
The WAITFORDATA delay is ignored, and query processing starts
immediately.
A query references no tables at all:
The delay is applied so that query processing starts after the
newest source data for all replicated tables is synchronized.
This behavior is useful, for example, for running stored
procedures on the target database with the most recent source
data.
14.4 Summary
Db2 Data Gate provides analytics and digital applications with low latency, read-only access
to data that originates in Db2 for z/OS. Data is synchronized between Db2 for z/OS data
sources and target Db2 databases on IBM Cloud Pak for Data by using IBM Integrated
Synchronization technology. So instead of accessing data in the IBM Z data source directly,
an application accesses a synchronized copy of the Db2 for z/OS data that is hosted by a
separate system.
Using the WAITFORDATA feature ensures that applications access the most recent committed
data that is synchronized from Db2 for z/OS to the target database. Latency of the committed
data does not impact the consistency or coherency of the selected data because queries run
only when the required data is available on the target database.
This appendix also includes some considerations for your plan to activate Db2 12 function
level 510 (V12R1M510), which is the prerequisite for migration to Db2 13, in “Activating
V12R1M510: Considerations” on page 249.
If you are already running Db2 12 FL 510, for more information about migration to Db2 13,
see Chapter 2, “Installing and migrating” on page 13.
Db2 12 does not add more function levels after FL 510, but adds capabilities in the
maintenance stream. For more information about Db2 12 features that are delivered after GA,
see What’s new in Db2. From there, you can find more information about specific function
levels and other changes by reviewing the following resources:
Db2 12 function levels
Recent enhancements to Db2 12
SQL enhancements
The following enhancements to Db2 12 SQL represent either new SQL functions or changed
SQL behaviors. The enhancements, including the ones in the following sections, were
delivered in various function levels in the order of their availability.
LISTAGG
Casting of numeric values to GRAPHIC and VARGRAPHIC data types
Temporal auditing query update
Global variable for data replication override
Syntax flexibility for special registers and NULL predicates
WHEN clause on trigger visibility to rows in history or archive tables
Column level security with new built-in functions
Alternative names for built-in functions
CREATE OR REPLACE syntax for stored procedures
Application-level granularity for lock limits
Referential integrity support for UPDATE or DELETE for parent tables in business-time
temporal relationships
To use a specific new SQL function or a changed SQL behavior, you must activate a specific
containing function level and run the enhanced SQL at the corresponding application
compatibility (APPLCOMPAT) level.
LISTAGG
LISTAGG is a new built-in aggregate function that produces a list of values that is based on
the inputs to the function. Introduced in FL 501 (V12R1M501), this function provides a much
simpler SQL statement than what was previously required to produce such a value list.
For more information about LISTAGG, see Function level 501 (activation enabled by APAR
PI70535 - May 2017).
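For example, the following hypothetical query (assuming an EMP table with WORKDEPT and
LASTNAME columns) produces one comma-separated list of employee names per department:
SELECT WORKDEPT,
       LISTAGG(LASTNAME, ', ') WITHIN GROUP (ORDER BY LASTNAME) AS EMPLOYEES
FROM EMP
GROUP BY WORKDEPT;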
For more information, see Explicit casting of numeric values to GRAPHIC or VARGRAPHIC.
Table A-1 Syntax flexibility for special registers and NULL predicates
Existing syntax New syntax option
IS NULL ISNULL
For more information, see New SQL syntax alternatives for special registers and NULL
predicates.
The following alternative names (synonyms) are available for built-in functions:
Existing name                Alternative name
CHARACTER_LENGTH             CHAR_LENGTH
HASH_MD5(expression)         HASH(expression, 0) or HASH(expression)
HASH_SHA1(expression)        HASH(expression, 1)
HASH_SHA256(expression)      HASH(expression, 2)
POWER                        POW
RAND                         RANDOM
LEFT                         STRLEFT
POSSTR                       STRPOS
RIGHT                        STRRIGHT
CLOB                         TO_CLOB
TIMESTAMP_FORMAT             TO_TIMESTAMP
Db2 12 FL 507 (V12R1M507) provides the following two built-in global variables that allow
you to specify locking limits for table spaces or users at the application level:
SYSIBMADM.MAX_LOCKS_PER_TABLESPACE
SYSIBMADM.MAX_LOCKS_PER_USER
You can also set these global variables for Distributed Data Facility (DDF) applications with
the Db2 system profile tables. For more information about these global variables, see
“Application-level granularity for lock limits” on page 237.
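The following minimal sketch shows how an application might set these limits (the values are
illustrative):
SET SYSIBMADM.MAX_LOCKS_PER_TABLESPACE = 5000;
SET SYSIBMADM.MAX_LOCKS_PER_USER = 20000;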
You can also invoke Db2 REST services by using IBM z/OS Connect APIs, which extend the
value of Db2 REST services in combination with other z/OS based resources.
For more information about how to enable Db2 REST support, see Db2 REST services.
SQL PL debugging
Developers need another option instead of Data Studio to debug SQL Procedure Language
routines. APAR PI44721 provides eight sample native SQL procedures that you can use to
trace SQL PL routines. This enhancement is independent of function level. For more
information, see PI44721.
Another option is to use the Unified Debugger, which allows you to remotely debug stored
procedures that run on Db2 for z/OS servers. For more information, see Debugging stored
procedures by using the Unified Debugger.
Package management
Db2 12 provides several enhancements that are related to managing packages. The most
significant of these enhancements is REBIND phase-in, which is the ability to create a copy of a
package while existing threads continue to use the previous copy of the package.
REBIND phase-in
This enhancement, introduced in Db2 12 FL 505, improves DBA productivity and application
availability. You no longer need to wait for applications to stop before rebinding the packages
that the applications use. You can react quickly to situations that call for package rebinds. For
more information, see “REBIND phase-in” on page 264.
For more information about the FREE PACKAGE syntax, see FREE PACKAGE (DSN).
APREUSE
The following two enhancements improve your ability to use APREUSE to manage package
performance:
PH36728 reduces the likelihood of APREUSE errors or warnings in certain cases that involve
query transformations. For more information, see PH36728.
PH36179 allows APREUSE to consider page range screening for the query access plan even
when the earlier plan did not have page range screening. For more information, see
PH36179.
Example A-3 provides the COBOL syntax and PL/I syntax for writing DBRMs to HFS files.
Example A-3 COBOL syntax and PL/I syntax for writing DBRMs to HFS files
cob2 myprogram.cbl -o myprogram -dbrmlib -qsql
pli -o -qpp=sql -qdbrmlib -qrent myprogram.pli
Other DDL enhancements include implicit drop of explicitly created table spaces, implicit
hidden ROWID columns, meta-data self-description, support of DECFLOAT columns in
indexes, and the ability to rotate partitions for materialized query tables.
Huffman compression
Db2 12 function level 504 (FL 504) adds support for Huffman compression, also called
entropy encoding. Huffman compression provides you with the opportunity to achieve even
higher compression ratios with comparable or lower CPU costs than you can achieve with
traditional table space compression. Both compression types are dictionary-based
compression. Traditional compression uses fixed-length dictionary entries, but Huffman uses
variable-length dictionary entries, with the shorter entries used for the most frequently
occurring patterns.
Db2 12 function level 509 (FL 509) allows you to specify which type of encoding you prefer at
the page set or partition level. FL 509 relies on catalog changes to track your compression
encoding preferences. In addition, you can now use the DSN1COMP stand-alone utility to
estimate the compression savings of objects that are compressed.
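For example, the following sketch (with hypothetical database and table space names)
requests Huffman compression at the table space level:
ALTER TABLESPACE DB1.TS1 COMPRESS YES HUFFMAN;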
For more information about Huffman compression, see “Huffman compression” on page 267.
For more information, see “Preventing new deprecated objects” on page 270.
If you are running a package with at least APPLCOMPAT level V12R1M506, the DROP TABLE
MYTB1 statement causes Db2 to drop table space MYTS1.
For an auxiliary table, the result is different than it is before FL 506. You might have processes
that expect to reuse the LOB table space after you drop the auxiliary table, or you might have
processes that include the explicit drop of the LOB table space. If either condition is true, you
should adjust your processes. Example A-5 shows this scenario.
With the drop of MYAUXTB1, Db2 implicitly drops table space MYLOBTS1.
Example A-6 shows the system-period temporal table scenario; the same approach applies
to the archive-enabled table scenario.
For more information, see “Migrating multi-table table spaces to PBG UTS” on page 274.
Db2 generates a pseudo-random value for the first 8 bytes of the ROWID column. This
approach has no application impact.
You can check for objects that are missing system pages by using REPAIR CATALOG TEST.
Utilities
Db2 delivered many new utility capabilities after Db2 12 GA. These enhancements fall into
one or more of the following categories:
Utility support for Db2 enhancements that were added after GA
Utility performance improvements
Utility availability improvements, including concurrency with other utilities and with other
applications
Usability and simplification enhancements to improve your productivity
For more information about these Db2 12 utilities capabilities, see “Db2 utilities” on page 277.
Performance
Enhancements in the performance area include monitoring and management changes, and
opportunities for CPU and elapsed time reductions.
SORTL
Db2 adds support for IBM Integrated Accelerator for IBM Z Sort, which is available on
IBM z15 and later systems and uses the SORT LIST (SORTL) instruction to sort lists of data.
Db2 can use SORTL for relational data system (RDS) sort, with the REORG utility, or on DFSORT
invocation. For more information, see “SORTL” on page 293.
Db2 12 adds the following system-related improvements through the maintenance stream.
These improvements do not depend on any particular Db2 function level:
Replication
Data sharing
Distributed Data Facility
Subsystem monitoring
DSNZPARM simplification
Replication
Db2 12 adds the UTILS_BLOCK_FOR_CDC subsystem parameter to control whether certain
utility options can run against tables that are defined with DATA CAPTURE CHANGES. If you
specify any of these utility options, you risk producing an inconsistency between the source
and the replication target, requiring a refresh of the target table. Setting
UTILS_BLOCK_FOR_CDC to YES removes that risk.
Data sharing
Data sharing has the following enhancements:
UNLOAD
Retrying automatic logical page list and group buffer pool recover pending recovery
Asynchronous cross-invalidation of group buffer pools
Internal resource lock manager deadlock
ALTER GROUPBUFFERPOOL
Sysplex group authentication with Sysplex workload balancing
Multi-factor authentication without Sysplex workload balancing
UNLOAD
Db2 12 changes the default of the REGISTER option for UNLOAD…SHRLEVEL CHANGE ISO(UR) to
YES to ensure that the unloaded rows include the latest changes on other members in the Db2
data sharing group.
Retrying automatic logical page list and group buffer pool recover pending
recovery
Db2 12 adds messages to support the retry of automatic logical page list (LPL) or group
buffer pool recover pending (GRECP) recovery.
DDF messaging
Db2 12 adds in-use thread information to several existing messages and adds two new
messages that are related to in-use DBATs.
Db2 12 adds new keyword columns to monitor connections and threads from unknown or
unspecified IP addresses. This significant change allows you to control floods of requests and
keep them from overwhelming your high-priority distributed workloads.
Subsystem monitoring
Db2 adds several enhancements to improve subsystem monitoring. For more information
about the enhancements in this section, see “Db2 system topics” on page 294.
SYSPROC.ADMIN_INFO_IFCID
The ADMIN_INFO_IFCID stored procedure allows you to retrieve statistics information that is
contained in Instrumentation Facility Component Identifiers (IFCIDs) 1, 2, or 225 with a READS
call in a stored procedure.
STATIME_MAIN
The STATIME_MAIN subsystem parameter allows you to specify the interval in which Db2 writes
key statistics records.
DSNZPARM simplification
Db2 12 changes defaults for six subsystem parameters and removes 12 subsystem
parameters to simplify the process of managing Db2.
STATPGSAMP
Db2 12 function level 505 introduces the STATPGSAMP subsystem parameter so that the default
for all RUNSTATS jobs is to use page sampling. If the STATPGSAMP default is not appropriate, you can
set the subsystem parameter to NO, or you can override the individual jobs by using the
TABLESAMPLE SYSTEM keyword specification. For more information, see “Db2 utilities” on
page 277.
Security
Db2 12 provides new security capabilities for encryption and auditing and increases security
for existing interfaces.
For more information about these encryption and auditing enhancements, see “Security” on
page 300.
You can grant the BINDAGENT privilege to a binding authid only if you have the SECADM
authority, and the BINDAGENT privilege cannot be granted to PUBLIC.
If you set DISALLOW_SSARAUTH to NO, user address spaces are allowed to set a Db2 address
space as a secondary address space. If you set DISALLOW_SSARAUTH to YES, user address
spaces are blocked from setting a Db2 address space as a secondary address space. Setting
the parameter to YES improves the security of your Db2 environment but can interfere with
some tools. Check with your tool vendors before setting the parameter to YES.
For more information, see System Recovery Boost for the IBM z15.
System Recovery Boost is also available on the IBM z16 with more functions, including
middleware restart boost. You can elect for Db2 to use the middleware restart boost if you are
running in a z16 environment.
Db2 12 can use the IBM zHyperLink (zHL) protocol to complete the write of the Db2 log buffer
without issuing an interrupt to the processor, which means the application continues running.
This feature can decrease your application elapsed time. The zHL process is conceptually like
coupling facility (CF) access.
To deploy zHL, you must have the appropriate storage controller (for example,
IBM DS8000F), connectivity, and the Db2 ZHYPERLINK subsystem parameter set correctly. You
can set the ZHYPERLINK parameter to the following values:
DISABLE Db2 does not use zHyperLink for any I/O requests. This value is the
default value.
ENABLE Db2 requests the zHL protocol for all eligible requests.
DATABASE Db2 requests the zHL protocol only for database synchronous I/Os.
ACTIVELOG Db2 requests the zHL protocol only for active log write I/Os.
For more information about this subsystem parameter, see Db2 zHyperLinks SCOPE field
(ZHYPERLINK subsystem parameter).
For more information about zHL, see zHyperLink for the DS8880F overview.
You can use the IBM Z Batch Network Analyzer (zBNA) tool to process SMF 42-6 records to
estimate the benefit of zHL in your environment. You can download the zBNA tool from IBM Z
Batch Network Analyzer (zBNA) Tool.
Running CATMAINT determines your current catalog level and makes whatever updates are
necessary to achieve catalog level V12R1M509, which is the latest possible catalog level for
Db2 12.
Issuing ACTIVATE is successful if there are no packages that are bound before Db2 11 that
were run within the last 18 months.
Though activating FL 510 is simple, you should take these steps with thought and planning.
Catalog changes after Db2 12 FL 500 affect only a few catalog tables, and most applications
should be able to run during the catalog update.
You must change your applications only to introduce new SQL syntax or behavior. In most
cases, these types of changes require application code changes, which require an outage.
Non-function-level-dependent features
Many of the features that are described in this appendix are active in your system already
from applying regular maintenance. For example, data-sharing enhancements are
independent of function levels. You can still avoid the effects of some of these features by not
taking the actions that trigger the effect. For example, you can still control a new utility option
that is not dependent on a function level by how you specify the option.
Db2 12 delivers over 100 features in this category. The potential impact of these features on
your existing business workloads should be covered by standard testing as you promote
maintenance through your environments to production.
Function-level-dependent features
You have control over when function-level-dependent features apply to your business
applications based on when you issue the ACTIVATE FUNCTION LEVEL command. These
features tend to have more significant impact on your environment but do not affect the SQL
behavior of existing applications.
Most of these features do not take effect on activation of the corresponding function level.
Rather, you must invoke the feature by taking a specific action. For example, REBIND phase-in
is available but has no effect until you rebind a package. You can mitigate or avoid the impact
of many of the function-level-dependent features. There are two ways to avoid the impact: by
not invoking the triggering action, or by setting the related subsystem parameter to the value
that represents earlier behavior.
Application-compatibility-dependent features
You have even more control over features that depend on particular APPLCOMPAT levels.
These features do not affect existing applications and packages unless some other action
occurs. Even those features that have a potentially significant impact can be mitigated in most
cases. For example, if you bound the packages that you use to issue DDL at APPLCOMPAT
level V12R1M504, which affects creation of deprecated objects, and you must create such a
deprecated object, you can set the CURRENT APPLICATION COMPATIBILITY special
register to V12R1M503 or earlier to issue the specific DDL statement.
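For example, a DDL script that runs dynamically might temporarily lower the special register, issue the statement, and restore the level, as in this minimal sketch (object names are placeholders):
SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M503';
CREATE TABLESPACE TS1 IN DB1 SEGSIZE 32;  -- deprecated segmented (non-UTS) request
SET CURRENT APPLICATION COMPATIBILITY = 'V12R1M504';
The special register affects only dynamic SQL statements that run after the SET statement on the same connection.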
You do not need to change APPLCOMPAT levels for packages that run your business
applications unless you want to deploy new features or behaviors. Db2 continues to support
APPLCOMPAT levels of V10R1, V11R1, and all those levels in Db2 12.
For incompatible changes, you can use IFCID 376 to determine which application packages
are incompatible. You can then schedule changes to address those incompatibilities
independent of your activation of new function levels.
Take advantage of the control Db2 continuous delivery provides as you plan to activate
FL 510. You should find that these implementation steps have less impact on your normal
business operations and do not require significant outages.
In addition to the information in What’s new in Db2, you might find it beneficial to read a
technical paper that was written by IBM Gold Consultant Gareth Copplestone-Jones with
input from other IBM Gold Consultants on the topic of Db2 12 continuous delivery. This paper
covers the major features in each category and includes a limited survey of customer
experiences, and can be found at IBM Db2 12 for z/OS Function Level Activation and
Management.
Summary
The Db2 12 continuous delivery process provides many features that you can deploy in your
environment with minimal impact to your business applications. The last function level of Db2
12, V12R1M510, prepares your system for a smooth migration to Db2 13.
Other SQL enhancements, as listed below, are briefly described in “SQL enhancements” on
page 234:
LISTAGG
Global variable for data replication override
Syntax flexibility for special registers and NULL predicates
Alternative names for built-in functions
Referential integrity (RI) support for UPDATE and DELETE for parent tables in business-time
temporal relationships
Archive-enabled tables have corresponding archive tables into which rows that you delete
from the base table can be inserted. With this function, you can make the archive-enabled
table smaller and more efficient while still providing access to the archived rows.
When you create or alter an advanced trigger, you can specify SYSTEM_TIME SENSITIVE YES
(default) for system-time temporal tables or ARCHIVE SENSITIVE YES (default) for
archive-enabled tables. You can use ALTER for advanced triggers to change these values.
For basic triggers, you cannot specify system-period temporal or archive-enabled sensitivity
at create trigger time. The default behavior is as though the option were specified YES. You
can override the trigger definition for basic triggers during rebind with the bind option
SYSTIMESENSITIVE or ARCHIVESENSITIVE, respectively.
Before FL 505, you could not create, alter, or rebind a trigger package if system-time temporal
tables or archive-enabled tables were referenced in a WHEN clause and if either the trigger
definition or the bind options were specified YES for the corresponding system time or archive
sensitivity. If you attempted to create, alter, bind, or rebind such a trigger before FL 505 or
with the package APPLCOMPAT level set earlier than V12R1M505, you received SQL errors
-270 and -20555, respectively. The time machine (or row versioning) and archive
transparency features were restricted in trigger WHEN clauses before FL 505.
Figure B-1 illustrates the difference in what you can do with WHEN trigger clauses that
reference system-period temporal tables or archive-enabled tables before and after activating
FL 505 and binding the trigger package with APPLCOMPAT level V12R1M505.
For more information, see Temporal and archive transparency support for WHEN clause on
triggers.
The time and effort to handle these issues on a repository that contains many procedures are
unacceptable, particularly under an increasingly tight schedule.
To solve the issues, Db2 introduces a new OR REPLACE clause to enhance the following CREATE
PROCEDURE statements:
CREATE PROCEDURE (external)
CREATE PROCEDURE (SQL - native)
To specify the OR REPLACE clause in an existing procedure, make sure that the new definition
defines the same type of procedure (external or native SQL) as the existing one.
For native SQL procedures, you can replace an existing procedure or replace an existing
version of the procedure or add a new version to the procedure. Assume that the following
original statement created existing procedure MYPROC1:
CREATE PROCEDURE MYPROC1
(IN P1 CHAR(5),
OUT P2 DECIMAL(15,2) )
BEGIN
SELECT AVG(SALARY) INTO P2
FROM DSN8C10.EMP
WHERE WORKDEPT = P1;
END
You can change the definition in MYPROC1 by reusing the original CREATE PROCEDURE
statement and adding the OR REPLACE clause. The following example shows how you can add
an OR REPLACE clause and issue the modified statement with a revised query:
CREATE OR REPLACE PROCEDURE MYPROC1
(IN P1 CHAR(5),
OUT P2 DECIMAL (15,2) )
BEGIN
SELECT AVG(SALARY + 1000) INTO P2
FROM DSN8C10.EMP
WHERE WORKDEPT = P1;
END
If the VERSION keyword is specified with a version identifier, you can add or replace a version
of an existing native SQL procedure with the specified version identifier.
To add a new version to an existing native SQL procedure, specify the OR REPLACE clause and
identify the new version ID after the VERSION keyword. The new version of the procedure is
defined as though an ALTER PROCEDURE statement was issued with the ADD VERSION clause. If
the procedure does not yet exist, it is created with the specified version ID for the first version
of the procedure.
To replace an existing version of a native SQL procedure, add the OR REPLACE clause and
identify the ID of the version to be changed after the VERSION keyword. The version of the
procedure is redefined as though an ALTER PROCEDURE statement was issued with the REPLACE
VERSION clause. For example, the following statement replaces version V2 of procedure
MYPROC1:
CREATE OR REPLACE PROCEDURE MYPROC1
(IN P1 CHAR(5),
OUT P2 DECIMAL (15,2))
VERSION V2 -- Identify the version to be replaced
BEGIN
SELECT AVG(SALARY + 10000) INTO P2
FROM DSN8C10.EMP
WHERE WORKDEPT = P1;
END
For more information, see CREATE PROCEDURE (external) and CREATE PROCEDURE
(SQL - native).
NUMLKTS sets the limit on the number of locks that an application can hold simultaneously in a
table or table space. The optional LOCKMAX parameter on the CREATE or ALTER TABLESPACE
statement sets the number of locks that an application process can hold simultaneously in the
table space before lock escalation.
NUMLKUS specifies the maximum number of page, row, or large object (LOB) locks that a
single application can hold concurrently for all table spaces.
With FL 507, Db2 introduced two new built-in global variables to support application
granularity for locking limits: MAX_LOCKS_PER_TABLESPACE and MAX_LOCKS_PER_USER. You can
set these variables to provide specific thresholds for an application to have locking limit values
that are different from the ones that are in the subsystem parameters NUMLKTS and NUMLKUS.
Other applications can continue to use the subsystem parameters without being affected. The
two global variables can help prevent table lock escalation that might have occurred with
NUMLKTS previously, which mitigates resource timeouts and unavailability.
Assigning a value to either of the new built-in global variables to change the locking threshold
for an application can have detrimental effects on other applications that run on the
subsystem. Administrators may not allow applications to access the entire range of values for
the SYSIBMADM.MAX_LOCKS_PER_TABLESPACE and
SYSIBMADM.MAX_LOCKS_PER_USER built-in global variables. Instead of giving
applications WRITE privileges on the built-in global variables, consider using SQL routines for
fine-tuned control of the allowable value range. For example, consider using stored
procedures like the following example to distribute a different range of values that is based on
the SQL authorization ID:
CREATE PROCEDURE SET_TSLIMIT (IN LIMIT INT)
DISABLE DEBUG MODE
BEGIN
DECLARE ADMF001_MAX INTEGER CONSTANT 8000;
DECLARE ADMF002_MAX INTEGER CONSTANT 6000;
DECLARE ADMF003_MAX INTEGER CONSTANT 2000;
DECLARE ADMF00n_MIN INTEGER CONSTANT 1000;
-- Illustrative completion (the source excerpt ends above): apply the limit only when in range for the current auth ID.
IF (SESSION_USER = 'ADMF001' AND LIMIT BETWEEN ADMF00n_MIN AND ADMF001_MAX)
OR (SESSION_USER = 'ADMF002' AND LIMIT BETWEEN ADMF00n_MIN AND ADMF002_MAX)
OR (SESSION_USER = 'ADMF003' AND LIMIT BETWEEN ADMF00n_MIN AND ADMF003_MAX) THEN
SET SYSIBMADM.MAX_LOCKS_PER_TABLESPACE = LIMIT;
END IF;
END
Db2 REST services can also be invoked by using IBM z/OS Connect APIs, which extend the
value of Db2 REST services in combination with other z/OS based resources.
For more information about enabling Db2 REST services support, see Db2 REST services.
Db2 REST services leverage existing Distributed Data Facility (DDF) capabilities, including
authorization, authentication, client information management, service classification, system
profiling, and service monitoring and display. Figure B-2 shows the workflow when a mobile
application accesses Db2 data by invoking the Db2 native REST support.
The initial Db2 native REST support included the functions for creating, discovering, invoking,
and deleting REST services. Each REST service is a static package that consists of a single
SQL statement, which may be a CALL to a stored procedure. Db2-supplied services are
DB2ServiceManager and DB2ServiceDiscover, and user-defined services are added to
SYSIBM.SYSSERVICES as they are created.
You can use one of the following SQL statements in a Db2 REST Service: CALL, DELETE,
INSERT, SELECT, TRUNCATE, UPDATE, and WITH.
Note: You may use MERGE in a Db2 REST Service if the MERGE statement is part of the
stored procedure that the REST service invokes. MERGE cannot be the single SQL
statement in a Db2 REST Service.
Although you can use whichever interface you prefer (the BIND SERVICE subcommand, a REST
application, or a REST browser plug-in) to create or delete (FREE) REST services in Db2, you
must use a REST application or a REST browser plug-in to discover or invoke REST services.
Your applications access Db2 REST services through uniform resource identifiers (URIs) with
an applicable HTTP method. Db2 native REST services invocation supports only the POST
method for applications. The POST method applies to any Db2 REST Service whether the SQL
statement that comprises the service is CALL, SELECT, INSERT, UPDATE, DELETE, TRUNCATE, or
WITH.
With the appropriate authorization or privileges, you can discover all existing REST services
by issuing an HTTP POST request. The requesting URI must define a “discover all services”
request, as shown in the following example:
POST https://<host>:<port>/services/DB2ServiceDiscover
Alternately, you can also discover all REST services by using the HTTP GET method. For
example, you can discover all existing REST services with the GET method in the following
URI:
GET https://<host>:<port>/services
Using a browser plug-in or a REST application to create a service is like the process for
discovering one. With the correct authorization and authority, you may issue a POST request
through a REST client. The POST request includes a manager request in the URI and specifies
all the details in the HTTP body of the request, as shown in the following example:
POST https://<host>:<port>/services/DB2ServiceManager
HTTP request body:
{ "requestType": "createService",
"sqlStmt": "<sqlStatement>",
"collectionID": "<serviceCollectionID>",
"serviceName": "<serviceName>",
"description": "<serviceDescription>",
"bindOption": "<bindOption>"}
For more information about REST services, see Db2 REST services.
z/OS Connect provides a single point of entry for z/OS resource access through RESTful
APIs from the cloud or the web. Instead of disparate access points and protocols, z/OS
Connect provides a common, secure entry point for all REST HTTP calls to reach the
requested business assets and data on z/OS systems. It runs on a z/OS LPAR and maps the
APIs with REST calls to copybooks and programs of z/OS assets.
The two main components of z/OS Connect are the runtime server and the tool platform. The
runtime server hosts the APIs that you define to run, connects to the back-end system, and
allows for multiple instances. The tool platform is an environment that is based on Eclipse
where you can define APIs, create data mapping, and deploy the APIs to the runtime server.
With IBM z/OS Connect, a Db2 POST method with an SQL statement is mapped to the
appropriate REST method, which allows for a full RESTful implementation.
For more information about z/OS Connect, see IBM z/OS Connect documentation.
SERVICE versioning
SERVICE versioning refers to the ability of Db2 REST services to develop and deploy a new
version of a REST service while the existing versions are still being used. Db2 REST
versioning is based on Db2 package versioning support. A new version of a REST service
can be introduced without having to change existing authorizations.
You can enable SERVICE versioning by applying APAR PI98649 and running sample job
DSNTIJR2. After you enable versioning, do not remove APAR PI98649. Otherwise, the entire
REST services function is unavailable.
The SERVICE versioning option does not impact the existing REST services that do not
specify a version. The version-less REST services have an empty string value as their version
ID. The services that are created after you enable versioning will always be versioned.
You can specify the REST service version ID as part of a REST request. If no version ID is
specified, the default service version is invoked.
You can use the REBIND PACKAGE subcommand to change the default version of the REST
service.
Summary
Db2 12 provides rich functions for SQL capabilities and REST services to improve your ability
to deploy modern applications. For more information, see What’s new in Db2 12.
REBIND phase-in
With REBIND phase-in, you can rebind a package while that package is currently running. This
feature increases your ability to react to situations where rebinding a package is necessary,
such as to change bind attributes or to improve access paths. You do not need to find a time
when the target package is unused or resort to complex workarounds to force the package
rebind.
Starting with function level 505 (FL 505), if you issue a REBIND for a package that is currently
running, Db2 creates a copy of the package. Existing threads continue to use the existing
package copy, which becomes the phased-out copy. When the rebind operation commits the
new copy, that copy becomes the current copy and is immediately available for new threads to
use. A phased-out copy is removed after the last thread that uses the phased-out copy
releases it.
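No special syntax is required to use phase-in; at FL 505 or later, a standard rebind of a running package is sufficient, as in this sketch with a placeholder package name:
REBIND PACKAGE(COLL1.PKG1)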
As shown in Figure C-1, two threads are running by using the current package copy, which
has copy ID 0. When you issue REBIND for that package, Db2 creates copy ID 4. If a new
thread (Thread 3) runs the same package during the rebind process, it also uses copy ID 0.
When the rebind process commits, copy ID 4 becomes the current copy and subsequent
threads, such as Thread 4, use copy ID 4. Copy ID 0 is marked as phased-out. After Thread
1, Thread 2, and Thread 3 release the package, a subsequent rebind deletes copy ID 0.
Copy IDs 1, 2, and 3 are reserved:
1 - SYSPACKCOPY PREVIOUS.
2 - SYSPACKCOPY ORIGINAL.
3 - Reserved.
Db2 replicates the phased-out copy into PREVIOUS or ORIGINAL copies during the rebind
process as necessary.
If you try to rebind a package and receive a DSNT500I message with reason code 00E30307,
all the available copy IDs are in use. You must determine which application thread is blocking
Db2 from freeing the phased-out copy. Start by looking at IFCID 393, which shows you which
application thread must be recycled. To take advantage of IFCID 393, be sure to turn it on
before issuing REBIND. You can also use a package accounting trace to show which package
copy of a thread is running.
FREE PACKAGE
Unused phased-out copies might stay in SYSPACKCOPY for some time. You can use the
following FREE PACKAGE commands to free eligible phased-out copies, which reduce the size
of SYSPACKCOPY:
FREE PACKAGE(collection-id.package-name) PLANMGMTSCOPE(PHASEOUT)
FREE PACKAGE(collection-id.package-name) PLANMGMTSCOPE(INACTIVE)
For more information about the FREE PACKAGE syntax, see FREE PACKAGE (DSN).
When you issue the EXPLAIN PACKAGE statement without the COPY keyword, the output
includes EXPLAIN for the CURRENT, PREVIOUS, and ORIGINAL copies.
– Avoid pooling of HIPERF DBATs because pooled DBATs continue to run the package
copies that were originally allocated to them. Use the following command so that
HIPERF DBATs are not pooled at connection termination:
-MODIFY DDF PKGREL(BNDOPT)
– Set the reuse limit for IBM CICS protected threads to the default (REUSELIMIT = 1000)
to ensure that CICS transaction programs quickly start to run the new package copy.
– Instead of Information Management System (IMS) Fast Path, use IMS pseudo
wait-for-input (pseudo-WFI) regions to increase the chances of running the most
current package copy. IMS Fast Path WFI region threads can remain allocated for
days or weeks.
APAR PH28693 improves REBIND phase-in and should be applied to production
environments before using this feature. The improvements that are provided by this APAR
reduce the likelihood of timeouts when newly arriving threads request the phased-out
package copy before the rebind operation completes.
Use the FREE PACKAGE command to remove unused package copies from the
SYSPACKCOPY table.
Thoroughly test the behavior of REBIND phase-in before activating FL 505 in your
production environment.
Other DDL enhancements include implicit drop of explicitly created table spaces, implicit
hidden ROWID columns, meta-data self-description, support of DECFLOAT columns in
indexes, and the ability to rotate partitions for materialized query tables. For more information
about these enhancements, see “Data Definition Language” on page 240.
Huffman compression
Db2 12 introduces support for Huffman compression in function level 504 (FL 504) and
extends that support in FL 509. Huffman compression gives you another option to consider
when compressing your data.
Db2 supports index compression by using software algorithms for indexes that are defined in
page sizes larger than 4 K. If you compress indexes, the compressed page is always 4 K in
size. Index pages are always decompressed when they are in their buffer pool. The benefit of
index compression is disk savings.
Db2 12 introduced support for large object (LOB) compression in function level 500 (FL 500).
Db2 LOB compression is not dictionary-based, and it requires a separately priced software
feature in addition to hardware support for IBM Z Enterprise Data Compression (zEDC). The
hardware support for zEDC in z14 processors requires a separately priced feature card. In
IBM z15 and later, zEDC support is included on the processor; the software feature code is
still required.
For more information about Db2 compression, see Compressing your data.
Figure C-2 provides an excerpt of the COMPRESS clause that you can specify at the table space
or partition level for UTSs.
To use any of these methods to build a Huffman compression dictionary, the following
requirements must be met; otherwise, a fixed-length dictionary is built:
The table space must be a UTS.
The table space is not the directory table space (SPT01).
The table space is not defined with ORGANIZE BY HASH.
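For example, starting at FL 509 you can request the Huffman algorithm directly in the DDL, as in the following sketch (object names are placeholders). A subsequent REORG or LOAD REPLACE builds the new compression dictionary:
ALTER TABLESPACE DB1.TS1 COMPRESS YES HUFFMAN;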
This catalog level is required to activate FL 509 and gives you visibility into the compression
algorithm at the table space or partition level.
DSN1COMP
The DSN1COMP stand-alone utility estimates space savings that can potentially be achieved by
data compression in table spaces, including LOB table spaces and indexes. You can estimate
the space savings for fixed-length compression, Huffman compression, or both.
These data sets can contain compressed or uncompressed data. For example, you can use
DSN1COMP on a table space partition that already is compressed with fixed-length compression
to determine whether you might experience greater space savings by using Huffman
compression.
After activating function level V12R1M504, you can control and prevent the creation of the
following deprecated objects in your Db2 environment:
Synonyms
Non-UTS table spaces, including segmented non-UTS (“classic” segmented) and
partitioned non-UTS (“classic” partitioned)
Hash-organized tables and the associated hash table spaces
Table C-2 Deprecated objects when APPLCOMPAT level is later than V12R1M504

This DDL:                                                Results in:
CREATE TABLESPACE ... SEGSIZE n NUMPARTS p               PBR UTS is created (no change).
CREATE TABLESPACE ... SEGSIZE n MAXPARTITIONS p          PBG UTS is created (no change).
CREATE TABLE referencing an empty simple, segmented      An error occurs. Create the table in a PBG
non-UTS, or partitioned non-UTS table space              or PBR UTS instead.
These changes apply only when SQL DDL statements run at application compatibility level
(APPLCOMPAT) V12R1M504 or later. You can still create the deprecated objects at any Db2
function level where the DDL statements run with APPLCOMPAT V12R1M503 or earlier.
For more information about how you can control APPLCOMPAT, see APPLCOMPAT bind
option or CURRENT APPLICATION COMPATIBILITY.
Also, if you have not yet activated function level V12R1M504, all your packages are still bound
at APPLCOMPAT V12R1M503 or earlier, so you do not need to rebind any packages at a
higher APPLCOMPAT until you are ready for the applications to start using the features of a
later function level. So, activating function level V12R1M504 by itself has no immediate
impact on your ability to create deprecated objects.
Synonyms
Synonyms are still supported but can no longer be created. A CREATE SYNONYM statement
results in an error. Consider using aliases instead.
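For example, where you might previously have created a synonym, you can create an alias instead (names are placeholders):
CREATE ALIAS MYSCHEMA.EMPALIAS FOR DSN8C10.EMP;
Unlike a synonym, an alias is a qualified object that any authorized user can reference.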
For reference, before FL 504, the syntax for creating or defaulting to a segmented table space
was:
CREATE TABLESPACE … SEGSIZE n (without MAXPARTITIONS or NUMPARTS specifications)
CREATE TABLESPACE … (without MAXPARTITIONS, NUMPARTS, or SEGSIZE specifications)
This syntax, in which a CREATE TABLESPACE statement does not specify the MAXPARTITIONS or
NUMPARTS clauses, results in the creation of a PBG table space by default instead of a
segmented (non-UTS) table space starting at the target APPLCOMPAT level. The value of
MAXPARTITIONS is the same as the default MAXPARTITIONS value for implicitly created PBG
table spaces. The value of SEGSIZE is determined by the explicit SEGSIZE clause, or Db2 uses
the DPSEGSZ subsystem parameter value. (For more information about the DPSEGSZ subsystem
parameter, see DEFAULT PARTITION SEGSIZE field (DPSEGSZ subsystem parameter).) Any
other table space attributes that are not specified are set to the normal default values that are
indicated in the CREATE TABLESPACE statement description.
Starting in APPLCOMPAT level V12R1M504, syntax that can specify the creation of a
segmented (non-UTS) table space is not available. The syntax that would have resulted in a
segmented (non-UTS) table space before APPLCOMPAT level V12R1M504 now results in a
PBG table space.
Because a segmented (non-UTS) table space can no longer be created through the CREATE
TABLESPACE statement, the LOCKSIZE TABLE specification is no longer valid because LOCKSIZE
TABLE can be specified only for a segmented (non-UTS) table space. LOCKSIZE TABLE was
removed from the CREATE TABLESPACE syntax diagram and description.
In APPLCOMPAT level V12R1M503 or earlier, Db2 issues an error for the following
statements, and continues to issue this error even in APPLCOMPAT levels V12R1M504 or
later:
CREATE TABLESPACE … LOCKSIZE TABLE MAXPARTITIONS n
CREATE TABLESPACE … LOCKSIZE TABLE NUMPARTS n
ALTER TABLESPACE … LOCKSIZE TABLE (where the table space is a UTS)
ALTER TABLESPACE with the LOCKSIZE TABLE specification continues to be supported if the
table space being altered is a segmented (non-UTS) table space.
For reference, here is the syntax for creating or defaulting to a partitioned (non-UTS) table
space:
CREATE TABLESPACE … SEGSIZE 0 … NUMPARTS n (without the MAXPARTITIONS specification)
CREATE TABLESPACE … NUMPARTS n (without the MAXPARTITIONS specification, and with
DPSEGSZ = 0)
A CREATE TABLESPACE statement that specifies SEGSIZE 0 and NUMPARTS results in an error.
A CREATE TABLESPACE statement that specifies NUMPARTS but does not specify the
MAXPARTITIONS and SEGSIZE clauses results in the creation of a PBR UTS table space by
default, even if the DPSEGSZ subsystem parameter is set to 0.
The value of SEGSIZE is determined by the explicit SEGSIZE clause or, if a SEGSIZE clause is not
specified, by the value of the DPSEGSZ subsystem parameter. Any other table space
attributes that are not specified are set to the normal default values that are indicated in the
CREATE TABLESPACE statement description. For more information about the DPSEGSZ
subsystem parameter, see DEFAULT PARTITION SEGSIZE field (DPSEGSZ subsystem
parameter).
Hash-organized tables
Hash-organized tables are still supported but can no longer be created. A CREATE TABLE or
ALTER TABLE statement that specifies ORGANIZE BY HASH results in an error.
Sort activity and declared global temporary tables can use either type of table space in the
work file database, but determination of which table space to use for which activity depends
on the setting of the WFDBSEP subsystem parameter.
For more information about the default values of other options, see the full syntax for CREATE
TABLESPACE.
For these reasons, deprecated objects remain supported in Db2 12 and Db2 13. You can still
create the objects by controlling the APPLCOMPAT level, and Db2 13 supports all Db2 12
APPLCOMPAT levels. For more information, see Creating non-UTS table spaces
(deprecated).
Also, ALTER is fully supported at every APPLCOMPAT level for all existing objects, except
when the alteration can result in the creation of a deprecated object.
Nevertheless, it is a best practice that you adopt the use of strategic object types such as
UTS and that you develop plans for migrating your existing deprecated objects to one of these
UTS types. These latest UTS types offer many significant benefits (online schema evolution
support, flexible size partitions, and so on), and the deprecated objects will not be supported
forever. For more information about the advantages of UTSs, see Table space types and
characteristics in Db2 for z/OS.
Previously, this process involved unloading data, dropping tables, re-creating tables in UTSs,
re-creating associated objects, re-granting privileges, and reloading data. This conversion
process resulted in an application outage and was not feasible for production environments.
You can now convert these deprecated table spaces to partition-by-growth (PBG) UTSs by
using the new MOVE TABLE option on the ALTER TABLESPACE statement.
MOVE TABLE
Use the new MOVE TABLE option of ALTER TABLESPACE to move tables from a source multi-table
table space to target PBG UTSs. The MOVE TABLE operation becomes a pending definition
change that must be materialized by running the REORG utility on the source table space. Only
one table can be moved per ALTER TABLESPACE MOVE TABLE statement, but multiple pending
MOVE TABLE operations can be materialized in a single REORG.
The new MOVE TABLE clause is effective when running with APPLCOMPAT level V12R1M508
or later.
Before you can move a table, the target table space must exist as a PBG UTS in the same
database as the source table and must be defined with the following attributes:
DEFINE NO
MAXPARTITIONS value of 1
LOGGED or NOT LOGGED matching the source table space
CCSID values that match the source table space
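A minimal sketch of the overall flow, with assumed database, table space, and table names:
CREATE TABLESPACE TGTTS IN DB1
MAXPARTITIONS 1 DEFINE NO;
ALTER TABLESPACE DB1.SRCTS
MOVE TABLE SCHEMA1.TAB1 TO TABLESPACE TGTTS;
The ALTER creates a pending definition change, which the following utility statement then materializes:
REORG TABLESPACE DB1.SRCTS SHRLEVEL REFERENCE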
Figure C-3 Option 1: Source table space conversion to target PBG UTS by using MOVE TABLE
Figure C-4 Option 2: Source table space conversion to target PBG UTS by using MOVE TABLE
For more information about preparing for and issuing MOVE TABLE operations, see Moving
tables from multi-table table spaces to partition-by-growth table spaces.
For more information about creating image copies of the source table space and accessing
historical table data from these image copies, see Accessing historical data from moved
tables by using image copies.
Db2 utilities features that were added after the initial release of Db2 12 are supported at
function level 100 (FL 100) unless otherwise indicated:
REORG
LOAD
UNLOAD
RUNSTATS
CHECK DATA
COPY
RECOVER
MODIFY RECOVERY
Concurrency
Replication support
Stored procedures
REORG
Enhancements to the REORG utility can be grouped into these categories:
Availability
Performance
Simplification and usability
Availability
Availability enhancements result in a greater likelihood of utilities completing and less impact
to application workloads.
REORG of SYSCOPY
PI96693 UI57559 RSU1812: When REORG SHRLEVEL CHANGE runs for the SYSIBM.SYSCOPY
table in table space DSNDB06.SYSTSCPY, other utilities that are running on different objects
at the same time and need to access or update SYSCOPY are no longer impacted. This
enhancement allows REORG to be run without having to consider any special scheduling.
REORG NOCHECKPEND
PH13527 UI65509 RSU2003: The new NOCHECKPEND keyword allows for improved availability
when REORG discards rows from a parent table of a referential integrity relationship. When the
keyword is specified, REORG avoids setting CHECK-pending (CHKP) status on dependent table
spaces. This option applies only when REORG DISCARD is specified. The NOCHECKPEND
specification does not reset a CHECK-pending status that is already in place before running the
REORG.
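A sketch of NOCHECKPEND in context (the object names and discard condition are placeholders):
REORG TABLESPACE DB1.TS1 SHRLEVEL REFERENCE
DISCARD FROM TABLE SCHEMA1.PARENT_TAB WHEN (STATUS = 'X')
NOCHECKPEND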
Performance
REORG performance enhancements can reduce elapsed time, CPU time, or both, and they can
result in higher proportions of specialty engine (zIIP) usage.
For ease of enablement, a new online changeable subsystem parameter was introduced to
enable the new REORG INDEX NOSYSUT1 behavior without specifying the new NOSYSUT1
keyword. The REORG_INDEX_NOSYSUT1 subsystem parameter defaults to NO, but you can set it to
YES to enable all REORG INDEX executions to use this new performance improvement.
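Alternatively, you can request the behavior for a single job by specifying the keyword, as in this sketch with a placeholder index name:
REORG INDEX SCHEMA1.IX1 SHRLEVEL CHANGE NOSYSUT1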
These performance improvements are summarized in Table C-3 and Table C-4 on page 279.
You can now more easily manage these trade-offs by using the new keywords to dictate the
upper boundary on the number of image copy data sets to allocate. REORG uses this input to
decide how to bundle partitions into the specified number of image copy data sets.
Reorganization for PBR table spaces with relative page numbering (RPN) is also improved by
removing the previous restriction that required separate image copies for each partition that is
being reorganized. Performing a multi-partition REORG TABLESPACE operation against a PBR
RPN without a TEMPLATE that has &PART or &PA specified in the COPYDDN or RECOVERYDDN
statements no longer fails with RC8 and message DSNU2922I - PARTITION LEVEL INLINE
DATASETS ARE NOT SPECIFIED.
LOAD
Enhancements to the LOAD utility can be grouped into these categories:
Performance: LOAD PRESORT support
Simplification and usability
Availability
You can specify the PRESORT option when loading from a single input data set into a single
table, or when loading from multiple input data sets, one per partition. It also can be specified
when loading multiple tables, each with its own clustering sequence. PRESORT does not affect
existing rows in the table space, and full clustering cannot be guaranteed when loading into a
non-empty page set.
This capability can be useful in cases where you want to drive LOAD at the partition level, but
are constrained by a limited number of separate image copies that can be created during a
single LOAD due to the limited availability of tape drives. Also, this enhancement allows
serialization on the partitions that are included in the LOAD rather than the entire table space.
The following variations for DEFAULTIF on a column specification are now supported:
COLx DEFAULTIF(any_column = 'ABC')
COLx DEFAULTIF(any_column <> 'ABC ')
COLx DEFAULTIF(CONVERR)
Availability
Enhancements to availability can be grouped into these categories:
LOAD Read Only table spaces support
LOAD SHRLEVEL REFERENCE FORCE option
UNLOAD REGISTER YES default
Target objects include the ones that were started with -START DATABASE() SPACENAM()
ACCESS(RO), and include the base table space, base table space indexes, LOB table spaces,
LOB indexes, XML table spaces, and XML indexes. LOAD_RO_OBJECTS also applies when the
underlying database for an object is in read-only state.
RUNSTATS
Enhancements to the RUNSTATS utility can be grouped into these categories:
Performance
Simplification and usability
Performance
Enhancements to performance can be grouped into these categories:
COLGROUP sort avoidance
RUNSTATS STATCLGSRT keyword
RUNSTATS default page sampling (FL 505)
In function level 505 (FL 505), the STATPGSAMP subsystem parameter is set so that the default
for all RUNSTATS jobs is to use page sampling. If the STATPGSAMP default is not acceptable, you
can set the subsystem parameter to NO, or you can override the individual jobs by using the
TABLESAMPLE SYSTEM keyword specification.
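For example, the following sketch overrides the page-sampling default for one object (names are placeholders):
RUNSTATS TABLESPACE DB1.TS1 TABLE(ALL) TABLESAMPLE SYSTEM NONE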
CHECK DATA
There is only one CHECK DATA enhancement.
COPY
Enhancements to COPY can be grouped into these categories:
Performance
Availability
Performance
Enhancements to performance can be grouped into these categories:
COPY FLASHCOPY CONSISTENT
TEMPLATE Large Block Interface support (FL 500)
RECOVER
Enhancements to RECOVER can be grouped into these categories:
Redirected recovery for table spaces (FL 500)
Redirected recovery for index spaces and indexes (FL 500)
With redirected recovery, the target table space is recovered by using the recovery assets
(image copy and logs) of the associated source table space.
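A sketch of a redirected recovery control statement, with assumed object names; the FROM keyword identifies the source object whose recovery assets are used:
RECOVER TABLESPACE DB1.TSTARGET FROM TABLESPACE DB1.TSSOURCE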
MODIFY RECOVERY
There is only one MODIFY RECOVERY enhancement.
Concurrency
There is only one concurrency enhancement.
Replication support
Enhancements to replication support can be grouped into these categories:
Writing a diagnostic log record
Prohibiting utilities on replicated tables
DSNACCOX
PH25108 UI72637 RSU2103: On a large subsystem with many objects in a restricted or
advisory state, the number of messages that are returned by a DISPLAY command can exceed
the space that is available, in which case the resulting messages can be truncated. In cases
where you might be interested in checking objects in a specific database, provide a CRITERIA
parameter with DBNAME = database-name. However, because the resulting messages of
the DISPLAY DATABASE command are truncated, the specific objects in the database in which
you are interested might have been truncated before they can be evaluated by DSNACCOX.
This APAR also fixes a problem where an SQLCODE = -305 for no NULL indicator is returned
from a data sharing check by using the SQL function GETVARIABLE.
DSNACCOX defaults
PH32763 UI73842 RSU2106: The following default values for DSNACCOX were changed based
on recommendations from a Db2 360 study:
RRIPseudoDeletePct from -1 (off in non-data sharing) to 5
RRIEmptyLimit from 10 to 5
For more information about extra features that provide synergy between IBM Db2, z15
systems, and the DS8000 family, see Appendix A, “IBM Db2 12 continuous delivery features
and enhancements” on page 233.
Monitoring and management changes include a new command option for the resource limit
facility (RLF), a new default setting for the DEFAULT_INSERT_ALGORITHM subsystem parameter,
and the new capability of populating reporting fields and adding a reporting field for index
rowid (RID) list processing.
Db2 12 extended the benefits of fast index traversal after general availability (GA) by adding
support for non-unique indexes so that you have greater flexibility to monitor and control fast
index traversal in your environment.
For more information about fast index traversal, see Fast index traversal.
FTBs use a separate memory area to cache non-leaf pages so that fewer getpage operations
are needed for index traversal. Initially, FTBs support only unique indexes with a key size of 64
bytes or less. Unique indexes with INCLUDE columns also are eligible if the unique part of the
key does not exceed 64 bytes. Db2 deploys FTBs for indexes automatically and dynamically
based on current workload patterns.
Figure D-1 on page 291 shows the process of accessing a leaf page before using FTBs.
Db2 performs a getpage for each of the index pages and for the data page. In this example of
a 5-level index, five getpage operations are issued for the index pages, and one more to
access the data page, for a total of six getpage operations.
Figure D-2 shows the getpage reduction by using FTBs. The access starts with the FTB
structure for the index. The root and non-leaf pages are represented in the FTB. In this case,
Db2 performs getpage operations for only the green pages (the index leaf page and the data
page).
The getpage reduction that is shown in Figure D-2 results in significant reduction in CPU cost
for random index access. For the same five-level index scenario, only two getpage operations
are required.
In a data-sharing environment, it is recommended that all members use the same setting.
Fast index traversal is intended to be controlled by Db2 automatically. If you require more
granular control over which indexes have FTBs built for specific exceptions, you can populate
rows in the SYSIBM.SYSINDEXCONTROL catalog table. By using this table, you can direct
Db2 to include or exclude indexes from consideration for FTBs overall or based on the time of
day or on the day of the week or month.
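For example, the following sketch inserts a rule to exclude one index from FTB consideration; the column values that are shown are assumptions, so verify them against the SYSINDEXCONTROL catalog table description for your Db2 level:
INSERT INTO SYSIBM.SYSINDEXCONTROL
(SSID, PARTITION, IXNAME, IXCREATOR, TYPE, ACTION)
VALUES ('DB2A', NULL, 'IX1', 'SCHEMA1', 'F', 'D');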
Example D-1 and Example D-2 on page 293 show some examples of using this command.
Example D-1 Displaying index traversal counts for a specific index partition
DISPLAY STATS(INDEXTRAVERSECOUNT) DBNAME(DB1) SPACENAM(IX1) PART(1)
DSNT830I -
DBID PSID DBNAME IX-SPACE LVL PART TRAV. COUNT
--- ---- -------- -------- --- ---- -----------
0017 0005 DB1 IX1 003 0001 00000000999
******* DISPLAY OF STATS ENDED ******************
You can use the output of these commands to evaluate potential candidates for index fast
traversal.
For more information about the syntax options for this command, see -DISPLAY STATS
(Db2).
SORTL
Db2 12 supports IBM Integrated Accelerator for IBM Z Sort, which is available on the z15 and
later systems. It uses the SORTL instruction to sort lists of data. During the input phase, SORTL
sorts multiple lists of unsorted data into one or more lists of sorted data, and then merges
these multiple lists of sorted input data into a single list of sorted output data.
SORTL does the sort and merge during the input phase based on the maximum number of
records that it can handle at one time. If all the data cannot be sorted in the input phase,
SORTL creates work files to hold the resultant sorted lists. Then, the sort process merges
these work files during the merge phase.
You might observe reduced CPU and elapsed time by using SORTL for sort-intensive
operations. The enhancement is limited to a sort key size of 136 bytes and a sort record size
of 256 bytes. You might also observe more memory consumption during sort phases.
Db2 can use SORTL for Relational Data System (RDS) sort, the REORG utility, or on a DFSORT
invocation. RDS uses SORTL for ORDER BY and GROUP BY operations automatically after
APAR PH31684 is applied. To enable SORTL for REORG, you must apply APAR PH28183 and
set the UTILS_USE_ZSORT subsystem parameter to YES (NO is the default). For DFSORT, specify
OPTION ZSORT in the DFSORT control statement or in the DFSORT installation defaults (ICEMAC or ICEPRMxx).
As part of this support, Db2 adds statistics counters to track the total number of Db2 sorts
and the number of Db2 sorts that use SORTL. There are changes to Instrumentation Facility
Component Identifier (IFCID) 96 to add identifiers (STLO for ORDER BY operations and
STLG for GROUP BY operations) and new fields (key size and data size).
You can use the IBM Z Batch Network Analyzer (zBNA) tool to estimate potential SORTL
benefits for your workload. You can download zBNA at IBM Z Batch Network Analyzer (zBNA)
Tool.
Data sharing
The following topics are described in this section:
RUNSTATS and UNLOAD utilities
Retrying automatic logical page list and GBP recovery pending recovery
Asynchronous cross-invalidation of Group Buffer Pools
IRLM deadlock
-ALTER GROUPBUFFERPOOL (Db2)
Sysplex support for multi-factor authentication and IBM RACF PassTickets
The default value for the REGISTER options is YES in both cases:
RUNSTATS…SHRLEVEL CHANGE REGISTER (YES, NO)
UNLOAD…SHRLEVEL CHANGE ISO(UR) REGISTER (YES, NO)
Setting REGISTER to NO might reduce GBP dependency, depending on your other workloads,
and removes the requirement of the utility to register the pages that the utility reads. Each of
these effects might reduce utility elapsed time and reduce CPU and coupling facility (CF)
processor consumption. By not registering interest in those pages, the utility might not be
aware of changes being made by another Db2 member in the data sharing group. You should
consider the impact of non-current data for these utilities before specifying REGISTER NO.
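For example (object names are placeholders):
RUNSTATS TABLESPACE DB1.TS1 TABLE(ALL) SHRLEVEL CHANGE REGISTER NO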
Retrying automatic logical page list and GBP recovery pending recovery
When pages are added to the logical page list (LPL) because they cannot be written to the
GBP or when objects are placed in GBP recovery pending (GRECP) status, Db2
automatically attempts recovery actions. Sometimes the initial recovery attempts are not
successful. Db2 12 retries automatic recovery up to three times if pages or objects are still in
the LPL or GRECP status after automatic LPL or GRECP recovery. After any retries are
complete, Db2 issues DSNB317I for successful recovery actions or DSNB328I if some
recovery actions were not completely successful, as shown in Example D-3 on page 295 and
Example D-4 on page 295.
You can use these messages to understand the results of Db2 automatic recovery. You might
want to automate responses to DSNB328I messages to reduce the impact that these types of
object restrictions can have on your applications.
Cross-invalidation is the process where Db2 writes newly changed pages to the GBP, and
cross-system extended services (XES) sends messages to the LPARs where other Db2
members have a copy of one or more of those pages. Those other Db2 member copies are
now out of date and must be invalidated. XI messages normally are synchronous with the
process to write the changed pages to the GBP. In some cases, especially where CF LPARs
are at some distance from the originating GBP, this synchronous process contributes to
application delays.
If the XI messages can be sent asynchronously, the GBP writes do not wait for the XI
messages to complete, and the application can proceed more quickly. If asynchronous XI
support is available, Db2 evaluates XI messages and compares the numbers of pages in a
page set that must be cross-invalidated concurrently with an internally defined threshold. If
the number of pages is higher than the threshold, Db2 can request that the XI signals be sent
asynchronously. Asynchronous XI can occur for deferred writes or force-at-commit writes to
the GBP, but it does not occur if the data page size is 32 KB.
When asynchronous XI occurs, application processes do not need to wait until the XI
completes. The primary benefit is for configurations where Db2 data sharing members are far
from each other, although data sharing members that are close together can also benefit. You
can observe the performance improvement in accounting class 2 elapsed times.
IRLM deadlock
The internal resource lock manager (IRLM) reduces the likelihood of workload stall conditions
in situations where there are many waiters and deadlock detection is complex. APARs
PH08431 and PH08708 combine to provide relief by timing waiters out more quickly and
suppressing resultant timeout messages (DSNT376I). Db2 continues to produce the IFCID
196 timeout records for this situation.
You can enable MFA credential or RACF PassTicket caching for your distributed clients in a
Db2 data-sharing environment. The clients can use MFA caching whether they use Sysplex
workload balancing or seamless failover. The clients can use PassTicket caching only if
Sysplex workload balancing is enabled. In both cases, the following conditions must be true:
The ICSF load library, SCSFMOD0, is in the LINKLIST of the LPAR where Db2 is running.
The AUTHEXIT_CACHEREFRESH subsystem parameter is set to ALL for all the members of the
Db2 data sharing group.
Beyond these two requirements, the actions that you must take depend on whether the
distributed clients are using Sysplex workload balancing.
You can implement MFA if you installed the IBM Z Multi-Factor Authentication product. For
more information, see IBM Z Multi-Factor Authentication.
If you use RACF PassTickets, see Enabling Db2 to receive RACF PassTickets.
For more information, see Enabling caching of MFA-based authentication credentials for
clients without Sysplex workload balancing.
For more information, see Enabling caching of MFA and RACF PassTickets credentials for
clients with Sysplex workload balancing.
DDF messaging
APAR PH30222 adds important information to several messages and adds two messages:
Db2 issues messages DSNL519I and DSNL523I at DDF startup or when the DDF service
is restored. Before this APAR, the messages included only the main TCP/IP port for this
DDF. These messages now include the main port, the resynchronization (resync) port,
and the secure port (if defined).
Db2 includes message DSNL093I as part of the output of the -DISPLAY DDF DETAIL
command. Before this APAR, this message included only the number of pooled Database
Access Threads (DBATs) and the number of inactive connections. This message now
includes the number of active, in-use DBATs, which are DBATs that are currently
processing client requests.
The extra fields in these updated messages help you to manage your DDF workload:
DSNL077I is a new message that is issued when the current number of active, in-use
DBATs exceeds 85% of the current value of the MAXDBAT subsystem parameter. MAXDBAT
represents the maximum number of concurrently active DBATs, including in-use and
disconnected DBATs.
DSNL078I is a new message that displays when the current number of active, in-use
DBATs no longer exceeds 75% of the value of MAXDBAT.
The combination of DSNL077I and DSNL078I provides greater insight into DDF activity.
With APAR PH12041, Db2 adds support for new counters for profile-related thread activity,
the ability to request synchronous reads of IFCID 402 trace records, and the ability to control
the queue depth for distributed threads.
Db2 produces IFCID 402 trace records for profile warnings or exceptions. These records now
include the following information at the profile level:
Current active thread counters
Current suspended thread counters
High water mark of thread counter since DDF started
Current connection counter
High water mark of connections counter since DDF started
A monitor program that uses the IFI READS function can now request synchronous data from
IFCID 402 trace records. You can now monitor your DDF workload and act more quickly.
With APAR PH30780, Db2 adds new keyword columns to monitor connections and threads
from unknown or unspecified IP addresses. With this significant change, you can control
floods of requests and keep them from overwhelming your high-priority distributed workload.
These keywords apply to profiles for the default location filtering criteria, where the
LOCATION column of the DSN_PROFILE_TABLE contains ‘*’, ‘::0’, or ‘0.0.0.0’. You can use
these keywords to enforce warning and exception thresholds for the cumulative number of
connections or threads from across all locations that match the default location filtering
criteria. In other words, the values that you specify for MONITOR ALL CONNECTIONS or
MONITOR ALL THREADS limit the total number of connections or threads from locations
that you do not specify in other profile rows.
For more information about Db2 system profile monitoring, see Profiles for monitoring and
controlling Db2 for z/OS subsystems.
Subsystem monitoring
Db2 adds several enhancements to improve subsystem monitoring:
SYSPROC.ADMIN_INFO_IFCID stored procedure
STATIME_MAIN subsystem parameter
Reason for storage contraction in message DSNS005I
IBM CICS attachment and client information special registers
You can use these fields to filter specific data from Db2 accounting trace records (IFCID
0003).
DSNZPARM simplification
Db2 12 simplifies subsystem parameters (DSNZPARM) by changing defaults or removing
parameters.
Db2 12 changes the default values for the parameters that are shown in Table D-1 with APAR
PH21370.
Parameter      Old default    New default
NPGTHRSH       0              1
INLISTP        50             1000
The new default values represent best practices or reflect common usage.
Db2 12 removes the parameters that are shown in Table D-2. Db2 behaves according to
implicit values, sets the highest value, or no longer uses the process.
Parameter      Implicit value    APAR
EDPROP         NO                PH28280
CHGDC          NO                PH28280
If you do not currently use the value that reflects the implicit behavior, you should verify that
the change is compatible with your subsystem and applications by going to Adjust subsystem
parameter setting for parameters removed in Db2 12.
Security
Db2 12 provides new security capabilities for encryption and auditing and increases security
by using existing interfaces.
Encryption
Auditing: tamper-proof audit policy
For more information about security enhancements for command-line processing (CLP),
secure socket layer-only connections, coordination of SECADM and BINDAGENT privileges,
improved cross-memory address space separation, and RACF synchronization with Db2
security caches, see “Security” on page 247.
In each case, the action to allocate the data set, such as during a REORG, causes the data set
to be encrypted.
Starting in FL 502, you can display key label information by using the -DIS GROUP command or
the REPORT utility. If you encrypt the active or archive log data sets, you can display the key
label with -DIS LOG or -DIS ARCHIVE.
The ADMIN_DS_LIST stored procedure can display data set encryption status and key label
information. This enhancement, which is delivered in APAR PH12920, is available in the Db2
12 base and does not require a particular function level.
For more information about data set encryption, see Db2 for z/OS Security with Data Set
Encryption.
An application invokes the encrypt and decrypt BIFs to protect sensitive data at the column
level. The application uses the encrypt BIF to specify the expression to be encrypted, the key
label, which identifies the key ICSF uses to encrypt the expression, and the encryption
algorithm, as shown in Figure D-3.
The data type of the encrypted value is either VARBINARY or BLOB depending upon the data
type of the input expression. BIGINT, INTEGER, DECIMAL, CHAR, VARCHAR, GRAPHIC,
and VARGRAPHIC input expressions produce VARBINARY data types. CLOB and DBCLOB
input expressions produce BLOB data types.
When the encrypted value is decrypted, your application uses the decrypt BIF that
corresponds to the original input data type. For example, if the encrypt BIF encrypted an
expression with data type of VARCHAR, the application uses the BIF
DECRYPT_DATAKEY_VARCHAR to retrieve the unencrypted value.
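The following sketch shows the round trip; the table, column, and key label names are assumptions, and the algorithm argument that is shown is illustrative, so check the valid values for these BIFs at your function level:
INSERT INTO SCHEMA1.CUSTOMER (SSN_ENC)
VALUES (ENCRYPT_DATAKEY('123-45-6789', 'MYKEYLABEL', 'AES'));
SELECT DECRYPT_DATAKEY_VARCHAR(SSN_ENC)
FROM SCHEMA1.CUSTOMER;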
For more information about Db2 encryption, see Protecting data through encryption and
RACF.
A newly inserted tamper-proof audit policy record with a DB2START column value of ‘T’ is
started automatically during Db2 startup. If the audit trace must start immediately, rather than
waiting for the next Db2 restart, you must issue a START TRACE command.
For example, suppose Sara, a Db2 security administrator, wants to enable a tamper-proof
audit policy, TAMPERPRFPOLICY01, before Db2 restarts. The following steps occur:
1. Sam, a z/OS security administrator, activates the RACF DSNR class and issues the RACF
command RACLIST to the DSNR class if the command has not already been run.
Optionally, Sam defines a default profile, DSNAUDIT.*, in the RACF DSNR class that
prevents any tamper-proof audit policy records from being modified or stopped. The RACF
DSNR class must be refreshed afterward.
2. Sara creates a tamper-proof audit policy by inserting a record into the
SYSIBM.SYSAUDITPOLICIES table with the DB2START column set to a new value ‘T’:
INSERT INTO SYSIBM.SYSAUDITPOLICIES
(AUDITPOLICYNAME, CHECKING, VALIDATE, SYSADMIN, DB2START)
VALUES('TAMPERPRFPOLICY01','A','A','I','T');
For example, suppose John, a Db2 security administrator, must update the tamper-proof audit
policy TAMPERPRFPOLICY01 that is already started. John must first ask Sam, a z/OS
security administrator to complete the following steps:
1. Create a profile in the RACF DSNR class for the tamper-proof audit policy and permit John
access to the profile:
RDEFINE DSNR DSNAUDIT.TAMPERPRFPOLICY01 UACC(NONE) OWNER(DB2OWNER)
PE DSNAUDIT.TAMPERPRFPOLICY01 ID(JOHN) ACCESS(READ) CLASS(DSNR)
2. Refresh the RACF DSNR class.
John must then stop and restart the tamper-proof audit policy to pick up the policy updates.
After John finishes with his update, the following steps must be done:
1. John issues the STOP TRACE and START TRACE commands to manually pick up the update:
STO TRACE(AUDIT) AUDTPLCY(TAMPERPRFPOLICY01)
STA TRACE(AUDIT) AUDTPLCY(TAMPERPRFPOLICY01)
2. Sam removes John’s privileges for accessing the audit policy profile:
PE DSNAUDIT.TAMPERPRFPOLICY01 ID(JOHN) DELETE CLASS(DSNR)
3. Sam refreshes the RACF DSNR class to pick up the changes.
You cannot use the SCOPE(GROUP) option in a START TRACE or STOP TRACE command that starts
or stops a tamper-proof audit policy. To start or stop a tamper-proof audit policy for all
members of a data sharing group, issue the command on each data sharing member.
Summary
Db2 12 for z/OS continuous delivery provides significant enhancements for system
administration and system management. Improvements for performance, data sharing, DDF,
monitoring, parameter simplification, encryption, and auditing combine to increase your ability
to meet your organization’s requirements for performance, scalability, and security.
SG24-8527-00
ISBN 0738460575
Printed in U.S.A.
ibm.com/redbooks