IBM Redbooks
SG24-8579
Note: Before using this information and the product it supports, read the information in “Notices” on
page 11.
This edition applies to IBM z17 Model ME1, Machine Type 9175.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Appendix A. IBM Z Integrated Accelerator for AI and IBM Spyre AI Accelerator . . . 515
Appendix C. Tailored Fit Pricing and IBM Z Flexible Capacity for Cyber Resiliency . . . . . . . . . 531
C.1 Tailored Fit Pricing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
C.2 Software Consumption Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 533
C.2.1 International Program License Agreement in the Software Consumption Model 534
C.3 Hardware Consumption Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 534
C.3.1 Tailored Fit Pricing for IBM Z hardware in more detail . . . . . . . . . . . . . . . . . . . . 535
C.3.2 Efficiency benefits of TFP-Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
C.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
C.5 IBM Z Flexible Capacity for Cyber Resiliency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 537
C.6 Use cases of IBM Flexible Capacity for Cyber Resiliency . . . . . . . . . . . . . . . . . . . . . 538
C.6.1 Disaster recovery and DR testing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
C.6.2 Frictionless compliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
C.6.3 Facility maintenance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
C.6.4 Pro-active avoidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 538
C.7 How does IBM Flexible Capacity for Cyber Resiliency work? . . . . . . . . . . . . . . . . . . 539
C.7.1 Set up process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
C.7.2 Transferring workloads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 540
C.7.3 Multi-system environment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541
C.8 Tailored fit pricing for hardware and IBM Z Flexible Capacity for Cyber Resiliency . . 542
C.9 Ordering and installing IBM Z Flexible Capacity for Cyber Resilience . . . . . . . . . . . . 543
C.10 Terms and conditions of IBM Z Flexible Capacity for Cyber Resiliency . . . . . . . . . . 544
C.11 IBM Z Flexible Capacity for Cyber Resilience versus Capacity Back Up . . . . . . . . . 545
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and customer examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation, registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright
and trademark information” at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
CICS®, Connect:Direct®, DB2®, Db2®, DS8000®, FICON®, FlashCopy®, GDPS®, Guardium®, HyperSwap®, IBM®,
IBM Blockchain®, IBM Cloud®, IBM Security®, IBM Spyre™, IBM Sterling®, IBM Telum®, IBM Watson®, IBM Z®,
IBM z Systems®, IBM z13®, IBM z13s®, IBM z14®, IBM z16®, IBM z17™, Interconnect®, Language Environment®,
OMEGAMON®, Parallel Sysplex®, Passport Advantage®, PIN®, RACF®, Redbooks®, Redbooks (logo)®,
Resource Link®, Sterling™, System z®, System z10®, System z9®, VTAM®, WebSphere®, z Systems®,
z/Architecture®, z/OS®, z/VM®, z/VSE®, z13®, z13s®, z15®, z16™, z9®, zEnterprise®, zSystems™
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Red Hat is a trademark or registered trademark of Red Hat, Inc. or its subsidiaries in the United States and
other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication describes the features and functions of the latest member
of the IBM Z® platform that was built with the IBM Telum® II processor: the IBM z17™
(machine type 9175). It includes information about the IBM z17 processor design, I/O
innovations, security features, and supported operating systems.
The IBM Z platform is recognized for its security, resiliency, performance, and scale. It is relied
on for mission-critical workloads and as an essential element of hybrid cloud infrastructures.
The IBM z17 server adds capabilities and value with innovative technologies that are needed
to accelerate the digital transformation journey.
The IBM z17 is a state-of-the-art data and transaction system that delivers advanced
capabilities, which are vital to any digital transformation. The IBM z17 is designed for
enhanced modularity in an industry-standard footprint.
This book explains how this system uses new innovations and traditional IBM Z strengths to
satisfy growing demand for cloud, analytics, and open source technologies. With the IBM z17
as the base, applications can run in a trusted, reliable, and secure environment that improves
operations and lessens business risk.
Authors
This book was produced by a team working at IBM Redbooks, Poughkeepsie Center.
Ewerson Palacio is an IBM Redbooks Project Leader. He holds a Bachelor's degree in Math
and Computer Science. Ewerson worked for IBM Brazil for over 40 years and retired in 2017
as an IBM Distinguished Engineer. Ewerson co-authored many IBM Z Redbooks, and created
and presented ITSO seminars around the globe.
John Troy is an IBM Z and storage hardware National Top Gun in the northeast area of the
US. He has over 40 years of experience in the service field. His areas of expertise include
IBM Z servers and high-end storage systems technical and customer support and services.
John has also been an IBM Z hardware technical support course designer, developer, and
instructor for the last eight generations of IBM high-end servers.
Martijn Raave is an IBM Z and LinuxONE Client Architect and Hardware Technical Specialist
for IBM Northern Europe. Over a period of 27 years, his professional career has revolved
around the mainframe platform, supporting several large Dutch customers in their technical
and strategic journey on IBM Z. His focus areas are hardware, resiliency, availability, and
architecture, but he is interested in any IBM Z-related topic.
Kazuhiro Nakajima is a Senior IT Specialist at IBM Japan. He has a 35-year career at IBM
Japan and has been an advanced Subject Matter Expert on IBM Z products for over 20 years.
His areas of expertise include IBM Z hardware, performance, z/OS®, and IBM Z connectivity.
He has co-authored several IBM Z Redbooks publications, from the IBM zEC12 to the IBM
z16®.
Martin Packer is a mainframe performance and capacity specialist, with a penchant for SMF
data analysis. He has worked with many customers around the world and has almost 40 years
of mainframe experience. He has blogged, podcasted, and presented extensively at
conferences. His first degree is in Mathematics and Physics, his second in Electronics and
Computing.
Priyal Sha is a Hardware and Compilers Performance Analyst for IBM Z with over 20 years of
experience, including 14 years working on compiler performance on IBM Z. Her focus is
hardware-software synergy: ensuring optimal performance of the IBM Z platform by
influencing design choices in hardware and software. She enjoys the rare opportunity her role
offers, as well as the chance to connect and collaborate with top technical experts across the
stack.
Octavian Lascu is a Senior IT Infrastructure Architect with Inter Computer, Romania. He has
over 35 years of IT experience with IBM Z, IBM Power, and IBM Storage. Octavian was an
IBM Redbooks Project Leader for over 20 years, where he has co-authored many IBM
Redbooks publications covering IBM Z hardware, IBM Power, and various IBM solutions.
Pat Oughton is an IBM Z Brand Technical Specialist in New Zealand. He joined IBM in 2015
after working as a z/OS Systems Programmer for 30 years. His areas of expertise include
IBM Z installation (hardware and operating system) and IBM Parallel Sysplex Implementation.
He has written three other IBM Redbooks publications.
André Spahni is an IBM Z Brand Technical Specialist based in Zurich, Switzerland. He has
over 22 years of experience working with and supporting IBM Z clients. André has worked as
an EMEA second-level supporter and national Top Gun. His areas of expertise include IBM Z
hardware, HMC/SE, and connectivity.
Artem Minin is currently a Technical Specialist in IBM’s Washington Systems Center, a team
of Subject Matter Experts that provide leading edge technical sales assistance for the design,
implementation, and support of solutions that leverage IBM Z. Specifically, Artem is
responsible for supporting IBM Z Data and AI solutions through PoCs, custom demos, and
client workshops. He focuses on supporting client engagements which leverage open-source
AI software, Machine Learning for z/OS, Cloud Pak for Data, Data Gate, DVM for z/OS, and
SQL Data Insights.
Lutz Kuehner is a Senior z/OS System Engineer at UBS AG in Switzerland. He has 39 years
of experience in IBM z Systems®. Lutz has worked for 10 years in the IBM Z Presales
support in Germany, developing and providing professional services for customers in the
financial market. In addition to co-authoring several IBM Redbooks publications since 2001,
he has been an official ITSO presenter at ITSO workshops.
Markus Ertl is a Senior IT Specialist in IBM Germany. He has more than 20 years of
experience with IBM Z, working with clients from various industries. His area of expertise
includes IBM Z hardware and infrastructure, performance, and Capacity on Demand topics.
Houda Achouri is a Senior IBM Z Technical Manager and IBM Z Technical Leader for UKI
based in the UK. She has 10 years of IBM Z Hardware experience spanning 5 generations of
IBM Z systems working on a diverse portfolio of clients across industries, including major
financial institutions and retailers. Houda has a BSc and MSc in Mathematics and Computer
Science from the University of Manchester.
John Campbell
IBM Z Washington Systems Center
Robert Haimowitz
Patrik Hysky
IBM Redbooks, Poughkeepsie Center
Dave Surman, David Hollar, Michael Groetzner, Kyle Giesen, Seth Lederer, Patty Driever,
Jeannie Kraus, John Torok, Brian Valentine, Patrick McKeone, Marna Walle, Leon Manten,
Dalibor Kurek, Ron Geiger, Richard Gagnon, Les Geer III, Chris Filacheck, Nicole Rae, Yamil
Rivera
IBM Poughkeepsie
Now you can become a published author, too!
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
The new member of the IBM Z family, IBM z17, continues that commitment and adds value
with innovative technologies that can help accelerate the digital transformation journey.
The IBM z17 system is built with the IBM Telum II processor1, which was introduced at the
Hot Chips conference in August 2024. Hot Chips is one of the semiconductor industry's
leading conferences on high-performance microprocessors and related integrated circuits.
Alongside the IBM Telum II processor, IBM also announced the IBM Spyre™ Accelerator2, a
purpose-built, enterprise-grade accelerator that offers scalable capabilities for complex AI
models and generative AI use cases. It features up to 1 TB of memory, built to work in tandem
across the eight cards of a regular I/O drawer, to support AI model workloads across the
mainframe while being designed to consume no more than 75 W per card. Each chip has
32 compute cores that support int4, int8, fp8, and fp16 data types for both low-latency and
high-throughput AI applications. For more information about the IBM Spyre Accelerator, see
Appendix A, “IBM Z Integrated Accelerator for AI and IBM Spyre AI Accelerator” on page 515.
The IBM z17 is designed to help businesses meet the following goals:
Create value in every interaction and optimize decision making with the on-chip
Artificial Intelligence (AI) accelerator. The Accelerator for AI is engineered for AI at scale
and allows you to integrate AI with your mission-critical transactions to accelerate insights
with near-zero latency while ensuring data privacy and system availability.
Act now to protect today’s data against current and future threats with quantum-safe
protection immediately through quantum-safe cryptography APIs and crypto discovery
tools. Use AI for early detection of threats and simplify compliance while accelerating your
Quantum-safe journey.
1 IBM Telum II Processor: the next Telum generation microprocessor for IBM Z and IBM LinuxONE.
2 Statement of general direction: The IBM Spyre AI Accelerator is planned to be available starting in 4Q 2025, in
accordance with applicable import/export guidelines.
Enhance resiliency with flexible capacity to dynamically shift system resources across
locations to proactively avoid disruptions.
Modernize and integrate applications and data in a hybrid cloud environment with
consistent and flexible deployment options to innovate with speed and agility.
Reduce cost and keep up with changing regulations through a solution that helps simplify
and streamline compliance tasks.
This chapter describes the basic characteristics of the IBM z17 platform. It includes the
following topics:
1.1, “IBM z17 ME1 highlights” on page 3
1.2, “IBM z17 ME1 technical overview” on page 7
1.3, “Hardware management” on page 14
1.4, “Reliability, availability, and serviceability” on page 14
The new processor chip design has an enhanced cache hierarchy (as introduced with the IBM
z16), an on-chip AI accelerator that is shared by the PU cores, transparent memory encryption,
and increased uniprocessor capacity (single thread and SMT alike).
Each PU chip also includes a single new Data Processing Unit (DPU), a dedicated core for
I/O operations.
The on-chip AI scoring logic provides submicrosecond AI inferencing for deep learning and
complex neural network models.
The IBM z17 (machine type 9175) has one model: the ME1. The maximum number of
characterizable processor units (PUs) with the IBM z17 is represented by feature names:
Max43, Max90, Max136, Max183, and Max208.
The number of characterizable PUs, spare PUs, System Assist Processors (SAPs), and
Integrated Firmware Processors (IFP) are included with each IBM z17 feature (see
Table 1-1).
Feature   CPC drawers   Feature code   Characterizable PUs   Standard SAPs   Spares
Max43     1             0571           1 - 43                5               2
Max90     2             0572           1 - 90                10              2
The IBM z17 memory subsystem uses proven redundant array of independent memory
(RAIM) technology to ensure high availability. Up to 64 TB (16 TB per CPC drawer) of
addressable memory per system can be ordered.
The IBM z17 also has unprecedented capacity to meet consolidation needs with innovative
I/O features for transactional and hybrid cloud environments.
The IBM z17 (maximum configuration) can support up to 12 PCIe+ I/O drawers. Each I/O
drawer can support up to 16 I/O or special purpose features for storage, network, clustering
connectivity, and cryptography.
The IBM z17 is more flexible and features simplified on-demand capacity to satisfy peak
processing demands and quicker recovery times with built-in resiliency capabilities. The
Capacity on Demand (CoD) function can dynamically change available system capacity. This
function can help respond to new business requirements with flexibility and precise
granularity.
The IBM Tailored Fit Pricing for IBM Z options delivers unmatched simplicity and predictability
of hardware capacity and software pricing, even in the constantly evolving era of hybrid cloud.
Consider the following points:
The IBM z17 enhancements in resiliency include a capability that is called IBM Z Flexible
Capacity for Cyber Resiliency. With Flexible Capacity for Cyber Resiliency, you can
remotely shift capacity and production workloads between IBM z173 systems at different
sites on demand with no onsite personnel or IBM intervention. This capability is designed
to help you proactively avoid disruptions from unplanned events and planned scenarios,
such as site facility maintenance. Refer to “IBM Z Flexible Capacity for Cyber Resiliency”
on page 537.
IBM z17 provides no new System Recovery Boost (SRB) enhancements. As with IBM z16,
SRB provides boosted processor capacity and parallelism for specific events.
Client-selected middleware starts and restarts to expedite recovery for middleware regions
and restore steady-state operations as soon as possible. z/OS SVC memory dump
processing and HyperSwap® configuration load and reload are boosted to minimize the
effect on running workloads. See: Systems Recovery Boost content solution.
On IBM z17, with the new Coupling Facility Control Code (CFCC) Level 26, the enhanced
ICA-SR coupling link protocol provides improvements for read, lock, and write requests,
compared to CF service times on IBM z16 systems. The improved CF service times for CF
requests can translate into better Parallel Sysplex coupling efficiency; therefore, the
software costs can be reduced for the attached z/OS images in the Parallel Sysplex.
IBM z17 provides improved CF processor scalability for CF images, plus virtualization,
consolidation and density enhancements. Compared to IBM z16, the relative scaling of a
CF image beyond a 16-way is significantly improved; that is, that the effective capacity of
IBM z17 CF images continues to increase meaningfully all the way up to the maximum of
32 processors in a CF image.
The IBM z17 also adds functions to protect today's data now and against future cyberattacks
that might be initiated by quantum computers. The IBM z17 provides the following
quantum-safe capabilities:
Key generation
Encryption
Key encapsulation mechanisms
Hybrid key exchange schemes
Dual digital signature schemes
3 IBM z16 also supports Flexible Capacity.
The IBM z17 ME1 provides increased processing and enhanced I/O capabilities over its
predecessor, the IBM z16 A01. This capacity is achieved by increasing the number of PUs
per system, increasing system cache, and introducing new I/O technologies.
The IBM z17 feature Max208 is estimated to provide up to 15% (+/- 2%) more total system
capacity than the IBM z16 Model Max200, with the same amount of memory and power
requirements. With up to 64 TB of main storage and enhanced SMT, the performance of the
IBM z17 ME1 processors deliver considerable improvement. Uniprocessor performance also
increased significantly. An IBM z17 Model 701 offers average performance improvements of
up to 11% (+/-2%) over the IBM z16 Model 701.4
The Integrated Facility for Linux (IFL) and IBM Z Integrated Information Processor (zIIP)
processor units on the IBM z17 can be configured to run two simultaneous threads in a single
processor (SMT). This feature increases the capacity of these processors by 25% on
average4 over processors that are running a single thread. SMT is also enabled by default on
System Assist Processors (SAPs).
4 Observed performance increases vary depending on the workload types.
Within each single drawer, IBM z17 provides 20% (+/-2%) greater capacity than IBM z16 for
standard models and 40% greater capacity on the maximum configured model, which
enables efficient partition scaling.
This comparison is based on the Large System Performance Reference (LSPR) mixed
workload analysis. The range of performance ratings across the individual LSPR workloads is
likely to feature a large spread. More performance variation of individual logical partitions
(LPARs) is available when an increased number of partitions and more PUs are available. For
more information, see Chapter 12, “Performance and capacity planning” on page 489.
For more information about millions of service units (MSUs) ratings, see the IBM Z Software
Contracts website.
VSEn V6.3.1 from 21st Century Software is supported on IBM z17. For more information, see
7.7, “VSEn migration considerations” on page 347.
IBM plans to support the following Linux on IBM Z distributions on IBM z17:
SUSE SLES 16.1 (Post GA)
SUSE SLES 15.6 (GA)
SUSE SLES 12.5 (Post GA)
Red Hat RHEL 10.0 (Post GA)
Red Hat RHEL 9.4
Red Hat RHEL 8.10
Red Hat RHEL 7.9 (Post GA)
Canonical Ubuntu 24.04 LTS (Post GA)
Canonical Ubuntu 22.04 LTS (Post GA)
Canonical Ubuntu 20.04 LTS (Post GA)
The support statements for the IBM z17 also cover the KVM hypervisor on distribution levels
that have KVM support.
For more information about the features and functions that are supported on IBM z17 by
operating system, see Chapter 7, “Operating systems support” on page 261.
5 z/OS 2.4 End of service support on 09/24 - requires IBM Software Support Services.
The compilers increase the return on your investment in IBM Z hardware by maximizing
application performance by using the compilers’ advanced optimization technology for
IBM z/Architecture®.
Through their support of web services, XML, and Java, they allow for the modernization of
assets in web-based applications. They also support the latest IBM middleware products
(CICS®, Db2®, and IMS), which allows applications to use their latest capabilities.
To fully use the capabilities of the IBM z17, you must compile your code by using the
minimum level of each compiler. To obtain the best performance, you must specify an
architecture level that is applicable to your environment, being mindful of potential N-1 and
N-2 generations at disaster recovery or secondary sites.
For more information, see 7.5.8, “z/OS XL C/C++ considerations” on page 342.
1.2.1 Frames
The IBM z17 ME1 uses 19-inch frames and industry-standardized power and hardware. It can
be configured as a one-, two-, three-, or four-frame system. The IBM z17 ME1 packaging is
compared to the two previous IBM Z platforms in Table 1-2.
Table 1-2 IBM z17 configuration options compared to IBM z15 and IBM z16 configurations
System        Number of   Number of CPC   Number of I/O   I/O and power   Power options   Cooling options
              frames      drawers         drawers         connections
IBM z17 ME1   1 - 4       1 - 4           0 - 12          Rear only       PDU only        Radiator (air) only
IBM z16 A01   1 - 4       1 - 4           0 - 12a         Rear only       PDU or BPA      Radiator (air) only
Note: Loss of one PSU leaves enough power to satisfy the power requirements of the
entire drawer. The PSUs can be concurrently maintained.
With the IBM z17, the Virtual Flash Memory (VFM) feature is offered from the main memory
capacity in 0.5 TB units (up to a 6 TB maximum) to increase granularity for the feature. VFM
can provide much simpler management and better performance by eliminating the I/O to the
adapters in the PCIe+ I/O drawers.
For a four CPC drawer system, up to 48 PCIe+ fan-out slots can be populated with fan-out
cards for data communications between the CPC drawers and the I/O infrastructure, and for
coupling. The multiple channel subsystem (CSS) architecture allows up to six CSSs, each
with 256 channels.
The IBM z17 implements PCIe Generation 5 (PCIe+ Gen5), which is used to connect the
PCIe Generation 4 (PCIe+ Gen4) dual port fan-out features in the CPC drawers. The I/O
infrastructure is designed to reduce processor usage and I/O latency, and provide increased
throughput and availability.
FICON Express
FICON Express features follow the established Fibre Channel (FC) standards to support data
storage and access requirements, along with the latest FC technology in storage and access
devices. FICON Express features support the following protocols:
FICON
This enhanced protocol (as compared to FC) provides for communication across
channels, channel-to-channel (CTC) connectivity, and with FICON devices, such as disks,
tapes, and printers. It is used in z/OS, z/VM, VSE (VSEn V6.3.1 - 21st Century Software),
z/TPF (Transaction Processing Facility), and Linux on IBM Z environments.
Fibre Channel Protocol (FCP)
This standard protocol is used for communicating with disk and tape devices through FC
switches and directors. The FCP channel can connect to FCP SAN fabrics and access
FCP/SCSI devices. FCP is used by z/VM, KVM, VSE (VSEn V6.3.1 21st Century
Software), and Linux on IBM Z environments.
FICON Express32-4P features are implemented by using PCIe cards, and offer better port
granularity and improved capabilities over the previous FICON Express features. FICON
Express32-4P four-port features support a link data rate of 32 gigabits per second (Gbps) (8,
16, or 32 Gbps auto-negotiated), and they are the preferred technology for new systems.
zHyperLink Express
zHyperLink was created to provide fast access to data by way of low-latency connections
between the IBM Z platform and storage.
The zHyperLink Express2.0 is updated to support PCIe+ Gen4 with a new Gen4 retimer, a
Gen4 switch with DMA, and the new CXP16 Gen4 optical transceiver. It connects to the IBM
DS8000 zHyperLink adapter on the other side of the link.
The zHyperLink Express2.0 feature allows you to make synchronous requests for data that is
in the storage cache of the IBM DS8900F. This process is done by directly connecting the
zHyperLink Express2.0 port in the IBM z17 to an I/O Bay port of the IBM DS8000®. This
short distance (up to 150 m [492 feet]), direct connection is designed for low-latency reads
and writes, such as with IBM DB2® for z/OS synchronous I/O reads and log writes.
Working with the FICON SAN Infrastructure, zHyperLink can improve application response
time, which cuts I/O-sensitive workload response time in half without requiring application
changes.7
7 The performance results can vary depending on the workload. Use the zBNA tool for the zHyperLink planning.
Note: The zHyperLink channels complement FICON channels, but they do not replace
FICON channels. FICON channels remain the main data driver and are mandatory for
zHyperLink usage.
Network Express
The Network Express adapter is the IBM z17 common hardware for OSA, RoCE, and
Coupling Express3 Long Reach (CE-LR). It converges the legacy OSA-Express, RoCE
Express, and CE-LR features into one hardware platform offering. The adapter supports LR
and SR optics for 10 Gb and 25 Gb speeds.
The Network Express card is considered an OSA card even though it can perform RoCE I/O
because, unlike RoCE adapters, it is not associated with a Resource Group (PSP).
This new adapter supports all legacy functions available with OSA OSD, but uses EQDIO
(Enhanced-QDIO) architecture while OSD uses QDIO. The Network Express card only
supports the Enhanced QDIO (EQDIO) architecture, with CHPID type OSH for OSA-style I/O.
The card has 2 ports, each of which is a PCHID, regardless of variant. The PCHIDs on the
card are managed by the new on-chip DPU processor, unless they are defined as NETD in
the IOCDS for Physical Function (PF) Access Mode, where the PF is assigned to a customer
partition instead of DPU.
Network Express supports both PCIe (for example, RoCE) and OSA functionality on the same
port of a card. This support eliminates the need for some users of RoCE to buy a dedicated
OSA card just to enable RoCE I/O. Remote Direct Memory Access (RDMA) over Converged
Ethernet (RoCE) is a network protocol that allows data to be transferred directly between the
memory of two computers in the same Ethernet broadcast domain, which significantly
reduces latency and CPU load and increases bandwidth.
These features help reduce the use of CPU resources for applications that use the TCP/IP
stack (such as IBM WebSphere® that accesses an IBM Db2 database). They also can help
reduce network latency with memory-to-memory transfers by using Shared Memory
Communications over RDMA (SMC-R).
With SMC-R, you can transfer huge amounts of data quickly and at low latency. SMC-R is
transparent to the application and requires no code changes, which enables rapid time to
value.
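Because SMC-R is transparent to socket applications, an ordinary TCP client is enough to illustrate the point. The following sketch is illustrative only: the peer address and port are placeholder assumptions, and whether the connection actually uses SMC-R depends on how the platform is configured (for example, on Linux on IBM Z the program can be started under the smc_run preload helper from the smc-tools package, or it can open an AF_SMC socket explicitly instead of AF_INET).

Example (illustrative sketch): smc_client.c

/* Minimal sketch of a socket client that can be carried over SMC-R.
 * Assumptions: Linux on IBM Z with SMC support; 192.0.2.10:5000 is a
 * placeholder peer address. Build with: cc -o smc_client smc_client.c
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#ifndef AF_SMC
#define AF_SMC 43          /* Linux address family for SMC sockets */
#endif

int main(void)
{
    /* Unchanged AF_INET code can use SMC-R when started under smc_run;
     * alternatively, pass AF_SMC here to request SMC explicitly. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);

    if (connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char *msg = "hello over SMC-R (or TCP fallback)\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}

If the SMC handshake with the peer cannot complete, the connection falls back to standard TCP, which is why no application changes are required.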
The RoCE function can also provide LAN connectivity for Linux on IBM Z, and complies with
IEEE standards. In addition, RoCE assumes several functions of the TCP/IP stack that
normally are performed by the PU, which allows significant performance benefits by
offloading processing from the operating system.
Customers who require the legacy QDIO architecture (CHPID type OSD) must use
OSA-Express7S 1.2 cards. The OSA CHPID types used on Network Express cards must be
the same; thus, for example, if one PCHID is TYPE=OSH, then the other must also be
TYPE=OSH. However, both NETH FIDs and an OSH CHPID can coexist on the same PCHID.
Likewise, if one PCHID is defined as NETD for PF Access Mode, then the other PCHID can
only be NETD.
OSA-Express
The OSA-Express features provide local area network (LAN) connectivity and comply with
IEEE standards. In addition, OSA-Express features assume several functions of the TCP/IP
stack that normally are performed by the PU, which allows significant performance benefits by
offloading processing from the operating system.
OSA-Express7S 1.2 features continue to support copper and fiber optic (single-mode and
multimode) environments.
HiperSockets
IBM HiperSockets is an integrated function of the IBM Z platforms that supplies attachments
to up to 32 high-speed virtual LANs, with minimal system and network overhead.
HiperSockets is a function of the Licensed Internal Code (LIC). It provides LAN connectivity
across multiple system images on the same IBM Z platform by performing
memory-to-memory data transfers in a secure way.
The HiperSockets function eliminates the use of I/O subsystem operations. It also eliminates
having to traverse an external network connection to communicate between LPARs in the
same IBM Z platform. In this way, HiperSockets can help with server consolidation by
connecting virtual servers and simplifying the enterprise network.
SMC-D requires no extra physical resources (such as RoCE, PCIe bandwidth, ports, I/O
slots, network resources, or Ethernet switches). Instead, SMC-D uses LPAR-to-LPAR
communication through HiperSockets or an OSA-Express feature for establishing the initial
connection.
z/OS and Linux on IBM Z support SMC-R and SMC-D. Now, data can be shared by way of
memory-to-memory transfer between z/OS and Linux on IBM Z.
Coupling connectivity on the IBM z17 uses Coupling Express3 Long Reach (CE3 LR) and
Integrated Coupling Adapter Short Reach (ICA SR2.0) features. The ICA SR feature supports
distances up to 150 meters (492 feet); the CE3 LR feature supports unrepeated distances of
up to 10 km (6.21 miles) between IBM Z platforms. ICA SR features provide sysplex and
timing connectivity direct to the CPC drawer, while Coupling Express3 LR features connect
into the PCIe+ I/O Drawer.
Coupling links can also carry timing information such as Server Time Protocol (STP) for
synchronizing time across multiple IBM Z CPCs in a Coordinated Time Network (CTN).
For more information about coupling and clustering features, see 4.5, “I/O features” on
page 181.
1.2.7 Cryptography
IBM z17 provides two main cryptographic functions: CP Assist for Cryptographic Functions
(CPACF) and Crypto-Express8S.
CPACF
CPACF is a high-performance, low-latency coprocessor that resides in every Telum II z17 PU
chip, performs symmetric key encryption operations, and calculates message digests
(hashes) in hardware. The following algorithms are supported:
Encryption (DES, TDES, AES)
Hashing (SHA-1, SHA-2, SHA-3, SHAKE)
Random Number Generation (PRNG, DRNG, TRNG)
CPACF supports Elliptic Curve Cryptography (ECC) clear key, improving the performance of
Elliptic Curve algorithms. The following algorithms are supported:
ECDH[E]
P-256, P-384, and P-521
X25519, and X448
ECDSA
Keygen, sign, verify
P-256, P-384, P521,
EdDSA
KeyGen, sign, verify
Ed25519, Ed448
Support for protected key signature creation
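CPACF is not programmed directly by applications; it is exploited through standard operating system and library interfaces (for example, ICSF on z/OS, or OpenSSL and libica on Linux on IBM Z, whose s390x support drives the CPACF instructions). The following sketch simply encrypts a buffer with AES-256-GCM through the ordinary OpenSSL EVP API; the key, IV, and data are illustrative placeholders, and any hardware acceleration happens transparently underneath this code.

Example (illustrative sketch): cpacf_aes.c

/* AES-256-GCM with the standard OpenSSL EVP API. On Linux on IBM Z,
 * OpenSSL's s390x backend can exploit CPACF for this operation
 * transparently. Key, IV, and plaintext below are placeholders only.
 * Build with: cc cpacf_aes.c -lcrypto
 */
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    unsigned char key[32] = { 0 };          /* placeholder 256-bit key */
    unsigned char iv[12]  = { 0 };          /* placeholder 96-bit IV   */
    unsigned char pt[]    = "sample record";
    unsigned char ct[64], tag[16];
    int len = 0, ct_len = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    if (ctx == NULL)
        return 1;

    /* Initialize AES-256-GCM and encrypt the sample buffer. */
    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, ct, &len, pt, (int)strlen((char *)pt));
    ct_len = len;
    EVP_EncryptFinal_ex(ctx, ct + len, &len);
    ct_len += len;

    /* Retrieve the 16-byte GCM authentication tag. */
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, (int)sizeof(tag), tag);
    EVP_CIPHER_CTX_free(ctx);

    printf("ciphertext bytes: %d\n", ct_len);
    return 0;
}

The same pattern applies to the hashing and random number algorithms listed above: applications call the standard APIs, and the platform decides whether CPACF performs the work.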
Crypto-Express8S
The tamper-sensing and tamper-responding Crypto-Express8S features provide acceleration
for high-performance cryptographic operations and support up to 85 domains with the IBM
z17 ME1. This specialized hardware performs AES, DES and TDES, RSA, Elliptic Curve
(ECC), SHA-1, and SHA-2, and other cryptographic operations.
It supports specialized high-level cryptographic APIs and functions, including those functions
that are required with quantum-safe cryptography and in the banking industry.
Crypto-Express8S features are designed to meet the Federal Information Processing
Standards (FIPS) 140-2 Level 4 and PCI HSM security requirements for hardware security
modules.
For more information about cryptographic features and functions, see Chapter 6,
“Cryptographic features” on page 221.
Clustering connectivity11:
– ICA SR2.0 (new build)
– Coupling Express3 Long Reach (new build)
The IBM z17 delivers a range of features and functions that allow PUs to concentrate on
computational tasks, while distinct, specialized features take care of the rest. For more
information about these features and other IBM z17 features, see 3.5, “Processor unit
functions” on page 103.
The HMC is an appliance that provides a single point of control for managing local or remote
hardware elements.
For IBM z17 new built systems, IBM Z Hardware Management Appliance (FC 0129) is the
only available HMC. The HMC Appliance and SE Appliance run virtualized on the SE
hardware.
Existing HMA features of the IBM z15 and IBM z16 can be upgraded to driver 61 HMC code
to support IBM z17, but older stand-alone HMCs (rack-mounted or tower) cannot be carried
forward during an MES upgrade to IBM z17.
For more information, see Chapter 10, “Hardware Management Console and Support
Element” on page 429.
9 The Crypto Express8S is available with either one or two hardware security modules (HSMs). The HSM is the IBM
4770 PCIe Cryptographic Coprocessor (PCIeCC).
10 The Crypto Express7S comes with either one (1-port) or two (2-port) hardware security modules (HSMs). The HSM
is the IBM 4769 PCIe Cryptographic Coprocessor (PCIeCC).
IBM Z platforms are designed to enable highest availability and lowest downtime. These facts
are recognized by various IT analysts, such as ITIC12 and IDC13. A comprehensive,
multi-layered strategy includes the following features:
Error Prevention
Error Detection and Correction
Error Recovery
System Recovery Boost
With a suitably configured IBM z17, further reduction of outages can be attained through First
Failure Data Capture (FFDC), which is designed to reduce service times and avoid
subsequent errors. It also improves nondisruptive replace, repair, and upgrade functions for
memory, drawers, and I/O adapters. IBM z17 supports the nondisruptive download and
installation of LIC updates.
IBM z17 RAS features provide unique high-availability and nondisruptive operational
capabilities that differentiate IBM Z in the marketplace. IBM z17 RAS enhancements are
made on many components of the CPC (processor chip, memory subsystem, I/O, and
service) in areas such as error checking, error protection, failure handling, faster repair
capabilities, sparing, and cooling.
The ability to cluster multiple systems in a Parallel Sysplex takes the commercial strengths of
the z/OS platform to higher levels of system management, scalable growth, and continuous
availability.
The IBM z17 builds on the RAS of the IBM z16 family with the following RAS improvements:
System Recovery Boost
– System Recovery Boost was introduced with IBM z15. It offers customers more Central
Processor (CP) capacity during system recovery operations to accelerate the startup
(IPL), shutdown, or stand-alone memory dump operations (at image level - LPAR14).
System Recovery Boost requires operating system support. No other IBM software
changes are required to be made during the boost period.
System Recovery Boost can be used during LPAR shutdown or startup to make the
running operating system and services available in a shorter period.
The System Recovery Boost provides the following options for the capacity increase:
• Subcapacity CP speed boost: During the boost period, subcapacity engines that
are allocated to the boosted LPAR are transparently activated at their full capacity
(CP engines).
• zIIP Capacity Boost: During the boost period, all active zIIPs that are assigned to an
LPAR are used to extend the CP capacity (CP workload is dispatched to zIIP
processors during the boost period).
System Recovery Boost enhancements that were delivered with the IBM z16 maximized
service availability by using tailored short-duration boosts to mitigate the effect of the
following recovery processes:
• z/OS SVC memory dump boost boosts the system on which the SVC memory
dump is taken to reduce system affect and expedite diagnostic capture. It is
possible to enable, disable, or set thresholds for this option.
• Middleware restart/recycle boost boosts the system on which a middleware
instance is being restarted to expedite resource recovery processing, release
retained locks, and so on. It is applicable to planned restarts, or restarts after failures.
12 For more information, see ITIC Global Server Hardware, Server OS Reliability Report.
13 For more information, see Quantifying the Business Value of IBM Z.
14 LPAR that is running an Operating System image.
15 z/OS that is configured as a guest system under z/VM does not use the boost.
Naming: The IBM z17 Model ME1, Machine Type (M/T) 9175, is further identified in this
document as IBM z17, unless otherwise specified.
The redesigned CPC drawer and I/O infrastructure also lower power consumption, reduce
the footprint, and allow installation in virtually any data center. The IBM z17 server is rated
for the ASHRAE class A31 data center operating environment.
The IBM z17 server is similar to IBM z16 and IBM z15, but differentiates itself from previous
IBM Z server generations through the following significant changes to the modular hardware:
All external cabling (power, I/O, and management) is performed at the rear of the system
Flexible configurations: Frame quantity is determined by the system configuration (1 - 4
frames)
Feature codes that reserve slots for plan-ahead CPC drawers and a new Spyre adapter
Internal water cooling plumbing for systems with up to four CPC drawers (Frames A and B)
PCIe+ Gen4 I/O drawers (19-inch format) supporting 16 PCIe adapters
The only power option for the IBM z17 is PDU-based power. The IBM z17 ME1 system is
designed as a radiator (air) cooled system.
The IBM z17 ME1 includes the following basic hardware building blocks:
19-inch 42u frame (1 - 4)
CPC (Processor) drawers (1 - 4)
PCIe+ Gen4 I/O drawers (up to 12)
Radiator cooling assembly (RCA) for CPC drawers cooling the Dual Chip Modules (DCM)
Power, with Intelligent Power Distribution Units (iPDU) pairs (2 - 4 per frame, maximum 8,
depending on the configuration).
Support Elements combined with optional Hardware Management Appliance (two):
– Single Keyboard, Mouse, Monitor (KMM) device (USB-C connection)
– Optional IBM Hardware Management Appliance feature
24-port 1 GbE Switches (two or four, depending on the system configuration)
Hardware for cable management at the rear of the system
1 For more information, see Chapter 2, Environmental specifications in IBM 9175 Installation Manual for Physical
Planning, GC28-7049.
An example of a fully configured system with PDU-based power, four CPC drawers, and up to
12 PCIe+ I/O drawers is shown in Figure 2-1.
Figure 2-1 Fully configured IBM z17 ME1 with PDU-based power (frames Z, A, B, and C); legend: 1 - 4 frames,
2 or 4 GbE switches, 2 SEs, 1 - 4 CPC drawers, 0 - 12 PCIe+ I/O drawers, 2 - 8 PDUs, and 1 or 2 RCAs
The key features that are used to build the system are listed in Table 2-1. For more
information about the various configurations, see Appendix E, “Frame configurations with
Power Distribution Units” on page 551.
PDU power: 380 - 415 V, 32 A, 3-phase (Wye), worldwide (except North America and Japan), FC 0564
Power Options
The 9175 (z17 ME1) has the power options shown in Table 2-2:
Water cooling: No
Balanced power: No
DC power available: No
a. Wye cords require five wires: three for the power phases, one for neutral, and one for ground.
Caution: Installation of a low voltage system to a high voltage facility will cause significant
damage to the system’s power components.
Considerations
Consider the following points:
A-Frame is always present in every configuration
1u Support Elements (x2) are always in A-Frame at locations A41 and A42
1u 24-port internal Ethernet switches (x2) are always at locations A39 and A40
Additional Ethernet switches (x2) are available when necessary in Frames C or B
A new shipping container has been developed for z17 to minimize carbon footprint impact
– The container includes both the system frame and its front and rear covers (that is, there
is no longer a separate package for the cover set to ship in or dispose of).
For the IBM z17 Model ME1 server, the top exit of all cables for I/O or power is always an
option with no feature codes required. Adjustable cover plates are available for the openings
at the top rear of each frame. See the figure on page 24.
All external I/O cabling enters the system at the rear of the frames for all adapters,
management LAN, and power connections.
The Top Exit feature code (FC 5823) provides an optional new Top Exit Enclosure. The
optional Top Exit Enclosure provides new fiber cable organizers within the enclosure to
optimize the fiber cable storage and strain relief. It also provides mounting locations to secure
Fiber Quick Connector (FQC) MPO2 brackets (FC 5827) on the top of the frames.
Note: Overhead I/O cabling is contained within the frames. Extension “chimneys” that were
featured with systems before IBM z15 systems are no longer used.
For additional details about cabling feature codes, refer to 11.4, “Physical planning” on
page 479.
A view of the top rear of the frame and the openings for top exit cables and power is shown in
the following figure. When FC 7803 is installed, the plated adjustable shields are removed, and the top
exit enclosure is installed.
Figure 2-3 Top exit FC 7803 (top exit) without and with FC 5823 (Top Exit Cable Enclosure): the enclosure
provides MTP brackets (six maximum), cable strain relief (bridge lances with hook-and-loop fastening), and
cable organizing spools (six)
Care must be taken when ordering the Feature Codes (Table 2-3 on page 25) for cables that
are entering the frames from above the floor, below the floor, or both, and if the Top Exit
feature is wanted, which provides the Top Exit Enclosure.
Table 2-3 IBM z17 Model ME1 cabling Feature Code combinations
Environment        Bottom exit          Top exit                        Feature Code     Comments
Raised Floor       Yes                  No                              7804 only        Ships with Bottom Exit Tailgate and supports Bottom FQC FC 5827
Raised Floor       Yes                  Yes (no Top Exit Enclosure)     7803 and 7804    Ships with Bottom Exit Tailgate and supports Bottom FQC FC 5827
Raised Floor       Yes                  Yes (with Top Exit Enclosure)   7804 and 5823    Top (5827) and Bottom (5824) FQC support
Raised Floor       No                   Yes (no Top Exit Enclosure)     7803             Ships with Bottom Seal Plate and does not support FQC FCs
Raised Floor       No                   Yes (with Top Exit Enclosure)   5823 and 7803    Ships with Bottom Seal Plate and only supports FQC FC 5824 and 5826
Non-Raised Floor   No (not supported)   Yes (no Top Exit Enclosure)     7998a and 7803   Ships with Bottom Seal Plate and does not support FQC FCs
Non-Raised Floor   No (not supported)   Yes (with Top Exit Enclosure)   7998a and 5823   Ships with Bottom Seal Plate and does not support FQC FCs
a. FC 7998: Non-Raised floor support (flag)
A vertical cable management guide (“spine”) can assist with proper cable management for
fiber, copper, and coupling cables. A top to bottom spine is present from manufacturing with
cable organizer clips that are installed for frames Z and C when present. Frames A and B
contain mini-spines that serve the same purpose.
The cable retention clips can be relocated for best usage. All external cabling to the system
(from top or bottom) can use the spines to minimize interference with the PDUs that are
mounted on the sides of the rack. See Figure 2-4 on page 26.
The rack with the spine mounted and the optional fiber cable organizer hoops is shown in
Figure 2-4. If necessary, the spine, and organizer hoops can be easily relocated for service
procedures.
The IBM z17 ME1 can be configured with 1- 4 CPC drawers (three in the A frame and one in
the B frame). A CPC drawer and its components are shown in Figure 2-5.
Figure 2-5 CPC drawer components (top view): SMP-10 cross-drawer connectivity, cooling manifolds and cold
plates (four PU DCMs under the cold plates), 12 PCIe+ Gen5 fan-out slots, and up to 48 memory DIMMs
The IBM z17 Model ME1 5u CPC drawer always contains four Processor Unit (PU) DCMs,
and up to 48 memory DIMM slots.
Depending on the feature, the IBM z17 ME1 contains the following CPC components:
The number of CPC drawers installed is driven by the following feature codes:
– FC 0571: One CPC drawer, Max43, up to 43 characterizable PUs
– FC 0572: Two CPC drawers, Max90, up to 90 characterizable PUs
– FC 0573: Three CPC drawers, Max136, up to 136 characterizable PUs
– FC 0574: Four CPC drawers, Max183, up to 183 characterizable PUs
– FC 0575: Four CPC drawers, Max208, up to 208 characterizable PUs
The following Processor Unit DCM is used:
Each PU DCM contains two PU chips (Telum II) on a single module that uses 5 nm silicon
wafer technology, with 43 billion transistors per chip and cores that run at 5.5 GHz. Each chip
is designed with nine cores (eight PUs and one Data Processing Unit (DPU)), giving 18 cores
per PU DCM (16 PUs and two DPUs); the core-count arithmetic after this list shows the
resulting drawer and system totals.
Memory plugging:
– Six memory controllers per drawer (two each on DCM2 / DCM1;
one each on DCM3 / DCM0)
– Each memory controller supports eight DIMM slots
– All eight DIMMs on one memory controller are the same size
– Four / Six memory controllers per drawer are populated (up to 48 DIMMs)
– Different memory controllers can have different DIMM sizes
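As a quick cross-check of these numbers, a short worked calculation (assuming fully populated drawers, and using the Max208 feature, FC 0575, described above):

\[
\text{cores per CPC drawer} = 4\ \text{DCMs} \times 2\ \text{chips} \times 9\ \text{cores} = 72 = 64\ \text{PUs} + 8\ \text{DPUs}
\]
\[
\text{PUs per four-drawer system} = 4 \times 64 = 256,\quad \text{of which up to 208 are characterizable (Max208)}
\]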
The front view of the CPC drawer, which includes the cooling fans, BMC/OSC and processor
power cards (PPC), is shown in Figure 2-6. The rear view of a fully populated CPC Drawer is
shown in Figure 2-7 on page 29.
Figure 2-6 CPC drawer front view (A frame), showing the BMC/OSC cards with PPS ports
Dual port I/O fan-outs and ICA SR adapters are plugged in specific slots for best performance
and availability. Redundant power supplies and six SMP10 ports also are shown. Each pair of
SMP10 ports is redundant. If a single cable fails, the repair can be performed concurrently.
The CPC drawer logical structure and component connections are shown in Figure 2-8.
Memory is connected to the DCMs through memory control units (MCUs). Up to six MCUs
are available in a CPC drawer (one or two per DCM) and provide the interface to the DIMM
controller. A memory controller uses eight DIMM slots.
Figure 2-8 CPC drawer logical structure: PU chip (CP) interconnections within the drawer over the M-Bus and
X-Bus, and A-Bus connections from Drawer 0 to Drawers 1, 2, and 3
The CPC drawers that are installed in Frame A and Frame B are populated from bottom to
top. The order of CPC drawer installation is listed in Table 2-4.
The time synchronization security that is introduced for IBM z17 addresses potential security
vulnerabilities, and will also serve as a foundation for future IBM Z time synchronization
security improvements (such as quantum-safe techniques) that will make IBM Z the most
secure platform for time synchronization in the industry:
Include separation of current IBM Carpo z16 ETS container into individual containers
(ETS, NTP/Chrony, PTP4l4), restricting and minimizing root access
Include full implementation of Chrony (NTP) to enable:
– authentication and use of NTP algorithms to mitigate MiTM5 attacks
Include implementation of NTS for NTP and PTP
For IBM z17 Model ME1, Precision Time Protocol (PTP, IEEE 1588) can be used as an
external time source for IBM Z Server Time Protocol (STP) for an IBM Z Coordinated Timing
Network (CTN). The initial implementation for PTP connectivity was provided by using the
IBM Z Support Element (SE).
As with IBM z16, on IBM z17, the external time source (PTP or NTP) is connected directly to
the CPC and bypasses the SE connection. This IBM z17 feature allows more accurate sync
with the external time source.
The accuracy of an STP-only CTN is improved by using an NTP or PTP server with the PPS
output signal as the External Time Source (ETS). Devices with PPS output are available from
several vendors that offer network timing solutions.
4 ptp4l is the main program that implements PTP according to IEEE standard 1588 for Linux.
5 MiTM (man in the middle): a cyberattack where an attacker intercepts and alters communications between two
parties to steal sensitive information.
Two local redundant oscillator cards are available per CPC drawer, each with one PPS
port and one ETS port (RJ45 Ethernet, for both PTP and NTP).
Current design requires Pulse Per Second signal use for providing maximum time
accuracy for both NTP and PTP.
An augmented precision oscillator (20 Parts Per Million [PPM] versus 50 PPM on previous
systems) is used.
The following PPS plugging rules apply (see the figure on page 33):
– Single CPC drawer plugs left and right OSC PPS coaxial connectors.
– Multi-drawer plug CPC0 left OSC PPS and CPC1 left OSC PPS coaxial connectors.
– Multi-drawer plug CPC0 left OSC ETS1 J03 ethernet and CPC1 right OSC ETS2 J03
ethernet connectors
– Cables are routed from rear to front by using a pass-through hole in the frame, and
under the CPC bezel by using a right-angle Bayonet Neill-Concelman (BNC) connector
that provides the pulse per second (PPS) input for synchronization to an external time
source with PPS output.
– Cables are supplied by the customer.
– Connected PPS ports must be assigned in the Manage System Time menus on the
HMC.
Figure: PPS port locations on the BMC/OSC cards of the CPC drawers
Tip: STP is available as FC 1021. It is implemented in the Licensed Internal Code (LIC),
and allows servers to maintain time synchronization with each other and synchronization to
an ETS. In a multi-server STP Coordinated Timing Network (CTN), coupling/timing links are
required for STP communication.
For more information, see IBM Z Server Time Protocol Guide, SG24-8480.
With IBM z17 Model ME1, the CPC drawer BMC is combined with the Oscillator card in a
single Field Replaceable Unit (FRU). Two combined BMC/OSC cards are used per CPC
drawer.
Also, the PCIe+ I/O drawer has an improved BMC. Each BMC card has one Ethernet port that
connects to the internal Ethernet LANs through the internal network switches (SW1 and SW2,
plus SW3 and SW4 if configured). The BMCs communicate with the SEs and provide a
subsystem interface (SSI) for controlling components.
(Figure: internal management network, showing the BMCs connected through the internal switches to the SEs/HMAs and the customer LANs.)
Note: The maximum IBM z17 ME1 system configuration features four GbE switches, four
CPC drawers, and up to 12 PCIe I/O drawers.
A typical BMC operation is to control a power supply. An SE sends a command to the BMC to
start the power supply. The BMC cycles the various components of the power supply,
monitors the success of each step and the resulting voltages, and reports this status to the
SE.
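As a purely conceptual illustration of this control flow, the short Python sketch below models an SE sending a power-on command to a BMC, which steps through the power supply components, checks the resulting voltages, and reports a per-step status back. All names in the sketch are hypothetical and do not correspond to any IBM firmware or management interface.

# Conceptual sketch only: models the SE -> BMC power-on flow described above.
class PowerSupply:
    # Components that the BMC brings up in sequence; names are illustrative only.
    components = ("input stage", "dc converters", "point-of-load rails")

    def measure_voltage(self, component: str) -> bool:
        return True  # placeholder: real hardware reads voltage sensors here


class Bmc:
    """Stands in for a Baseboard Management Controller that drives one power supply."""

    def __init__(self, supply: PowerSupply):
        self.supply = supply

    def power_on(self) -> dict:
        status = {}
        for component in self.supply.components:
            ok = self.supply.measure_voltage(component)  # monitor each step
            status[component] = "ok" if ok else "failed"
            if not ok:
                break                                    # stop on the first failure
        return status                                    # reported back to the SE


# The SE sends the command and receives the per-step status report.
print(Bmc(PowerSupply()).power_on())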
SEs are duplexed (N+1), and each element has at least one BMC. Two internal Ethernet LAN
switches and two SEs are configured for redundancy, with crossover connectivity between the
LANs so that both SEs can operate on both LANs.
The Hardware Management Appliances (HMAs) and SEs are connected directly to one or
two Ethernet Customer LANs. One or more HMCs can be used.
For currently ordered IBM z17 systems, the Hardware Management Appliance (HMA, FC
0355) is the only orderable HMC feature. The HMA runs on the same hardware as the Support
Elements (as a virtual appliance; the SE code runs as a guest of the HMA).
The PU DCM is shown in Figure 2-12 on page 35. The DCM features a thermal cap that is
placed over the chips. Each DCM is water-cooled by way of a cold plate manifold assembly.
Each DCM socket size is 71.5 mm x 79 mm and holds two 5 nm FinFET PU chips measuring
23.75 mm x 23.82 mm (565.6 mm2).
The DCMs are each plugged into a socket that is part of the CPC drawer packaging. Each
DCM is cooled by water flowing through a manifold assembly where cold plates are attached
and used to help remove the heat from the processor chips.
(Figure: IBM z17 PU chip layout, showing the processor cores with their 36 MB L2 caches, the DPU, and the on-chip AI accelerator.)
The IBM z17 Model ME1 PU chip (two PUs packaged on each DCM) includes the following
features and improvements:
5 nm silicon lithography wafer technology
565.6 mm2 chip size
43 billion transistors
5.5 GHz base clock frequency
Nine core design with increased on-chip cache sizes
Two PCIe Gen5 interfaces
DDR5 memory controller
AES256 Memory Encryption
Two M-Buses to support DCM-internal chip-to-chip connectivity
Six X-Buses per chip (12 per DCM) to support DCM-to-DCM connectivity in the CPC drawer
One A-Bus to support drawer-to-drawer connectivity
New cache structure design compared to IBM z16 PU:
– L1D (data) and L1I (instruction) caches - ON-core (128 KB each)
– 10 x L2 - 36 MB dense SRAM - outside the core, semi-private to the core
– L3 (virtual) - up to 360 MB
– L4 (virtual) - up to 2.88 GB
40% more cache per chip
DPU - Data Processing Unit
– 32 small processor cores
– Protocol Accelerators
– Interface to one of the L2 caches
– PCIe Gen5 x16 (bifurcated as Gen4 x8 used for I/O)
Improved branch prediction design by using SRAM
Improved Gzip Compression
2nd Generation AI Unit
– On-chip Artificial Intelligence Unit (AIU): deep learning focus for AI inference
– AIU Intelligent Routing
– Hardware acceleration for NLP and transformer models
– Enhanced AI ecosystem support
Significant architecture changes: COBOL compiler and more
Speeds and Feeds
– 10% increase in performance per thread
– 20% increase in standard models drawer capacity
– 14% increase in max config system capacity
– Increased Memory capacity to 64 TB (40% increase over IBM z16)
Instruction decode unit (IDU): The IDU is fed from the IFU buffers and is responsible for
parsing and decoding of all z/Architecture operation codes.
Translation unit (XU): The XU has a large translation lookaside buffer (TLB) and the
Dynamic Address Translation (DAT) function that handles the dynamic translation of
logical to physical addresses.
Instruction sequence unit (ISU): This unit enables the out-of-order (OoO) pipeline. It tracks
register names, OoO instruction dependency, and handling of instruction resource
dispatch.
Instruction fetching unit (IFU) (prediction): These units contain the instruction cache,
branch prediction logic, instruction fetching controls, and buffers. Its relative size is the
result of the elaborate branch prediction design.
Recovery unit (RU): The RU keeps a copy of the complete state of the system that
includes all registers, collects hardware fault signals, and manages the hardware recovery
actions.
Dedicated Co-Processor (CoP): The dedicated coprocessor is responsible for data
compression and encryption functions for each core.
Core pervasive unit (PC) for instrumentation and error collection.
Modulo arithmetic (MA) unit: Support for Elliptic Curve Cryptography
Vector and Floating point Units (VFU):
– BFU: Binary floating point unit
– DFU: Decimal floating point unit
– DFx: Decimal fixed-point unit
– FPd: Floating point divide unit
– VXx: Vector fixed-point unit
– VXs: Vector string unit
– VXp: Vector permute unit
– VXm: Vector multiply unit
L2 – Level 2 cache
2.3.3 PU characterization
The PUs are characterized for client use. The characterized PUs can be used in general to
run supported operating systems, such as z/OS, z/VM, and Linux on IBM Z. They also can
run specific workloads, such as Java, XML services, IPsec, and some Db2 workloads, or
clustering functions, such as the Coupling Facility Control Code (CFCC).
The maximum number of characterizable PUs depends on the IBM z17 Model ME1 CPC
drawer feature code. Some PUs are characterized for system use; some are characterized for
client workload use.
By default, two spare PUs are available to assume the function of failed PUs. The maximum
number of PUs that can be characterized for client use are listed in Table 2-5 on page 39.
The CP to zIIP purchase ratio rule (for every CP purchased, up to two zIIPs could be
purchased) was removed starting with IBM z16.
Converting a PU from one type to any other type is possible by using the Dynamic Processor
Unit Reassignment process. These conversions occur concurrently with the system
operation.
Note: The addition of ICFs, IFLs, and zIIPs to the IBM z17 Model ME1 does not change
the system capacity setting or its million service units (MSU) rating.
(Figure: cache and memory structure comparison - the IBM z16 drawer with 32 MB L2 caches and a 1792 MB virtual L4, and the IBM z17 drawer with four DCMs, 36 MB L2 caches per core, a DPU per chip, and a 2.8 GB virtual L4.)
Figure 2-17 PCIe I/O drawer (front and rear view)
A total of 16 I/O cards are spread over two I/O domains (0 and 1):
– Each I/O slot reserves up to four PCHIDs.
– Left-side slots are numbered LG01 - LG10 and right side slots are numbered
LG11 - LG20 from the rear of the rack. A location and LED identifier panel is at the
center of the drawer.
– With IBM z17 Model ME1, the numbering of the PCHIDs by location for the I/O drawers
has changed. I/O drawers in the Z frame start with PCHID 100 and continue
incrementing, assigning PCHIDs in a specific sequence that depends on the number of
drawers and their locations.
For more information about examples of the various configurations, see Appendix E,
“Frame configurations with Power Distribution Units” on page 551.
Two PCIe+ Gen4 Switch Cards provide connectivity to the PCIe+ Gen4 Fanouts that are
installed in the CPC drawers.
Each PCIe+ I/O drawer has four dedicated support partitions (two per domain) to
manage the native PCIe cards.
Two Baseboard Management Controllers (BMC) cards are used to control the drawer
function.
Redundant N+1 power supplies (two) are mounted on the rear and redundant blowers
(six) are mounted on the front.
Figure 2-18 shows an example with the maximum configuration of CPC drawers and I/O
features ordered, the layout of the PCIe+ I/O drawers, and the order of installation.
The IBM z17 introduces a new feature code (FC 0352 - Z-frame 1st I/O). This FC allows
clients to prepare to carry their Z frame forward into their future IBM Z system.
Reserving CPC drawer slots for future growth is possible by ordering FC 2933 (CPC1) and
FC 2934 (CPC2). Care must be taken to reserve CPC drawer slots: if these features are not
ordered, the open space may be populated with I/O drawers.
The I/O drawer plugging sequence for IBM z17 without Z-frame 1st I/O Placement (FC 0352)
is shown in Table 2-6.
The I/O drawer plugging sequence for z17 with Z-frame 1st I/O Placement FC 0352 is shown
in Table 2-7.
The port and PCHID layout, where the top of the adapter (port D1) is closest to the location
panel on both sides of the drawer, is shown in Figure 2-19 on page 44.
Figure 2-19 I/O feature orientation in PCIe I/O drawer (rear view)
Note: The Configuration Mapping Tool (available on ResourceLink) can be used to print a
CHPID Report that displays the drawer and PCHID/CHPID layout.
2.5 Memory
The maximum physical memory size is directly related to the number of CPC drawers in the
system. Each CPC drawer can contain up to 14 TB of customer memory, for a total of 64 TB
of memory per system.
The minimum and maximum memory sizes that you can order for each IBM z17 Model ME1
feature are listed in Table 2-9.
The memory granularity, which is based on the installed customer memory, is listed in
Table 2-10.
The CPC drawer memory topology of an IBM z17 is shown in Figure 2-20.
Servers are configured with the most efficient configuration of memory DIMMs that can
support Addressable Memory that is required for Customer Ordered Memory plus HSA.
In some cases, Available Addressable Memory might be available to support one or more
concurrent LIC CC Customer Memory upgrades with no DIMM changes.
IBM z17 implements enhanced RAIM design that includes the following features:
Eight Channel Reed-Solomon6 RAIM
90 → 80 DRAMs accessed across memory channels (11% reduction, excluding unused
spare)
Staggered Memory Refresh → Leverage RAIM to hide memory refresh penalty
z17 Memory Buffer Chip Interface:
– Open-top Memory Buffer (OCMB); Fully (meso)synchronous OCMB
– Lane sparing replaced with lane degrade
– 256 B fetch support; 128 B store support removed
Up to 16 TB per drawer (with six MCUs populated with 512 GB DIMMs)
Host-side Memory Encryption
– Memory encryption is performed by the memory controllers (MCUs) by using an
encryption mechanism that combines an encryption key with part of the memory
address to encrypt and protect the data. Encryption and decryption occur between
RAIM error correction and memory: data is encrypted post-RAIM encoding during
store operations, and decrypted pre-RAIM decoding during fetch operations. IBM
z17 implements 256-bit AES keys, while IBM z16 uses 128-bit keys. The
ECC-protected memory encryption keys are generated automatically once per IML,
and their values are never exposed.
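The idea of deriving the ciphertext from both a secret key and part of the physical address can be illustrated with a short, purely conceptual Python sketch. It uses AES-256 in CTR mode from the third-party cryptography package and a hypothetical address_tweak() helper; it is an analogy only and does not represent the IBM z17 memory controller design, which operates in hardware together with the RAIM and ECC logic.

# Conceptual analogy only - not the IBM z17 hardware mechanism.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # 256-bit AES key, generated once (analogous to once per IML)

def address_tweak(physical_address: int) -> bytes:
    # Derive a 16-byte counter block from part of the memory address, so that
    # identical data stored at different addresses produces different ciphertext.
    return physical_address.to_bytes(16, "big")

def encrypt_line(data: bytes, physical_address: int) -> bytes:
    cipher = Cipher(algorithms.AES(key), modes.CTR(address_tweak(physical_address)))
    return cipher.encryptor().update(data)

def decrypt_line(blob: bytes, physical_address: int) -> bytes:
    cipher = Cipher(algorithms.AES(key), modes.CTR(address_tweak(physical_address)))
    return cipher.decryptor().update(blob)

line = b"A" * 256                        # a 256-byte block standing in for a cache line
stored = encrypt_line(line, 0x42000)     # encrypted on store
assert decrypt_line(stored, 0x42000) == line   # decrypted on fetch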
Supported drawer memory configurations are listed in Table 2-11. Each CPC drawer is
included from manufacturing with one of these memory configurations.
(Table 2-11 lists, for each drawer memory configuration number, the DIMM sizes that are plugged in each 8-slot DIMM bank (MD01 - MD48, by memory controller), the physical memory installed, and the resulting customer increment; the HSA is 884 GB.)
DIMM plugging for the configurations in each CPC drawer does not have to be the same. Each
memory 8-slot DIMM bank must have the same DIMM size; however, a drawer can have a
mix of DIMM banks.
The support element View Hardware Configuration task can be used to determine the size
and quantity of the memory plugged in each drawer. Figure 2-21 on page 49 shows an
example of configuration number 27 from the previous tables, and displays the location and
description of the installed memory modules.
Table 2-12 on page 50 lists the physical memory plugging configurations by feature code from
manufacturing when the system is ordered. Consider the following points:
The CPC drawer columns for the specific feature contain the Memory Plug Drawer
Configuration number that is referenced in Table 2-11 on page 47 and the Population by
DIMM Bank that is listed in Table 2-12 on page 50.
Dial Max indicates the maximum memory that can be enabled by way of the LICC
concurrent upgrade.
If more storage is ordered by using other feature codes, such as Virtual Flash Memory or
Flexible Memory, the extra storage is installed and plugged as necessary.
For example, a customer orders FC 3943, which features 9472 GB of customer memory, and FC
0573 Max136 (3 CPC drawers). The drawer configurations include the following components:
CPC0 (3584 GB), CPC1 (3584 GB), CPC2 (3584 GB) - Configuration #27 (3584 GB)
Total: 3584 + 3584 + 3584 - 884 GB HSA = 9868 GB (Dial Max)
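The Dial Max arithmetic in this example can be reproduced directly; the minimal Python sketch below simply encodes the figures from the example above (three drawers of 3584 GB and the 884 GB HSA).

# Reproduces the Dial Max calculation from the FC 3943 / Max136 example above.
drawer_memory_gb = [3584, 3584, 3584]   # CPC0, CPC1, CPC2 (configuration #27)
hsa_gb = 884                            # hardware system area, not available to the client

dial_max_gb = sum(drawer_memory_gb) - hsa_gb
print(dial_max_gb)                      # 9868 GB; 9472 GB of it was ordered as customer memory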
For a model upgrade that results in the addition of a CPC drawer, the minimum memory
increment is added to the system. Each CPC drawer has a minimum physical memory size of
1024 GB.
Model Upgrades
During a model upgrade, adding a CPC drawer is a concurrent operation7. Adding physical
memory to the added drawer is also concurrent. If all or part of the added memory is enabled
for use, it might become available to an active LPAR if the partition includes defined reserved
storage. (For more information, see 3.7.3, “Reserved storage” on page 130.) Alternatively, the
added memory can be used by a defined LPAR that is activated after the memory is added.
Note: Memory downgrades within an IBM z17 are not supported. Feature downgrades
(removal of a CPC quantity feature) are not supported.
For more information, see 2.7.1, “Redundant I/O interconnect (RII)” on page 59.
Removing a CPC drawer often results in removing active memory. With the flexible memory
option, removing the affected memory and reallocating its use elsewhere in the system is
possible. (For more information, see 2.5.7, “Flexible Memory Option” on page 53.) This
process requires more available memory to compensate for the memory that becomes
unavailable with the removal of the drawer.
7 CPC 1 and CPC 2 can be added concurrently in the field if FC 2933 and FC 2934 are ordered with the initial
configuration. The addition of a fourth CPC drawer (CPC 3) is not supported in the field; four CPC drawer systems are
factory-built only.
No application changes are required to change from IBM Flash Express to VFM. Consider the
following points:
Dialed memory + VFM = total hardware plugged
Dialed memory + VFM + Flex memory option = total hardware plugged
VFM helps improve availability and handling of paging workload spikes when z/OS V2.3, V2.4,
V2.5, or V3.1 is run. With this support, z/OS helps improve system availability and
responsiveness by using VFM across transitional workload events, such as market openings
and diagnostic data collection. z/OS also helps improve processor performance by supporting
middleware use of pageable large (1 MB) pages and to help eliminate delays that can occur
when collecting diagnostic data.
VFM also can be used by coupling facility images running on IBM z15 or IBM z16, to provide
extended capacity and availability for workloads that use IBM WebSphere MQ Shared
Queues structures.
VFM can help organizations meet their most demanding service level agreements and
compete more effectively. VFM is easy to configure in the LPAR Image Profile and provides
rapid time to value.
Note: Beginning with IBM z17, a coupling facility (CF) partition can no longer use VFM.
When you order memory, you can request extra flexible memory. The extra physical memory,
(if required) is calculated by the configuration and priced accordingly.
The required memory DIMMs are installed before shipping and are based on a target
capacity that the customer specifies. These memory DIMMs are enabled by using a Licensed
Internal Code Configuration Control (LICCC) order that the client places when they determine
that more memory capacity is needed.
The flexible memory sizes that are available for the IBM z17 ME1 are listed in Table 2-13 on
page 54.
The IBM Z hardware has decades of intense engineering behind it, which results in a robust
and reliable platform. The hardware has many RAS features that are built into it.
For more information, see Chapter 9, “Reliability, availability, and serviceability” on page 401.
N+2 redundant environmental sensors (ambient temperature, relative humidity, and air
density8)
The internal intelligent Power Distribution Unit (iPDU) provides the following capabilities:
Individual outlet control by way of Ethernet:
– Provide a System Reset capability
– Power cycle an SE if a hang occurs
– Verify power cables at installation
System Reset Function:
– No physical EPO switch is available on the IBM z17; the System Reset function provides
a means for the service technician9 to put the server into a known state.
– This function does not provide the option to power down the system and keep it powered
down. The power cords must be unplugged or the customer-supplied power turned off
at the panel.
Other characteristics:
– PDU Firmware can be concurrently updated
– Concurrently repairable
– Power redundancy check
Cable verification test by way of PDU:
– By power cycling individual iPDU outlets, the system can verify proper cable
connectivity
– Power cable test runs during system Power On
– Runs at system installation and at every system Power On until the test passes
PCIe service enhancements:
– Mandatory end-to-end cyclic redundancy check (ECRC)
– Customer operation code is separate from maintenance code
– Native PCIe firmware stack that is running on the integrated firmware processor (IFP)
to manage isolation and recovery
The power service and control network (PSCN) is used to control and monitor the elements in
the system and includes the following components:
Ethernet Top of Rack (TOR) switches provide the internal PSCN connectivity:
– Switches are redundant (N+1)
– Concurrently maintainable
– Each switch has one integrated power supply
– BMCs are cross wired to the Ethernet switches
Redundant SEs
Each SE has two power supplies (N+1) and input power is cross-coupled from the PDUs
Concurrent CPC upgrades
CPC1 to (CPC1 + CPC2) and (CPC1 + CPC2) to (CPC1+CPC2+CPC3) if CPC1 Reserve
or CPC2 Reserve features are part of the initial system order (FC 2933 or FC 2934)
All PCIe+ I/O drawer MESs and rebalance are concurrent
All LICC model changes are concurrent
8 The air density sensor measures air pressure and is used to control blower speed.
9 This function is available to IBM System Service Representatives (SSRs) only.
Service restoration involves speeding up IPL and shutdown operations of an image (LPAR),
and short-duration recovery process boosts for specific sysplex and operating system events.
For more information see Introducing IBM Z System Recovery Boost, REDP-5563.
Important: The base System Recovery Boost capability is built into IBM z17 firmware and
does not require ordering of extra features.
IBM z17 servers continue to deliver robust server designs through new technologies that
harden both innovative and classic redundancy. For more information, see Chapter 9,
“Reliability, availability, and serviceability” on page 401.
2.7 Connectivity
Connections to PCIe+ I/O drawers and Integrated Coupling Adapters are driven from the CPC
drawer fan-out cards. These fan-outs are installed in the rear of the CPC drawer.
Figure 2-22 shows two locations of the fan-out slots. Each slot is identified with a location
code (label) of LGxx.
A fan-out can be repaired concurrently with the use of redundant I/O interconnect. For more
information, see 2.7.1, “Redundant I/O interconnect (RII)” on page 59.
– This adapter provides coupling connectivity to IBM z17, IBM z16, and IBM z15
servers.
One, two, or three pairs of redundant SMP-1010 connectors provide connectivity to the
other one, two, or three CPC drawers in the configuration.
When configured for availability, the channels and coupling links are balanced across CPC
drawers. In a system that is configured for maximum availability, alternative paths maintain
access to critical I/O devices, such as disks and networks. The CHPID Mapping Tool can be
used to assist with configuring a system for high availability.
The PCIe+ I/O drawer supports up to 16 PCIe features, which are organized in two hardware
domains (for each drawer). The infrastructure for the fan-out to I/O drawers and external
coupling is shown in Figure 2-23.
Figure 2-23 Infrastructure for PCIe+ I/O drawers (IBM z17 Max43 system with two PCIe+ I/O drawers)
The PCIe+ Gen4 fan-out cards are used to provide the connection from the PU DCM PCIe
Bridge Unit (PBU), which splits the PCIe Gen5 (@32GBps) processor busses into two PCIe
Gen4 x16 (@16 GBps) interfaces to the PCIe switch card in the PCIe+ I/O drawer.
10 SMP-10 is new with the IBM z17 (IBM z16 uses SMP-9 for the same functions)
The PCIe switch card spreads the x16 PCIe bus to the PCIe I/O slots in the domain.
In the PCIe+ I/O drawer, the two PCIe switch cards (LG06 and LG16) provide a backup path
(Redundant I/O Interconnect [RII]) for each other through the passive connection in the PCIe+
I/O drawer backplane.
During a CPC Drawer PCIe+ Gen4 fan-out or cable failure, all 16 PCIe cards in the two
domains can be driven through a single PCIe switch card (see Figure 2-24).
To support RII between domain pair 0 and 1, the two interconnects to each pair must be
driven from two different PCIe fan-outs. Normally, each PCIe interconnect in a pair supports
the eight features in its domain. In backup operation mode, one PCIe interconnect supports
all 16 features in the domain pair. Refer to 4.3, “PCIe+ I/O drawer” on page 175.
Note: The PCIe Interconnect (switch) adapter must be installed in the PCIe+ I/O drawer to
maintain the interconnect across I/O domains. If the adapter is removed (for a service
operation), the I/O cards in that domain (up to eight) become unavailable.
Before removing the CPC drawer, the contents of the PUs and memory of the drawer must be
relocated. PUs must be available on the remaining CPC drawers to replace the deactivated
drawer. Also, sufficient redundant memory must be available if no degradation of applications
is allowed.
To ensure that the CPC configuration supports removal of a CPC drawer with minimal effect
on the workload, consider the flexible memory option. Any CPC drawer can be replaced,
including the first CPC drawer that initially contains the HSA.
Removal of a CPC drawer also removes the CPC drawer connectivity to the PCIe I/O
drawers, and coupling links. The effect of the removal of the CPC drawer on the system is
limited by the use of redundant I/O interconnect. (For more information, see 2.7.1,
“Redundant I/O interconnect (RII)” on page 59.) However, all ICA SR2.0 links that are
installed in the removed CPC drawer must be configured offline.
If the enhanced drawer availability and flexible memory options are not used when a CPC
drawer must be replaced, the memory in the failing drawer also is removed. This process
might be necessary during an upgrade or a repair action.
Until the removed CPC drawer is replaced, a power-on reset of the system with the remaining
CPC drawers is supported. The CPC drawer then can be replaced and added back into the
configuration concurrently.
11 The KVM hypervisor is part of supported Linux on IBM Z distributions.
12 See 1.1.3, “Supported operating systems” on page 6.
A minimum of one PU that is characterized as a CP, IFL, or ICF is required per system. The
maximum number of characterizable PUs is 208. A zIIP requires at least one CP to be
present in the configuration. The maximum number of zIIPs is therefore one less than the
total PU capacity of a MaxXX feature; for instance, a Max43 can have up to 42 zIIPs. Refer to
Table 2-5 on page 39.
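These characterization rules can be summarized in a small validation sketch. The Python function below is illustrative only (it is not an IBM configuration tool) and checks only the rules stated above: the total client PUs per feature, the requirement for at least one CP, IFL, or ICF per system, and the requirement that zIIPs be accompanied by at least one CP.

# Illustrative sketch of the characterization rules described above.
MAX_CLIENT_PUS = {"Max43": 43, "Max90": 90, "Max136": 136, "Max183": 183, "Max208": 208}


def valid_characterization(feature: str, cps: int, ifls: int, icfs: int, ziips: int) -> bool:
    total = cps + ifls + icfs + ziips
    if total > MAX_CLIENT_PUS[feature]:
        return False                      # more PUs than the feature provides
    if cps + ifls + icfs < 1:
        return False                      # at least one CP, IFL, or ICF per system
    if ziips > 0 and cps < 1:
        return False                      # a zIIP requires at least one CP
    return True


print(valid_characterization("Max43", cps=1, ifls=0, icfs=0, ziips=42))   # True
print(valid_characterization("Max43", cps=0, ifls=0, icfs=0, ziips=43))   # False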
The following components are present in the IBM z17, but they are not part of the PUs that
clients purchase and require no characterization:
SAPs are used by the channel subsystem. The number of predefined SAPs depends on
the IBM z17 model. See Table 2-14
Two IFPs, which are used in the support of designated features and functions, such as
RoCE (NETD and NETH CHPID types - all features), Coupling Express3 LR, zHyperLink
Express 2.0, Internal Shared Memory (ISM) SMC-D, and other management functions.
Two spare PUs, which can transparently assume any characterization during a permanent
failure of another PU.
The IBM z17 uses features to define the number of PUs that are available for client use in
each configuration. The models are listed in Table 2-14.
(Table 2-14 lists, for each feature, the number of CPC drawers, the PUs per drawer, the ranges of characterizable CPs, IFLs, uIFLs, and ICFs, and the spares and the base and optional SAPs.)
Not all PUs that are available on a model are required to be characterized with a feature code.
Only the PUs purchased by a customer are identified with a feature code.
The zIIP maximum quantity for new-build systems depends on the model (MaxXX).
All PU conversions can be performed concurrently.
A capacity marker identifies the number of CPs that were purchased. This number of
purchased CPs is higher than or equal to the number of CPs that is actively used. The
capacity marker marks the availability of purchased but unused capacity that is intended to be
used as CPs in the future. This status often is present for software-charging reasons.
Unused CPs are not a factor when establishing the millions of service units (MSU) value that
is used for charging monthly license charge (MLC) software, or when charged on a
per-processor basis.
2.8.1 Upgrades
Concurrent upgrades of CPs, IFLs, ICFs, and zIIPs, are available for the IBM z17. However,
concurrent PU upgrades require that more PUs are installed but not activated.
Spare PUs are used to replace defective PUs. Two spare PUs are always available on an IBM
z17 ME1 server. In the rare event of a PU failure, a spare PU is activated concurrently and
transparently, and is assigned the characteristics of the failing PU.
The following upgrade paths for the IBM z17 are shown in Figure 2-25:
13 FCs 2933 and 2934 are CPC1 (A15B) and CPC2 (A20B) CPC reserves.
From / To Max43 Max90 Max136 Max183 Max208
Max43 -- Y Y N N
Max90 N -- Y N N
Max136 N N -- N N
Max183 N N N -- N
Max208 N N N N --
Characterization of a PU as an IFL, ICF, or zIIP is not reflected in the output of the STSI
instruction because characterization has no effect on software charging. For more information
about STSI output, see “Processor identification” on page 395.
The following distinct model capacity identifier ranges are recognized (one for full capacity
and three for granular capacity):
For full-capacity engines, model capacity identifiers 701 - 7K8 are used. They express
capacity settings for 1 - 208 characterized CPs.
Three model subcapacity identifier ranges offer a unique level of granular subcapacity
engines at the low end. They are available for up to 43 characterized CPs. These three
subcapacity settings are applied to up to 43 CPs, which combined offer 129 more capacity
settings, as described next.
Granular capacity
The IBM z17 ME1 server offers 129 capacity settings (granular capacity) for up to 43 CPs.
When subcapacity settings are used, PUs beyond 43 can be characterized only as specialty
engines. For models with more than 43 CPs, all CPs are running at full capacity (7xx).
The three defined ranges of subcapacity settings include model capacity identifiers numbered
401- 443, 501 - 543, and 601 - 643.
Consideration: All CPs have the same capacity identifier. Specialty engines (IFLs, zIIPs,
and ICFs) operate at full speed.
Max43 - (FC0571) 701 - 743, 601 - 643, 501 - 543, and 401 - 443
Max90 - (FC0572) 701 - 790, 601 - 643, 501 - 543, and 401 - 443
Max136 - (FC0573) 701 - 7D6, 601 - 643, 501 - 543, and 401 - 443
Max183 - (FC0574) 701 - 7I3, 601 - 643, 501 - 543, and 401 - 443
Max208 - (FC0575) 701 - 7K8, 601 - 643, 501 - 543, and 401 - 443
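The identifiers above follow a recognizable pattern: the first character is the capacity level (7 for full capacity, or 4, 5, and 6 for the subcapacity ranges), the middle character encodes the tens of the CP count (with letters A, B, C, and so on standing for 10, 11, 12, and so on), and the last character is the units digit, so that 7D6 means 136 CPs and 7K8 means 208 CPs. The Python sketch below reproduces this apparent encoding; it is inferred from the ranges listed above and is not an official IBM specification.

# Inferred from the identifier ranges above (701 - 7K8, with 7D6 = 136 and 7K8 = 208).
def model_capacity_identifier(cp_count: int, capacity_level: str = "7") -> str:
    """Build an identifier such as 743, 7D6, or 7K8 for 1 - 208 characterized CPs.

    capacity_level is "7" for full capacity, or "4", "5", "6" for the three
    subcapacity ranges (which apply to up to 43 CPs only).
    """
    if not 1 <= cp_count <= 208:
        raise ValueError("characterized CPs must be between 1 and 208")
    tens, units = divmod(cp_count, 10)
    # Tens values of 10 or more are expressed as letters: 10 -> A, 11 -> B, ...
    tens_char = str(tens) if tens < 10 else chr(ord("A") + tens - 10)
    return f"{capacity_level}{tens_char}{units}"


assert model_capacity_identifier(136) == "7D6"
assert model_capacity_identifier(183) == "7I3"
assert model_capacity_identifier(208) == "7K8"
assert model_capacity_identifier(43, "4") == "443"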
For more information about temporary capacity increases, see Chapter 8, “System upgrades”
on page 353.
The PDUs are controlled by using an Ethernet port and support the following input:
3-phase 200 - 240 V AC (4-wire “Delta”)
3-phase 380 - 415 V AC (5-wire “Wye”)
The power supply units convert the AC power to DC power that is used as input for the Point
of Loads (POLs) in the CPC drawer and the PCIe+ I/O drawers.
The power requirements depend on the number of CPC drawers (1 - 4), number of PCIe I/O
drawers (0 - 12) and I/O features that are installed in the PCIe I/O drawers.
PDUs are installed and serviced from the rear of the frame. Unused PDU power ports must
never be used to power any external devices.
A view of a maximum configured system with PDU-based power is shown in Figure 2-26.
Each PDU installed requires a customer-supplied power feed. The number of power cords
that are required depends on the system configuration.
Note: For initial installation, all power sources are required to run the system checkout
diagnostics successfully.
PDUs are installed in pairs. A system can have 2, 4, 6, or 8 PDUs, depending on the
configuration. Consider the following points:
Paired PDUs are A1/A2, A3/A4, B1/B2, and C1/C2.
From the rear of the system, the odd-numbered PDUs are on the left side of the rack; the
even-numbered PDUs are on the right side of the rack.
The total loss of one PDU in a pair does not affect the system operation.
Components that plug into the PDUs for redundancy (by using two power cords) include the
following features:
CPC Drawers, PCIe+ I/O drawers, Radiators, and Support Elements
The redundancy for each component is achieved by plugging the power cables into the
paired PDUs.
For example, the top Support Element (1), has one power supply plugged into PDU A1
and the second power supply plugged into the paired PDU A2 for redundancy.
Note: Customer power sources should always maintain redundancy across PDU pairs;
that is, one power source or distribution panel supplies power for PDU A1 and the separate
power source or distribution panel supplies power for PDU A2.
As a best practice, connect the odd-numbered PDUs (A1, A3, B1, and C1) to one power
source or distribution panel, and the even-numbered PDUs (A2, A4, B2, and C2) to a
separate power source or distribution panel.
The frame count rules (number of frames) for IBM z17 ME1 are listed in Table 2-17.
Table 2-17 Frame count rules (number of frames) for IBM z17 ME1
Frames I/O drawers
CPC drawers 0 1 2 3 4 5 6 7 8 9 10 11 12
1 1 1 1 1 2 2 2 2 2 3 3 3 3
2 1 1 1 2 2 2 2 2 3 3 3 3 3
3 1 1 2 2 2 2 2 3 3 3 3 3 N/A
4 2 2 2 2 2 3 3 3 3 3 4 4 4
The number of CPC drawers and I/O drawers determines the number of racks in the system
and the number of PDUs in the system.
The PDU/line cord rules (number of PDU/Cord pairs) for IBM z17 ME are listed in Table 2-18.
Table 2-18 PDU/line cord rules (# PDU/Cord pairs) for IBM z17 ME1
PDU/Linecord I/O drawers
CPC drawers 0 1 2 3 4 5 6 7 8 9 10 11 12
1 1 1 1 1 2 2 2 2 2 3 3 3 3
2 2 2 2 2 2 2 2 2 3 3 3 3 3
3 2 2 2 2 2 2 2 3 3 3 3 3 N/A
4 3 3 3 3 3 3 3 3 3 4 4 4 4
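Because both tables are straightforward lookups by CPC drawer count and I/O drawer count, they can be encoded directly, as in the Python sketch below. The table data is transcribed from Tables 2-17 and 2-18, None marks the N/A combinations, and the function name is illustrative only.

# Data transcribed from Table 2-17 (frames) and Table 2-18 (PDU/line cord pairs);
# index by [cpc_drawers - 1][io_drawers]. None marks the N/A combinations.
FRAMES = [
    [1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3],     # 1 CPC drawer
    [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3],     # 2 CPC drawers
    [1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, None],  # 3 CPC drawers
    [2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4],     # 4 CPC drawers
]

PDU_PAIRS = [
    [1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3],
    [2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3],
    [2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, None],
    [3, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4],
]


def rack_plan(cpc_drawers: int, io_drawers: int) -> tuple:
    """Return (frames, PDU pairs) for a configuration, or (None, None) if unsupported."""
    frames = FRAMES[cpc_drawers - 1][io_drawers]
    pdu_pairs = PDU_PAIRS[cpc_drawers - 1][io_drawers]
    return (frames, pdu_pairs) if frames is not None else (None, None)


print(rack_plan(4, 12))  # (4, 4): a maximum configuration needs 4 frames and 4 PDU pairs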
This tool estimates the power consumption for the specified configuration. The tool does not
verify that the specified configuration can be physically built.
Tip: The exact power consumption for your system varies. The object of the tool is to
estimate the power requirements to aid you in planning for your system installation. Actual
power consumption after installation can be confirmed by using the HMC Monitors
Dashboard task.
2.9.3 Cooling
The PU DCMs for IBM z17 ME1 are cooled by a cold plate that is connected to the internal
water-cooling loop. In an air-cooled system, the radiator unit dissipates the heat from the
internal water loop with air. The radiator unit provides improved availability with N+1 pumps
and blowers.
For all IBM z17 ME1 servers, the CPC drawer components (except for PU DCMs) and the
PCIe+ I/O drawers are air cooled by redundant fans. Airflow of the system is directed from
front (cool air) to the back of the system (hot air).
For more information, see 2.9.4, “Radiator Cooling Unit” on page 68.
Although the PU DCMs are cooled by water, the heat is exhausted into the room from the
radiator heat exchanger by forced air with blowers. At the system level, these IBM z17 ME1
are still air-cooled systems.
The RCU discharges heat from the internal frame water loop to the customer’s data center.
The RCU contains two independent pump FRUs. Because the cooling capability is a
redundant N+1 design, a single working pump and blower can support the entire load. The
replacement of one pump or blower can be done concurrently and does not affect
performance.
Each RCU provides cooling to PU DCMs with closed loop water within the respective frame.
No connection to an external chilled water supply is required. The IBM z17 ME1 server will
use a new coolant which consists of 40% propylene glycol and 60% DI (deionized) water. This
new coolant has many advantages:
Elimination of the IBM Z fill and drain tool (used with IBM z16 and z15)
Elimination of SSR fluid handling in the field
Reduced install time
Elimination of customer requirement to store the fill and drain tool and BTA canisters
Elimination of the customer requirement to discard the fill and drain tool coolant at end of life
Each RCU contains up to four independent fan assemblies that can be concurrently serviced.
The number of fans present depends on the number of CPC drawers that are installed in the
frame.
The water pumps, manifold assembly, radiator assembly (which includes the heat exchanger),
and fan assemblies are the main components of the IBM z17 RCU, as shown in Figure 2-27.
The closed water loop in the radiator unit is shown in Figure 2-28. The warm water that is
exiting from the PU DCMs cold plates enters pumps through a common manifold and is
pumped through a heat exchanger where heat is extracted by the air flowing across the heat
exchanger fins. The cooled water is then recirculated back into the PU DCMs cold plates.
2.10 Summary
All aspects of the IBM z17 ME1 structure are listed in Table 2-19.
Table 2-19 Summary of the IBM z17 ME1 structure
Max43 Max90 Max136 Max183 Max208
Standard SAPs 5 10 16 21 24
Number of IFPs 2 2 2 2 2
Enabled memory sizes GB 512 - 15616 512 - 31488 512 - 47872 512 - 64256 512 - 64256
Flexible memory sizes GB N/A 512 - 13652 512 - 27988 512 - 42324 512 - 47872
Clock frequency 5.5 GHz 5.5 GHz 5.5 GHz 5.5 GHz 5.5 GHz
Note: The IBM z17 Model ME1, Machine Type (M/T) 9175, is further identified in this
document as IBM z17, unless otherwise specified.
For more information about the processor unit, see z/Architecture Principles of Operation,
SA22-7832.
3.1 Overview
The IBM z17 symmetric multiprocessor (SMP) system is the next step in an evolutionary
trajectory that began with the introduction of the IBM System/360 in 1964. Over time, the
design was adapted to the changing requirements that were dictated by the shift toward new
types of applications on which clients depend.
IBM Z servers offer high levels of reliability, availability, serviceability (RAS), resilience, and
security. The IBM z17 fits into the IBM strategy in which mainframes play a central role in
creating an infrastructure for cloud, artificial intelligence, and analytics, which is underpinned
by security. The IBM z17 server is designed so that everything around it, such as operating
systems, middleware, storage, security, and network technologies that support open
standards, helps you achieve your business goals.
The IBM z17 extends the platform’s capabilities and adds value with breakthrough
technologies, such as the following examples:
An industry-first system that uses quantum-safe technologies, cryptographic discovery
tools, and end-to-end data encryption to protect against future attacks now.
A continuous compliance solution to help keep up with changing regulations, which
reduces cost and risk.
A consistent cloud experience to enable accelerated modernization, rapid delivery of new
services, and end-to-end automation.
New options in flexible and responsible consumption to manage system resources across
geographical locations, with sustainability that is built in across its lifecycle.
The modular CPC drawer design aims to reduce (or in some cases even eliminate) planned
and unplanned outages. The design does so by offering concurrent repair, replace, and
upgrade functions for processors, memory, and I/O.
For more information about the IBM z17 RAS features, see Chapter 9, “Reliability, availability,
and serviceability” on page 401.
For more information about frames and configurations, see Chapter 2, “Central processor
complex hardware components” on page 19, and Appendix E, “Frame configurations with
Power Distribution Units” on page 551.
The modular CPC drawer design is flexible and expandable. It offers unprecedented capacity
and security features to meet consolidation needs.
The IBM z17 CPC continues the line of mainframe processors that are compatible with
earlier versions. The IBM z17 brings the following processor design enhancements:
5 nm silicon lithography
Eight cores per PU chip design with 43 billion transistors per PU chip, compared to 22.5
billion for z16
Level 2 cache increase from 32 MB to 36 MB per core
Four PU Dual Chip Modules per CPC Drawer
Each PU chip features two PCIe Generation 5 ports (x16 @ 32 GBps)
Optimized pipeline
Improved SMT and SIMD
Improved branch prediction
Improved co-processor functions (CPACF)
IBM integrated accelerator for Z Sort (on-core sort accelerator)
IBM Second Generation integrated accelerator for AI (on-chip AI accelerator)
IBM integrated Data Processing Unit (DPU)
Improved transparent memory encryption with 256 bit AES
It uses 24-, 31-, and 64-bit addressing modes, multiple arithmetic formats, and multiple
address spaces for robust interprocess security.
The IBM z17 system design features the following main objectives:
Offer a data-centric approach to information (data) security that is simple, transparent, and
consumable (extensive data encryption from inception to archive, in-flight, and at-rest).
Offer a flexible infrastructure to concurrently accommodate a wide range of operating
systems and applications, from the traditional systems (for example, z/OS and z/VM) to
the world of Linux, cloud, analytics, artificial intelligence, and mobile computing.
Offer state-of-the-art integration capability for server consolidation by using virtualization
capabilities in a highly secure environment:
– Logical partitioning, which allows up to 85 independent logical servers
– z/VM, which can virtualize hundreds to thousands of servers as independently running
virtual machines (guests)
– HiperSockets, which implement virtual LANs between logical partitions (LPARs) within
the system
– Efficient data transfer that uses direct memory access (SMC-D), Remote Direct
Memory Access (SMC-R), and reduced storage access latency for transactional
environments
– The IBM Z Processor Resource/System Manager (PR/SM) is designed for Common
Criteria Evaluation Assurance Level 5+ (EAL 5+) certification for security; therefore, an
application that is running on one partition (LPAR) cannot access another application
on a different partition, which provides essentially the same security as an air-gapped
system.
– The Secure Execution feature, which securely separates second-level guest operating
systems that are running under KVM for Z from each other and securely separates
access to second-level guests from the hypervisor.
This configuration allows for a logical and virtual server coexistence and maximizes
system utilization and efficiency by sharing hardware resources.
Offer high-performance computing to achieve the outstanding response times that are
required by new workload-type applications. This performance is achieved by
high-frequency, enhanced superscalar processor technology, out-of-order core execution,
large high-speed buffers (cache) and memory, an architecture with multiple complex
instructions, and high-bandwidth channels.
Offer the high capacity and scalability that are required by the most demanding
applications, from the single-system and clustered-systems points of view.
Offer the capability of concurrent upgrades for processors, memory, and I/O connectivity,
which prevents system outages in planned situations.
Implement a system with high availability and reliability. These goals are achieved with
redundancy of critical elements and sparing components of a single system, and the
clustering technology of the Parallel Sysplex environment.
Have internal and external connectivity offerings, supporting open standards, such as
Gigabit Ethernet (GbE) and Fibre Channel Protocol (FCP).
Provide leading cryptographic performance. Every processor unit (PU) includes a
dedicated and optimized CP Assist for Cryptographic Function (CPACF). Optional Crypto
Express features with cryptographic coprocessors provide the highest standardized
security certification.1 These optional features also can be configured as Cryptographic
Accelerators to enhance the performance of Secure Sockets Layer/Transport Layer
Security (SSL/TLS) transactions.
Provide on-chip compression. Every PU chip design incorporates a compression unit,
which is the IBM Integrated Accelerator for z Enterprise Data Compression (zEDC). This
configuration is different from the CMPSC (Compression Coprocessor) that is
implemented in each core.
Provide a second generation dedicated on-chip integrated AI Accelerator for high-speed
inference to enable real-time AI embedded directly in transactional workloads, and
improvements for performance, security, and availability.
Provide an on-chip Data Processing Unit (sometimes known as an I/O Engine), moving
functionality from the Application-specific integrated circuit (ASIC) on the I/O adapters and
in-boarding it into 32 Assist Processors in the IBM z17 PU chip. The DPU decreases
channel latencies and improves performance and power efficiency.
Be self-managing and self-optimizing, adjusting itself when the workload changes to
achieve the best system throughput. This process can be done by using the Intelligent
1 Federal Information Processing Standard (FIPS) 140-2 Security Requirements for Cryptographic Modules.
The remaining sections in this chapter describe the IBM z17 system structure. It shows a
logical representation of the data flow from PUs, caches, memory cards, and various
interconnect capabilities.
Both "7nm" and "5nm" are industry-standard terms for process nodes. These terms do not
indicate that any physical feature (such as gate length, metal pitch, or gate pitch) of the
transistors is that size. However, there is a considerable increase in density going from 7nm to
5nm.
In particular the Telum 2 processor chip has 43.0 billion transistors, compared to 22.5 billion
for Telum.
The chip design uses this increase in density, along with a slight chip size increase from
530mm2 to 565.6mm2, so that:
Power consumption and heat dissipation are improved.
Level 2 cache sizes are larger, increasing from 8 x 32MB to 10 x 36MB.
The Data Processing Unit (DPU) has been introduced.
The following types of CPC drawer configurations are available for IBM z17 ME1:
One drawer: Max43
Two drawers: Max90
Three drawers: Max136
Four drawers: Max183
Four drawers: Max208
Note: Max183 and Max208 are factory-built only. It is not possible to upgrade in the field to
Max183 or Max208.
2 See https://en.wikipedia.org/wiki/5_nm_process
3 See https://en.wikipedia.org/wiki/7_nm_process
The IBM z17 ME1 has up to 24 memory controller units (MCUs) for a Max208 feature (two
MCUs per PU chip and up to six MCUs (out of eight) populated per CPC drawer). The MCU
configuration uses an eight channel Reed-Solomon (R-S) redundant array of independent
memory (RAIM).
The RAIM design is an 8-channel R-S RAIM design on the IBM z17. The DIMM sizes (32, 64,
128, 256 or 512 GB) include RAIM overhead. An IBM z17 CPC drawer can have up to 48
memory DIMMs:
DDR4 memory DIMMs in 32, 64, and 128 GB sizes
DDR5 memory DIMMs in 32, 64, 128, 256, and 512 GB sizes
The IBM z17 microprocessor chip integrates a cache hierarchy design with only two levels of
physical cache (L1 and L2). The cache hierarchy (L1, L2) is implemented with dense static
random access memory (SRAM).
eDRAM is no longer used in the IBM processor. On an IBM z17, L2 cache (36 MB) is
semi-private with 18 MB dedicated to the associated core, and 18 MB shared with the system
(the 50/50 split is adjustable). Level 3 (L3) and Level 4 (L4) caches are now virtual caches
and are allocated on L2.
Two processor chips (up to eight active cores per PU chip) are combined in a Dual Chip
Module (DCM) and four DCMs are assembled in a CPC drawer (eight PU chips). An IBM z17
can have from one CPC drawer (Max43) up to four (Max183 and Max208).
The new IBM z17 Dual Chip Module (DCM) is shown in Figure 3-2 on page 77.
Concurrent maintenance allows dynamic Central Processing Complex (CPC) drawer add and
repair.4
IBM z17 processors use 5nm extreme ultraviolet (EUV) lithography chip technology with
advanced low latency pipeline design, which creates high-speed yet power-efficient circuit
designs. The PU DCM has a dense packaging, which allows closed water loop cooling. The
heat from the closed loop is dissipated into the air by a radiator unit (RU).
The external water-cooling option is no longer available starting with IBM z16. For more
information, see 2.9, “Power and cooling” on page 65.
(Figure: IBM z17 CPC drawer cache structure, showing the four DCMs, the 36 MB L2 caches per core, the DPU on each chip, and the virtual L4 of up to 2.8 GB.)
On IBM z17, L1 and L2 caches are implemented on the PU chip, and L3 and L4 caches are
implemented as virtual caches and dynamically allocated on the shared part of the L2
semi-private cache.
The cache structure of the IBM z17 features the following characteristics:
Large L1, L2 caches (more data closer to the core).
L1 cache is implemented by using SRAM technology and has the same size as on IBM
z16 (128 KB for instructions and 128 KB for data).
L2 cache (36 MB in total) uses SRAM technology, and is semi-private to each PU core
with 18 MB dedicated to the associated core, and 18 MB shared with the system (the
50/50 split is adjustable).
L3 cache (up to 360 MB) now becomes a virtual cache and can be allocated on any of the
shared parts of the L2 caches. It operates at the DCM level.
L4 cache (up to 2880 MB) is also a virtual cache and can be allocated on any of the
shared parts of the L2 caches. It operates at the drawer level.
Figure 3-4 shows the cache structure that is implemented in an IBM z17 CPC drawer.
Main storage has up to 16TB addressable memory per CPC drawer, which uses up to 48
DDR4 and DDR5 DIMMs. A system with four CPC drawers can have up to 64TB of main
storage.
Considerations
Cache sizes are limited by ever-diminishing cycle times because they must respond quickly
without creating bottlenecks. Access to large caches costs more cycles. Instruction and data
cache (L1) sizes must be limited because larger distances must be traveled to reach long
cache lines. This L1 access time generally occurs in one cycle, which prevents increased
latency.
Also, the distance to remote caches as seen from the microprocessor becomes a significant
factor. For example, on an IBM z15 server, access to L4 physical cache (on the SC chip and
which might not even be in the same CPC drawer) requires several cycles to travel the
distance to the cache. (Off-drawer access can take hundreds of cycles.) On an IBM z17,
having an L4 virtual, physically allocated on the shared L2 requires fewer processor cycles in
many instances.
Although large caches mean increased access latency, the new technology 5nm EUV chip
lithography and the lower cycle time allows IBM z17 servers to increase the size of L2 cache
level within the PU chip.
To overcome the inherent delays of the SMP CPC drawer design and save cycles to access
the remote virtual L4 content, the system keeps instructions and data as close to the
processors as possible. This configuration can be managed by directing as much work of a
specific LPAR workload to the processors in the same CPC drawer as the L4 virtual cache.
The cache structures of IBM z17 ME1 systems are compared with the previous generation of
IBM Z (IBM z16 A01) in Figure 3-5.
Compared to IBM z16, the IBM z17 has a larger L2 cache. Cache L3 and L4 are virtual
caches in the IBM z16 and IBM z17. More affinity exists between the memory of a partition,
the L4 virtual cache in a drawer, and the cores in the PU chips. As in IBM z16, the IBM z17
cache level structure is focused on keeping more data closer to the PU. This design can
improve system performance on many production workloads.
HiperDispatch
To help avoid latency in a high-frequency processor design, PR/SM and the dispatcher must
schedule and dispatch a workload to run on as small a portion of the system as possible. The
cooperation between z/OS and PR/SM is bundled in a function called HiperDispatch.
HiperDispatch uses the IBM z17 cache topology, which features reduced cross-cluster “help”
and better locality for multitasking address spaces.
PR/SM can use dynamic PU reassignment to move processors (CPs, zIIPs, IFLs, ICFs, and
spares) to a different chip and drawer to improve the reuse of shared caches by processors of
the same partition. It can use dynamic memory relocation (DMR) to move a running partition’s
memory to different physical memory to improve the affinity and reduce the distance between
the memory of a partition and the processors of the partition. These are relatively infrequent
events.
For more information about HiperDispatch, see 3.7, “Logical partitioning” on page 121.
The IBM z17 ME1 inter-CPC drawer communication structure is shown in Figure 3-6.
Inter-CPC drawer communication occurs at the Level 4 virtual cache level, which is
implemented on the semi-private part of one of the Level 2 caches in a chip module. The
Level 4 cache function regulates coherent drawer-to-drawer traffic.
Note: PU chips 0 and 1 (the first DCM) of each drawer do not have direct connections to other
drawers; transfers go through the other DCMs in the drawer.
The IBM z17 ME1 core frequency is 5.5 GHz (increased from 5.2 GHz in the IBM z16 A01),
with increased number of processors that share larger caches to have shorter access times
and improved capacity and performance.
Through innovative processor design (significant architecture changes, new cache structure,
new Core-Nest interface, new branch prediction design that uses dense SRAM, on-chip AI
accelerator for inference, and new on-chip I/O processor), the IBM Z processor’s performance
continues to evolve.
Enhancements were made on the processor unit design, including the following examples:
Branch prediction mechanism
Floating point unit
Divide engine scheduler
Load/Store Unit and Operand Store Compare (OSC)
Relative nest intensity (RNI) redesigns
For more information about RNI, see 12.4, “Relative Nest Intensity” on page 495.
Performance was enhanced through the following changes to the IBM z17 processor design:
Core optimization to enable performance and capacity growth.
A larger cache Level 2 (SRAM) and virtual Level 3 and Level 4 cache to reduce latency.
DPU (on-chip IBM I/O processor). For more information, see 3.4.7, “IBM z17 DPU - Data
Processing Unit” on page 97.
Enhancement of nest-core staging.
On-chip IBM second generation Integrated Accelerator for AI. For more information, see
Chapter 2, “Central processor complex hardware components” on page 19, and
Appendix A, “IBM Z Integrated Accelerator for AI and IBM Spyre AI Accelerator” on
page 515.
Because of these enhancements, the IBM z17 processor full speed z/OS single-thread
performance is on average 1.11 times faster than the IBM z16 at equal N-way, and an
average 1.15 times faster for the max capacity (IBM z17 Max 208). For more information
about performance, see Chapter 12, “Performance and capacity planning” on page 489.
IBM z13® servers introduced architectural extensions with instructions that reduce processor
quiesce effects, cache misses, and pipeline disruption, and increase parallelism with
instructions that process several operands in a single instruction (SIMD). The processor
architecture was further developed for IBM z16 and IBM z17.
The IBM z17 enhanced Instruction Set Architecture (ISA) includes a set of instructions that
are added to improve compiled code efficiency. These instructions optimize PUs to meet the
demands of various business and analytics workload types without compromising the
performance characteristics of traditional workloads.
An operating system with SMT support can be configured to dispatch work to a thread on a
zIIP (for eligible workloads in z/OS) or an IFL (for z/VM and Linux on IBM Z) core in single
thread or SMT mode so that HiperDispatch cache optimization can be considered. For more
information about operating system support, see Chapter 7, “Operating systems support” on
page 261. All SAP processors except one also work in SMT mode.
SMT technology allows instructions from more than one thread to run in any pipeline stage at
a time. SMT can handle up to four pending translations.
Each thread has its own unique state information, such as Program Status Word (PSW) and
registers. The simultaneous threads cannot necessarily run instructions instantly and must at
times compete to use certain core resources that are shared between the threads. In some
cases, threads can use shared resources that are not experiencing competition.
Two threads (A and B) that are running on the same processor core on different pipeline
stages and sharing the core resources is shown in Figure 3-8.
Figure 3-8 Two threads running simultaneously on the same processor core
The use of SMT provides more efficient use of the processors’ resources and helps address
memory latency, which results in overall throughput gains. The active threads share core
resources in space (such as data and instruction caches, TLBs, and branch history tables) and
in time (pipeline slots, execution units, and address translators).
Although SMT increases the processing capacity, the performance in some cases might be
superior if a single thread is used. Enhanced hardware monitoring supports measurement
through CPUMF (via SMF 113 for z/OS) and z/OS RMF for thread usage and capacity.
For workloads that need maximum thread speed, the partition’s SMT mode can be turned off.
For workloads that need more throughput to decrease the dispatch queue size, the partition’s
SMT mode can be turned on.
SMT use is functionally transparent to middleware and applications, and no changes are
required to run them in an SMT-enabled partition.
SIMD provides the next phase of enhancements of IBM Z analytics capability. The set of
SIMD instructions is a type of data parallel computing and vector processing that can
decrease the number of instructions in a program and accelerate code that handles integer,
string, character, and floating point data types. The SIMD instructions improve performance of
complex mathematical models and allow integration of business transactions and analytic
workloads on IBM Z servers.
The 32 vector registers are each 128 bits wide. The instructions include string operations,
vector integer, and vector floating point operations. Each register contains multiple data
elements of a fixed size. The following instruction codes specify which data format to use and
the size of the elements:
Byte (16 8-bit operands)
Halfword (eight 16-bit operands)
Word (four 32-bit operands)
Doubleword (two 64-bit operands)
Quadword (one 128-bit operand)
The 128-bit collection of elements in a register is called a vector. A single instruction operates
on all of the elements in the register. The instructions include a nondestructive operand
encoding that allows the addition of the register vectors A1, A2, and A3 to the register vectors
B1, B2, and B3, storing the result in the register vector Cn (Cn = An + Bn with n = 1 to 3).
A schematic representation of a SIMD instruction with 16-byte size elements in each vector
operand is shown in Figure 3-9.
Figure 3-9 SIMD operation logic (scalar: single instruction, single data, where the operation is performed once for every data element; SIMD: single instruction, multiple data, where the operation is performed on every element at once)
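To make the data-parallel model of Figure 3-9 concrete, the following sketch contrasts a scalar, element-by-element loop with a single vectorized addition. It uses Python with NumPy purely as an illustration of SIMD-style processing; it is not IBM Z vector-facility code, and the 16 one-byte elements mirror the byte element size listed above.

```python
import numpy as np

# Two "vector registers" holding 16 one-byte elements each (byte element size).
a = np.arange(16, dtype=np.uint8)      # A elements
b = np.full(16, 3, dtype=np.uint8)     # B elements

# Scalar model: one add per element, 16 separate operations.
c_scalar = np.empty_like(a)
for i in range(16):
    c_scalar[i] = a[i] + b[i]

# SIMD model: a single vectorized operation adds all 16 elements at once,
# leaving the source operands unchanged (nondestructive encoding).
c_simd = a + b

assert np.array_equal(c_scalar, c_simd)
```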
The vector register file overlays the floating-point registers (FPRs), as shown in Figure 3-10.
The FPRs use the first 64 bits of the first 16 vector registers, which saves hardware area and
power, and makes it easier to mix scalar and SIMD codes. Effectively, the core gets 64 FPRs,
which can further improve FP code efficiency.
For most operations, the condition code is not set. A summary condition code is used only for
a few instructions.
The IBM z17 processor includes pipeline enhancements that benefit Out-of-Order execution.
The processor design features advanced micro-architectural innovations that provide the
following benefits:
Maximized instruction-level parallelism (ILP) for a better cycles per instruction (CPI)
design.
Maximized performance per watt.
Enhanced instruction dispatch and grouping efficiency.
Increased Out-of-Order resources, such as Global Completion Table entries, physical
GPR entries, and physical FPR entries.
Improved completion rate.
Reduced cache/TLB miss penalty.
Improved execution of D-Cache store and reload and new Fixed-point divide.
New Operand Store Compare (OSC) (load-hit-store conflict) avoidance scheme.
Enhanced branch prediction structure and sequential instruction fetching.
Load Indexed Address instruction to speed up address manipulation, especially in Linux.
Program results
Out-of-Order execution does not change any program results. Execution can occur out of
(program) order, but all program dependencies are honored, and the same results are
produced as with in-order (program) execution. The design was optimized by increasing the
Global Completion Table (GCT) from 48x3 to 60x3, increasing the issue queue size from 2x30
to 2x36, and introducing a new Mapper design.
This implementation requires special circuitry to present execution and memory accesses
in order to the software. The logical diagram of an IBM z17 core is shown in
Figure 3-11.
Memory address generation and memory accesses can occur out of (program) order. This
capability can provide a greater use of the IBM z17 superscalar core, and improve system
performance.
The IBM z17 ME1 processor unit core is a superscalar, out-of-order, SMT processor with
eight execution units. Up to six instructions can be decoded per cycle, and up to 12
instructions or operations can be started to run per clock cycle (0.181ns). The execution of
the instructions can occur out of program order and memory address generation and memory
accesses can also occur out of program order. Each core includes special circuitry to present
execution and memory accesses in order to the software.
The IBM z17 superscalar PU core can have up to 10 instructions or operations that are
running per cycle. This technology results in shorter workload runtime.
Equally, if the branch prediction logic fails to keep ahead of instruction execution, execution
must pause, so the prediction logic must be highly performant and efficient.
With successive generations of IBM Z processors, optimizations have been made both for
prediction accuracy and for efficiency.
To ensure branch prediction and branch target prediction are effective, various history-based
prediction mechanisms are used, as shown in the in-order part of the IBM z17 PU core logical
diagram in Figure 3-11 on page 87.
The Branch Target Buffer (BTB) runs ahead of instruction cache prefetches to prevent branch
misses at an early stage. Furthermore, a branch history table (BHT), in combination with a
pattern history table (PHT) and tagged multi-target prediction technology, offers a high
branch prediction success rate.
Since z15, branch direction prediction has used a two-table TAGE Pattern History Table
(PHT) to predict based on history. Each of the tables uses a different history length:
A “short history” table, which uses the last 9 branches
A “long history” table, which uses the last 17 branches
If both tables supply a hit, the longer-history prediction wins because it is more specific and
therefore more likely to reflect the current behavior.
For z17 these tables are paired, with common predictions residing in a slightly faster PHT-1
and less common ones in a slightly slower PHT-2. PHT-1 and PHT-2 are both the same size.
The intention of doubling up is to increase PHT capacity and to improve branch prediction’s
ability to keep ahead.
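The following minimal Python sketch models only the selection rule described above: two pattern history tables indexed by different branch-history lengths, with the longer-history hit taking precedence. Table sizes, saturating counters, hashing, and the PHT-1/PHT-2 pairing are deliberately omitted; this is a conceptual model, not the hardware design.

```python
class TwoTablePredictor:
    """Conceptual model of a two-table, history-tagged branch direction predictor."""

    def __init__(self):
        self.short_table = {}   # keyed by the last 9 branch outcomes
        self.long_table = {}    # keyed by the last 17 branch outcomes
        self.history = []       # global branch outcome history (True = taken)

    def _keys(self, pc):
        return (pc, tuple(self.history[-9:])), (pc, tuple(self.history[-17:]))

    def predict(self, pc, default=True):
        short_key, long_key = self._keys(pc)
        # The longer-history table wins when both tables hit,
        # because it is more specific to the current behavior.
        if long_key in self.long_table:
            return self.long_table[long_key]
        if short_key in self.short_table:
            return self.short_table[short_key]
        return default

    def update(self, pc, taken):
        short_key, long_key = self._keys(pc)
        self.short_table[short_key] = taken
        self.long_table[long_key] = taken
        self.history.append(taken)

predictor = TwoTablePredictor()
for outcome in [True, True, False, True]:
    guess = predictor.predict(pc=0x1000)
    predictor.update(pc=0x1000, taken=outcome)
```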
Branch direction prediction also uses a Perceptron-based mechanism, introduced with z14.
“Perceptron” is an industry term. This is a neural network algorithm that learns to correlate
branch history over time to predict the direction of branches that the other mechanisms
cannot catch with sufficient accuracy.
Branch Target Prediction is implemented using a two level Branch Target Buffer (BTB):
BTB1 (“small” and “fast”)
BTB2 (large, dense-SRAM)
Starting with z16, BTB1 and BTB2 feature dynamic (variable) capacity, adapting to changing
conditions:
BTB1: First-level Branch Target Buffer, which is smaller than on IBM z15, with a dynamic
directory and variable capacity:
– Minimum total branches in all parents (all large branches) = 8 K
– Maximum total branches in all parents (all medium branches) = 12 K
BTB2: Second-level Branch Target Buffer, also with variable capacity (variable directory), up to
260 K branches
IBM z17 is a superscalar processor. Each processor unit, or core, is a superscalar and
out-of-order processor that supports 10 concurrent issues to execution units in a single CPU
cycle:
Fixed-point unit (FXU): The FXU handles fixed-point arithmetic.
Load-store unit (LSU): The LSU contains the data cache. It is responsible for handling all
types of operand accesses of all lengths, modes, and formats as defined in the
z/Architecture.
Instruction fetch and branch (IFB) (prediction) and Instruction cache and merge (ICM).
These two subunits (IFB and ICM) contain the instruction cache, branch prediction logic,
instruction fetching controls, and buffers. Their relative size is the result of the elaborate
branch prediction design.
The L1 data and L1 instruction caches are incorporated into the LSU and ICM, respectively.
COBOL enhancements
The IBM z17 core implements new instructions that compilers use to accelerate numeric
formatting, and hardware support for new numeric conversion instructions (exponent handling
and arithmetic that are common in financial applications).
One Nest Accelerator Unit (NXU) is used per processor chip, which is shared by all cores on
the chip and features the following benefits:
Brand new concept of sharing and operating an accelerator function in the nest
Supports DEFLATE-compliant compression and decompression, and GZIP CRC and ZLIB Adler checksums
Low latency
High bandwidth
Problem state execution
Hardware/Firmware interlocks to ensure system responsiveness
Designed instruction
Run in millicode
Moving the compression function from the I/O drawer to the processor chip means that
compression can operate directly on L2 cache and data does not need to be passed by using
I/O.
Data compression runs in one of the two available execution modes, synchronous mode or
asynchronous mode (a software-level illustration of the supported formats follows this list):
Synchronous execution occurs in problem states where the user application starts the
instruction in its virtual address space.
Asynchronous execution is optimized for large operations under z/OS for authorized
applications (for example, BSAM/QSAM), which issue I/O by using EADMF for
asynchronous execution.
Asynchronous execution maintains the current user experience and provides a
transparent implementation for existing authorized users of zEDC.
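Because the formats that the accelerator handles are industry standards, their software-visible behavior can be shown with the Python zlib module, which produces DEFLATE-compliant streams and computes the CRC-32 (GZIP) and Adler-32 (ZLIB) checksums named above. Whether a particular call is offloaded to the on-chip accelerator depends on the operating system and library exploitation (for example, zEDC support in z/OS or an accelerated zlib on Linux on IBM Z); the snippet itself is platform-neutral and purely illustrative.

```python
import zlib

data = b"transactional record data " * 1000

# DEFLATE-compliant compression and decompression (zlib-wrapped stream).
compressed = zlib.compress(data, level=6)
assert zlib.decompress(compressed) == data

# Checksums used by the GZIP and ZLIB formats, respectively.
crc = zlib.crc32(data)
adler = zlib.adler32(data)

print(f"in={len(data)} out={len(compressed)} crc32={crc:#010x} adler32={adler:#010x}")
```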
Figure 3-12 shows the nest compression accelerator (NXU) for On-Chip Compression
acceleration.
Figure 3-12 Integrated Accelerator for zEDC (NXU) on the IBM z17 PU chip
For more information about sizing, migration considerations, and software support, see
Appendix B, “IBM Integrated Accelerator for zEnterprise Data Compression” on page 525.
The compression engine uses static dictionary compression and expansion that is based on
the CMPSC instruction. The compression dictionary uses the level 1 (L1) instruction cache.
The cryptography coprocessor is used for CPACF, which offers a set of symmetric
cryptographic functions for encrypting and decrypting with clear key operations.
The location of the coprocessor on the IBM z17 chip is shown in Figure 3-13.
For more information about these instructions, see the latest version of the z/Architecture
Principles of Operation, SA22-7832.
The CPACF accelerator that is built into every core supports pervasive encryption by
providing fast synchronous cryptographic services (a software-level illustration of these
algorithm families follows):
Encryption (DES, TDES, and AES)
Hashing (SHA-1, SHA-2, SHA-3, and SHAKE)
Random Number Generation (PRNG, DRNG, and TRNG)
Elliptic Curve operations (ECDH[E], ECDSA, and EdDSA)
For more information about cryptographic functions on IBM z17 servers, see Chapter 6,
“Cryptographic features” on page 221.
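The same algorithm families can be exercised from the Python standard library, as in the sketch below. This is an illustration of the functions CPACF accelerates, not CPACF programming: whether a given runtime routes these calls to CPACF depends on the platform and the cryptographic library in use, and the random number call is only a software-level analogue of the PRNG/DRNG/TRNG functions.

```python
import hashlib
import secrets

message = b"payment authorization request"

# Hashing families accelerated by CPACF: SHA-2, SHA-3, and SHAKE.
sha256 = hashlib.sha256(message).hexdigest()
sha3 = hashlib.sha3_256(message).hexdigest()
shake = hashlib.shake_128(message).hexdigest(16)   # 16-byte output

# Random material (software analogue of the random number generation functions).
session_key = secrets.token_bytes(32)

print(sha256, sha3, shake, session_key.hex(), sep="\n")
```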
Introduced on IBM z15 was the sort accelerator that is known as the IBM Integrated
Accelerator for Z Sort (see Figure 3-13 on page 92). The SORTL hardware instruction that is
implemented on each core is used by DFSORT and the Db2 utilities for z/OS Suite to allow
the use of a hardware-accelerated approach to sorting.
The IBM Integrated Accelerator for Z Sort feature, termed “ZSORT”, helps reduce CPU costs
and improve the elapsed time of eligible workloads. One of the primary requirements for
ZSORT is providing enough virtual, real, and auxiliary storage.
Sort jobs that run in memory-constrained environments in which the amount of memory that
is available to be used by DFSORT jobs is restricted might not achieve optimal performance
results or might not be able to use ZSORT.
The 64-bit memory objects (above-the-bar-storage) can use the ZSORT accelerator for sort
workloads for optimal results. Because ZSORT is part of the CPU and memory latency is
much less than disk latency, sorting in memory is more efficient than sorting with memory and
disk workspace. By allowing ZSORT to process the input completely in memory, it can
achieve the best results in elapsed time and CPU time.
Because the goal of ZSORT is to reduce CPU time and elapsed time, it can require more
storage than a DFSORT application that does not use ZSORT.
Note: Not all sorts are eligible to use ZSORT. IBM’s zBNA tool provides modeling support
for identifying potential ZSORT-eligible candidate jobs and estimates the benefits of
ZSORT. The tool uses information in the SMF type 16 records.
The following restrictions disable ZSORT and cause DFSORT to revert to traditional sorting techniques (a conceptual eligibility check is sketched after this list):
SORTL facility is not enabled/unavailable on the processor
ZSORT is not enabled
OPTION COPY or SORT FIELDS=COPY is specified
Use of:
– INREC
– JOINKEYS
– MERGE FIELDS
– MODS(EXITS) statements
– OUTREC
– OUTFIL
– SUM FIELDS
Program-invoked sorts
Memory objects cannot be created
Insufficient memory object storage available (required more than currently available)
Unsupported sort fields specified (for example, Unicode, locale, and ALTSEQ)
Unknown file size or file size=0.
SIZE/FILSZ=Uxxxxxx is specified
SORTIN/SORTOUT is a VSAM Cluster
Sort control field positions are beyond 4092 and VLSHRT is specified
Use of EXCP access method was requested
Insufficient storage (for example, above or below the line)
Sorting key greater than 4088 bytes or greater than 4080 bytes if EQUALS is specified
For variable records, the record length (LRECL) must be greater than 24
zHPF is unavailable for a sort that cannot be performed entirely in memory
Insufficient amount of sort workspace
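As an illustration only, the following Python sketch condenses a few of the restrictions above into a simple eligibility check. It is not the DFSORT decision logic; the field names are hypothetical, and only a subset of the listed conditions is modeled.

```python
def zsort_eligible(job):
    """Rough, illustrative eligibility check based on a subset of the restrictions above."""
    blockers = []
    if not job.get("sortl_facility_available", True):
        blockers.append("SORTL facility not enabled/available on the processor")
    if job.get("copy_only", False):
        blockers.append("OPTION COPY or SORT FIELDS=COPY specified")
    if job.get("uses_inrec_outrec_outfil_sum", False):
        blockers.append("INREC/OUTREC/OUTFIL/SUM FIELDS in use")
    if job.get("vsam_sortin_or_sortout", False):
        blockers.append("SORTIN/SORTOUT is a VSAM cluster")
    if job.get("key_length_bytes", 0) > 4088:
        blockers.append("sort key longer than 4088 bytes")
    if job.get("file_size_records", 1) == 0:
        blockers.append("unknown file size or file size = 0")
    return (len(blockers) == 0, blockers)

ok, reasons = zsort_eligible({"key_length_bytes": 4096})
print(ok, reasons)   # False, ['sort key longer than 4088 bytes']
```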
The new IBM z17 microprocessor chip, also called the IBM Telum II processor, integrates an
I/O engine and redesigned AI accelerator. These innovations bring incredible value to
applications and workloads that are running on IBM Z platform.
Customers can benefit from the integrated AI accelerator by adding AI operations that are
used to perform fraud prevention and fraud detection, customer behavior predictions, and
supply chain operations. All of these operations are done in real time and fully integrated in
transactional workloads. As a result, valuable insights are gained from their data instantly.
The integrated accelerator for AI delivers AI inference in real time, at large scale, and at a high
throughput rate, with no transaction left behind. The AI capability applies directly to the
running transaction, which shifts the traditional paradigm of applying AI only after transactions
are completed. This innovative technology also can be used for intelligent IT workload
placement algorithms, which contributes to better overall system performance.
The Telum II processor also integrates powerful data prefetch mechanisms, fast and
high-capacity level 1 (L1) and level 2 (L2) caches, enhanced branch prediction, and other
improvements and innovations that streamline data processing by the AI accelerator.
The hardware, firmware, and software are vertically integrated to deliver the new AI
inference functions seamlessly to applications.
The location of the integrated accelerator for AI on the Telum II chip is shown in Figure 3-14.
The AI accelerator is driven by the new Neural Network Processing Assist (NNPA)
instruction.
Figure 3-15 IBM z17 2nd generation Integrated Accelerator for AI logical diagram
As shown in Figure 3-15 above, all cores in a PU chip have access to its local AIU.
With IBM z17, the cores in one PU chip can transparently access the AIUs in the remote PU
chips in the same CPC drawer (access to AIUs in IBM z16 was limited to the AIUs in the
same PU chip).
Intelligent data movers and prefetchers are connected to the chip by way of ring interface for
high-speed, low-latency, read/write cache operations at 200+ GBps read/store bandwidth,
and 600+ GBps bandwidth between engines.
Compute Arrays consist of 128 processor tiles with 8-way FP-16 FMA SIMD, which are
optimized for matrix multiplication and convolution, and 32 processor tiles with 8-way
FP-16/FP-32 SIMD, which are optimized for activation functions and complex functions.
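The split between matrix-oriented tiles and activation-oriented tiles mirrors the two stages of a typical inference layer. The following NumPy sketch is an illustration of that shape only, not accelerator or zDNN code: an FP-16 matrix multiplication followed by an element-wise activation function. The tensor sizes are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# FP-16 operands, the kind of data fed to the matrix-multiply/convolution tiles.
activations = rng.standard_normal((64, 128)).astype(np.float16)
weights = rng.standard_normal((128, 256)).astype(np.float16)

# Stage 1: matrix multiplication (the bulk of the fused multiply-add work).
pre_activation = activations @ weights

# Stage 2: activation function, the element-wise work handled by the
# FP-16/FP-32 tiles (here a ReLU, computed in FP-32 for extra range).
output = np.maximum(pre_activation.astype(np.float32), 0.0)

print(output.shape, output.dtype)
```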
The integrated AI accelerator delivers more than 24 trillion operations per second (TOPS) per
chip and over 752 TOPS in a fully configured IBM z17 system with 32 chips. The AI
accelerator is shared by all cores on the chip and by all cores on the remote chips in the
same drawer. The firmware, running on the cores and the accelerator, orchestrates and
synchronizes the execution on the accelerator.
The AI solution stack spans the operating systems and container environments (z/OS and z/OS Container Extensions), which optimize AI inference and pipeline execution, and the hardware and facilities layer, which provides specialized CPU hardware, SIMD, and the integrated accelerators.
Acknowledging the diverse AI training frameworks, customers can train their models on the
platforms of their choice, including IBM Z (on-premises and in hybrid cloud), and then deploy
them efficiently on IBM Z, colocated with the transactional workloads. No additional
development effort is needed to enable this strategy.
IBM has invested into Open Neural Network Exchange (ONNX), which is a standard format
for representing AI models that allows a data scientist to build and train a model in the
framework of choice without worrying about the downstream inference implications.
To enable deployment of ONNX models, IBM provides an ONNX model compiler that is
optimized for IBM Z. IBM also optimized key Open Source frameworks, such as TensorFlow
and TensorFlow Serving, for use on IBM Z platform.
The IBM open-sourced zDNN library provides common APIs for the functions that convert
tensors to the format that the accelerator requires. Customers can run zDNN under z/OS (in
zCX) and Linux on IBM Z.
A Deep Learning Compiler (DLC) for z/OS and for Linux on IBM Z provides the AI functions to
the applications.
With the DPU, functionality from the I/O adapters’ ASICs is moved into the Z processor chip.
The design aims to build an I/O subsystem with better qualities of service than the existing
overprovisioned I/O subsystem. It delivers value through improved performance and power
efficiency, and also delivers value to clients through decreased channel latencies.
Generally, it carries existing functional capabilities forward in the channels that it supports,
with some important improvements and increased capabilities. Refer to Figure..
Supported Protocols
The following protocols will run on the DPU:
1. Legacy Mode FICON
2. HPF (High Performance FICON)
3. FCP (SCSI over fiber channel)
4. OSA (Open Systems Adapter)
5. OSA-ICC (Open Systems Adapter - Integrated Console Controller)
DPU Components
The DPU engine design reduces the amount of custom hardware in the I/O cards by
eliminating anything that is protocol-specific (such as data routers), providing strategic
hardware assists where appropriate, and doing as much processing as possible in firmware.
There is one DPU complex per CP (PU Chip), where the logic physically occupies the space
of 1.5 z-cores. Each PU Chip has a single associated DPU.
A DPU comprises 32 cores arranged in 4 clusters of eight. The DPU has a coprocessor
interface as a microarchitectural feature. Figure 3-18 shows the IBM z17 Telum II DPU I/O
Engine.
Figure 3-18 Location of the DPU I/O Engine in the Telum II chip
DPU Sharing
Each DPU is shared by PU cores not only on the chip it resides on but other PU chips in the
drawer. This is necessary because each CHPID is managed by a single DPU.
Base 10 arithmetic is used for most business and financial computation. Using binary floating
point computation for work that is typically done in decimal arithmetic involves frequent data
conversions and approximations when representing decimal numbers. This process makes
floating point arithmetic complex and error-prone for programmers who use it for applications
in which the data is typically decimal.
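A small example makes the approximation problem concrete. In binary floating point, common decimal fractions such as 0.10 have no exact representation, whereas a decimal representation (here Python’s decimal module, standing in conceptually for hardware decimal arithmetic) preserves the values exactly.

```python
from decimal import Decimal

# Binary floating point: 0.10 cannot be represented exactly,
# so repeated monetary additions drift.
binary_total = 0.10 + 0.10 + 0.10
print(binary_total == 0.30)                      # False
print(binary_total)                              # 0.30000000000000004

# Decimal arithmetic: values and results stay exact.
decimal_total = Decimal("0.10") + Decimal("0.10") + Decimal("0.10")
print(decimal_total == Decimal("0.30"))          # True
```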
IBM z17 servers have two DFP accelerator units per core, which improve the decimal floating
point execution bandwidth. The floating point instructions operate on newly designed vector
registers (32 new 128-bit registers).
IBM z17 servers include decimal floating point packed conversion facility support with the
following benefits:
Reduces code path length because extra instructions for format conversion are no longer
needed.
Packed data is operated on in memory by all decimal instructions without general-purpose
registers, which previously were required only to prepare for the decimal floating point packed
conversion instructions.
Converting from packed can now force the input packed value to positive instead of
requiring a separate OI, OILL, or load positive instruction.
Converting to packed can now force a positive zero result instead of requiring a ZAP
instruction.
The COBOL and PL/I compilers were updated to support the new IBM z17 enhancements:
BCD to HFP conversions
Numeric editing operation
Zoned decimal operations
Software support
DFP is supported in the following programming languages and products:
Release 4 and later of the High Level Assembler
C/C++, which requires a supported z/OS version
Enterprise PL/I Release 3.7 and Debug Tool Release 8.1 or later
Java Applications that use the BigDecimal Class Library
SQL support as of Db2 Version 9 and later
The IBM z17 core implements two other execution subunits for 2x throughput on BFP
(single/double precision) operations (see Figure 3-11 on page 87).
The key point is that Java and C/C++ applications tend to use IEEE BFP operations more
frequently than earlier applications. Therefore, the better the hardware implementation of this
set of instructions, the better the performance of applications.
Relocation under hardware control is possible because the R-unit has the full designed state
in its buffer. PU error detection and recovery are shown in Figure 3-19.
The BHT (Branch History Table) implementation on processors provides a large performance
improvement. Originally introduced on the IBM ES/9000 9021 in 1990, the BHT is
continuously improved.
It offers significant branch performance benefits. The BHT allows each PU to take instruction
branches that are based on a stored BHT, which improves processing times for calculation
routines.
In addition to the BHT, IBM z17 servers use the following techniques to improve the prediction
of the correct branch to be run:
BTB
PHT
CTB
The success rate of branch prediction contributes significantly to the superscalar aspects of
the IBM z17 processor because, under the architecture rules, a correctly predicted branch
outcome is essential for successful parallel execution of an instruction stream.
IBM z17 integrates a new branch prediction design that uses SRAM and supports the
following:
BTB1: 8K - 12K
BTB2: up to 260K
TAGE PHT: 4K x 2
TAGE PHT2: 4K x 2
TAGE CTB: 1K x 2
With the wild branch hardware facility, the last address from which a successful branch
instruction was run is kept. z/OS uses this information with debugging aids, such as the SLIP
command, to determine from where a wild branch came.
It also can collect data from that storage location. This approach decreases the number of
debugging steps that are necessary when you want to know from where the branch came.
The size of the TLB is kept as small as possible because of its short access time
requirements and hardware space limitations. Because memory sizes have increased
significantly with the introduction of 64-bit addressing, a relatively smaller portion of the
working set can be represented in the TLB.
To increase the working set representation in the TLB without enlarging the TLB, large (1 MB)
page and giant page (2 GB) support is available and can be used when suitable. For more
information, see “Large page support” on page 119.
With the enhanced DAT-2 (EDAT-2) improvements, the IBM Z servers support 2 GB page
frames.
The new translation engine allows up to four translations to be pending concurrently. Each
translation step is approximately twice as fast, which helps second-level guests.
In z17, the TLB lookup pipeline is modified to handle both demand requests and prefetches.
Instruction fetching
Instruction fetching normally tries to get as far ahead of instruction decoding and execution as
possible because of the relatively large instruction buffers that are available. In the
microprocessor, smaller instruction buffers are used. The operation code is fetched from the
I-cache and put in instruction buffers that hold prefetched data that is awaiting decoding.
z17 has a new Prefetch Pipeline, driven by the branch prediction logic. It looks for lines that
are not in the Level 1 instruction cache but are likely to be in the Level 2 cache.
Instruction decoding
The processor can decode up to six instructions per cycle. The result of the decoding process
is queued and later used to form a group.
Instruction grouping
From the instruction queue, up to 12 instructions can be completed on every cycle. A
complete description of the rules is beyond the scope of this publication.
Compilers and JVMs are responsible for selecting instructions that best fit with the
superscalar microprocessor. They abide by the rules to create code that best uses the
superscalar implementation. All IBM Z compilers and JVMs are constantly updated to benefit
from new instructions and advances in microprocessor designs.
The Transaction Execution Facility provides instructions for declaring the beginning and end
of a transaction and for canceling the transaction. Transactional execution is expected to
provide significant performance benefits and scalability by avoiding most locks. This benefit is
especially important for heavily threaded applications, such as Java.
z17 is the last processor generation to support the Transaction Execution Facility.
3.5.1 Overview
All PUs on an IBM z17 are physically identical. When the system is initialized, two integrated
firmware processors (IFP) are allocated from the pool of PUs that is available for the entire
system. The other PUs can be characterized to specific functions (CP, IFL, ICF, zIIP, or SAP).
The function that is assigned to a PU is set by the Licensed Internal Code (LIC). The LIC is
loaded when the system is initialized at power-on reset (POR) and the PUs are characterized.
This design brings outstanding flexibility to IBM z17 servers because any PU can assume any
available characterization. The design also plays an essential role in system availability
because PU characterization can be done dynamically, with no system outage.
For more information about software level support of functions and features, see Chapter 7,
“Operating systems support” on page 261.
Concurrent upgrades
For all IBM z17 ME1 features that have more processor units (PUs) installed
(non-characterized) than activated, concurrent upgrades can be done by using LIC activation.
This activation assigns a PU function to a previously non-characterized PU. No hardware
changes are required.
If the PU chips in the installed CPC drawers have no available remaining PUs, an upgrade
results in a feature upgrade and the installation of an extra CPC drawer. Field add (MES) of a
CPC drawer is possible for IBM z17 Model ME1 features Max43 and Max90 only. These
features can be upgraded to a Max136, provided that the initial order included the CPC
Reserve feature FC 2933 or FC 2934. CPC drawer installation is nondisruptive, but takes
more time than a simple LIC upgrade. Features Max183 and Max208 are factory built only.
For more information about Capacity on Demand, see Chapter 8, “System upgrades” on
page 353.
PU sparing
If a PU failure occurs, the failed PU’s characterization is dynamically and transparently
reassigned to a spare PU. IBM z17 servers have two spare PUs. PUs that are not
characterized on a CPC configuration also can be used as extra spare PUs.
For more information about PU sparing, see 3.5.10, “Sparing rules” on page 116.
PU pools
PUs that are defined as CPs, IFLs, ICFs, and zIIPs are grouped in their own pools from where
they can be managed separately. This configuration significantly simplifies capacity planning
and management for LPARs. The separation also affects weight management because CP
and zIIP weights can be managed separately.
All assigned PUs are grouped in the PU pool. These PUs are dispatched to online logical
PUs. For example, consider an IBM z17 with 10 CPs, 2 IFLs, 5 zIIPs, and 1 ICF. This system
has a PU pool of 18 PUs, called the pool width. Subdivision defines the following pools:
A CP pool of 10 CPs
An ICF pool of one ICF
An IFL pool of two IFLs
A zIIP pool of five zIIPs
When a dedicated LPAR is activated, its PUs are configured from the appropriate pools. This
process also applies when an LPAR configures a logical PU online, if the width of the pool
allows it.
For an LPAR, logical PUs are dispatched on physical PUs in the supporting pool. The logical
CPs are dispatched from the CP pool, logical zIIPs from the zIIP pool, logical IFLs from the
IFL pool, and the logical ICFs from the ICF pool.
PU weighting
Because CPs, zIIPs, IFLs, and ICFs have their own pools from where they are dispatched,
shared logical processors are given pool-specific weights. For more information about PU
pools and processing weights, see the Processor Resource/Systems Manager Planning
Guide, SB10-7178.
The IBM z17 can be initialized in LPAR (PR/SM) mode or in Dynamic Partition Manager (DPM)
mode.
CPs are defined as dedicated or shared. Reserved CPs can be defined to an LPAR to allow
for nondisruptive image upgrades. If the operating system in the LPAR supports the logical
processor add function, reserved processors are no longer needed. Regardless of the
installed model, an LPAR can have up to 208 logical CPs that are defined (the sum of active
and reserved logical CPs). In practice, define no more CPs than the operating system
supports.
All PUs that are characterized as CPs within a configuration are grouped into the CP pool.
The CP pool can be seen on the Hardware Management Console (HMC) workplace. Any
z/Architecture operating systems and CFCCs can run on CPs that are assigned from the CP
pool.
The IBM z17 ME1 server recognizes four distinct capacity settings for CPs. Full-capacity CPs
are identified as CP7. In addition to full-capacity CPs, three subcapacity settings (CP6, CP5,
and CP4), each for up to 43 PUs, are offered.
Granular capacity adds 129 subcapacity settings to the 208 capacity settings that are
available with full capacity CPs (CP7). Each of the 129 subcapacity settings applies to up to
43 CPs only, independent of the model installed.
Note: Information about CPs in the remainder of this chapter applies to all CP capacity
settings, unless indicated otherwise. For more information about granular capacity, see
2.3.3, “PU characterization” on page 38.
Note: IFLs can be dedicated to a Linux, a z/VM, or an SSC LPAR, or can be shared by
multiple Linux guests, z/VM LPARs, or SSC that are running on the same IBM z17 server.
Only z/VM, Linux on IBM Z operating systems, SSC, and designated software products
can run on IFLs. IFLs are orderable by using FC 1651.
IFL pool
All PUs that are characterized as IFLs within a configuration are grouped into the IFL pool.
The IFL pool can be seen on the HMC workplace.
IFLs do not change the model capacity identifier of the IBM z17. Software product license
charges that are based on the model capacity identifier are not affected by the addition of
IFLs.
Unassigned IFLs
An IFL that is purchased but not activated is registered as an unassigned IFL (FC 1654).
When the system is later upgraded with another IFL, the system recognizes that an IFL was
purchased and is present.
The allowable number of IFLs and Unassigned IFLs per feature is listed in Table 3-1.
Unassigned ICFs
An ICF that is purchased but not activated is registered as an unassigned ICF (FC 1655).
When the system is later upgraded with another ICF, the system recognizes that an ICF was
purchased and is present.
The allowable number of ICFs and Unassigned ICFs for each model is listed in Table 3-2.
ICFs exclusively run CFCC. ICFs do not change the model capacity identifier of the IBM z17
system. Software product license charges that are based on the model capacity identifier are
not affected by the addition of ICFs.
All ICFs within a configuration are grouped into the ICF pool. The ICF pool can be seen on the
HMC workplace.
The ICFs can be used by coupling facility LPARs only. ICFs are dedicated or shared. ICFs
can be dedicated to a CF LPAR, or shared by multiple CF LPARs that run on the same
system. However, having an LPAR with dedicated and shared ICFs at the same time is not
possible.
After the image is dispatched, “poll for work” logic in CFCC and z/OS can be used largely
as-is to locate and process the work. The new interrupt expedites the redispatching of the
partition.
7 It is the only option for shared processors in a CF image (whether they be ICFs or CPs) on IBM z16.
LPAR presents these Coupling Thin Interrupts to the guest partition, so CFCC and z/OS both
require interrupt handler support that can deal with them. CFCC also changes to relinquish
control of the processor when all available pending work is exhausted, or when the LPAR
undispatches it off the shared processor, whichever comes first.
CF processor combinations
A CF image can have one of the following combinations that are defined in the image profile:
Dedicated ICFs
Shared ICFs
Shared CPs
Shared ICFs add flexibility. However, running only with shared coupling facility PUs (ICFs or
CPs) is not a preferable production configuration. It is preferable for a production CF to
operate by using dedicated ICFs.
In Figure 3-20, the CPC on the left participates in two parallel sysplexes (Production and
Test), and each has one z/OS and one coupling facility image. The coupling facility images
share an ICF.
The LPAR processing weights are used to define how much processor capacity each CF
image is entitled to. The capped option also can be set for a test CF image to protect the
production environment.
Connections between these z/OS and CF images can use internal coupling links to avoid the
use of real (external) coupling links, and get the best link bandwidth available.
Dynamic CF dispatching
The dynamic coupling facility dispatching (DYNDISP) function features a dispatching
algorithm that you can use to define a backup CF in an LPAR on the system. When this LPAR
is in backup mode, it uses few processor resources.
DYNDISP allows more environments with multiple CF images to coexist in a server, and to
share CF engines with reasonable performance. DYNDISP THIN is the only option for CF
images that use shared processors on IBM z17. For more information, see 3.9.3, “Dynamic
CF dispatching” on page 142.
With IBM z17, the maximum number of CF processors in an LPAR increases from 16 to 32.
With this increase, customers might be able to consolidate the CF workload across fewer CF
images. This could reduce complexity with fewer coupling links, and logical CHPIDs to define
and manage for connectivity, and so on.
For more information about CFCC Level 26 enhancements, see 3.9.1, “CF Control Code
(CFCC)” on page 135.
A zIIP enables eligible z/OS workloads to have a portion of them directed for execution to a
processor that is characterized as a zIIP. Because the zIIPs do not increase the MSU value of
the processor, they do not affect the IBM software license charges.
IBM z17 is the fifth generation of IBM Z processors to support SMT. IBM z17 servers
implement two threads per core on IFLs and zIIPs. SMT must be enabled at the LPAR level
and supported by the z/OS operating system. SMT was enhanced for IBM z17 and it is
enabled for SAPs by default (no customer intervention required).
Introduced in z/OS V2R4, the z/OS Container Extensions9 allows deployment of Linux on IBM
Z software components, such as Docker Containers in a z/OS system, in direct support of
z/OS workloads without requiring a separately provisioned Linux server. It also maintains
overall solution operational control within z/OS and with z/OS qualities of service. Workload
deployed in z/OS Container Extensions is zIIP eligible.
This process reduces the CP time that is needed to run Java WebSphere applications, which
frees that capacity for other workloads.
8 IBM z Systems Application Assist Processors (zAAPs) are not available since IBM z14 servers. A zAAP workload is
dispatched to available zIIPs (zAAP on zIIP capability).
9
z/OS Container Extensions that are running on IBM z16 require IBM Container Hosting Foundation for z/OS
software product (5655-HZ1).
The logical flow of Java code that is running on an IBM z17 that has a zIIP available is shown
in Figure 3-21. When JVM starts the execution of a Java program, it passes control to the
z/OS dispatcher that verifies the availability of a zIIP.
A zIIP runs IBM authorized code only. This IBM authorized code includes the z/OS JVM in
association with parts of system code, such as the z/OS dispatcher and supervisor services.
A zIIP cannot process I/O or clock comparator interruptions. It also does not support operator
controls, such as IPL.
Java application code can run on a CP or a zIIP. The installation can manage the use of CPs
so that Java application code runs only on CPs or zIIPs, or on both.
The following execution options for zIIP-eligible code execution are available and supported
for z/OS10. These options are user-specified in IEAOPTxx and can be dynamically altered by
using the SET OPT command:
Option 1: Java dispatching by priority (IIPHONORPRIORITY=YES)
This option is the default option and specifies that CPs must not automatically consider
zIIP-eligible work for dispatching on them. The zIIP-eligible work is dispatched on the zIIP
engines until Workload Manager (WLM) determines that the zIIPs are overcommitted.
WLM then requests help from the CPs. When help is requested, the CPs consider
dispatching zIIP-eligible work on the CPs based on the dispatching priority relative to other
workloads. When the zIIP engines are no longer overcommitted, the CPs stop considering
zIIP-eligible work for dispatch.
This option runs as much zIIP-eligible work on zIIPs as possible. It also allows it to spill
over onto the CPs only when the zIIPs are overcommitted.
Option 2: Java dispatching by priority (IIPHONORPRIORITY=NO)
zIIP-eligible work runs on zIIPs only while at least one zIIP engine is online. zIIP-eligible
work is not normally dispatched on a CP, even if the zIIPs are overcommitted and CPs are
unused. The exception is that zIIP-eligible work can sometimes run on a CP to resolve
resource conflicts.
Therefore, zIIP-eligible work does not affect the CP utilization that is used for reporting
through the subcapacity reporting tool (SCRT), no matter how busy the zIIPs are.
If zIIPs are defined to the LPAR but are not online, the zIIP-eligible work units are processed
by CPs in order of priority. The system ignores the IIPHONORPRIORITY parameter in this
case and handles the work as though it had no eligibility to zIIPs.
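The behavior described above can be summarized as a small decision model, sketched below in Python. This is a conceptual illustration only, not WLM or dispatcher code; the function and parameter names are hypothetical and only mirror the IIPHONORPRIORITY semantics just described.

```python
def dispatch_target(ziip_eligible, ziips_online, ziips_overcommitted,
                    honor_priority=True):
    """Conceptual model of where zIIP-eligible work runs (not actual dispatcher logic)."""
    if not ziip_eligible:
        return "CP"
    if not ziips_online:
        # No zIIPs online: the work runs on CPs in priority order,
        # and IIPHONORPRIORITY is ignored.
        return "CP"
    if honor_priority:
        # IIPHONORPRIORITY=YES: run on zIIPs; CPs help only when
        # WLM reports the zIIPs as overcommitted.
        return "CP (help)" if ziips_overcommitted else "zIIP"
    # IIPHONORPRIORITY=NO: stay on the zIIPs even when they are busy
    # (apart from rare conflict-resolution cases).
    return "zIIP"

print(dispatch_target(True, True, True, honor_priority=True))    # CP (help)
print(dispatch_target(True, True, True, honor_priority=False))   # zIIP
```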
The following Db2 UDB for z/OS V8 or later workloads can run in Service Request Block
(SRB) mode:
Query processing of network-connected applications that access the Db2 database over a
TCP/IP connection by using IBM Distributed Relational Database Architecture (DRDA).
DRDA enables relational data to be distributed among multiple systems. It is native to Db2
for z/OS, which reduces the need for more gateway products that can affect performance
and availability. The application uses the DRDA requester or server to access a remote
database. IBM Db2 Connect is an example of a DRDA application requester.
Star schema query processing, which is mostly used in business intelligence work.
A star schema is a relational database schema for representing multidimensional data. It
stores data in a central fact table and is surrounded by more dimension tables that hold
information about each perspective of the data. For example, a star schema query joins
various dimensions of a star schema data set.
10 z/OS V2R4 and later (older z/OS versions are out of support)
Db2 utilities that are used for index maintenance, such as LOAD, REORG, and REBUILD.
Indexes allow quick access to table rows. However, the databases become less efficient
over time and must be maintained as data in large databases is manipulated.
The zIIP runs portions of eligible database workloads, which helps to free computer capacity
and lower software costs. Not all Db2 workloads are eligible for zIIP processing. Db2 UDB for
z/OS V8 and later gives z/OS the information to direct portions of the work to the zIIP. The
result is that in every user situation, different variables determine how much work is redirected
to the zIIP.
On an IBM z17, the following workloads also can benefit from zIIPs:
z/OS Communications Server uses the zIIP for eligible Internet Protocol Security (IPSec)
network encryption workloads. Portions of IPSec processing take advantage of the zIIPs,
specifically end-to-end encryption with IPSec. The IPSec function moves a portion of the
processing from the general-purpose processors to the zIIPs. In addition, to run the
encryption processing, the zIIP also handles the cryptographic validation of message
integrity and IPSec header processing.
z/OS Global Mirror, formerly known as Extended Remote Copy (XRC), also uses the zIIP.
Most z/OS Data Facility Storage Management Subsystem (DFSMS) system data mover
(SDM) processing that is associated with z/OS Global Mirror can run on the zIIP.
The first IBM user of z/OS XML system services is Db2 V9. For Db2 V9 before the z/OS
XML System Services enhancement, z/OS XML System Services non-validating parsing
was partially directed to zIIPs when used as part of a distributed Db2 request through
DRDA. This enhancement benefits Db2 by making all z/OS XML System Services
non-validating parsing eligible to zIIPs. This configuration is possible when processing is
used as part of any workload that is running in enclave SRB mode.
z/OS Communications Server also allows the HiperSockets Multiple Write operation for
outbound large messages (originating from z/OS) to be run by a zIIP. Application
workloads that are based on XML, HTTP, SOAP, and Java, and traditional file transfer can
benefit.
During the SRB boost period, ANY work in a boosting image is eligible to run on a zIIP
processor associated with the image (LPAR).
Many more workloads and software can use zIIP processors, such as the following examples:
IBM z/OS Container Extensions (zCX)
IBM z/OS CIM monitoring
IBM z/OS Management Facility (z/OSMF)
System Display and Search Facility (SDSF)
IBM z/OS Connect EE components
IBM Sterling® Connect:Direct®
IBM Z System Automation:
Java components of IBM Z SMS and SAS
IBM Z NetView RESTful API server
IBM Z Workload Scheduler & Dynamic Workload Console (under WebSphere Liberty)
IMS workloads (DRDA, SOAP, MSC, ISC)
Db2 for z/OS Data Gate
Db2 Sort for z/OS
Db2 Analytics Accelerator Loader for z/OS
Db2 Utilities Suite for z/OS
Db2 Log Analysis Tool for z/OS
Data Virtualization Manager for z/OS (DVM)
IzODA (Apache Spark workloads)
Watson Machine Learning for z/OS (WMLz) for Mleap and Spark workloads
For more information about zIIP and eligible workloads, see the IBM zIIP web page.
zIIP installation
One CP must be installed with or before any zIIP is installed. Since z16, the 2:1 zIIP-to-CP
ratio limit has been eliminated, which means that on z17 up to 207 zIIPs can be characterized
on feature Max208.
Unassigned zIIPs
Since z16, a zIIP that is purchased but not activated is registered as an unassigned zIIP (FC
1656). When the system is later upgraded with another zIIP, the system recognizes that a
zIIP was purchased and is present.
The allowable number of zIIPs for each model is listed in Table 3-3.
zIIPs are orderable by using FC 1653. At least one CP must be configured in order to add
zIIPs to the system configuration. If the installed CPC drawer has no remaining unassigned
PUs, the assignment of the next zIIP might require the installation of another CPC drawer.
PUs that are characterized as zIIPs within a configuration are grouped into the zIIP pool. This
configuration allows zIIPs to have their own processing weights, independent of the weight of
parent CPs. The zIIP pool can be seen on the hardware console.
The number of temporary zIIPs cannot exceed the number of permanent zIIPs.
LPAR: In an LPAR, as many zIIPs as are available can be defined together with at least
one CP.
11
The 2:1 ratio can be exceeded (during boost periods) if System Recovery Boost Upgrade (FC 9930 and FC 6802)
is used for activating temporary zIIP capacity.
The number of standard SAPs per feature is as follows: Max43: 5, Max90: 10, Max136: 16, Max183: 21, and Max208: 24.
By using reserved processors, you can define more logical processors than the number of
available CPs, IFLs, ICFs, and zIIPs in the configuration to an LPAR. This process makes it
possible to nondisruptively configure online more logical processors after more CPs, IFLs,
ICFs, and zIIPs are made available concurrently. They can be made available with one of the
capacity on-demand options.
The maximum number of reserved processors that can be defined to an LPAR depends on
the number of logical processors that are defined. A maximum of 208 logical processors plus
reserved processors can be used. If the operating system in the LPAR supports the logical
processor add function, reserved processors are no longer needed.
Do not define more active and reserved processors than the operating system for the LPAR
can support. For more information about logical processors and reserved processors and
their definitions, see 3.7, “Logical partitioning” on page 121.
The two PUs that are characterized as IFPs are dedicated to supporting firmware functions
that are implemented in Licensed Internal Code (LIC); for example, the resource groups
(RGs) that are used for managing the following native Peripheral Component Interconnect
Express (PCIe) feature:
Coupling Express3 Long Reach
IFPs also are initialized at POR. They support Resource Group (RG) LIC13 to provide native
PCIe I/O feature management and virtualization functions.
The IBM z17 ME1 PU assignment is based on CPC drawer plug order (not ordering
sequence). A feature upgrade provides more processor (CPC) drawers. Max136 cannot be
upgraded because the target features (Max183 and Max208) are factory built only.
The CPC drawers are populated from the bottom up. This process defines the low-order and
the high-order CPC drawers:
CPC drawer 0 (CPC 0 at position A10): Plug order 1 (low-order CPC drawer)
CPC drawer 1 (CPC 1 at position A15): Plug order 2
CPC drawer 2 (CPC 2 at position A20): Plug order 3
CPC drawer 3 (CPC 3 at position B10): Plug order 4 (high-order CPC drawer)
The rules above are intended to isolate processors that are used by different operating
systems as much as possible on different CPC drawers and even on different PU chips. This
configuration ensures that different operating systems do not use the same shared caches.
For example, CPs and zIIPs are all used by z/OS, and can benefit by using the same shared
caches. However, IFLs are used by z/VM and Linux, and ICFs are used by CFCC.
This initial PU assignment, which is done at POR, can be dynamically rearranged by an LPAR
by swapping an active core to a core in a different PU chip in a different CPC drawer to
improve system performance. For more information, see “LPAR dynamic PU reassignment”
on page 127.
When a CPC drawer is added concurrently after POR and new LPARs are activated, or
processor capacity for active partitions is dynamically expanded, the extra PU capacity can
be assigned from the new CPC drawer. The processor unit assignment rules consider the
newly installed CPC drawer dynamically.
13
IBM zHyperLink Express2.0 is not managed by Resource Groups LIC.
14 For a layout of CPC drawers’ locations, refer to 3.5.11, “CPC drawer numbering” on page 116
Systems with a failed PU for which no spare is available call home for a replacement. A
system with a failed PU that is spared and requires a DCM to be replaced (referred to as a
pending repair) can still be upgraded when sufficient PUs are available.
With transparent sparing, the status of the application that was running on the failed
processor is preserved. The application continues processing on a newly assigned CP, IFL,
ICF, zIIP, SAP, or IFP (allocated to one of the spare PUs) without client intervention.
Application preservation
If no spare PU is available, application preservation (z/OS only) is started. The state of the
failing processor is passed to another active processor that is used by the operating system.
Through operating system recovery services, the task is resumed successfully (in most
cases, without client intervention).
Figure 3-22 CPC drawer numbering (rear view: CPC 0, CPC 1, and CPC 2 in frame A; CPC 3 in frame B)
3.6.1 Overview
The IBM z17 ME1 memory design provides flexibility, high availability, and the following
capabilities:
Concurrent memory upgrades if the physically installed capacity is not yet reached
IBM z17 servers can have more physically installed memory than the initial available
capacity. Memory upgrades within the physically installed capacity can be done
concurrently by LIC, and no hardware changes are required. However, memory upgrades
cannot be done through CBU or On/Off CoD.
Concurrent memory upgrades if the physically installed capacity is reached
Physical memory upgrades require a processor drawer to be removed and reinstalled after
replacing the memory cards in the processor drawer. Except for the feature Max43, the
combination of enhanced drawer availability and the flexible memory option allows you to
concurrently add memory to the system. For more information, see 2.5.5, “Drawer
replacement and memory” on page 52, and 2.5.7, “Flexible Memory Option” on page 53.
When the total capacity that is installed has more usable memory than required for a
configuration, the Licensed Internal Code Configuration Control (LICCC) determines how
much memory is used from each processor drawer. The sum of the LICCC provided memory
from each CPC drawer is the amount that is available for use in the system.
Memory allocation
When the system is activated by a POR, PR/SM determines the total installed memory and
the customer-enabled memory. Later in the process, during LPAR activation, PR/SM assigns
and allocates memory to each partition according to its image profile.
PR/SM controls all physical memory, and can make physical memory available to the
configuration when a CPC drawer is added.
In older IBM Z processors, memory allocation was striped across the available CPC drawers
because relatively fast connectivity (that is, fast relative to the processor clock frequency)
existed between the drawers. Splitting the work between all of the memory controllers
smoothed out performance variability.
The memory allocation algorithm changed starting with IBM z13. For IBM z17, PR/SM tries to
allocate memory into a single CPC drawer. If memory does not fit into a single drawer, PR/SM
tries to allocate the memory into the CPC drawer with the most processor entitlement.15
The PR/SM memory and logical processor resources allocation goal is to place all partition
resources on a single CPC drawer, if possible. The resources, such as memory and logical
processors, are assigned to the logical partitions at the time of their activation. Later on, when
all partitions are activated, PR/SM can move memory between CPC drawers to benefit the
performance of each LPAR, without operating system knowledge. This process was done on
the previous families of IBM Z servers only for PUs that use PR/SM dynamic PU reallocation.
With IBM z17 servers, this process occurs whenever the configuration changes, such as in
the following circumstances:
Activating or deactivating an LPAR
Changing the LPARs’ processing weights
Upgrading the system through a temporary or permanent record
Downgrading the system through deactivation of a temporary record
PR/SM schedules a global reoptimization of the resources in use. It does so by reviewing all
the partitions that are active and prioritizing them based on their processing entitlement and
weights, which creates a high- and low-priority rank. Then, the resources, such as logical
processors and memory, can be moved from one CPC drawer to another to address the
priority ranks that were created.
When partitions are activated, PR/SM tries to find a home assignment CPC drawer, home
assignment node, and home assignment chip for the logical processors that are defined to
them. The PR/SM goal is to allocate all the partition logical processors and memory to a
single CPC drawer (the home drawer for that partition).
If all logical processors can be assigned to a home drawer but the partition-defined memory
is greater than what is available in that drawer, the excess memory is allocated on another
CPC drawer. If all the logical processors cannot fit in one CPC drawer, the remaining logical
processors spill to another CPC drawer. When that overlap occurs, PR/SM stripes the
memory (if possible) across the CPC drawers where the logical processors are assigned.
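The placement preference just described (home drawer first, then spill and stripe) can be sketched as a simple function. The sketch below is a conceptual model in Python, not PR/SM internals; the drawer data structure and field names are hypothetical and only the high-level preference order is represented.

```python
def place_partition(drawers, logical_pus, memory_gb):
    """Conceptual placement: prefer a single 'home' drawer, then spill and stripe.

    drawers: list of dicts with free 'pus' and 'memory_gb' (illustrative only).
    """
    # Prefer a drawer that can hold all logical processors and all memory.
    for d in drawers:
        if d["pus"] >= logical_pus and d["memory_gb"] >= memory_gb:
            return {"home": d["name"], "spill": []}

    # Otherwise pick the drawer with the most free PUs as the home drawer and
    # spill the remaining processors and memory, striping across the others.
    home = max(drawers, key=lambda d: d["pus"])
    remaining_pus = max(0, logical_pus - home["pus"])
    remaining_mem = max(0, memory_gb - home["memory_gb"])
    others = [d["name"] for d in drawers if d is not home]
    return {"home": home["name"],
            "spill": [{"drawers": others,
                       "pus": remaining_pus, "memory_gb": remaining_mem}]}

drawers = [{"name": "CPC0", "pus": 30, "memory_gb": 4096},
           {"name": "CPC1", "pus": 10, "memory_gb": 2048}]
print(place_partition(drawers, logical_pus=24, memory_gb=3072))
```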
The process of reallocating memory is based on the memory copy/reassign function, which is
used to allow enhanced drawer availability (EDA) and concurrent drawer replacement
(CDR)16. This process was enhanced starting with z13 and IBM z13s® to provide more
efficiency and speed to the process without affecting system performance.
IBM z17 ME1 implements a faster dynamic memory reallocation mechanism, which is
especially useful during service operations (EDA and CDR). PR/SM controls the
reassignment of the content of a specific physical memory array in one CPC drawer to a
physical memory array in another CPC drawer. To accomplish this task, PR/SM uses all the
15 Entitlement is based on PR/SM shares in all the pools for which the LPAR has logical processors. For example, an
LPAR with 5 GCP’s worth of share has a greater entitlement than one with 2 GCP’s and 1 zIIP’s worth of share.
16
In previous IBM Z generations (before z13), these service operations were known as enhanced book availability
(EBA) and concurrent book repair (CBR).
available physical memory in the system. This memory includes memory that is available but
not purchased by the client (and therefore not in use by the system), and the planned
memory options, if installed.
Because of the memory allocation algorithm, systems that undergo many miscellaneous
equipment specification (MES) upgrades for memory can have different memory mixes and
quantities in all processor drawers of the system. If the memory fails, it is technically feasible
to run a POR of the system with the remaining working memory resources. After the POR
completes, the memory distribution across the processor drawers is different, as is the total
amount of available memory.
Each TLB entry represents one page. As with other buffers or caches, lines are discarded
from the TLB on a least recently used (LRU) basis.
The worst-case translation time occurs when a TLB miss occurs and the segment table
(which is needed to find the page table) and the page table (which is needed to find the entry
for the particular page in question) are not in cache. This case involves two complete real
memory access delays plus the address translation delay. The duration of a processor cycle
is much shorter than the duration of a memory cycle, so a TLB miss is relatively costly.
It is preferable to have addresses in the TLB. With 4 K pages, holding all of the addresses for
1 MB of storage takes 256 TLB lines. When 1 MB pages are used, it takes only one TLB line.
Therefore, large page size users have a much smaller TLB footprint.
Large pages allow the TLB to better represent a large working set and suffer fewer TLB
misses by allowing a single TLB entry to cover more address translations.
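The arithmetic behind that 256-to-1 reduction is simple, as the short calculation below shows. The 512-entry TLB used for the coverage figures is a hypothetical number chosen only to illustrate how page size multiplies the address range that a fixed number of entries can map.

```python
PAGE_4K = 4 * 1024
PAGE_1M = 1024 * 1024
PAGE_2G = 2 * 1024 * 1024 * 1024

working_set = 1 * 1024 * 1024            # 1 MB of storage
print(working_set // PAGE_4K)            # 256 TLB entries with 4 KB pages
print(working_set // PAGE_1M)            # 1 TLB entry with 1 MB pages

# Address space covered by a hypothetical 512-entry TLB:
entries = 512
for size, label in [(PAGE_4K, "4 KB"), (PAGE_1M, "1 MB"), (PAGE_2G, "2 GB")]:
    print(f"{label} pages cover {entries * size // (1024 * 1024)} MB")
```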
Users of large pages are better represented in the TLB and are expected to see performance
improvements in elapsed time and processor usage. These improvements are because DAT
and memory operations are part of processor busy time, even though the processor waits for
memory operations to complete without processing anything else in the meantime.
To overcome the processor usage that is associated with creating a 1 MB page, a process
must run for some time. It also must maintain frequent memory access to keep the pertinent
addresses in the TLB.
Short-running work does not overcome the processor usage. Short processes with small
working sets are expected to receive little or no improvement. Long-running work with high
memory-access frequency is the best candidate to benefit from large pages.
Long-running work with low memory-access frequency is less likely to maintain its entries in
the TLB. However, when it does run, few address translations are required to resolve all of the
memory it needs. Therefore, a long-running process can benefit even without frequent
memory access.
Weigh the benefits of whether something in this category must use large pages as a result of
the system-level costs of tying up real storage. A balance exists between the performance of
a process that uses large pages and the performance of the remaining work on the system.
On IBM z17 servers, 1 MB large pages become pageable if Virtual Flash Memory17 is
available and enabled. They are available only for 64-bit virtual private storage, such as virtual
memory that is above 2 GB.
It is easy to assume that increasing the TLB size is a feasible option to deal with TLB-miss
situations. However, this process is not as simple as it seems. As the size of the TLB
increases, so does the processor usage that is involved in managing the TLB’s contents.
Correct sizing of the TLB is subject to complex statistical modeling to find the optimal tradeoff
between size and performance.
Main storage can be accessed by all processors, but cannot be shared between LPARs. Any
system image (LPAR) must include a defined main storage size. This defined main storage is
allocated exclusively to the LPAR during partition activation.
The fixed size of the HSA eliminates planning for future expansion of the HSA because the
hardware configuration definition (HCD)/input/output configuration program (IOCP) always
reserves space for the following items:
Six channel subsystems (CSSs)
A total of 15 LPARs in each of CSSs 1 through 5, and 10 LPARs in the sixth CSS, for a total of 85
LPARs
Subchannel set 0 with 63.75-K devices in each CSS
Subchannel set 1 with 64-K devices in each CSS
Subchannel set 2 with 64-K devices in each CSS
Subchannel set 3 with 64-K devices in each CSS
The HSA features sufficient reserved space to allow for dynamic I/O reconfiguration changes
to the maximum capability of the processor.
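A rough count of what this fixed reservation represents can be derived directly from the preceding list. The following sketch is an approximation that assumes each listed subchannel set is reserved in every one of the six CSSs, with K taken as 1024.

K = 1024
ss0_devices = int(63.75 * K)        # subchannel set 0: 63.75-K devices per CSS
other_ss_devices = 3 * (64 * K)     # subchannel sets 1, 2, and 3: 64-K devices each per CSS
per_css = ss0_devices + other_ss_devices
total_reserved = 6 * per_css        # six channel subsystems
print(f"{per_css:,} subchannels per CSS, {total_reserved:,} reserved in total")
# 261,888 subchannels per CSS, 1,571,328 reserved in total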
17 Virtual Flash Memory replaced IBM zFlash Express. No carry forward of zFlash Express exists.
For IBM z17 ME1, IBM VFM provides up to 6.0 TB of virtual flash memory in 512 GB
increments. The minimum is 0, while the maximum is 12 features. The number of VFM
features ordered reduces the maximum orderable memory for the IBM z17.
3.7.1 Overview
Logical partitioning is a function that is implemented by PR/SM. IBM z17 servers can run in
LPAR mode, or in DPM mode. DPM provides a GUI for PR/SM to manage I/O resources
dynamically.
PR/SM is aware of the processor drawer structure on IBM z17 servers. LPARs have logical
resources that are allocated to them from various physical resources. From a systems
standpoint, LPARs have no control over these physical resources, but the PR/SM functions do
have this control.18 PR/SM manages and optimizes allocation and the dispatching of work on
the physical topology.
PR/SM’s job in modern systems is exceedingly complex. Its overall goal is to allocate
resources according to policy and in a way that delivers the expected capacity, while
optimizing the use of resources. It does this for up to 16 DCMs in four processor drawers, and
up to 85 diverse LPARs, with often conflicting constraints.
As described in 3.5.9, “Processor unit assignment” on page 115, PUs are initially assigned
during POR using algorithms to optimize cache usage. This step is the “physical” step, where
cores are characterized as CPs, zIIPs, IFLs, ICFs, and SAPs in the appropriate processor
drawers.
When an LPAR is activated, PR/SM builds logical processors and allocates memory for the
LPAR.
PR/SM attempts to assign home addresses for an LPAR’s logical processors to one CPC
drawer.
With HiperDispatch, PR/SM cooperates with the operating system to concentrate a unit of
work’s dispatching on the same logical processor, which is in turn concentrated on the same
physical processor. “Concentrated” is used because it cannot be guaranteed that a logical
processor is always dispatched on the same physical processor. In particular, logical
processors without a full processor’s weight can be dispatched on different physical
processors. Nor can it be guaranteed that a unit of work is always dispatched on the same
logical processor.
All processor types of an IBM z17 can be dynamically reassigned, except IFPs.
18
Starting with z16, information on all logical processor home addresses in the server is available to LPARs. This is
returned in the SYSIB control block (specifically SYSIB 15.1.2). Resource Measurement Facility (RMF) exposes
this information in the Logical Processor Data Section of SMF Record Type 70 Subtype 1 and Type 74 Subtype 4.
In z16 memory allocation changed from the previous IBM Z servers, and this change persists
with z17. Partition memory is now allocated based on processor drawer affinity. For more
information, see “Memory allocation” on page 117.
Logical processors are dispatched by PR/SM on physical processors. The assignment used
by PR/SM to dispatch logical processors on physical PUs is also based on cache usage
optimization.
Processor drawer assignment is most important because it optimizes virtual L4 cache
usage. Therefore, logical processors from a specific LPAR are packed into a processor
drawer as much as possible.
PR/SM optimizes chip assignments within the assigned processor drawer (or drawers) to
maximize virtual L3 cache efficiency. Logical processors from an LPAR are dispatched on
physical processors on the same PU chip as much as possible.
PR/SM also tries to redispatch a logical processor on the same physical processor to
optimize private cache (L1 and L2) usage.
HiperDispatch
PR/SM and z/OS work in tandem to use processor resources more efficiently. HiperDispatch
is a function that combines the dispatcher actions and the knowledge that PR/SM has about
the topology of the system.
The nested topology is returned to z/OS by the Store System Information (STSI) instruction.
HiperDispatch uses the information to concentrate logical processors around shared caches
(virtual L3 and virtual L4 caches at drawer level), and dynamically optimizes the assignment
of logical processors and units of work.
The z/OS dispatcher manages multiple queues, called affinity queues, with a small number
of processors per queue, which fits well onto a single PU chip. These queues are used to
assign work to as few logical processors as are needed for an LPAR workload. Therefore,
even if the LPAR is defined with many logical processors, HiperDispatch optimizes this
number of processors to be near the required capacity. The optimal number of processors to
be used is kept within a processor drawer boundary, when possible.
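The following toy model is not the actual z/OS dispatcher logic; the function name, the chip size of eight cores, and the capacity figures are illustrative assumptions. It only shows the basic idea of unparking as few logical processors as the workload needs and grouping them into chip-sized affinity queues.

import math

def plan_affinity_queues(required_capacity_cps, defined_logical_cps, cores_per_chip=8):
    # Concentrate work on the minimum number of logical processors and
    # split them into affinity queues that each fit on one PU chip.
    unparked = min(defined_logical_cps, math.ceil(required_capacity_cps))
    queues = [list(range(start, min(start + cores_per_chip, unparked)))
              for start in range(0, unparked, cores_per_chip)]
    return queues, defined_logical_cps - unparked   # affinity queues, parked count

# An LPAR defined with 20 logical CPs that needs about 9.5 CPs of capacity
# concentrates its work on 10 logical CPs in two affinity queues; 10 stay parked.
queues, parked = plan_affinity_queues(9.5, 20)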
Logical partitions
PR/SM enables IBM z17 ME1 systems to be initialized for a logically partitioned operation,
supporting up to 85 LPARs. Each LPAR can run its own operating system image in any image
mode, independently from the other LPARs.
An LPAR can be added, removed, activated, or deactivated at any time. Changing the number
of LPARs is not disruptive and does not require a POR. Certain facilities might not be
available to all operating systems because the facilities might have software corequisites.
Each LPAR has the following resources that are the same as a real CPC:
Processors
Called logical processors, they can be defined as CPs, IFLs, ICFs, or zIIPs. They can be
dedicated to an LPAR or shared among LPARs. When shared, a processor weight can be
defined to provide the required level of processor resources to an LPAR. Also, the capping
option can be turned on, which prevents an LPAR from acquiring more than its defined
weight and limits its processor consumption.
LPARs for z/OS can have CP and zIIP logical processors. The logical processor types can
be defined as all dedicated or all shared. The zIIP support is available in z/OS.
The weight and number of online logical processors of an LPAR can be dynamically
managed by the LPAR CPU Management function of the Intelligent Resource Director
(IRD). These functions can be used to achieve the defined goals of this specific partition
and of the overall system. The provisioning architecture of IBM z17 systems adds a
dimension to the dynamic management of LPARs, as described in Chapter 8, “System
upgrades” on page 353.
PR/SM supports an option to limit the amount of physical processor capacity that is used
by an individual LPAR when a PU is defined as a general-purpose processor (CP) or an
IFL that is shared across a set of LPARs.
This capability is designed to provide a physical capacity limit that is enforced as an
absolute (versus relative) limit. It is not affected by changes to the logical or physical
configuration of the system. This physical capacity limit can be specified in units of CPs or
IFLs. The Change LPAR Controls and Customize Activation Profiles tasks on the HMC
were enhanced to support this new function.
For the z/OS Workload License Charges (WLC) pricing metric and metrics that are based
on it, such as Advanced Workload License Charges (AWLC), an LPAR defined capacity
can be set. This defined capacity enables the soft capping function. Workload charging
introduces the capability to pay software license fees that are based on the processor
utilization of the LPAR on which the product is running, rather than on the total capacity of
the system.
Consider the following points:
– In support of WLC, the user can specify a defined capacity in millions of service units
(MSUs) per hour. The defined capacity sets the capacity of an individual LPAR when
soft capping is selected.
The defined capacity value is specified on the Options tab in the Customize Image
Profiles window.
– WLM keeps a four-hour rolling average of the processor usage of the LPAR. When the
four-hour average processor consumption exceeds the defined capacity limit, WLM
dynamically activates LPAR capping (soft capping). When the rolling four-hour average
returns below the defined capacity, the soft cap is removed.
For more information about WLM, see System Programmer's Guide to: Workload
Manager, SG24-6472.
For more information about software licensing, see 7.8, “Software licensing” on page 348.
Memory
Memory (main storage) must be dedicated to an LPAR. The defined storage must be
available during the LPAR activation; otherwise, the LPAR activation fails.
Reserved storage can be defined to an LPAR, which enables nondisruptive memory
addition to and removal from an LPAR by using the LPAR dynamic storage reconfiguration
(z/OS and z/VM). For more information, see 3.7.5, “LPAR dynamic storage
reconfiguration” on page 131.
Channels
Channels can be shared between LPARs by including the partition name in the partition
list of a channel-path identifier (CHPID). I/O configurations are defined by the IOCP or the
HCD. The CHPID Mapping Tool (CMT) is an optional tool that is used to map CHPIDs onto
physical channel IDs (PCHIDs). PCHIDs represent the physical location of a port on a card in
a PCIe I/O drawer.
IOCP is available on the z/OS, z/VM, and VSEn V6.3.1 (21st Century Software) operating
systems, and as a stand-alone program on the hardware console. For more information,
see Input/Output Configuration Program User’s Guide for ICP IOCP, SB10-7177. HCD is
available on the z/OS and z/VM operating systems. Review the suitable 9175DEVICE
Preventive Service Planning (PSP) buckets before implementation.
Fibre Channel connection (FICON) channels can be managed by the Dynamic CHPID
Management (DCM) function of the Intelligent Resource Director. DCM enables the
system to respond to ever-changing channel requirements by moving channels from
heavily used control units to lesser-used control units, as needed.
Modes of operation
The modes of operation are listed in Table 3-5. All available mode combinations, including
their operating modes and processor types, operating systems, and addressing modes, also
are listed. Only the currently supported versions of operating systems are considered.
The 64-bit z/Architecture mode has no special operating mode because the architecture
mode is not an attribute of the definable image’s operating mode. The 64-bit operating
systems are in 31-bit mode at IPL and change to 64-bit mode during their initialization. The
operating system is responsible for taking advantage of the addressing capabilities that are
provided by the architectural mode.
For information about operating system support, see Chapter 7, “Operating systems support”
on page 261.
zIIP usage: zIIPs can be defined to General mode or z/VM mode image, as listed in
Table 3-5 on page 124. However, zIIPs are used by z/OS only. Other operating
systems cannot use zIIPs, even if they are defined to the LPAR. z/VM V7R1 and
later support real and virtual zIIPs to guest z/OS systems.
General mode also is used to run the z/TPF operating system on dedicated or shared CPs
CF mode, by loading the CFCC code into the LPAR that is defined as one of the following
types:
– Shared CPs
– Dedicated or shared ICFs
Linux only mode to run the following systems:
– A Linux on IBM Z operating system, on either of the following types:
• Dedicated or shared IFLs
• Dedicated or shared CPs
– A z/VM operating system, on either of the following types:
• Dedicated or shared IFLs
• Dedicated or shared CPs
z/VM mode to run z/VM on dedicated or shared CPs or IFLs, plus zIIPs and ICFs
IBM SSC mode LPAR can run on dedicated or shared:
– CPs
– IFLs
All LPAR modes, required characterized PUs, operating systems, and the PU
characterizations that can be configured to an LPAR image are listed in Table 3-6. The
available combinations of dedicated (DED) and shared (SHR) processors are also included.
For all combinations, an LPAR also can include reserved processors that are defined, which
allows for nondisruptive LPAR upgrades.
LPAR mode | PU type | Operating systems | PUs
Linux only | IFLs or CPs | Linux on IBM Z, z/VM | IFLs DED or IFLs SHR, or CPs DED or CPs SHR
z/VM | CPs, IFLs, zIIPs, or ICFs | z/VM (V7R1 and later) | All PUs must be SHR or DED
The extra channel subsystem and multiple image facility (MIF) image ID pairs (CSSID/MIFID)
can be later assigned to an LPAR for use (or later removed). This process can be done
through dynamic I/O commands by using the HCD. At the same time, required channels must
be defined for the new LPAR.
Partition profile: Cryptographic coprocessors are not tied to partition numbers or MIF IDs.
They are set up with Adjunct Processor (AP) numbers and domain indexes. These
numbers are assigned to a partition profile of a specific name. The client assigns these AP
numbers and domains to the partitions and continues to have the responsibility to clear
them out when their profiles change.
Logical processors also can be concurrently added to a logical partition dynamically by using
the Support Element (SE) “Logical Processor Add” function under the CPC Operational
Customization task. This SE function allows the initial and reserved processor values to be
dynamically changed. The operating system must support the dynamic addition19 of these
resources.
For more information, see 3.5.9, “Processor unit assignment” on page 115.
Swapping of specialty engines and general processors with each other, with spare PUs, or
with both, can occur as the system attempts to compact LPAR configurations into physical
configurations that span the least number of processor drawers.
LPAR dynamic PU reassignment can swap client processors of different types between
processor drawers. For example, reassignment can swap an IFL on processor drawer 1 with a
CP on processor drawer 2. Swaps can also occur between PU chips within a processor
drawer or a node and can include spare PUs. The goals are to pack the LPAR on fewer
processor drawers and also on fewer PU chips, based on the IBM z17 processor drawers’
topology. The effect of this process is evident in dedicated and shared LPARs that use
HiperDispatch.
PR/SM and WLM work together to enforce the capacity that is defined for the group and the
capacity that is optionally defined for each individual LPAR.
19
In z/OS, this support is available since Version 1 Release 10 (z/OS V1R10), while z/VM supports this addition
since z/VM V5R4, and z/VSE® since V4R3. However, IBM z17 supports z/OS V2R4 and later, VSEn V6.3.1 (21st
Century Software) and z/VM V7R3 and later.
Unlike traditional LPAR capping, absolute capping is designed to provide a physical capacity
limit that is enforced as an absolute (versus relative) value that is not affected by changes to
the virtual or physical configuration of the system.
Absolute capping provides an optional maximum capacity setting for logical partitions that is
specified as an absolute processor capacity (for example, 5.00 CPs or 2.75 IFLs). This
setting is specified independently by processor type (namely CPs, zIIPs, and IFLs) and
provides an enforceable upper limit on the amount of the specified processor type that can be
used in a partition.
Absolute capping is ideal for processor types and operating systems that the z/OS WLM
cannot control. Absolute capping is not intended as a replacement for defined capacity or
group capacity for z/OS, which are managed by WLM.
Absolute capping can be used with any z/OS, z/VM, or Linux on IBM Z LPAR (that is running
on an IBM Z server). If specified for a z/OS LPAR, absolute capping can be used concurrently
with defined capacity or group capacity management for z/OS. When used concurrently, the
absolute capacity limit becomes effective before other capping controls.
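The interplay between the two controls can be sketched as follows. This is a conceptual illustration only, not WLM or PR/SM logic: the five-minute sampling interval, the MSU figures, and the use of MSUs for the absolute cap (which is really specified in processor units) are simplifying assumptions.

from collections import deque

SAMPLES_PER_4H = 48   # assumed 5-minute samples over a four-hour window

def capping_state(msu_samples, defined_capacity_msu, absolute_cap_msu):
    window = deque(maxlen=SAMPLES_PER_4H)
    for msu in msu_samples:
        msu = min(msu, absolute_cap_msu)                  # absolute cap clips consumption first
        window.append(msu)
        rolling_avg = sum(window) / len(window)
        soft_capped = rolling_avg > defined_capacity_msu  # soft cap follows the 4-hour average
        yield msu, rolling_avg, soft_capped

# A partition defined at 100 MSUs with an absolute ceiling of 180 MSUs that bursts to 150:
# the soft cap engages only after the rolling average exceeds 100 and releases when it falls back.
samples = [80] * 12 + [150] * 48 + [70] * 48
history = list(capping_state(samples, defined_capacity_msu=100, absolute_cap_msu=180))

The sketch simply reflects the ordering that is described above: the absolute limit acts on instantaneous consumption, while the defined capacity acts on the four-hour rolling average.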
The implementation provides built-in integrated capabilities that allow advanced virtualization
management on IBM Z servers. With DPM, you can use your Linux and virtualization skills
while taking advantage of the full value of IBM Z hardware, robustness, and security in a
workload optimized environment.
DPM provides facilities to define and run virtualized computing systems by using a
firmware-managed environment that coordinates the physical system resources that are
shared by the partitions. The partitions’ resources include processors, memory, network,
storage, crypto, and accelerators.
DPM provides a new mode of operation for IBM Z servers that provides the following services:
Facilitates defining, configuring, and operating PR/SM LPARs in a similar way to how
these tasks are performed on another platform.
Lays the foundation for a general IBM Z new user experience.
DPM is not another hypervisor for IBM Z servers. DPM uses the PR/SM hypervisor
infrastructure and provides an intelligent interface on top of it that allows customers to define,
use, and operate the platform virtualization without IBM Z experience or skills.
For more information about operating system main storage support, see the PR/SM Planning
Guide, SB10-7178.
For more information, see 3.7.5, “LPAR dynamic storage reconfiguration” on page 131.
Operating systems that run as guests of z/VM can use the z/VM capability of implementing
virtual memory to guest virtual machines. The z/VM dedicated real storage can be shared
between guest operating systems.
Important: The memory allocation and usage depends on the operating system
architecture and tested (documented for each operating system) limits.
While the maximum supported memory per LPAR for IBM z17 is 32 TB, each operating
system has its own support specifications.
For more information about general guidelines, see the PR/SM Planning Guide,
SB10-7178, which is available at the IBM Resource Link website (log in required).
Only Linux and z/VM operating systems can run in Linux only mode. Linux on IBM Z 64-bit
distributions, such as SUSE SLES 16.1 (Post GA) and SUSE SLES 15.6 (GA),
use 64-bit addressing and operate in z/Architecture mode. z/VM also uses 64-bit addressing
and operates in z/Architecture mode.
Note: For information about the (kernel) supported amount of memory, check the Linux
Distribution specific documentation.
z/VM
In z/VM mode, specific types of processor units can be defined within one LPAR. This
feature increases flexibility and simplifies systems management by allowing z/VM to run
the following tasks in the same z/VM LPAR:
– Manage guests to operate Linux on IBM Z on IFLs
– Operate z/VSE (or VSEn V6.3.1 - 21st Century Software) and z/OS on CPs
– Offload z/OS system software processor usage, such as Db2 workloads on zIIPs
– Provide an economical Java execution environment under z/OS on zIIPs
IBM SSC
In IBM SSC mode, storage addressing is 64-bit for an embedded product. The amount of
usable main storage by the appliance code that is deployed in the SSC LPAR is
documented by the appliance code supplier.
Without the reserved storage definition, an LPAR storage upgrade is a disruptive process that
requires the following steps:
1. Partition deactivation.
2. An initial storage size definition change.
3. Partition activation.
The extra storage capacity for an LPAR upgrade can come from the following sources:
Any unused available storage
Another partition that features released storage
A memory upgrade
A concurrent LPAR storage upgrade uses DSR. z/OS uses the reconfigurable storage unit
(RSU) definition to add or remove storage units in a nondisruptive way.
z/VM V7R3 and later releases support the dynamic addition of memory to a running LPAR by
using reserved storage. It also virtualizes this support to its guests.
Removing memory from a z/VM guest is not disruptive to the z/VM LPAR.
z/VM V7R2 and later also support Dynamic Memory Downgrade (DMD), which allows the
removal of up to 50% of the real storage from a running z/VM system.
LPAR storage granularity information is required for LPAR image setup and for z/OS RSU
definition. On IBM z17 ME1, LPARs are limited to a maximum of 16 TB of main storage.
However, the maximum amount of memory that is supported by z/OS V2R4 is 4 TB. z/OS
V2R5 and later supports up to 16 TB. For z/VM V7R3 and later, the limit is 4 TB.
With dynamic storage reconfiguration, the unused storage does not have to be continuous.
When an operating system that is running on an LPAR assigns a storage increment to its
configuration, PR/SM determines whether any free storage increments are available. PR/SM
then dynamically brings the storage online.
PR/SM dynamically takes offline a storage increment and makes it available to other
partitions when an operating system running on an LPAR releases a storage increment.
20 When defining an LPAR on the HMC, the 2 G boundary must be followed in PR/SM.
An LPAR cluster is shown in Figure 3-23. It contains three z/OS images and one Linux image
that is managed by the cluster. Included as part of the entire Parallel Sysplex is another z/OS
image and a CF image. In this example, the scope over which IRD has control is the defined
LPAR cluster.
For more information about implementing LPAR processor management under IRD, see z/OS
Intelligent Resource Director, SG24-5952.
Figure 3-24 shows an IBM z17 model ME1 system that contains multiple z/OS sysplex
partitions and an internal CF, an IBM z15 model T02 system that contains a stand-alone CF,
and an IBM z16 model A01 system that contains multiple z/OS sysplex partitions.
STP over coupling links provides time synchronization to all systems. Selecting the suitable
CF link technology, Coupling Express3 Long Reach (CE3 LR) or Integrated Coupling Adapter
Short Reach (ICA SR2.0 and SR1.1), depends on the system configuration and how physically
distant the systems are.
For more information about link technologies, see “Coupling links” on page 199.
Parallel Sysplex is an enabling technology that allows highly reliable, redundant, and robust
IBM Z technology to achieve near-continuous availability. A Parallel Sysplex consists of one or
more (z/OS) operating system images that are coupled through one or more Coupling Facility
LPARs.
A correctly configured Parallel Sysplex cluster maximizes availability in the following ways:
Continuous availability: Changes can be introduced, such as software upgrades, one
image at a time, while the remaining images continue to process work. For more
information, see Parallel Sysplex Application Considerations, SG24-6523.
High capacity: 1 - 32 z/OS images in a Parallel Sysplex that is operating as a single
system.
Dynamic workload balancing: Because the Parallel Sysplex is viewed as a single logical resource, work can
be directed to any operating system image in a Parallel Sysplex cluster that has available
capacity.
Systems management: The architecture defines the infrastructure to satisfy client
requirements for continuous availability. It also provides techniques for achieving simplified
systems management consistent with this requirement.
Resource sharing: Several base z/OS components use CF shared storage. This
configuration enables sharing of physical resources with significant improvements in cost,
performance, and simplified systems management.
Single logical system: The collection of system images in the Parallel Sysplex is displayed
as a single entity to the operator, user, and database administrator. A single system view
means reduced complexity from operational and definition perspectives.
N-2 support: Multiple hardware generations (normally three) are supported in the same
Parallel Sysplex. This configuration provides for a gradual evolution of the systems in the
Parallel Sysplex without changing all of them simultaneously. Software support for multiple
releases or versions also is supported.
Note: Parallel sysplex coupling and timing links connectivity for IBM z17 (M/T 9175) is
supported to N-2 generation CPCs (IBM z17, IBM z16, and IBM z15).
Through state-of-the-art cluster technology, the power of multiple images can be harnessed
to work in concert on common workloads. The IBM Z Parallel Sysplex cluster takes the
commercial strengths of the platform to improved levels of system management, competitive
price performance, scalable growth, and continuous availability.
Consideration: IBM z17, IBM z16, and IBM z15 servers cannot coexist in the same
sysplex with IBM z14 or earlier generation systems.
CFCC Level 26
CFCC level 26, which is delivered on the IBM z17 (M/T 9175) with Driver 61, adds the following
features and capabilities:
Increased numbers of logical CHPIDs per coupling link port provide improved
configuration flexibility to make better use of your physical coupling link resources, and
allow the potential for improved utilization of coupling link capacity and throughput (a
small worked example follows this list)
– Improve physical coupling link virtualization capabilities from 4 to 8 CHPIDs per
physical port, for increased capacity and connectivity capabilities within either the
same sysplex or across different sysplexes – more logical connections per physical
link, more configuration flexibility, and potential for higher throughput on the physical
link
• IBM z15/z16 = 4 CHPIDs / port for CS5 and CL5
• IBM z17 = 8 CHPIDs / port for CS5, CL5, and CL6
Improved physical connectivity limits for long-reach coupling links, with more possible
adapters/ports per CEC
– Improve physical long-reach coupling connectivity for cross-site GDPS configurations
– IBM z15/z16 = 32 adapters, 64 ports
– IBM z17 = 64 adapters, 128 ports
– Short-reach connectivity limits remain unchanged at 48 adapters, 96 ports
Increased number of coupling CHPIDs per CEC enables the use of more CHPIDs per
coupling link port, and more long-reach coupling links
– Provide more coupling CHPIDs per CEC to allow these increased numbers of physical
links, and logical CHPIDs per link, to be used effectively
– IBM z15/z16 = 384 CHPIDs
– IBM z17 = 384 coupling CHPIDs (of all types) per CEC
Increased the number of ICP buffers per CHPID
– IBM z17 will always use 8 buffers
– Improve capacity/throughput for internal coupling channels
– HCD/IOCP update to allow 7 or 8 subchannels/devices per ICP CHPID (default to 8)
– ICP link type - (Max 64 ICP CHPIDs per CEC)
New higher-bandwidth 25Gb long-reach coupling links allow for more capacity/throughput
for long-distance coupling link connectivity
– Coupling Express3 CX6-LX with 10Gb/25Gb optics for long-reach coupling
• new CL6 link type for 25Gb, CL5 link type for 10Gb
On IBM z17, the ICA SR adapter hardware will change from ICA SR 1.1 to a new
ICA SR 2.0 adapter
– ICA SR 2.0 for short-reach coupling
ICA SR 2.0 adapter short-reach coupling links remain link type CS5 and are fully
compatible with ICA SR 1.1 coupling links on IBM z15 and IBM z16
Non-disruptive system-managed copy process for lock structures allows lock structures to
be system-managed duplexed, re-duplexed, or reconfigured via a system-managed
rebuild, more quickly and in a way that is not disruptive to the customer’s ongoing
datasharing workload
Simplification of coupling facility configuration options via deprecation of support for
Dedicated GPs and Virtual Flash Memory for CF images
APAR OA65190 (IOCP), OA64114 (HCD) – z/OS 2.4 and higher required
Note: In IBM z17, DYNDISP=THIN option is the only available behavior for
shared-engine CF dispatching.
For additional information, refer to Considerations for Coupling Facility levels.
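As a small worked example that uses only the limits quoted in the preceding list, the following calculation shows that a fully populated long-reach configuration could nominally define more logical CHPIDs than the per-CEC coupling CHPID limit, so the CEC-wide limit remains the binding constraint (the variable names are illustrative).

lr_adapters, ports_per_adapter = 64, 2     # IBM z17 long-reach coupling limits quoted above
chpids_per_port = 8                        # CS5/CL5/CL6 virtualization on IBM z17
cec_coupling_chpid_limit = 384             # coupling CHPIDs of all types per CEC

lr_ports = lr_adapters * ports_per_adapter                  # 128 ports
nominal_chpids = lr_ports * chpids_per_port                 # 1,024 if every port used all 8 CHPIDs
fully_virtualized_ports = cec_coupling_chpid_limit // chpids_per_port   # 48 ports
print(lr_ports, nominal_chpids, fully_virtualized_ports)

In other words, roughly 48 long-reach ports could carry the full 8 CHPIDs each before the CEC-wide coupling CHPID limit is reached; real configurations spread CHPIDs according to their connectivity needs.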
CFCC Level 25
CFCC level 25, which is delivered on the IBM z16 (M/T 3931 and M/T 3932) with driver level 51,
adds the following features:
New cache residency time metrics for directory/data entries are available.
These new metrics allow exploiters (such as Db2) to provide direct, useful feedback on the
CF cache structure “cache effectiveness”. They also provide improved recommendations
for making structure sizing changes or retargeting work from specific table spaces or data
sets to other cache structures.
Consider the following points:
– The metrics show how long data entries or directory entries remain resident in the
cache structure from the time they are created until the time they are eventually
“reclaimed” out of existence.
– They provide moving weighted average directory entry and data area residency times,
in microseconds (a sketch of one possible weighting scheme follows this list).
– They allow monitoring of effects of cache-unfriendly batch processes, such as image
copy, reorganization, and update-intensive workloads.
– Reclaims from all causes are included: reclaims that are caused by the creation of
directory entries or data areas, by “structure alter” contractions or reapportionments,
by incidental reclaims of data areas that are caused by the reclaim of a directory entry,
and so on.
– Residency times are accounted for only at the time of reclaim (not while the cache
objects are still in use).
– Specific deletions of these objects do not factor into the residency time metrics.
– These metrics were implemented as new fields within the CF Storage Class controls:
They are retrieved by using the IXLCACHE READ_STGSTATS command or
IXLMG/IXLYAMDA services that are requesting CF Storage Class controls.
They are also available in CF structure memory dumps that capture CF Storage Class
controls.
– The metrics are included in Db2 Performance Manager statistics and used for
improved cache structure management (cache sizing, castout processing, reclaim
management, and so on). The inclusion of these metrics in sysplex SMF/RMF data is
not planned, but can be added later. APAR OA60650 is required on z/OS V2R4, and
V2R5 to support the new metrics.
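The document does not describe the exact weighting that the CF uses for these moving weighted averages. Purely as an illustration of one common weighting scheme, the following sketch maintains an exponentially weighted moving average of residency time, updated whenever an entry is reclaimed; the 1/8 weight is an arbitrary assumption, not the CF's.

def update_residency_average(current_avg_us, observed_residency_us, weight=0.125):
    # New average = (1 - weight) * old average + weight * latest observation.
    if current_avg_us is None:            # first reclaim seeds the average
        return float(observed_residency_us)
    return (1 - weight) * current_avg_us + weight * observed_residency_us

avg_us = None
for residency_us in (5_000, 7_500, 2_000, 10_000):   # residency observed at reclaim time
    avg_us = update_residency_average(avg_us, residency_us)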
New cache retry buffer support was added for IFCC retry idempotency:
Initially, the CF cache structure architecture was defined to be idempotent (commands can
be tried again after link glitches, such as IFCCs); therefore, no specific accommodations
were available for retrying, such as retry buffer support.
However, the list structure architecture was always recognized as nonidempotent, and a
rather sophisticated retry buffer mechanism was incorporated to allow z/OS to retrieve the
results of commands (even after link glitches occurred) so that such glitches always were
well recovered.
Over time, constructs were added to the cache and lock structure architecture that made
them become not perfectly retriable (nonidempotent), but retry buffers were not added to
the architecture to mitigate the lack of retriability:
– Cache structure serialization objects, such as castout locks and cache entry version
numbers
– Performance-optimized lock structure commands with no retry buffer support
z/OS software provided simple retry logic to provide IFCC recovery for these
nonidempotent commands, but inevitably cases existed in which z/OS could not provide
unambiguous command results to callers. Users might not handle this ambiguity well.
CF cache users who use these nonidempotent constructs experienced occasional
problems as a result. The only approach that cleanly and completely addresses
the issue is to provide retry buffers for the small subset of cache and lock commands that
manipulate objects in a nonidempotent way, along with the accompanying transparent
z/OS retry buffer use. z/OS transparently provides all the required recovery support and
no user participation is needed.
Down-level systems can continue to use the “old” software retry support until they are
upgraded, while up-level systems that use the same CF structure can take full advantage
of the new retry buffers for improved IFCC recovery. APAR OA60275 is required on z/OS
V2R4, and V2R5.
New lock record data reserved entries for structure full recovery.
Some lock structure users use “special” lock structure locks to serialize their own
processing, such as management of open data sets and table space interest across the
sysplex. Not all locks have anything to do with serialization of database updates or user
database data or transactions.
When lock structures use up all of the modify lock “record data entries” that track held
locks, users might need to perform special back-out or recovery processing to recover
from this structure full condition. At times, that processing requires them to obtain more
“special” lock structure locks, which are needed to perform the recovery that can lead to a
paradoxical situation: They must use more “record data entries” to recover from being out
of record data entries.
CFCC level 25 on IBM z16 provided improved support for handling lock structure
“record data full” conditions by:
– Thresholding record data structure full conditions to occur when less than 100% full,
reserving a special “for emergency use only” pool of record data entries for critical
recovery purposes (user-specified threshold)
– Providing new APIs that allow exploiters to make use of this new reserved pool only
when needed for recovery actions, but not for normal database locking purposes
z/OS APAR OA60650 and VSAM RLS APAR OA62059 are required on z/OS V2R4 and V2R5.
DYNDISP=ON|OFF is deprecated on IBM z16, keeping only THIN option for
shared-engine CF images, (also valid for IBM z17).
Coupling Facility images can run with shared or dedicated processors. Dedicated
processors are recommended for best performance and production use (continuous
polling model). Shared processors are recommended for test and development use in
which a CF image requires significantly less than one processor’s worth of capacity, which
encourages sharing CF processors across multiple CF images, or for less-performance
critical production usage.
In shared-processor mode, the CF can currently use several different Dynamic
Dispatching (DYNDISP) models:
– DYNDISP=OFF
LPAR time-slicing completely controls the CF processor; the processor polls the entire
time that it is dispatched to a CF image by LPAR. The CF image never voluntarily gives
up control of the shared processor. This option provides the least efficient sharing, and
worst shared-engine CF performance.
– DYNDISP=ON
An optimization over pure LPAR time-slicing; the CF image sets timer interrupts to give
the LPAR the initiative to redispatch it, and the CF image voluntarily gives up control of
the shared processor. This option provides more efficient sharing and better
shared-engine CF performance than DYNDISP=OFF.
– DYNDISP=THIN
An interrupt-driven model in which the CF processor is dispatched in response to a set
of events that generate Thin Interrupts, runs until it runs out of work and then, gives up
control voluntarily. This option provides the most efficient sharing, and best overall
shared-engine CF performance.
DYNDISP=THIN support to use Thin Interrupts was available since zEC12, and proved
to be efficient and well-performing in several shared-engine coupling facility
configurations. IBM z15 made DYNDISP=THIN the default mode of operation for
shared-engine coupling facility images, but supported the other options OFF and ON
for continued “legacy” use.
Note: In IBM z16, DYNDISP=THIN option is the only available behavior for
shared-engine CF dispatching.
Specifying OFF or ON in CF commands and the CF configuration file are preserved for
compatibility; however, a warning message is issued to indicate that these options are no
longer supported and that DYNDISP=THIN behavior is to be used.
CFCC Level 24
CFCC level 24 is delivered on the IBM z15 (M/T 8561 and 8562) with driver level 41. CFCC
level 24 adds the following features:
CFCC Fair Latch Manager
This feature is an enhancement to the internals of the Coupling Facility (CFCC) dispatcher
to provide CF work management efficiency and processor scalability improvements, and
improve the “fairness” of arbitration for internal CF resource latches across tasks.
The tasks that are waiting for CF latches are not placed on the global suspend queue at
all; instead, they are placed on latch-specific waiter queues for the exact instance of the
latch they are requesting, and in the exact order in which they requested the latch. As a
result, the global suspend queue is much less heavily used, and thus is much less a
source of global contention or cache misses in the CF.
Also, when a latch is released, the specific latch’s latch waiter queue is used to transfer
ownership of the latch directly to the next request in line (or multiple requests, in the case
of a shared latch), and make that task (or tasks) ready to run, with the transferred latch
already held. No possibility exists of any unfairness or “cutters” in line between the time
that the latch is released and the time that it is obtained again.
To manage latches correctly for structures that use System-Managed (SM) synchronous
duplexing, it is now important for the CF to understand which of the duplexed pair of
requests operates as the “master” versus the “slave” from a latching perspective, which
requires more SM duplexing setup information from z/OS.
z/OS XCF/XES toleration APAR support is required to provide this enhancement.
Message Path SYID Resiliency Enhancement
When a z/OS system IPLs, message paths are supposed to be deactivated by using
system reset, and their SYIDs are supposed to be cleared in the process. During the IPL,
z/OS then reactivates the message paths with a new SYID that represents the new
instance of z/OS that uses the paths.
On rare occasions, a message path might not be deactivated during system reset or IPL
processing, which leaves the message path active with the z/OS image’s OLD,
now-obsolete SYID. Because the path erroneously remained active, z/OS does not see
any need to reactivate it with a new, correct SYID.
From the perspective of the CF, the incorrect SYID persists and prevents delivery of
signals to the z/OS image that uses that message path.
With IBM z15, CFCC provides a new resiliency mechanism that transparently recovers for
this “missing” message path deactivate (if and when that situation ever occurs).
The CF provides more information to z/OS about every message path that appears active;
namely, the current SYID with which the message path is registered in the CF. Whenever
z/OS interrogates the state of the message paths to the CF, z/OS checks this SYID
information for currency and correctness. If an obsolete or incorrect SYID exists in the
message path for any reason, z/OS performs the following steps:
To support an upgrade from one CFCC level to the next, different levels of CFCC can be run
concurrently while the CF LPARs are running on different servers. CF LPARs that run on the
same server share the CFCC level.
IBM z17 servers (CFCC level 26) can coexist in a sysplex with CFCC levels 25 and 24.
Nevertheless, the latest Coupling Facility Control Code and MCL levels are always
recommended for best performance and availability:
On IBM z16 (MT 3931 and 3932): CFCC 25 - Service Level 02.51.2, Bundle S18 released
April 2023.
On IBM z15 (M/T 8561 and 8562): CFCC 24 - Service level 00.22, Bundle S48 released in
August 2021.
For a CF LPAR with dedicated processors, the CFCC is implemented by using the active wait
technique. This technique means that the CFCC is always running (processing or searching
for service) and never enters a wait state. Therefore, the CF Control Code uses all the
processor capacity that is available for the CF LPAR.
If the LPAR that is running the CFCC includes only dedicated processors (CPs or ICFs), the
use of all processor capacity (cycles) is not an issue. However, this configuration can be an
issue if the LPAR that is running the CFCC includes shared processors. On IBM z17, Thin
Interrupts is the only valid option for shared engines in a CF LPAR (Thin Interrupts is also the
only valid option on the IBM z16).
CF structure sizing changes are expected when moving to CFCC Level 2x. Always review the
CF structure size by using the CFSizer tool when changing CFCC levels.
For more information about the recommended CFCC levels, see the current exception letter
that is published on IBM Resource Link®.
The interrupt causes a shared logical processor CF partition to be dispatched by PR/SM (if it
is not already dispatched), which allows the request or signal to be processed in a more
timely manner. The CF relinquishes control when work is exhausted or when PR/SM takes
the physical processor away from the logical processor.
On IBM z17, the use of Coupling Thin Interrupts (DYNDISP=THIN) is now the only option that
is available for shared engines in a CF LPAR. Specification of OFF or ON in CF commands
and the CF configuration file will be preserved, for compatibility, but a warning message will
be issued to indicate that these options are no longer supported, and that DYNDISP=THIN
behavior will be used.
With IBM z17 and IBM z16, DYNDISP=THIN is the only mode of operation for CF images that
use shared processors.
This capability allows ICF engines to be shared by several CF images. In this environment, it
provides faster and far more consistent CF service times. It can also provide performance that
is reasonably close to dedicated-engine CF performance.
The use of Thin Interrupts allows a CF to run by using a shared processor while maintaining
good performance. The shared engine is allowed to be undispatched when no more work
exists, as in the past. A Thin Interrupt causes the shared processor to be dispatched when a
command or duplexing signal is presented to the shared engine.
This function saves processor cycles and is an excellent option to be used by a production
backup CF or a testing environment CF. This function is activated by default when a CF
processor is shared.
The CPs can run z/OS operating system images and CF images. For software charging
reasons, generally use only ICF processors to run CF images.
The “storage class memory” that is provided by Flash Express adapters is replaced with
memory that is allocated from main memory (VFM).
VFM helps improve availability and handling of paging workload spikes when running z/OS.
With this support, z/OS is designed to help improve system availability and responsiveness
by using VFM across transitional workload events, such as market openings and diagnostic
data collection.
z/OS also helps improve processor performance by supporting middleware use of pageable
large (1 MB) pages, and eliminates delays that can occur when collecting diagnostic data
during failures.
VFM also can be used in CF images to provide extended capacity and availability for
workloads that use IBM WebSphere MQ Shared Queues structures.
Ordered VFM memory reduces the maximum orderable memory for the model.
VFM provides the physical memory DIMMs that are needed to support activation of all
customer-purchased memory and the HSA on a multiple-drawer IBM z17 ME1 with one drawer
out of service, which can occur during the following operations:
Scheduled concurrent drawer upgrade, such as memory adds
Scheduled concurrent drawer maintenance, such as N+1 repair
Concurrent repair of an out-of-service CPC drawer “fenced” during Activation (POR)
Note: All of these operations can be done without VFM. However, without VFM, not all
customer-purchased memory is available for use in most cases. Some work might need to be
shut down or not restarted.
The information is relocated during CDR in a manner that is identical to the process that was
used for expanded storage. VFM is much simpler to manage (HMC task) and no hardware
repair and verify (no cables and no adapters) are needed. Also, because this feature is part of
internal memory, VFM is protected by RAIM and ECC and can provide better performance
because no I/O to an attached adapter occurs.
Note: Use cases for Flash did not change (for example, z/OS paging and CF shared queue
overflow). Instead, they transparently benefit from the changes in the hardware
implementation.
No separately orderable option is available for VFM plan ahead; VFM plan ahead is always
included when the Flexible Memory option is selected.
system administrators that use their privileged rights for unauthorized access and many
others).
The IBM Secure Service Container (SSC) is a container technology through which you can
more quickly and securely deploy software appliances on IBM z17.
An IBM SSC partition is a specialized container for installing and running specific appliances.
An appliance is an integration of operating system, middleware, and software components
that work autonomously and provide core services and infrastructures that focus on usability
and security.
IBM SSC hosts most sensitive client workloads and applications. It acts as a highly protected
and secured digital vault, enforcing security by encrypting the entire stack: memory, network,
and data (both in-flight and at-rest). Applications that are running inside IBM SSC are isolated
and protected from outsider and insider threats.
IBM SSC combines hardware, software, and middleware and is unique to IBM Z platform.
Though it is called a container, it should not be confused with purely software open source
containers (such as Kubernetes or Docker).
IBM SSC is a part of the Pervasive Encryption concept that was introduced with IBM z14,
which is aimed at delivering best IBM Security® hardware and software enhancements,
services, and practices for 360-degree infrastructure protection.
– Horizontal workload isolation: Separation from the rest of the host environment.
IBM z17 technology provides built-in data encryption with excellent vertical scalability and
performance that protects against data breach threats and data manipulation by privileged
users. IBM SSC is a powerful IBM technology for providing the extra protection of the most
sensitive workloads.
The following IBM solutions and offerings, and more to come, can be deployed in an IBM SSC
environment:
IBM Hyper Protect Virtual Servers (HPVS) solution is available for running Linux-based
virtual servers with sensitive data and applications delivering a confidential computing
environment to address your top security concerns.
For more information, see this IBM Cloud® web page.
IBM Db2 Analytics Accelerator (IDAA) is a high-performance component that is tightly
integrated with Db2 for z/OS. It delivers high-speed processing for complex Db2 queries to
support business-critical reporting and analytic workloads. The accelerator transforms the
mainframe into a hybrid transaction and analytic processing (HTAP) environment.
For more information, see this IBM web page.
IBM Cloud Hyper Protect Database as a Service (DBaaS) for PostgreSQL or MongoDB
offers enterprise cloud database environments with high availability for sensitive data
workloads.
For more information, see this IBM Cloud web page.
IBM Cloud Hyper Protect Crypto Services is a key management service and cloud
hardware security module (HSM) that supports industry standards such as PKCS #11.
For more information, see this IBM Cloud web page.
IBM Security Guardium® Data Encryption (GDE) consists of a unified suite of products
that are built on a common infrastructure. These highly scalable solutions provide data
encryption, tokenization, data masking, and key management capabilities to help protect
and control access to data across the hybrid multicloud environment.
For more information, see this web page.
IBM Blockchain® platform can be deployed on an IBM z16 by using IBM SSC to host the
IBM Blockchain network.
For more information, see this web page.
IBM Hyper Protect Data Controller (formerly IBM Data Privacy Passports) provided
end-to-end, data-centric encryption and privacy to keep your data protected no matter
where it travels in your enterprise. It maintains the suitable use of data, revokes future
access at any time, and keeps an audit trail, which permits only authorized users to extract
value from your data.
Note: IBM Hyper Protect Data Controller 1.2.x was withdrawn from support as of
August 31, 2023.
Notes:
IBM z17 systems support PCIe+ I/O drawers only. I/O cage, I/O drawer, and PCIe I/O
drawer are not supported.
PCIe+ is an enhanced version of PCIe architecture which provides higher data rates,
improved power efficiency, improved error correction, and improved virtualization
capabilities. Throughout this chapter, the terms adapter and card refer to a PCIe I/O
feature that is installed in a PCIe+ I/O drawer.
The PCIe I/O infrastructure in IBM z17 ME1 consists of the following components:
PCIe+ Gen5 dual port fan-outs that support 32 GBps I/O bus for CPC drawer connectivity
to the PCIe+ I/O drawers. The I/O PCIe hubs support Gen5 x16 into the fanout and will
drive Gen4 x16 (Bifurcated to 2 at Gen4 x8) out of the fanout to the Gen4 in the PCIe+ I/O
drawers. It connects to the PCIe Interconnect.
Integrated Coupling Adapter Short Reach (ICA SR2.0) fan-outs, which are PCIe Gen4 features.
Although the card hardware is Gen4 capable, it remains a two-port PCIe optical coupling
I/O hub card that runs at PCIe Gen3 speed (150 m short-range transceivers). The ICA SR2.0
feature has two ports, each port supporting 8 GBps.
The 8U, 16-slot, and 2-domain PCIe+ I/O drawer for PCIe I/O features.
zHyperLink Express2.0: Two ports per feature (new build) with ultra high-speed, direct
connection to Select DS8000; works in tandem with FICON Express channels
Local area network (LAN) connectivity- Open System Adapter (OSA):
– The following features include two ports each:
• OSA-Express7S 1.2 GbE (FC 0454, LX and FC 0455, SX)
• OSA-Express7S 1000BASE-T (FC 0446 - Carry forward from z15 only)
– The following features have one port each:
• OSA-Express7S 1.2 10 GbE (FC 0456, LR and FC 0457, SR)
• OSA-Express7S 1.2 25 GbE (FC 0459, SR and FC 0460, LR)
Native PCIe features (plugged into the PCIe+ I/O drawer):
– Network Express 10G (FC 0524, SR and FC 0525, LR): two ports per feature
– Network Express 25G (FC 0526, SR and FC 0527, LR): two ports per feature
– Coupling Express3 Long Reach (CE3 LR): two ports per feature
– Crypto Express8S (single/dual HSM)
– Crypto Express7S (single/dual ports/HSM, carry forward)
PCIe Generation 4 uses 128b/130b encoding for data transmission, which reduces the
encoding overhead to approximately 1.54%, versus the 20% overhead of PCIe Generation 2,
which uses 8b/10b encoding.
The PCIe standard uses a low-voltage differential serial bus. Two wires are used for signal
transmission, and a total of four wires (two for transmit and two for receive) form a lane of a
PCIe link, which is full-duplex. Multiple lanes can be aggregated into a larger link width. PCIe
supports link widths of 1, 2, 4, 8, 12, 16, and 32 lanes (x1, x2, x4, x8, x12, x16, and x32).
The data transmission rate of a PCIe link is determined by the link width (numbers of lanes),
the signaling rate of each lane, and the signal encoding rule. The signaling rate of one PCIe
Generation 4 lane is 16 gigatransfers per second (GTps). This results in a total bandwidth of
32 GB/s for a 16-lane (x16) configuration, compared to 16 GB/s in Gen 3.
Note: I/O infrastructure for IBM z17 is implemented as a combination of PCIe Gen3 and
Gen4. The PU chip PCIe interface for IBM z17 is PCIe Generation 5 (x16 @32 GBps), but
the CPC I/O fan-out infrastructure provides external connectivity as PCIe Generation 4
@16GBps
For example, a PCIe Gen3 x16 link features the following data transmission rates:
The maximum theoretical data transmission rate per lane:
8 Gbps * 128/130 bit (encoding) = 7.87 Gbps = 984.6 MBps
The maximum theoretical data transmission rate per link:
984.6 MBps * 16 (lanes) = 15.75 GBps
1 CDFP is short for 400 (CD in Roman numerals) Form factor Pluggable, designed for high-performance computing.
Considering that the PCIe link works in full-duplex mode, the data throughput rate of a PCIe
Gen3 x16 link is 31.5 GBps (15.75 GBps in both directions).
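The same arithmetic can be expressed compactly, and extended to PCIe Gen4, as in the following sketch; both generations use 128b/130b encoding, and the results match the figures quoted above.

def pcie_per_direction_gbps(gigatransfers_per_s, lanes):
    # Payload bandwidth of one direction of a PCIe link, in GBps.
    encoding_efficiency = 128 / 130          # 128b/130b encoding
    gbit_per_lane = gigatransfers_per_s * encoding_efficiency
    return (gbit_per_lane / 8) * lanes       # 8 bits per byte

gen3_x16 = pcie_per_direction_gbps(8, 16)    # ~15.75 GBps per direction, ~31.5 GBps full duplex
gen4_x16 = pcie_per_direction_gbps(16, 16)   # ~31.5 GBps per direction, ~63 GBps full duplex
print(f"Gen3 x16: {gen3_x16:.2f} GBps  Gen4 x16: {gen4_x16:.2f} GBps (per direction)")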
Link performance: The link speeds do not represent the performance of the link. The
performance depends on many factors, including latency through the adapters, cable
lengths, and the type of workload.
PCIe Gen4 x16 links are used in IBM z17 servers for driving the PCIe+ I/O drawers, and for
coupling links (ICA SR 2.0) for CPC to CPC communications. See Figure 4-1.
Figure 4-1 IBM z16 versus IBM z17 PCIe I/O infrastructure (Gen5 in the CPC, Gen4 across cables and the I/O drawer; new I/O drawer switch card and new retimers; RTMR = retimer)
4.2.1 Characteristics
The IBM z17 ME1 I/O subsystem provides great flexibility, high availability, and the following
excellent performance characteristics:
High bandwidth
IBM z17 servers use the PCIe Gen4 protocol to drive PCIe+ I/O drawers and CPC to CPC
(coupling) connections. PCIe Gen4 doubles the data rate of PCIe Gen3.
For more information about coupling link connectivity, see 4.6.4, “Parallel Sysplex
connectivity” on page 199.
Connectivity options:
– IBM z17 servers can be connected to an extensive range of interfaces, such as
FICON/FCP for SAN connectivity, OSA and Network Express features for LAN
connectivity, and zHyperLink Express for storage connectivity (low latency compared to
FICON).
– For CPC to CPC connections, IBM z17 servers use the Integrated Coupling Adapter (ICA
SR2.0) and the Coupling Express3 Long Reach (CE3 LR).
Network Express Adapter
– The new Network Express adapter provides support for OSA, RoCE and Coupling LR.
A CHPID type parameter is used to distinguish between the Network Express (OSA /
RoCE) and Coupling LR features.
– A single port in the Network Express can simultaneously have two “personalities”
• The OSH CHPID type supports all legacy functions available with OSD, but
implements Enhanced QDIO (EQDIO) architecture while OSD uses QDIO
• NETH PFID type supports SMC-R RDMA for Linux native usage (TCP/IP, etc.)
• Each port can be configured to provide support for a single host protocol (EQDIO or
native PCIe) or a combination of host protocols
Concurrent I/O upgrade
– You can concurrently add I/O features to IBM z17 servers if unused I/O slot positions
are available in the PCIe+ I/O or CPC drawer (for ICA SR2.0)
Concurrent PCIe+ I/O drawer upgrade
– More PCIe+ I/O drawers can be installed concurrently if free frame slots for the PCIe+
I/O drawers and PCIe fan-outs in the CPC drawer are available
Dynamic I/O configuration
– Dynamic I/O configuration supports the dynamic addition, removal, or modification of
the channel path, control units, and I/O devices without a planned outage
Remote Dynamic I/O Activation
– Remote dynamic I/O activation is supported for CPCs running Stand-alone CFs, Linux
on Z and z/TPF. IBM z17 provides a remote Dynamic I/O capability for driving
hardware-only I/O configuration changes from a “driving” instance of z/OS Hardware
Configuration Definition (HCD) on one CPC to a remote “target” standalone Coupling
Facility CPC, to a CPC which hosts Linux on Z or to z/TPF images.
– This new support is applicable only when both the driving CPC and the target CPC are
IBM z17 or IBM z16 with the required firmware support, and the driving system’s z/OS
is at level V2R4 or higher with APAR OA65559.
Pluggable optics:
– The following features include Small Form-Factor Pluggable (SFP) optics2:
• FICON Express32-4P
• FICON Express32S
• FICON Express16SA
• OSA Express7S 1.2
• OSA Express7S
• Network Express
2
SFP stands for Small Form-factor Pluggable, and it's a standardized format for optical transceivers used in network
communication.
• Coupling Express3
These optics allow each channel to be individually serviced in the event of a fiber optic
module failure. The traffic on the other channels on the same feature can continue to flow
if a channel requires servicing.
– The zHyperLink Express and ICA SR2.0 features use fiber optic cable with an MTP3
connector, and the cable uses a CXP connection to the adapter. The CXP4 optics are
provided with the adapter.
Concurrent I/O card maintenance
Every I/O card that is plugged in a PCIe+ I/O drawer supports concurrent card
replacement during a repair action.
For more details about the maximum number of each specific feature, the number of ports,
and definitions, see Table 4-6 on page 184.
3
Multifiber Termination Push-On.
4 For more information, see https://cw.infinibandta.org/document/dl/7157.
Figure 4-2 Rear and front view of PCIe+ I/O drawer components (8U drawer)
PCIe switch application-specific integrated circuits (ASICs) are used to fan out the host bus
from the CPC drawer through the PCIe+ I/O drawer to the individual I/O features. A maximum
of 16 PCIe I/O features (up to 32 channels) per PCIe+ I/O drawer is supported.
The PCIe+ I/O drawer is a one-sided drawer (all I/O cards on one side, in the rear of the
drawer) that is 8U high. The PCIe+ I/O drawer contains the 16 I/O slots for PCIe features, two
switch cards, and two power supply units (PSUs) to provide redundant power, as shown in
Figure 4-3 on page 176.
The PCIe+ I/O drawer slot numbers and Region Groups are shown in Figure 4-3.
Figure 4-3 PCIe+ I/O drawer slot numbers and Region Groups
Figure 4-4 IBM z17 I/O connectivity (Max43 feature with two PCIe+ I/O drawers represented)
The PCIe switch card provides the fan-out from the high-speed x16 PCIe host bus to eight
individual card slots. The PCIe switch card is connected to the CPC drawer through a single
x16 PCIe Gen4 bus from a PCIe fan-out card (PCIe fan-out cards on IBM z17 have two PCIe
Gen4 x16 ports/busses/links).
The PCIe+ I/O drawer supports concurrent add and replacement of I/O features, with which you can increase I/O capability as needed, depending on the CPC drawer.
The PCIe slots in a PCIe+ I/O drawer are organized into two I/O domains (see Figure 4-2 on page 175). The I/O feature cards that directly attach to the switch card constitute an I/O domain. Each I/O domain supports up to eight features and is driven through a PCIe switch card. The two PCIe switch cards always provide a backup path for each other through the passive connection in the PCIe+ I/O drawer backplane.
During a PCIe fan-out card or cable failure, all 16 I/O cards in the two domains can be driven through a single PCIe switch card. However, it is not possible to drive 16 I/O cards after one of the PCIe switch cards is removed.
The two switch cards are interconnected through the PCIe+ I/O drawer board (Redundant I/O Interconnect, or RII). In addition, the switch cards in the same PCIe+ I/O drawer are connected to PCIe fan-outs across clusters in the CPC drawer for higher availability.
The RII design provides a failover capability during a PCIe fan-out card failure. Both domains in one of these PCIe+ I/O drawers are activated with two fan-outs. The Base Management Cards (BMCs) are used for system control.
The domains and their related I/O slots are shown in Figure 4-3 on page 176.
Each I/O domain supports up to eight features (FICON, OSA, Crypto, and so on). All I/O
cards connect to the PCIe switch card through the backplane board. The I/O domains and
slots are listed in Table 4-1.
Only the following PCIe I/O features can be carried forward for an upgrade to IBM z17
servers:
FICON Express32S (SX and LX)
FICON Express16SA (SX and LX)
OSA-Express7S GbE SX - OSD; from IBM z15 only
OSA-Express7S GbE LX - OSD; from IBM z15 only
OSA-Express7S 10 GbE SR - OSD; from IBM z15 only
OSA-Express7S 10 GbE LR - OSD; from IBM z15 only
OSA-Express7S 1000 Base-T - OSD; from IBM z15 only
OSA-Express7S 1.2 GbE SX - OSC,OSD
OSA-Express7S 1.2 GbE LX - OSC,OSD
OSA-Express7S 1.2 10 GbE SR - OSD
OSA-Express7S 1.2 10 GbE LR - OSD
OSA-Express7S 1.2 25 GbE SR - OSD
OSA-Express7S 1.2 25 GbE LR - OSD
Crypto Express7S (one or two ports/HSMs)
Note: On an IBM z17 system, only PCIe+ I/O drawers are supported. No older generation
drawers can be carried forward.
IBM z17 server supports the following PCIe I/O new features that are hosted in the PCIe+ I/O
drawers:
FICON Express32-4P (SX and LX)
OSA-Express7S 1.2 25 GbE (SR and LR)
OSA-Express7S 1.2 10 GbE (SR and LR)
OSA-Express7S 1.2 GbE (SX and LX)
Crypto Express8S (one or two HSMs)
Coupling Express3 Long Reach (CE3 LR) - 10Gb and 25Gb
zHyperLink Express2.0
Network Express 10G (SR and LR)
Network Express 25G (SR and LR)
The IBM z17 CPC drawer I/O infrastructure consists of the following features:
The PCIe Generation 4 fan-out cards: Two ports per card (feature) that connect to PCIe+
I/O drawers.
ICA SR2.0 fan-out cards: Two ports per card (feature) that connect to other (external)
CPCs.
The PCIe fan-out cards are installed in the rear of the CPC drawers. Each CPC drawer features 12 PCIe+ Gen4 fan-out slots.
The PCIe fan-outs and ICA SR2.0 fan-outs are installed in locations LG01 - LG12 at the rear
in the CPC drawers (see Figure 2-7 on page 29).
On the CPC drawer, two BMC/OSC cards are available, each being a combination of a Base
Management (BMC) card and an Oscillator Card (OSC) card. Each BMC/OSC card features
one PPS port and one ETS port (RJ45 Ethernet) for both PTP and NTP.
An x16 PCIe copper cable of 1.5 meters (4.92 feet) to 4.0 meters (13.1 feet) is used for the connection to the PCIe switch card in the PCIe+ I/O drawer. PCIe fan-out cards are always plugged in pairs and provide redundancy for the I/O domains within the PCIe+ I/O drawer.
Note: The PCIe+ fan-out is used exclusively for I/O and cannot be shared for any other
purpose.
To understand the maximum bandwidth of a PCIe Gen 4 device, you must know the number of PCIe lanes that it supports. PCIe devices use “lanes” for transmitting and receiving data, so the more lanes a PCIe device can use, the greater the bandwidth. The number of lanes that a PCIe device supports is typically expressed as “x4” for 4 lanes, “x8” for 8 lanes, and so on. Refer to Table 4-2.
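As an illustration of that relationship, the following Python sketch computes the approximate raw one-way throughput for a given generation and lane count. The per-lane transfer rates and the 128b/130b encoding are standard published PCIe values rather than figures taken from Table 4-2, and vendor-quoted effective link ratings (such as the 16 GBps figure used for the fan-out connections in this chapter) can be lower.

```python
# Approximate raw PCIe throughput per direction, for illustration only.
# Per-lane transfer rates (GT/s) and line encoding are the published
# PCI-SIG values; effective, vendor-quoted link ratings can be lower.
PCIE_GENERATIONS = {
    # generation: (GT/s per lane, encoding efficiency)
    3: (8.0, 128 / 130),
    4: (16.0, 128 / 130),
}

def raw_bandwidth_gbytes(generation: int, lanes: int) -> float:
    """Return approximate raw one-way bandwidth in GB/s for a PCIe link."""
    gts, efficiency = PCIE_GENERATIONS[generation]
    # GT/s x encoding efficiency gives usable Gbit/s per lane; divide by 8 for GB/s.
    return lanes * gts * efficiency / 8

if __name__ == "__main__":
    for gen in (3, 4):
        for lanes in (4, 8, 16):
            print(f"PCIe Gen{gen} x{lanes}: ~{raw_bandwidth_gbytes(gen, lanes):.1f} GB/s")
```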
Note: PCIe generations are backward compatible, so for example a PCIe Gen 3 device
connected to a PCIe Gen 4 system will function normally at PCIe Gen 3 speeds.
The ICA SR2.0 has been updated with new Gen4 re-timers and uses the new CXP16 Gen4 optical module. Although the card hardware is Gen4-capable, it remains a two-port PCIe Gen3 Optical Coupling I/O hub card (150 m short-range transceivers).
The card is designed to drive distances up to 150 meters (492 feet) with a link data rate of 8 GBps. ICA SR2.0 supports up to four channel-path identifiers (CHPIDs) per port and eight subchannels (devices) per CHPID.
The coupling links can be defined as shared between images (z/OS) within a CSS. They also
can be spanned across multiple CSSs in a CPC. For ICA SR features, a maximum of four CHPIDs per port can be defined.
When STP5 (FC 1021) is available, ICA SR coupling links can be defined as timing-only links
to other IBM z17, IBM z16, or IBM z15 systems.
These two fan-out features are housed in the PCIe+ Gen4 I/O fan-out slots on the IBM z17 CPC drawers. Up to 48 ICA SR2.0 features (up to 96 ports) are supported on an IBM z17 ME1 system.
OM3 fiber optic can be used for distances up to 100 meters (328 feet). OM4 fiber optic cables
can be used for distances up to 150 meters (492 feet). For more information, see the following
publications:
Planning for Fiber Optic Links, GA23-1409
9175 Installation Manual for Physical Planning, GC28-7049.
Table 4-3 Fan-out locations and their AIDs for the CPC drawer (IBM z17 ME1)
Fan-out location   CPC0 (Location A10) AID (Hex)   CPC1 (Location A15) AID (Hex)   CPC2 (Location A20) AID (Hex)   CPC3 (Location B10) AID (Hex)
LG01 00 0C 18 24
LG02 01 0D 19 25
LG03 02 0E 1A 26
LG04 03 0F 1B 27
LG05 04 10 1C 28
LG06 05 11 1D 29
LG07 06 12 1E 2A
LG08 07 13 1F 2B
LG09 08 14 20 2C
LG10 09 15 21 2D
LG11 0A 16 22 2E
LG12 0B 17 23 2F
Fan-out slots
The fan-out slots are numbered LG01 - LG12 (from left to right), as listed in Table 4-3. All
fan-out locations and their AIDs for the CPC drawer are shown for reference only.
Important: The AID numbers that are listed in Table 4-3 are valid for a new build system
only. If a fan-out is moved, the AID follows the fan-out to its new physical location.
The AID assignment is listed in the PCHID REPORT that is provided for each new server or
for an MES upgrade on servers. Part of a PCHID REPORT for an IBM z17 is shown in
Example 4-1. In this example, four fan-out cards are installed at locations A10/LG05,
A10/LG06, A15/LG05, and A15/LG06 with AIDs 04, 05, 10 and 11, respectively.
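For a new build, the default AID assignment in Table 4-3 follows a regular pattern: each CPC drawer owns a block of 12 consecutive AIDs, one per fan-out slot LG01 - LG12. The following Python sketch reproduces that pattern for illustration only; because a moved fan-out keeps its AID, the PCHID REPORT remains the authoritative source.

```python
# Default AID assignment for a new-build IBM z17 ME1, per Table 4-3.
# Each CPC drawer owns 12 consecutive AIDs, one per fan-out slot LG01-LG12.
DRAWER_LOCATIONS = {"A10": 0, "A15": 1, "A20": 2, "B10": 3}  # CPC0..CPC3

def default_aid(drawer_location: str, lg_slot: int) -> str:
    """Return the new-build AID (hex) for a fan-out slot, e.g. ('A15', 6) -> '11'."""
    if not 1 <= lg_slot <= 12:
        raise ValueError("fan-out slots are LG01 - LG12")
    drawer_index = DRAWER_LOCATIONS[drawer_location]
    aid = drawer_index * 12 + (lg_slot - 1)
    return f"{aid:02X}"

if __name__ == "__main__":
    # Matches the PCHID REPORT example: A10/LG05=04, A10/LG06=05, A15/LG05=10, A15/LG06=11
    for loc, slot in (("A10", 5), ("A10", 6), ("A15", 5), ("A15", 6)):
        print(f"{loc}/LG{slot:02d} -> AID {default_aid(loc, slot)}")
```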
Fan-out features that are supported by the IBM z17 are listed in Table 4-4, which includes the
feature type, feature code, and information about the link that is supported by the fan-out
feature.
PCIe+ Gen4 fan-out   0315   PCIe I/O drawer   Copper conn.   N/A   4 m (13.1 ft.)     16 GBps
ICA SR2.0            0216   Coupling link     OM4            MTP   150 m (492 ft.)    8 GBps
6 Certain I/O features do not have external ports, such as Crypto Express.
A CHPID does not directly correspond to a hardware channel port. Instead, it is assigned to a
PCHID in the hardware configuration definition (HCD) or IOCP.
A PCHID REPORT is created for each new build server and for upgrades on servers. The
report lists all I/O features that are installed, the physical slot location, and the assigned
PCHID. A portion of a sample PCHID REPORT is shown in Example 4-2.
The PCHID REPORT that is shown in Example 4-2 includes the following components:
Feature Code 0216 (Integrated Coupling Adapter, ICA SR2.0) is installed in the CPC drawer (location A10, slot LG06, and location A15, slot LG06), and has AIDs 05 and 11 assigned.
Feature 0461 (FICON Express32S LX) is installed in PCIe+ I/O drawer 1: Location Z01B,
slot 02 with PCHIDs 100/D1 and 101/D2 assigned.
Two feature codes 0457 (OSA-Express7S 1.2 10 GbE SR) installed in PCIe+ I/O drawer 1
in slots 05 and 07, with PCHIDs 10C/D1 and 110/D1, respectively.
A resource group (RG) parameter also is shown in the PCHID REPORT for native PCIe
features. A balanced plugging of native PCIe features exists between four resource groups
(RG1, RG2, RG3, and RG4).
The preassigned PCHID number of each I/O port relates directly to its physical location (jack
location in a specific slot).
4.6 Connectivity
I/O channels are part of the CSS. They provide connectivity for data exchange between
servers, between servers and external control units (CUs) and devices, or between networks.
For more information about connectivity to external I/O subsystems (for example, disks), see
4.6.2, “Storage connectivity” on page 186.
For more information about communication to LANs, see 4.6.3, “Network connectivity” on
page 191.
At least one I/O feature (FICON) or one coupling link feature (ICA SR or CE LR) must be
present in the minimum configuration.
The following features are plugged into a PCIe+ I/O drawer and do not require the definition of
a CHPID and CHPID type:
Each Crypto Express (8S/7S) feature occupies one I/O slot, but does not include a CHPID type. However, LPARs in all CSSs can access the features. Each Crypto Express adapter can support up to 85 domains.
Each zHyperlink Express2.0 feature occupies one I/O slot but does not include a CHPID
type. However, LPARs in all CSSs can access the feature. The zHyperLink Express adapter works as a native PCIe adapter and can be shared by multiple LPARs. Each port
supports up to 127 Virtual Functions (VFs), with one or more VFs/PFIDs being assigned
to each LPAR. This support gives a maximum of 254 VFs per adapter.
Cables: All fiber optic cables, cable planning, labeling, and installation are client
responsibilities for new IBM z17 installations and upgrades. Fiber optic conversion kits and
mode conditioning patch cables are not orderable as features on IBM z17 servers. All other
cables must be sourced separately.
For more details on the cable specifications for these features, please refer to Appendix D,
“Channel options” on page 547.
FICON channels
IBM z17 supports the following FICON features:
FICON Express32-4P LX and SX - 4 Ports/Feature (FC 0387/0388 - NB8)
FICON Express32S LX and SX - 2 Ports/Feature (FC 0461/0462 - CF9)
FICON Express16SA LX and SX - 2 Ports/Feature (FC 0436/0437 - CF)
The FICON features provide connectivity between any combination of servers, directors,
switches, and devices (control units, disks, tapes, and printers) in a SAN.
Each FICON Express feature occupies one I/O slot in the PCIe+ I/O drawer. Each feature
includes two or four ports, each supporting an LC Duplex connector, with one PCHID and one
CHPID that is associated with each port.
Each FICON Express feature uses SFP+ optics that allow for concurrent repair or replacement of each SFP. The data flow on the unaffected channels on the same feature can continue. A problem with one FICON Express port does not require replacement of a complete feature.
Each FICON Express feature also supports cascading, which is the connection of two FICON
Directors in succession. This configuration minimizes the number of cross-site connections
and helps reduce implementation costs for disaster recovery applications, IBM
Geographically Dispersed Parallel Sysplex (GDPS), and remote copy.
IBM z17 servers support 32 K devices per FICON channel for all FICON features.
Each FICON Express channel can be defined independently for connectivity to servers,
switches, directors, disks, tapes, and printers, by using the following CHPID types:
CHPID type FC: The FICON, zHPF, and FCTC protocols are supported simultaneously.
CHPID type FCP: Fibre Channel Protocol that supports attachment to SCSI devices
directly or through Fibre Channel switches or directors.
7
zHyperLink feature operates with a FICON channel.
8
NB - New Build
9 CF- Carry forward
FICON channels (CHPID type FC or FCP) can be shared among LPARs and defined as spanned. All ports on a FICON feature must be of the same type (LX or SX). The features can be connected to a FICON-capable control unit either point-to-point or switched point-to-point through a Fibre Channel switch.
These FICON Express32S and FICON Express32-4P features support LC Duplex optical
transceivers.
FICON Express16SA
The FICON Express16SA feature is installed in the PCIe+ I/O drawer. Each of the two
independent ports is capable of 8 or 16 Gbps. The link speed depends on the capability of the
attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent
to users and applications.
For more information, see the FICON Express chapter in the IBM Z Connectivity Handbook,
SG24-5444.
More information about Fibre Channel Endpoint Security can be found in 4.7.3, “IBM Fibre Channel Endpoint Security” on page 206.
Forward Error Correction (FEC) codes help reduce bit errors, especially for connections across long distances, such as an inter-switch link (ISL) in a GDPS Metro Mirror environment.
The FICON Express32-4P, FICON Express32S, and FICON Express16SA features support FEC coding on top of their 64b/66b data encoding for 16 and 32 Gbps connections. This design can correct up to 11 bit errors per 2112 bits transmitted.
Therefore, when connected to devices that support FEC at 16 Gbps, the FEC design allows FICON Express32-4P, FICON Express32S, and FICON Express16SA channels to operate at higher speeds, over longer distances, with reduced power and higher throughput, while maintaining the reliability and robustness for which FICON channels are traditionally known.
With the IBM DS8870 or newer, IBM z17 servers can extend the use of FEC to the fabric N_Ports for complete end-to-end coverage of 32 Gbps FC links.
A static SAN routing policy normally assigns the ISL routes according to the incoming port
and its destination domain (port-based routing), or the source and destination ports pairing
(device-based routing).
Port-based routing (PBR) assigns ISL routes statically on a “first come, first served” basis when a port performs a fabric login (FLOGI) to a destination domain; the ISL is selected for assignment in a round-robin manner. Therefore, I/O flow from the same incoming port to the same destination domain is always assigned the same ISL route, regardless of the destination port of each I/O.
This setup can result in some ISLs becoming overloaded while others are under-used. The ISL routing table changes whenever the IBM Z server undergoes a power-on reset (POR), so the ISL assignment is unpredictable.
Device-based routing (DBR) assigns ISL routes statically based on a hash of the source and destination port pairing, so I/O flow from the same incoming port to the same destination port is assigned the same ISL route. Compared to PBR, DBR is better able to spread the load across ISLs for I/O flows from the same incoming port to different destination ports within a destination domain.
When a static SAN routing policy is used, the FICON director features limited capability to
assign ISL routes that are based on workload. This limitation can result in unbalanced use of
ISLs (some might be overloaded, while others are under-used).
With dynamic routing, ISL routes are dynamically assigned based on the Fibre Channel exchange ID, which is unique for each I/O operation. The ISL is assigned at I/O request time; therefore, different I/Os from the same incoming port to the same destination port can be assigned different ISLs.
With FICON Dynamic Routing (FIDR), IBM z17 servers offer advantages for performance and management in configurations with ISLs and cascaded FICON directors.
FICON dynamic routing can be enabled by defining dynamic routing-capable switches and control units in HCD. Also, z/OS implements a health check function for FICON dynamic routing.
zHPF on IBM z14 and newer servers were enhanced to allow all large write operations
(> 64 KB) at distances up to 100 kilometers (62 miles) to be run in a single round trip to the
control unit. This improvement avoids elongating the I/O service time for these write
operations at extended distances.
Starting with IBM z14, IBM Z servers can read this extra diagnostic data for all the ports that
are accessed in the I/O configuration and make the data available to an LPAR. For z/OS
LPARs that use FICON channels, z/OS displays the data with a new message and display
command. For Linux on IBM Z, z/VM, and VSEn V6.3.1 (21st Century Software), and LPARs
that use FCP channels, this diagnostic data is available in a new window in the SAN Explorer
tool.
N_Port ID Virtualization
N_Port ID Virtualization (NPIV) allows multiple system images (in LPARs or z/VM guests) to
use a single FCP channel as though each were the sole user of the channel. First introduced
with IBM z9® EC, this feature can be used with earlier FICON features that were carried
forward from earlier servers.
By using the FICON Express as an FCP channel with NPIV enabled, the maximum numbers
of the following aspects for one FCP physical channel are doubled:
Maximum number of NPIV hosts defined: Increased from 32 to 64
Maximum number of remote N_Ports communicated: Increased from 512 to 1024
Maximum number of addressable LUNs: Increased from 4096 to 8192
Concurrent I/O operations: Increased from 764 to 1528
For more information about operating systems that support NPIV, see “N_Port ID
Virtualization”.
IBM z15 and newer servers allow for the modification of these default assignments, which
also allows FCP channels to keep previously assigned WWPNs, even after being moved to a
different slot position. This capability can eliminate the need for reconfiguration of the SAN in
many situations, and is especially helpful during a system upgrade (FC 0099 - WWPN
Persistence).
IBM z14 and newer servers support up to three hops in a cascaded FICON SAN
environment. This support allows clients to more easily configure a three- or four-site disaster
recovery solution.
For more information about the FICON multi-hop, see the FICON Multihop: Requirements
and Configurations white paper at the IBM Techdocs Library website.
The zHyperLink Express2.0 feature (FC 0351) provides a low latency direct connection
between IBM z17 and DS8000 storage system.
The zHyperLink Express2.0 is the result of new business requirements that demand fast and consistent application response times. It dramatically reduces latency by interconnecting the IBM z17 directly to the I/O bay of the DS8000 by using a PCIe Gen3 x8 physical link (up to 150 meters [492 feet]). A new transport protocol is defined for reading and writing IBM CKD data records11, as documented in the zHyperLink interface specification.
On IBM z17, the zHyperLink Express2.0 card is a PCIe Gen4 adapter with an updated Gen4 retimer, which is installed in the PCIe+ I/O drawer. HCD definition support was added for the new PCIe function type with PORT attributes.
The zHyperLink Express2.0 is virtualized as a native PCIe adapter and can be shared by
multiple LPARs. Each port can support up to 127 Virtual Functions (VFs), with one or more
VFs/PFIDs being assigned to each LPAR. This configuration gives a maximum of 254 VFs
per adapter.
OSA-Express7S features
The OSA-Express7S 1.2 25 GbE LR feature supports the use of an industry standard
small form factor (SFP+) LC Duplex connector. Ensure that the attaching or downstream
device includes an LR transceiver. The transceivers at both ends must be the same (LR to
LR).
The OSA-Express7S 1.2 25 GbE LR feature does not support auto-negotiation to any
other speed and runs in full duplex mode only.
A 9 µm single-mode fiber optic cable that ends with an LC Duplex connector is required for
connecting this feature to the selected device.
If existing multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables is required, with one cable for each end of the link.
The supported OSA-Express7S features are listed in Table 4-6 on page 184.
The OSA-Express7S 10 GbE SR feature does not support auto-negotiation to any other
speed and runs in full duplex mode only.
A 50 or a 62.5 µm multimode fiber optic cable that ends with an LC Duplex connector is
required for connecting each port on this feature to the selected device.
Note: On IBM z17, the OSA-Express7S 1000BASE-T Ethernet feature can no longer
be configured as CHPID type OSC and is OSD only.
On IBM z17 CHPID type OSC is only supported on the OSA-Express7S 1.2 GbE
SX/LX features.
A single port on the Network Express feature can simultaneously have two functionalities:
OSH channel, which is a new OSH CHPID type for OSA-style I/O, which will support all
legacy functions available with OSD, but uses the EQDIO architecture while OSD uses
QDIO. This includes providing support for the network interface of a single OS instance to
operate in promiscuous mode.
NETH PFID for SMC-R RDMA or Linux native usage.
Each port on the adapter can be configured to provide support for a single host protocol
(EQDIO or native PCIe) or a combination of both and can be independently (de)configured.
If there is a requirement for the legacy QDIO architecture (CHPID type OSD), OSA-Express7S 1.2 adapters should be configured.
The Network Express SR 10G feature supports the use of an industry standard small form
factor (SFP+) LC Duplex connector. Ensure that the attaching or downstream device
includes an SR transceiver. The transceivers at both ends must be the same (SR to SR).
The Network Express SR 10G feature does not support auto-negotiation to any other
speed and runs in full duplex mode only.
A 50 or a 62.5 µm multimode fiber optic cable that ends with an LC Duplex connector is required for connecting this feature to the selected device.
SMC-D
SMC-D was introduced with the Internal Shared Memory (ISM) virtual PCI function. ISM is a virtual PCI network adapter that enables direct access to shared virtual memory, which provides a highly optimized network interconnect for IBM Z intra-CPC communications.
SMC-D maintains the socket-API transparency aspect of SMC-R so that applications that
use TCP/IP communications can benefit immediately without requiring any application
software or IP topology changes. SMC-D completes the overall SMC solution, which
provides synergy with SMC-R.
SMC-R and SMC-D use shared memory architectural concepts, which eliminate TCP/IP processing in the data path yet preserve TCP/IP qualities of service for connection management purposes.
ISM is defined by the FUNCTION statement with a virtual CHPID (VCHID) in HCD/IOCDS.
Identified by the PNETID parameter, each ISM VCHID defines an isolated, internal virtual
network for SMC-D communication, without any hardware component required. Virtual
adapters are defined by virtual function (VF) statements. Multiple LPARs can access the
same virtual network for SMC-D data exchange by associating their VF with same VCHID.
Applications that use HiperSockets can realize reduced network latency, lower CPU consumption, and improved performance by using SMC-D over ISM.
IBM z17 servers support up to 32 ISM VCHIDs per CPC. Each VCHID supports up to 255
VFs, with a total maximum of 8,000 VFs.
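As a quick planning aid, the following Python sketch checks a proposed ISM definition against the limits that are quoted above (up to 32 ISM VCHIDs per CPC, up to 255 VFs per VCHID, and a maximum of 8,000 VFs in total). The dictionary layout and the sample VCHID values are illustrative only and do not represent HCD or IOCDS syntax.

```python
# Validate a planned ISM (SMC-D) configuration against the IBM z17 limits
# quoted in this section: 32 VCHIDs per CPC, 255 VFs per VCHID, 8,000 VFs total.
MAX_ISM_VCHIDS = 32
MAX_VFS_PER_VCHID = 255
MAX_VFS_TOTAL = 8000

def check_ism_plan(vchid_to_vf_count: dict[str, int]) -> list[str]:
    """Return a list of limit violations for a {VCHID: number-of-VFs} plan."""
    problems = []
    if len(vchid_to_vf_count) > MAX_ISM_VCHIDS:
        problems.append(f"{len(vchid_to_vf_count)} VCHIDs defined, maximum is {MAX_ISM_VCHIDS}")
    for vchid, vfs in vchid_to_vf_count.items():
        if vfs > MAX_VFS_PER_VCHID:
            problems.append(f"VCHID {vchid}: {vfs} VFs, maximum is {MAX_VFS_PER_VCHID}")
    total = sum(vchid_to_vf_count.values())
    if total > MAX_VFS_TOTAL:
        problems.append(f"{total} VFs in total, maximum is {MAX_VFS_TOTAL}")
    return problems

if __name__ == "__main__":
    # Hypothetical plan: two internal virtual networks with 40 and 300 VFs.
    plan = {"7C0": 40, "7C1": 300}
    print(check_ism_plan(plan) or "plan is within the documented limits")
```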
The initial version of SMC was limited to TCP/IP connections over the same layer 2 network; therefore, it was not routable across multiple IP subnets. The associated TCP/IP connection was limited to hosts within a single IP subnet, which required the hosts to have direct access to the same physical layer 2 network (that is, the same Ethernet LAN over a single VLAN ID). The scope of eligible TCP/IP connections for SMC was limited to and defined by the single IP subnet.
SMC Version 2 (SMCv2) provides support for SMC over multiple IP subnets for SMC-D and
SMC-R and is referred to as SMC-Dv2 and SMC-Rv2. SMCv2 requires updates to the
underlying network technology. SMC-Dv2 requires ISMv2 and SMC-Rv2 requires RoCEv2.
The SMCv2 protocol is downward compatible, which allows SMCv2 hosts to continue to
communicate with SMCv1 down-level hosts.
Although SMCv2 changes the SMC connection protocol enabling multiple IP subnet support,
SMCv2 does not change how user TCP socket data is transferred, which preserves the
benefits of SMC to TCP workloads.
TCP/IP connections that require IPsec are not eligible for any form of SMC.
HiperSockets
The HiperSockets function of IBM z17 servers provides up to 32 high-speed virtual LAN
attachments.
HiperSockets eliminates the need to use I/O subsystem operations and traverse an external
network connection to communicate between LPARs in the same IBM z17 CPC.
HiperSockets offers significant value in server consolidation when connecting many virtual
servers. It can be used instead of certain coupling link configurations in a Parallel Sysplex.
Traffic can be IPv4 or IPv6, or non-IP, such as AppleTalk, DECnet, IPX, NetBIOS, or SNA.
Layer 2 support helps facilitate server consolidation, and can reduce complexity and simplify network configuration. It also allows LAN administrators to maintain the mainframe network environment similarly to non-mainframe environments.
Packet forwarding decisions are based on Layer 2 information instead of Layer 3. The HiperSockets device can run automatic MAC address generation to create uniqueness within and across LPARs and servers. The use of Group MAC addresses for multicast is supported, as are broadcasts to all other Layer 2 devices on the same HiperSockets network.
Datagrams are delivered only between HiperSockets devices that use the same transport
mode. A Layer 2 device cannot communicate directly to a Layer 3 device in another LPAR
network. A HiperSockets device can filter inbound datagrams by VLAN identification, the
destination MAC address, or both.
HiperSockets Layer 2 is supported by Linux on IBM Z, and by z/VM for Linux guest use.
IBM z17 supports the HiperSockets Completion Queue function that is designed to allow
HiperSockets to transfer data synchronously (if possible) and asynchronously, if necessary.
This feature combines ultra-low latency with more tolerance for traffic peaks.
With the asynchronous support, data can be temporarily held until the receiver has buffers
that are available in its inbound queue during high volume situations. The HiperSockets
Completion Queue function requires the following minimum OS support releases12:
z/OS V2.4 with PTFs
Linux Minimum distributions:
– SUSE SLES 16.1 (Post GA)
– SUSE SLES 15.6 (GA)
– SUSE SLES 12.5 (Post GA)
– Red Hat RHEL 10.0 (Post GA)
– Red Hat RHEL 9.4
– Red Hat RHEL 8.10
– Red Hat RHEL 7.9 (Post GA)
– Canonical Ubuntu 24.04 LTS (Post GA)
– Canonical Ubuntu 22.04 LTS (Post GA)
– Canonical Ubuntu 20.04 LTS (Post GA)
VSEn V6.3.1 (21st Century Software)
z/VM V7.3 with maintenance
The z/VM virtual switch function transparently bridges a guest virtual machine network
connection on a HiperSockets LAN segment. This bridge allows a single HiperSockets guest
virtual machine network connection to communicate directly with the following systems:
Other guest virtual machines on the virtual switch
External network hosts through the virtual switch OSA UPLINK port
This section describes coupling link features supported in a Parallel Sysplex in which an IBM
z17 can participate.
Coupling links
The type of coupling link that is used to connect a CF to an operating system LPAR is
important. The link performance significantly affects response times and coupling processor
usage. For configurations that extend over large distances, the time that is spent on the link
can be the largest part of the response time.
IBM z16 and IBM z15 support the following coupling link types:
Integrated Coupling Adapter Short Reach (ICA SR1.1 and ICA SR) links connect directly
to the CPC drawer and are intended for short distances between CPCs of up to
150 meters (492 feet).
Coupling Express2 Long Reach (CE2 LR) adapters for IBM z16 and Coupling Express
Long Reach (CE LR) are in the PCIe+ drawer and support unrepeated distances of up to
10 km (6.2 miles) or up to 100 km (62.1 miles) over qualified WDM services.
Internal Coupling (IC) links are for internal links within a CPC.
12
Minimum OS support for IBM z17 can differ. For more information, see Chapter 7, “Operating systems support” on
page 261.
Note: Parallel Sysplex supports connectivity between systems that differ by up to two
generations (n-2). For example, an IBM z17 can participate in an IBM Parallel Sysplex
cluster with IBM z16, and IBM z15 systems.
Only Integrated Coupling Adapter Short Reach 2.0 (ICA SR2.0) Feature Code 0216 and
Coupling Express3 Long Reach (CE3 LR) Feature Codes 0498 and 0499 are supported on
IBM z17.
Figure 4-5 shows the supported Coupling Link connections for the IBM z17. Only ICA SR and
CE LR links are supported on IBM z17, IBM z16, and IBM z15 systems.
The coupling link options are listed in Table 4-10. Also listed is the coupling link support for
each IBM Z platform. Restrictions on the maximum numbers can apply, depending on the
configuration. Always check with your IBM support team for more information.
The maximum number of combined external coupling links (active CE LR, ICA SR links) is
160 per IBM z17 ME1 system. IBM z17 systems support up to 512 coupling CHPIDs per
CPC. An IBM z17 coupling link support summary is shown in Table 4-7.
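The following Python sketch is a rough planning check based on the limits quoted in this section: a maximum of 160 combined external coupling links, up to 512 coupling CHPIDs per CPC, two ports per ICA SR2.0 or CE3 LR feature, and up to four CHPIDs per port. The feature counts in the example are hypothetical; always validate a real configuration with your IBM support team.

```python
# Rough coupling connectivity check for an IBM z17 ME1, using the limits
# quoted in this section. The feature counts below are hypothetical examples.
MAX_EXTERNAL_COUPLING_LINKS = 160   # combined active ICA SR + CE LR links
MAX_COUPLING_CHPIDS_PER_CPC = 512
CHPIDS_PER_PORT = 4                 # ICA SR2.0 and CE3 LR: up to 4 CHPIDs per port
PORTS_PER_FEATURE = 2               # both ICA SR2.0 and CE3 LR are two-port features

def coupling_summary(ica_sr_features: int, ce3_lr_features: int) -> dict[str, int]:
    """Summarize external links and the CHPIDs they could carry."""
    ports = (ica_sr_features + ce3_lr_features) * PORTS_PER_FEATURE
    return {
        "external_links": ports,
        "max_coupling_chpids": ports * CHPIDS_PER_PORT,
    }

if __name__ == "__main__":
    summary = coupling_summary(ica_sr_features=20, ce3_lr_features=8)
    print(summary)
    assert summary["external_links"] <= MAX_EXTERNAL_COUPLING_LINKS
    assert summary["max_coupling_chpids"] <= MAX_COUPLING_CHPIDS_PER_CPC
```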
For more information about distance support for coupling links, see IBM Z End-to-End
Extended Distance Guide, SG24-8047.
An IC link is a fast coupling link that uses memory-to-memory data transfers. IC links do not have PCHID numbers, but do require CHPIDs.
IC links are enabled by defining CHPID type ICP. A maximum of 64 IC links can be defined on an IBM z17.
ICA SR2.0 features are two-port, short-distance coupling features that allow the supported IBM Z servers to connect to each other. The ICA SR2.0 (FC 0216) uses coupling channel type CS5.
The ICA SR2.0 uses PCIe Gen4 technology, with x16 lanes that are bifurcated into x8 lanes for coupling. The ICA SR2.0 is designed to drive distances up to 150 m (492 feet) and supports a link data rate of 8 GBps. It is designed to support up to four CHPIDs per port and eight subchannels (devices) per CHPID.
Note: Previous versions of the ICA SR (FC 0172, introduced with the IBM z13) and the ICA SR1.1 (FC 0176, introduced with IBM z15) are not available on IBM z17.
For more information, see IBM Z Planning for Fiber Optic Links (FICON/FCP, Coupling Links,
and Open System Adapters), GA23-1409. This publication is available in the Library section
of Resource Link (log-in required).
13 The PCIe+ I/O drawer (FC 4011 on IBM z17, FC 4023 on IBM z16, and FC 4021 on IBM z15) is installed in a 19-inch rack. A PCIe+ I/O drawer can host up to 16 PCIe I/O features (adapters). FCs 4023 and 4021 are not carried forward to IBM z17.
Coupling Express3 LR is designed to support up to four CHPIDs per port and 32 buffers (that is, 32 subchannels) per CHPID. The Coupling Express3 LR feature installs in the PCIe+ I/O drawer13 on IBM z17.
For more information, see IBM Z Planning for Fiber Optic Links (FICON/FCP, Coupling Links,
Open Systems Adapters, and zHyperLink Express), GA23-1409. This publication is available
in the Library section of Resource Link (log-in required).
Migration considerations
Upgrading from previous generations of IBM Z systems in a Parallel Sysplex to IBM z17
servers in that same Parallel Sysplex requires proper planning for coupling connectivity.
Planning is important because of the change in the supported type of coupling link adapters
and the number of available fan-out slots of the IBM z17 CPC drawers.
The ICA SR fan-out features provide short-distance connectivity to another IBM z17, IBM
z16, or IBM z15 server.
The CE LR adapter provides long-distance connectivity to another IBM z17, IBM z16, or IBM z15 server.
Notes:
The new ICA SR2.0 adapter in IBM z17 is fully compatible with the ICA SR and ICA SR1.1 in IBM z15 and IBM z16.
Only Coupling Express3 LR 10Gb (CHPID type CL5) is fully compatible with Coupling Express LR and Coupling Express2 LR in IBM z15 and IBM z16.
The Coupling Express3 LR 25Gb (CHPID type CL6) is not compatible with previous generations of CE LR features and cannot be used to connect to IBM z15 or IBM z16.
The IBM z17 fan-out slots in the CPC drawer provide coupling link connectivity through the
ICA SR fan-out cards. In addition to coupling links for Parallel Sysplex, the fan-out cards
provide connectivity for the PCIe+ I/O drawer (PCIe+ Gen4 fan-out).
To migrate from an older generation machine to an IBM z17 without disruption in a Parallel
Sysplex environment requires that the older machines are no more than n-2 generation
(namely, at least IBM z15) and that they carry enough coupling links to connect to the existing
systems while also connecting to the new machine. This requirement is necessary to allow
individual components (z/OS LPARs and CFs) to be shut down and moved to the target
machine and continue to be connected to the remaining systems.
It is beyond the scope of this book to describe all possible migration scenarios. Always
consult with subject matter experts to help you to develop your migration strategy.
The use of the coupling links to exchange STP messages has the following advantages:
By using the same links to exchange STP messages and CF messages in a Parallel Sysplex, STP can scale with distance. Servers that exchange messages over short distances (ICA SR links) can meet more stringent synchronization requirements than servers that exchange messages over long distances (CE3 LR links), with distances up to 100 kilometers (62 miles)15. This advantage is an enhancement over the IBM Sysplex Timer implementation, which does not scale with distance.
Coupling links provide the connectivity that is necessary in a Parallel Sysplex. Therefore, a
potential benefit can be realized of minimizing the number of cross-site links that is
required in a multi-site Parallel Sysplex.
Between any two servers that are intended to exchange STP messages, configure each
server so that at least two coupling links exist for communication between the servers. This
configuration prevents the loss of one link from causing the loss of STP communication
between the servers. If a server does not have a CF LPAR, timing-only links can be used to
provide STP connectivity.
15 10 km (6.2 miles) without DWDM extenders; 100 km (62 miles) with certified DWDM equipment.
The Crypto Express8S represents the newest generation, introduced with the IBM z16; the rest of this section applies to both generations (7S and 8S).
These coprocessors are Hardware Security Modules (HSMs) that provide high-security
cryptographic processing as required by banking and other industries. These features provide
a secure programming and hardware environment wherein crypto processes are performed.
A Crypto Express (2 HSM) feature includes two IBM PCIe Cryptographic Co-processors
(PCIeCC); a Crypto Express (1 HSM) feature includes one PCIeCC per feature. For
availability reasons, a minimum of two features is required. Up to 30 Crypto Express (2 HSM)
features are supported. The maximum number of the 1 HSM features is 16. A Crypto Express
feature occupies one I/O slot in a PCIe+ I/O drawer.
Each adapter can be configured as a Secure IBM CCA coprocessor, a Secure IBM Enterprise
PKCS #11 (EP11) coprocessor, or as an accelerator.
These Crypto Express features provide domain support for up to 85 logical partitions.
The accelerator function is designed for maximum-speed Secure Sockets Layer and
Transport Layer Security (SSL/TLS) acceleration, rather than for specialized financial
applications for secure, long-term storage of keys or secrets. The Crypto Express feature can
also be configured as one of the following configurations:
The Secure IBM CCA coprocessor includes secure key functions with emphasis on the
specialized functions that are required for banking and payment card systems. It is
optionally programmable to add custom functions and algorithms by using User Defined
Extensions (UDX).
Payment Card Industry (PCI) PIN® Transaction Security (PTS) Hardware Security Module (HSM) certification (PCI-HSM) is available for Crypto Express7S and newer in CCA mode. PCI-HSM mode simplifies compliance with PCI requirements for hardware security modules.
The Secure IBM Enterprise PKCS #11 (EP11) coprocessor implements an
industry-standardized set of services that adheres to the PKCS #11 specification v2.20
and more recent amendments. It was designed for extended FIPS and Common Criteria evaluations to meet industry requirements.
This cryptographic coprocessor mode introduced the PKCS #11 secure key function.
TKE feature: The Trusted Key Entry (TKE) Workstation feature is required for
supporting the administration of the Crypto Express7S and Crypto Express8S when
configured as an Enterprise PKCS #11 coprocessor or managing the CCA mode
PCI-HSM.
When the Crypto Express PCI Express adapter is configured as a secure IBM CCA
co-processor, it still provides accelerator functions. However, up to 3x better performance for
those functions can be achieved if the Crypto Express PCI Express adapter is configured as
an accelerator.
CCA enhancements include the ability to use triple-length (192-bit) Triple-DES (TDES) keys
for operations, such as data encryption, IBM PIN processing, and key wrapping to strengthen
security. CCA also extended the support for the cryptographic requirements of the German
Banking Industry Committee Deutsche Kreditwirtschaft (DK).
Several features that support the use of the AES algorithm in banking applications also were
added to CCA. These features include the addition of AES-related key management features
and the AES ISO Format 4 (ISO-4) PIN blocks as defined in the ISO 9564-1 standard. PIN block translation is supported, as is the use of AES PIN blocks in other CCA callable services.
IBM continues to add enhancements as AES finance industry standards are released.
More about the cryptographic capabilities of the IBM z17 can be found in Chapter 6,
“Cryptographic features” on page 221.
IBM Fibre Channel Endpoint Security is designed to provide a means to help ensure the
integrity and confidentiality of all data that flows on FC links between trusted entities within
and across data centers. The trusted entities are IBM z17 and the IBM Storage subsystem
(select IBM DS8000 storage systems). No application or middleware changes are required.
Fibre Channel Endpoint Security supports all data in-flight from any operating system.
IBM Z Feature Code 1146, Endpoint Security Enablement turns on the Fibre Channel
Endpoint Security panels on the HMC so setup can be done.
Based tightly on the Fibre Channel–Security Protocol-2 (FC-SP-2) standard, which provides
various means of authentication and essentially maps IKEv2 constructs for security
association management and derivation of encryption keys to Fibre Channel Extended Link
Services, the IBM Fibre Channel Endpoint Security implementation uses the IBM solution for
key server infrastructure in the storage system (for data at-rest encryption).
IBM Security Guardium Key Lifecycle Manager acts as a trusted authority for key generation
operations and as an authentication server. It provides shared secret key generation in a
relationship between an FC initiator (IBM Z server) and the IBM Storage target. The solution
implements an authentication and key management solution that is called IBM Secure Key
Exchange (SKE), as illustrated in Figure 4-6 on page 207.
Before establishing the connection, each link must be authenticated, and if successful, then it
becomes a trusted connection. A policy sets the rules, for example, enforcing trusted
connections only. If the link goes down, the authentication process starts again. The secure
connection can be enabled automatically if both the IBM Z and IBM Storage endpoints are
encryption-capable.
Data in-flight (from and to IBM Z and IBM Storage servers) is encrypted when it leaves either
endpoint (source), and then it is decrypted at the destination. Encryption and decryption are
done at the FC adapter level. The operating system that is running on the host (IBM Z server)
is not involved in Fibre Channel Endpoint Security related operations. Tools are provided at
the operating system level for displaying information about the encryption status.
The following prerequisites must be met for this optional priced feature:
FICON Express32-4P LX/SX (FC0387/FC0388), FICON Express32S LX/SX
(FC0461/FC0462) or FICON Express16SA LX/SX (FC0436/0437) for both link encryption
and endpoint authentication
DS8910, DS8890 (Power 9 based) or newer with 32GFC (encryption) Host Bus Adapter or
16GFC (authentication-only) Host Bus Adapter
Key server: IBM Security Guardium Key Lifecycle Manager (ISGKLM)
Latest version: 4.2, minimum: IBM Security Key Lifecycle Manager V3.0.1
IBM z17 Endpoint Security Enablement: FC 1146
CPACF enablement (FC 3863)
For more information and implementation details, see IBM Fibre Channel Endpoint Security
for IBM DS8900F and IBM Z, SG24-8455 and IBM DS8000 Encryption for Data at Rest,
Transparent Cloud Tiering, and Endpoint Security (DS8000 Release 10.0), REDP-4500.
Given the increasing importance of providing the highest level of data protection to IBM Z
clients, IBM intends to require the use of IBM Fibre Channel Endpoint Security for all
FICON connected devices starting with the release of IBM zNext+1. This direction will
require investment by IBM Infrastructure teams, FICON storage vendors and IBM Z clients
as an important step towards continuing to secure the most mission critical workloads. In
support of this direction, all new FICON-connected storage systems introduced after
December 31, 2024, will be required to support IFCES to connect to zNext+1.
The Coupling Express3 Long Reach (CE3 LR) is installed in the PCIe+ I/O drawer. It is a native PCIe feature in the IBM z17 and belongs to a resource group that is managed by the Integrated Firmware Processor (IFP).
The native PCIe features should be ordered in pairs for redundancy. The features are assigned to one of the four resource groups (RGs) that run on the IFP according to their physical location in the PCIe+ I/O drawer. The resource groups provide management functions and virtualization functions.
If two features of the same type are installed, one always is managed by resource group 1
(RG 1) or resource group 3 (RG3); the other feature is managed by resource group 2 (RG 2)
or resource group 4 (RG 4). This configuration provides redundancy if one of the features or
resource groups needs maintenance or fails.
The IFP and RGs support the following infrastructure management functions:
Firmware update of adapters and resource groups
Error recovery and failure data collection
Diagnostic and maintenance tasks
The channel subsystem directs the flow of information between I/O devices and main storage. It allows data processing to proceed concurrently with I/O processing, which relieves the data processors (central processors (CPs) and Integrated Facility for Linux (IFL) processors) of the task of communicating directly with I/O devices.
The channel subsystem includes subchannels, I/O devices that are attached through control
units, and channel paths between the subsystem and control units. For more information
about the channel subsystem, see 5.1.1, “Multiple logical channel subsystems”.
The design of IBM Z servers offers considerable processing power, memory size, and I/O connectivity. In support of the larger I/O capability, the CSS structure was scaled up by introducing multiple logical channel subsystems (LCSSs) with the IBM z990 and multiple subchannel sets (MSS) with the IBM z9.
An overview of the channel subsystem for IBM z17 servers is shown in Figure 5-1. IBM z17
ME1 systems are designed to support up to six logical channel subsystems, each with four
subchannel sets and up to 256 channels.
All channel subsystems are defined within a single configuration, which is called I/O
configuration data set (IOCDS). The IOCDS is loaded into the hardware system area (HSA)
during a central processor complex (CPC) power-on reset (POR) to start all of the channel
subsystems.
On IBM z17 ME1 systems, the HSA is pre-allocated in memory with a fixed size of 884 GB, which is in addition to the customer-purchased memory. This fixed-size memory for the HSA eliminates the requirement for more planning of the initial I/O configuration and planning for future I/O expansions.
CPC drawer repair: The HSA can be moved from one CPC drawer to a different drawer in
an enhanced availability configuration as part of a concurrent CPC drawer repair (CDR)
action.
The following objects are always reserved in the IBM z17 ME1 HSA during POR, regardless
of whether they are defined in the IOCDS for use:
Six LCSSs
The introduction of multiple LCSSs enabled an IBM Z server to have more than one channel subsystem logically, while each logical channel subsystem maintains the same manner of I/O processing. A logical partition (LPAR) is attached to a specific logical channel subsystem, which makes the extension to multiple logical channel subsystems transparent to the operating systems and applications. The multiple image facility (MIF) in the structure enables resource sharing across LPARs within a single LCSS or across LCSSs.
The multiple LCSS structure extended the total I/O connectivity of IBM Z servers to support a balanced configuration for the growth of processor and I/O capabilities.
Note: The phrase channel subsystem has the same meaning as logical channel subsystem in this section, unless otherwise stated.
Subchannels
A subchannel provides the logical appearance of a device to the program and contains the
information that is required for sustaining a single I/O operation. Each device is accessible by
using one subchannel in a channel subsystem to which it is assigned according to the active
IOCDS of the IBM Z server.
In z/Architecture, the first subchannel set of an LCSS can have 63.75 K subchannels (with
0.25 K reserved), with a subchannel set ID (SSID) of 0. By enabling the multiple subchannel
sets, extra subchannel sets are available to increase the device addressability of a channel
subsystem.
For more information about multiple subchannel sets, see 5.1.2, “Multiple subchannel sets”
on page 212.
Channel paths
A channel path provides a connection between the channel subsystem and control units that
allows the channel subsystem to communicate with I/O devices. Depending on the type of
connections, a channel path might be a physical connection to a control unit with I/O devices,
such as FICON, or an internal logical control unit, such as HiperSockets.
Each channel path in a channel subsystem features a unique 2-digit hexadecimal identifier that is known as a channel-path identifier (CHPID), which ranges 00 - FF. Therefore, a total of 256 channel paths can be defined in each channel subsystem.
A port on an I/O feature card has a unique physical channel identifier (PCHID), according to the physical location of the I/O feature adapter and the sequence of the port on the adapter.
In addition, a port on a fan-out adapter has a unique adapter identifier (AID), according to the
physical location of this fan-out adapter, and the sequence of this port on the adapter.
A CHPID is assigned to a physical port by defining the corresponding PCHID or AID in the I/O
configuration definitions.
Control units
A control unit provides the logical capabilities that are necessary to operate and control an
I/O device. It adapts the characteristics of each device so that it can respond to the standard
form of control that is provided by the CSS.
A control unit can be housed separately or can be physically and logically integrated with the
I/O device, channel subsystem, or within the IBM Z server.
I/O devices
An I/O device provides external storage, a means of communication between
data-processing systems, or a means of communication between a system and its
environment. In the simplest case, an I/O device is attached to one control unit and is
accessible through one or more channel paths that are connected to the control unit.
Each subchannel has a unique four-digit hexadecimal number 0x0000 - 0xFFFF. Therefore, a
single subchannel set can address and access up to 64 K I/O devices.
The IBM z17 ME1 systems support four subchannel sets for each logical channel subsystem.
The IBM z17 ME1 system can access a maximum of 255.74 K devices for a logical channel
subsystem and a logical partition and the programs that are running on it.
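The 255.74 K figure can be reproduced from the subchannel set sizes that are described in this chapter: subchannel set 0 provides 63.75 K subchannels (64 K minus the 0.25 K that are reserved) and each of the three additional sets provides 65,535. A minimal Python check:

```python
# Reproduce the "255.74 K devices per LCSS" figure from the subchannel set sizes
# described in this section (IBM z17 ME1: four subchannel sets per LCSS).
SET0_SUBCHANNELS = 64 * 1024 - 256       # 63.75 K; 0.25 K are reserved in set 0
EXTRA_SET_SUBCHANNELS = 64 * 1024 - 1    # 65,535 in each additional subchannel set
EXTRA_SETS = 3

total = SET0_SUBCHANNELS + EXTRA_SETS * EXTRA_SET_SUBCHANNELS
# 261,885 subchannels, which the text rounds to approximately 255.74 K devices
print(f"{total} subchannels per LCSS (~{total / 1024:.2f} K addressable devices)")
```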
Note: Do not confuse the multiple subchannel sets function with multiple channel
subsystems.
Subchannel number
The subchannel number is a four-digit hexadecimal number 0x0000 - 0xFFFF, which is
assigned to a subchannel within a subchannel set of a channel subsystem. Subchannels in
each subchannel set are always assigned subchannel numbers within a single range of
contiguous numbers.
With the subchannel numbers, a program that is running on an LPAR (for example, an
operating system) can specify all I/O functions relative to a specific I/O device by designating
a subchannel that is assigned to the I/O devices.
Normally, subchannel numbers are used only in communication between the programs and
the channel subsystem.
Device number
A device number is an arbitrary number 0x0000 - 0xFFFF, which is defined by a system
programmer in an I/O configuration for naming an I/O device. The device number must be
unique within a subchannel set of a channel subsystem. It is assigned to the corresponding
subchannel by channel subsystem when an I/O configuration is activated. Therefore, a
subchannel in a subchannel set of a channel subsystem includes a device number together
with a subchannel number for designating an I/O operation.
The device number provides a means to identify a device that is independent of any
limitations that are imposed by the system model, configuration, or channel-path protocols.
A device number also can be used to designate an I/O function to a specific I/O device.
Because it is an arbitrary number, it can easily be fit into any configuration management and
operating management scenarios. For example, a system administrator can set all of the
z/OS systems in an environment to device number 1000 for their system RES volumes.
With multiple subchannel sets, a subchannel is assigned to a specific I/O device by the
channel subsystem with an automatically assigned subchannel number and a device number
that is defined by user. An I/O device can always be identified by an SSID with a subchannel
number or a device number. For example, a device with device number AB00 of subchannel
set 1 can be designated as 1AB00.
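The following Python sketch composes and parses that designation format (the subchannel set ID prefixed to the four-digit hexadecimal device number); the helper names are illustrative only.

```python
# Compose/parse the "SSID + device number" designation used in the example above,
# where device number AB00 in subchannel set 1 is designated as 1AB00.
def designate(ssid: int, device_number: int) -> str:
    """Return the designation string for a device number within a subchannel set."""
    if not 0 <= ssid <= 3:
        raise ValueError("IBM z17 ME1 supports subchannel sets 0 - 3")
    if not 0 <= device_number <= 0xFFFF:
        raise ValueError("device numbers range 0x0000 - 0xFFFF")
    return f"{ssid:X}{device_number:04X}"

def parse(designation: str) -> tuple[int, int]:
    """Split a designation such as '1AB00' back into (SSID, device number)."""
    return int(designation[0], 16), int(designation[1:], 16)

assert designate(1, 0xAB00) == "1AB00"
assert parse("1AB00") == (1, 0xAB00)
```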
Normally, the subchannel number is used by the programs to communicate with the channel
subsystem and I/O device, whereas the device number is used by a system programmer,
operator, and administrator.
For the extra subchannel sets enabled by the MSS facility, each has 65,535 subchannels (64 K minus one) for specific types of devices. These extra subchannel sets are referred to as alternative subchannel sets in z/OS.
Also, a device that is defined in an alternative subchannel set is considered a special device,
which normally features a special device type in the I/O configuration.
The use of another subchannel set for these special devices helps reduce the number of
devices in the subchannel set 0, which increases the growth capability for accessing more
devices.
This configuration allows the users of Metro Mirror (formerly PPRC) secondary devices that
are defined by using the same device number and a new device type in an alternative
subchannel set to be used for IPL, an I/O definition file (IODF), and stand-alone memory
dump volumes, when needed.
The output of the z/OS D IOS,CONFIG(ALL) command, which displays the active I/O configuration, including the configured subchannel sets for each CSS, is shown in the following example:
D IOS,CONFIG(ALL)
IOS506I 14.26.53 I/O CONFIG DATA 606
ACTIVE IODF DATA SET = SYS9.IODF63
CONFIGURATION ID = ITSO EDT ID = 01
TOKEN: PROCESSOR DATE TIME DESCRIPTION
SOURCE: PAVO 25-02-17 11:35:59 SYS9 IODF63
ACTIVE CSS: 3 SUBCHANNEL SETS CONFIGURED: 0, 1, 2, 3
CHANNEL MEASUREMENT BLOCK FACILITY IS ACTIVE
SUBCHANNEL SET FOR PPRC PRIMARY: INITIAL = 0 ACTIVE = 0
HYPERSWAP FAILOVER HAS OCCURRED: NO
LOCAL SYSTEM NAME (LSYSTEM): PAVO
HARDWARE SYSTEM AREA AVAILABLE FOR CONFIGURATION CHANGES
PHYSICAL CONTROL UNITS 8061
CSS 0 - LOGICAL CONTROL UNITS 4027
SS 0 SUBCHANNELS 59510
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 1 - LOGICAL CONTROL UNITS 4005
SS 0 SUBCHANNELS 59218
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 2 - LOGICAL CONTROL UNITS 4025
SS 0 SUBCHANNELS 59410
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 3 - LOGICAL CONTROL UNITS 4026
SS 0 SUBCHANNELS 60906
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 4 - LOGICAL CONTROL UNITS 4043
SS 0 SUBCHANNELS 61266
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
CSS 5 - LOGICAL CONTROL UNITS 4088
SS 0 SUBCHANNELS 65280
SS 1 SUBCHANNELS 65535
SS 2 SUBCHANNELS 65535
SS 3 SUBCHANNELS 65535
ELIGIBLE DEVICE TABLE LATCH COUNTS
0 OUTSTANDING BINDS ON PRIMARY EDT
Although a shared channel path can be shared by LPARs within a same LCSS, a spanned
channel path can be shared by LPARs within and across LCSSs.
By assigning the same CHPID from different LCSSs to the same channel path (for example, a
PCHID), the channel path can be accessed by any LPARs from these LCSSs at the same
time. The CHPID is spanned across those LCSSs. The use of spanned channels paths
decreases the number of channels that are needed in an installation of IBM Z servers.
A sample of channel paths that are defined as dedicated, shared, and spanned is shown in
Figure 5-3.
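As a minimal illustration of these definitions, the following Python sketch models a channel path that is defined with the same CHPID number in several LCSSs and mapped to one PCHID; the class and field names are purely illustrative.

```python
# Minimal model of channel-path sharing scope, following the definitions above:
# a shared CHPID is available to LPARs of one LCSS, whereas a spanned CHPID maps
# the same CHPID number from several LCSSs onto the same PCHID.
from dataclasses import dataclass

@dataclass
class ChannelPath:
    chpid: int            # CHPID number, 00 - FF within each LCSS
    pchid: int            # physical channel to which the CHPID is assigned
    lcss_list: list[int]  # LCSSs in which this CHPID is defined

    @property
    def spanned(self) -> bool:
        # Defined in more than one LCSS -> spanned across those LCSSs
        return len(self.lcss_list) > 1

fc40 = ChannelPath(chpid=0x40, pchid=0x100, lcss_list=[0, 1, 2])
print(f"CHPID {fc40.chpid:02X} on PCHID {fc40.pchid:03X} spanned: {fc40.spanned}")
```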
Channel spanning is supported for internal links (HiperSockets and IC links) and for specific
types of external links. External links that are supported on IBM z17 ME1 systems include
FICON Express32-4P, FICON Express32S, FICON Express16SA, OSA-Express7S 1.2, OSA-Express7S, Network Express, and Coupling Links.
LPAR name
The LPAR name is defined as partition name parameter in the RESOURCE statement of an
I/O configuration. The LPAR name must be unique across the server.
MIF image ID
The MIF image ID is defined as a parameter for each LPAR in the RESOURCE statement of
an I/O configuration. It ranges 1 - F, and must be unique within an LCSS. However, duplicates
are allowed in different LCSSs.
If a MIF image ID is not defined, an arbitrary ID is assigned when the I/O configuration is activated. The IBM z17 server supports a total of 85 LPARs that can be defined.
Each LCSS of an IBM z17 ME1 system can support the following numbers of LPARs:
LCSS0 to LCSS4 support 15 LPARs each, and the MIF image ID is 1 - F.
LCSS5 supports 10 LPARs, and the MIF image IDs are 1 - A.
LPAR ID
The LPAR ID is defined by a user in an image activation profile for each LPAR. It is a 2-digit
hexadecimal number 00 - 7F. The LPAR ID must be unique across the server.
Although it is arbitrarily defined by the user, an LPAR ID often is the CSS ID concatenated to
its MIF image ID, which makes the value more meaningful for the system administrator. For
example, an LPAR with LPAR ID 1A defined in that manner means that the LPAR is defined in
LCSS1, with the MIF image ID A.
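The following Python sketch applies that convention together with the per-LCSS limits stated above (LCSS0 - LCSS4 each support MIF image IDs 1 - F, LCSS5 supports 1 - A, for a total of 85 LPARs). The convention itself is optional; the LPAR ID remains a user-defined value in the image activation profile.

```python
# Build an LPAR ID from the CSS ID and MIF image ID, following the common
# convention described above (for example, LCSS1 + MIF image ID A -> LPAR ID 1A).
MAX_MIF_ID = {0: 0xF, 1: 0xF, 2: 0xF, 3: 0xF, 4: 0xF, 5: 0xA}  # per-LCSS limits

def lpar_id(css_id: int, mif_image_id: int) -> str:
    """Return the conventional 2-digit hexadecimal LPAR ID for (CSS, MIF image ID)."""
    if css_id not in MAX_MIF_ID:
        raise ValueError("IBM z17 ME1 supports LCSS0 - LCSS5")
    if not 1 <= mif_image_id <= MAX_MIF_ID[css_id]:
        raise ValueError(f"MIF image IDs for LCSS{css_id} range 1 - {MAX_MIF_ID[css_id]:X}")
    return f"{css_id:X}{mif_image_id:X}"

assert lpar_id(1, 0xA) == "1A"
# Total definable LPARs: 15 per LCSS for LCSS0 - LCSS4, plus 10 for LCSS5 = 85
assert sum(MAX_MIF_ID.values()) == 85
```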
Note: Specific functions might require specific levels of an operating system, PTFs, or
both.
(Figure: IBM z17 DPU structure, showing the DPU CPUs and the IBM ASIC running the I/O firmware, PCIe (x16) connections to the switches and I/O cards in the I/O drawers, and 2-port and 4-port SAN / Ethernet adapters.)
IBM z17 DPU encompasses a comprehensive refactoring of the I/O subsystem. The goals of this effort are as follows:
Deliver improved IBM Z platform efficiencies
Improve peak I/O start rates and reduce latencies
Provide focused per-port recovery for the most common types of failures
Improve recurring networking costs for customers by providing integrated RoCE SMC-R and OSA support
Provide single-port serviceability for all DPU-managed I/O adapters
Reduce dependence on the PCI support partition by providing physical function support for PCIe Native use cases
The following protocols are supported and will run on the DPU:
Legacy Mode FICON
HPF (High Performance FICON)
FCP (SCSI over fiber channel)
OSA (Open Systems Adapter)
OSA-ICC (Open Systems Adapter - Integrated Console Controller)
Physical function support for Native Ethernet exploitation.
This support also allows a port to be shared between a PCIe Native protocol and OSA.
This chapter also introduces the principles of cryptography and describes the implementation
of cryptography in the hardware and software architecture of IBM Z. It also describes the
features that IBM z17 offers.
The IBM z17 uses quantum-safe technologies to help protect your business-critical
infrastructure and data from potential attacks.
The IBM z17 delivers a transparent and consumable approach that enables extensive
(pervasive) encryption of data in flight and at rest, with the goal of substantially simplifying
data security and reducing the costs that are associated with protecting data while achieving
compliance mandates.
Naming: The IBM z17, Machine Type 9175 (M/T 9175), Model ME1 is further identified as
IBM z17, unless otherwise specified.
IBM z16 introduced the new PCI Crypto Express8S feature (that can be managed by a new
Trusted Key Entry (TKE) workstation) together with a further improved CPACF Coprocessor.
In addition, the IBM Common Cryptographic Architecture (CCA) and the IBM Enterprise
PKCS #11 (EP11) Licensed Internal Code (LIC) were enhanced.
The new features support new standards and meet the following compliance requirements:
Payment Card Industry (PCI) Hardware Security Module (HSM) certification to strengthen
the cryptographic standards for attack resistance in the payment card systems area.
PCI HSM certification is available for Crypto Express8S, Crypto Express7S, and Crypto
Express6S.
National Institute of Standards and Technology (NIST) through the Federal Information
Processing Standard (FIPS) standard to implement guidance requirements.
Common Criteria EP11 EAL4.
German Banking Industry Commission (GBIC).
Visa Format Preserving Encryption (VFPE) for credit card numbers.
Enhanced public key Elliptic Curve Cryptography (ECC) for users such as Chrome,
Firefox, and Apple’s iMessage.
Accredited Standards Committee X9 Inc Technical Report-34 (ASC X9 TR-34)
For the HSM, IBM z17 uses a new adapter that is released as a break-in replacement for the
current design. The new adapter fixes end-of-life issues on multiple components and provides
a fix for the battery power circuit. No firmware update is included, and the adapter is backward
compatible with the current design. The Crypto Express8S adapter is used by IBM z17 with the
following enhancements:
• COP: HMAC support / acceleration via hardware
• SHA3, SHAKE improved ICV, OCV handling
• COP: KM-XTS performance improvement
• True Random Number Generator (TRNG) - Entropy Speed-up and Enhancements
All IBM z16 and IBM z17 enhancements are described in this chapter.
IBM z16 and IBM z17 include standard cryptographic hardware and optional cryptographic
features for flexibility and growth capability. IBM has a long history of providing hardware
cryptographic solutions. This history stretches from the development of the Data Encryption
Standard (DES) in the 1970s to the Crypto Express tamper-sensing and tamper-responding
programmable features.
Crypto Express meets the US Government’s highest security rating of FIPS 140-3 Level 4
certification1. It also meets several other security ratings, such as the Common Criteria for
Information Technology Security Evaluation, the PCI HSM criteria, and the criteria for German
Banking Industry Commission (formerly known as Deutsche Kreditwirtschaft evaluation).
The cryptographic functions include the full range of cryptographic operations that are
necessary for local and global business and financial institution applications. User Defined
Extensions (UDX) allow you to add custom cryptographic functions to the functions that IBM
z17 systems offer.
Also, it is necessary to ensure that a message cannot be corrupted (message integrity), while
ensuring that the sender and the receiver really are the persons who they claim to be. Over
time, several methods were used to achieve these objectives, with more or less success.
Many procedures and algorithms for encrypting and decrypting data were developed that are
increasingly complicated and time-consuming.
It must be impossible for the owner of the data or the sender of the message to deny
authorship. Nonrepudiation ensures that both sides of a communication know that the
other side agreed to what was exchanged, and not someone else. This specification
implies a legal liability and contractual obligation, which is the same as a signature on a
contract.
These goals should all be possible without unacceptable overhead to the communication. The
goal is to keep the system secure, manageable, and productive.
The basic data protection method is achieved by encrypting and decrypting the data, while
hash algorithms, message authentication codes (MACs), digital signatures, and certificates
are used for authentication, data integrity, and nonrepudiation.
When encrypting a message, the sender transforms the clear text into a secret text. Doing so
requires the following main elements:
The algorithm is the mathematical or logical formula that is applied to the key and the
clear text to deliver a ciphered result, or to take a ciphered text and deliver the original
clear text.
The key ensures that the result of encrypting data with the algorithm is the same only
when the same key is used, and that decryption of a ciphered message yields the original
clear message only when the correct key is used. Therefore, the receiver of a ciphered
message must know which algorithm and key must be used to decrypt the message.
In other words, the security of a cryptographic system should depend on the secrecy of the
key, so the key must be kept secret. Therefore, the secure management of keys is the primary
task of modern cryptographic systems.
6.2.3 Keys
The keys that are used for the cryptographic algorithms often are sequences of numbers and
characters, but can also be any other sequence of bits. The length of a key influences the
security (strength) of the cryptographic method. The longer the key that is used, the more
difficult it is to compromise a cryptographic algorithm.
For example, the DES (symmetric key) algorithm uses keys with a length of 56 bits,
Triple-DES (TDES) uses keys with a length of 112 bits, and Advanced Encryption Standard
(AES) uses keys of 128, 192, or 256 bits. The asymmetric key RSA algorithm (named after its
inventors Rivest, Shamir, and Adleman) uses keys with a length of 1024, 2048, or 4096 bits.
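To put these key lengths into perspective, the keyspace doubles with every additional key bit; for example, moving from a 56-bit DES key to a 128-bit AES key multiplies the number of possible keys by 2^72. The following small C sketch (illustrative only) prints that arithmetic:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Keyspace sizes grow exponentially with key length */
    long double des    = ldexpl(1.0L, 56);   /* 2^56 possible DES keys      */
    long double aes128 = ldexpl(1.0L, 128);  /* 2^128 possible AES-128 keys */

    printf("DES keyspace:     %.3Le keys\n", des);
    printf("AES-128 keyspace: %.3Le keys\n", aes128);
    printf("AES-128 has 2^72 (about %.3Le) times more keys than DES\n",
           aes128 / des);
    return 0;
}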
In modern cryptography, keys must be kept secret. Depending on the effort that is made to
protect the key, keys are classified into the following levels:
A clear key is a key that is transferred from the application in clear text to the cryptographic
function. The key value is stored in the clear (at least briefly) somewhere in unprotected
memory areas. Therefore, under specific circumstances, the key can be made available to
someone who accesses this memory area.
This risk must be considered when clear keys are used. However, many applications exist
where this risk can be accepted. For example, the transaction security for the (widely
used) encryption methods Secure Sockets Layer (SSL) and Transport Layer Security
(TLS) is based on clear keys.
The value of a protected key is stored only in clear in memory areas that cannot be read by
applications or users. The key value does not exist outside of the physical hardware,
although the hardware might not be tamper-resistant. The principle of protected keys is
unique to IBM Z. For more information, see 6.4.2, “CPACF protected key” on page 234.
For a secure key, the key value does not exist in clear format outside of a special hardware
device (HSM), which must be secured and tamper-resistant. A secure key is protected
from disclosure and misuse, and can be used for the trusted execution of cryptographic
algorithms on highly sensitive data. If used and stored outside of the HSM, a secure key
must be encrypted with a master key, which is created within the HSM and never leaves
the HSM.
Because a secure key must be handled in a special hardware device, the use of secure
keys usually is far slower than the use of clear keys, as shown in Figure 6-1 on page 226.
Figure 6-1 Three levels of protection with three levels of speed (a secure key provides the most protection, a clear key provides the most speed, and a protected key is in between)
6.2.4 Algorithms
The following algorithms of modern cryptography are differentiated based on whether they
use the same key for the encryption of the message as for the decryption:
Symmetric algorithms use the same key to encrypt and decrypt data. The function that is
used to decrypt the data is the opposite of the function that is used to encrypt the data.
Because the same key is used on both sides of an operation, it must be negotiated
between both parties and kept secret. Therefore, symmetric algorithms are also known as
secret key algorithms.
The main advantage of symmetric algorithms is that they are fast and therefore can be
used for large amounts of data, even if they are not run on specialized hardware. The
disadvantage is that the key must be known by both sender and receiver of the messages,
which implies that the key must be exchanged between them. This key exchange is a
weak point that can be attacked.
Prominent examples for symmetric algorithms are DES, TDES, and AES.
Asymmetric algorithms use two distinct but related keys: the public key and the private
key. As the names imply, the private key must be kept secret, whereas the public key is
shown to everyone. However, with asymmetric cryptography, it is not important who sees
or knows the public key. Whatever is done with one key can be undone by the other key
only.
For example, data that is encrypted by the public key can be decrypted by the associated
private key only, and vice versa. Unlike symmetric algorithms, which use distinct functions
for encryption and decryption, asymmetric algorithms use only one function.
Depending on the values that are passed to this function, it encrypts or decrypts the data.
Asymmetric algorithms are also known as public key algorithms.
Asymmetric algorithms use complex calculations and are relatively slow (about 100 - 1000
times slower than symmetric algorithms). Therefore, such algorithms are not used for the
encryption of bulk data.
Because the private key is never exchanged between the communicating parties, asymmetric
algorithms are less vulnerable than symmetric algorithms. Asymmetric algorithms are mainly used for
authentication, digital signatures, and for the encryption and exchange of secret keys,
which in turn are used to encrypt bulk data with a symmetric algorithm.
Examples for asymmetric algorithms are RSA and the elliptic curve algorithms.
One-way algorithms are not cryptographic functions. They do not use keys, and they can
scramble data only, not de-scramble it. These algorithms are used extensively within
cryptographic procedures for digital signing and tend to be developed and governed by
using the same principles as cryptographic algorithms. One-way algorithms are also
known as hash algorithms.
The most prominent one-way algorithms are the Secure Hash Algorithms (SHA).
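The three classes of algorithms typically are combined in practice: a symmetric session key encrypts the bulk data, an asymmetric algorithm protects the exchange of that session key, and a one-way hash provides an integrity check. The following C sketch illustrates this combination by using the OpenSSL EVP interface; OpenSSL is used here only as a generally available example library (on Linux on IBM Z, its clear-key AES and SHA operations can run on CPACF), and the sketch is not a description of any ICSF or CCA interface. Error handling is omitted for brevity.

#include <openssl/evp.h>
#include <openssl/rand.h>
#include <openssl/rsa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char data[] = "bulk data protected with a symmetric algorithm";
    unsigned char aes_key[32], iv[16];
    RAND_bytes(aes_key, sizeof(aes_key));          /* secret (session) key */
    RAND_bytes(iv, sizeof(iv));

    /* Symmetric: AES-256-CBC encrypts the bulk data with the session key */
    unsigned char ct[128];
    int len, ct_len;
    EVP_CIPHER_CTX *c = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(c, EVP_aes_256_cbc(), NULL, aes_key, iv);
    EVP_EncryptUpdate(c, ct, &len, data, (int)strlen((char *)data));
    ct_len = len;
    EVP_EncryptFinal_ex(c, ct + len, &len);
    ct_len += len;
    EVP_CIPHER_CTX_free(c);

    /* Asymmetric: a 2048-bit RSA key pair; the public key wraps (encrypts)
     * the AES session key so that only the private-key owner can recover it */
    EVP_PKEY *rsa = NULL;
    EVP_PKEY_CTX *kctx = EVP_PKEY_CTX_new_id(EVP_PKEY_RSA, NULL);
    EVP_PKEY_keygen_init(kctx);
    EVP_PKEY_CTX_set_rsa_keygen_bits(kctx, 2048);
    EVP_PKEY_keygen(kctx, &rsa);
    EVP_PKEY_CTX_free(kctx);

    unsigned char wrapped[256];
    size_t wrapped_len = sizeof(wrapped);
    EVP_PKEY_CTX *ectx = EVP_PKEY_CTX_new(rsa, NULL);
    EVP_PKEY_encrypt_init(ectx);
    EVP_PKEY_CTX_set_rsa_padding(ectx, RSA_PKCS1_OAEP_PADDING);
    EVP_PKEY_encrypt(ectx, wrapped, &wrapped_len, aes_key, sizeof(aes_key));
    EVP_PKEY_CTX_free(ectx);

    /* One-way: SHA-256 digest of the ciphertext (no key, cannot be reversed) */
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int md_len;
    EVP_Digest(ct, (size_t)ct_len, md, &md_len, EVP_sha256(), NULL);

    printf("ciphertext: %d bytes, wrapped key: %zu bytes, digest: %u bytes\n",
           ct_len, wrapped_len, md_len);
    EVP_PKEY_free(rsa);
    return 0;
}

In this pattern, the RSA-wrapped session key is sent to the receiver, who recovers it with the private key and then decrypts the bulk data with the symmetric algorithm.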
The cryptographic hardware that is supported on IBM z17 is shown in Figure 6-2. These
features are described in this chapter.
Figure 6-2 Cryptographic hardware supported on IBM z17 (each PU in the CPC drawer DCM implements the CPACF function; Crypto Express8S adapters, shown with two ports, are installed in the PCIe I/O drawers)
Implemented in every processor unit (PU) or core in a central processor complex (CPC) is a
cryptographic coprocessor that can be used2 for cryptographic algorithms that use clear
keys or protected keys. For more information, see 6.4, “CP Assist for Cryptographic
Functions” on page 231.
The Crypto Express8S adapter is an HSM that is placed in the PCIe+ I/O drawer of IBM z17.
It also supports cryptographic algorithms by using secure keys. For more information, see 6.5,
“Crypto Express8S” on page 236.
Finally, a TKE workstation is required for entering keys in a secure way into the Crypto
Express8S HSM, which often also is equipped with smart card readers. For more information,
see 6.6, “Trusted Key Entry workstation” on page 253.
The feature codes and purpose of the cryptographic hardware features that are available for
IBM z17 are listed in Table 6-1.
This feature is a prerequisite to use CPACF (except for SHA-1, SHA-224, SHA-256,
SHA-384, and SHA-512) and the PCIe Crypto Express features.
These features are optional. The 2-port feature contains two IBM 4770 PCIe
Cryptographic Coprocessors (HSMs), which can be independently defined as
Coprocessor or Accelerator. New feature. Not supported on previous generations
IBM Z systems.
A TKE, smart card Reader and latest available level smart cards are required to operate
the Crypto adapter card in EP11 mode.
These features are optional. The 2-port feature contains two IBM 4770 PCIe
Cryptographic Coprocessors (HSMs), which can be independently defined as
Coprocessor or Accelerator. New feature. Not supported on previous generations
IBM Z systems.
A TKE, Smart Card Reader and latest available level smart cards are required to
operate the Crypto adapter card in EP11 mode.
Carry forward from IBM z16. This feature contains two IBM 4769 PCIe Cryptographic
Coprocessors (HSMs), which can be independently defined as Coprocessor or
Accelerator.
Carry forward from IBM z16. This feature contains one IBM 4769 PCIe Cryptographic
Coprocessor (HSM), which can be defined as Coprocessor or Accelerator.
Included with the TKE tower workstation FC 0058 and the TKE rack-mounted
workstation FC 0057 for IBM z17. Earlier versions of TKE features (feature codes: 0087,
0088, 0085, and 0086) also can be upgraded to TKE 10.1 LIC, adding FC 0851
(IBM 4770 PCIeCC) if the TKE is assigned to an IBM z17 and manages Crypto
Express8S.
The stand-alone crypto adapter is required for TKE upgrade from FC 0086 and FC 0088
TKE tower, or FC 0085 and FC 0087 TKE Rack Mount when carry forward these
features to IBM z17.
TKE Tower FC 0088 can be carried forward to IBM z17. It requires IBM 4770 PCIeCC
(FC 0851) for compatibility with TKE LIC 10.1 (FC 0883) and for managing Crypto
Express8S (FC 0144 = FC 0088 + FC 0851 + FC 0883).
TKE Rack Mount FC 0087 can be carried forward to IBM z17. It requires IBM 4770
PCIeCC (FC 0851) for compatibility with TKE LIC 10.1 (FC 0883) and for managing
Crypto Express8S (FC 0145 = FC 0087 + FC 0851 + FC 0883).
TKE Rack Mount FC 0085 can be carried forward to IBM z17. It requires IBM 4770
PCIeCC (FC 0851) for compatibility with TKE LIC 10.1 (FC 0883) and for managing
Crypto Express8S (FC 0233 = FC 0085 + FC 0851 + FC 0883).
TKE Tower FC 0086 can be carried forward to IBM z17. It requires IBM 4770 PCIeCC
(FC 0851) for compatibility with TKE LIC 10.1 (FC 0883) and for managing Crypto
Express8S (FC 0234 = FC 0086 + FC 0851 + FC 0883).
Access to information in the smart card is protected by a PIN. One feature code
includes two smart card readers, two cables to connect to the TKE workstation, and 20
smart cards.
Access to information in the smart card is protected by a PIN. Carry forward with
existing cards (non-FIPS).
This card allows the TKE to support zones with EC 521 key strength (EC 521 strength
for Logon Keys, Authority Signature Keys, and EP11 signature keys).
Carry forward only to IBM z17. Ten smart cards are included.
a. The maximum number of combined features of all types cannot exceed 60 HSMs on an IBM
z17 ME1 (any combination of single and dual HSM Crypto Express features). Therefore, the
maximum number for Feature Code 0908 is 30; for all other (single HSM) types, it is 16 for an
IBM z17 system.
A TKE includes support for AES encryption algorithm with 256-bit master keys and key
management functions to load or generate master keys to the cryptographic coprocessor.
If the TKE workstation is chosen to operate the Crypto Express8S adapter in an IBM z17,
TKE workstation with the TKE 10.1 LIC is required. For more information, see 6.6, “Trusted
Key Entry workstation” on page 253.
Important: Products that include any of the cryptographic feature codes contain
cryptographic functions that are subject to special export licensing requirements by the US
Department of Commerce. It is your responsibility to understand and adhere to these
regulations when you are moving, selling, or transferring these products.
To access and use the cryptographic hardware devices that are provided by IBM z17, the
application must use an application programming interface (API) that is provided by the
operating system. In z/OS, the Integrated Cryptographic Service Facility (ICSF) provides the
APIs and is managing the access to the cryptographic devices, as shown in Figure 6-3 on
page 231.
Figure 6-3 Applications, ICSF, and cryptographic hardware on IBM z17 (diagram labels: z/OS software in LPARs x, y, and z with applications and ICSF; Software Crypto (clear key); HSM Crypto (secure key and clear key); Trusted Key Entry (TKE) LPAR (secure key); hypervisor HW/SW)
ICSF is a software component of z/OS. ICSF works with the hardware cryptographic features
and the Security Server (IBM Resource Access Control Facility [IBM RACF®] element) to
provide secure, high-speed cryptographic services in the z/OS environment. ICSF provides
the APIs by which applications request the cryptographic services, and from the CPACF and
the Crypto Express features.
ICSF transparently routes application requests for cryptographic services to one of the
integrated cryptographic engines (CPACF or a Crypto Express feature), depending on
performance or requested cryptographic function. ICSF is also the means by which the
secure Crypto Express features are loaded with master key values, which allows the
hardware features to be used by applications.
The cryptographic hardware that is installed in IBM z17 determines the cryptographic features
and services that are available to the applications.
The users of the cryptographic services call the ICSF API. Some functions are performed by
the ICSF software without starting the cryptographic hardware features. Other functions result
in ICSF going into routines that contain proprietary IBM Z crypto instructions. These
instructions are run by a CPU engine and result in a work request that is generated for a
cryptographic hardware feature.
Figure: z17 PU core
CPACF supports pervasive encryption. Simple policy controls allow businesses to enable
encryption to protect data in mission-critical databases without the need to stop the database
or re-create database objects. Pervasive encryption includes z/OS Dataset Encryption, z/OS
Coupling Facility Encryption, z/VM encrypted hypervisor paging, and z/TPF transparent
database encryption, which use performance enhancements in the hardware.
The CPACF offers a set of symmetric cryptographic functions that enhances the encryption
and decryption performance of clear key operations. These functions are for SSL, virtual
private network (VPN), and data-storing applications that do not require FIPS 140-2 Level 4
security.
CPACF is designed to facilitate the privacy of cryptographic key material when used for data
encryption through key wrapping implementation. It ensures that key material is not visible to
applications or operating systems during encryption operations. For more information, see
6.4.2, “CPACF protected key” on page 234.
The CPACF feature provides hardware acceleration for the following cryptographic services:
Symmetric ciphers:
– DES
– Triple-DES
– AES-128
– AES-192
– AES-256 (all for clear and protected keys)
Elliptic curve cryptography (ECC):
– ECDSA, ECDH, support for the NIST P256, NIST P384, NIST P521
– EdDSA for Ed25519, Ed448 Curves
– ECDH for X25519, X448 Curves
– Key generation for NIST, Ed, and X curves
Hashes/MACs:
– SHA-1
– SHA-224 (SHA-2 or SHA-3 standard)
– SHA-256 (SHA-2 or SHA-3 standard)
– SHA-384 (SHA-2 or SHA-3 standard)
– SHA-512 (SHA-2 or SHA-3 standard)
– SHAKE-128
– SHAKE-256
– GHASH
Random number generator:
– PRNG (3DES based)
– DRNG (NIST SP-800-90A SHA-512 based)
– TRNG (true random number generator)
These functions are provided as problem-state z/Architecture instructions that are directly
available to application programs. These instructions are known as Message-Security Assist
(MSA). When enabled, the CPACF runs at processor speed for every CP, IFL, and zIIP.
For more information about MSA instructions, see z/Architecture Principles of Operation,
SA22-7832.
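Before the MSA instructions are used, software typically verifies that the facilities are available. As an illustration only (assuming Linux on IBM Z with glibc, which exposes the facility indications to user space through the auxiliary vector), the following C sketch checks whether the basic Message-Security Assist facility is advertised:

#include <stdio.h>
#include <sys/auxv.h>   /* getauxval(), AT_HWCAP, and HWCAP_S390_* on s390x glibc */

int main(void)
{
    unsigned long hwcap = getauxval(AT_HWCAP);

#ifdef HWCAP_S390_MSA
    if (hwcap & HWCAP_S390_MSA)
        printf("CPACF / Message-Security Assist is available\n");
    else
        printf("CPACF / Message-Security Assist is not advertised\n");
#else
    /* Not building on Linux on IBM Z: the s390x HWCAP bits are not defined */
    printf("HWCAP_S390_MSA is not defined on this platform (hwcap=0x%lx)\n", hwcap);
#endif
    return 0;
}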
For activating these functions, the CPACF must be enabled by using Feature Code (FC) 3863,
which is available at no charge. Support for the hashing algorithms SHA-1, SHA-256, SHA-384,
and SHA-512 is always enabled.
The following tools might benefit from the throughput improvements for IBM z17 CPACF:
Db2/IMS encryption tool
Db2 built-in encryption
z/OS Communication Server: IPsec/IKE/AT-TLS
z/OS System SSL
z/OS Network Authentication Service (Kerberos)
DFDSS Volume encryption
z/OS Java SDK
The IBM z17 hardware includes the implementation of algorithms as hardware synchronous
operations. This configuration holds the PU processing of the instruction flow until the
operation completes.
For the SHA hashing algorithms and the random number generation algorithms, only clear
keys are used. For the symmetric encryption and decryption DES and AES algorithms and
clear keys, protected keys also can be used. On IBM z17, protected keys require a Crypto
Express adapter that is running in CCA mode.
For more information, see 6.5.2, “Crypto Express8S as a CCA coprocessor” on page 240.
The hashing algorithm SHA-1 and the SHA-2 and SHA-3 support for SHA-224, SHA-256,
SHA-384, and SHA-512 are enabled on all systems and do not require the CPACF
enablement feature. For all other algorithms, the no-charge CPACF enablement feature
(FC 3863) is required.
The CPACF functions are implemented as processor instructions and require operating
system support for use. Operating systems that use the CPACF instructions include z/OS,
z/VM, VSEn V6.3.1 – 21st Century Software, z/TPF, and Linux on IBM Z.
Clear keys process faster than secure keys because the process is done synchronously on
the CPACF. Protected keys blend the security of Crypto Express8S or Crypto Express7S
coprocessors and the performance characteristics of the CPACF. This process allows
protected keys to run closer to the speed of clear keys.
3 PCIeCC: IBM PCIe Cryptographic Coprocessor is the Hardware Security Module (HSM).
CPACF facilitates the continued privacy of cryptographic key material when used for data
encryption. In Crypto Express8S, or Crypto Express7S coprocessors, a secure key is
encrypted under a master key. However, a protected key is encrypted under a wrapping key
that is unique to each LPAR.
Because the wrapping key is unique to each LPAR, a protected key cannot be shared with
another LPAR. By using key wrapping, CPACF ensures that key material is not visible to
applications or operating systems during encryption operations.
CPACF code generates the wrapping key and stores it in the protected area of the hardware
system area (HSA). The wrapping key is accessible only by firmware. It cannot be accessed
by operating systems or applications.
DES/T-DES and AES algorithms are implemented in CPACF code with the support of
hardware assist functions. Two variations of wrapping keys are generated: one for
DES/T-DES keys and another for AES keys.
Wrapping keys are generated during the clear reset each time an LPAR is activated or reset.
No customizable option is available at Support Element (SE) or Hardware Management
Console (HMC) that permits or avoids the wrapping key generation. This function flow for the
Crypto Express8S, and Crypto Express7S adapters is shown in Figure 6-5.
Figure 6-5 CPACF key wrapping for Crypto Express8S, and Crypto Express7S
The CPACF Wrapping Key and the Transport Key for use with Crypto Express8S, and Crypto
Express7S are in a protected area of the HSA that is not visible to operating systems or
applications.
If a Crypto Express coprocessor (CEX8C or CEX7C) is available, a protected key can begin
its life as a secure key. Otherwise, an application is responsible for creating or loading a clear
key value and then using the PCKMO instruction to wrap the key. ICSF is not called by the
application if the CEXxC is not available.
A new segment in the profiles of the CSFKEYS class in IBM RACF restricts which secure
keys can be used as protected keys. By default, all secure keys are considered not eligible to
be used as protected keys. The process that is shown in Figure 6-5 on page 235 considers a
secure key as the source of a protected key.
The source key in this case is stored in the ICSF Cryptographic Key Data Set (CKDS) as a
secure key, which was encrypted under the master key. This secure key is sent to CEX8C, or
CEX7C, to be deciphered and then, sent to the CPACF in clear text.
At the CPACF, the key is wrapped under the LPAR wrapping key, and is then returned to ICSF.
After the key is wrapped, ICSF can keep the protected value in memory. It then passes it to
the CPACF, where the key is unwrapped for each encryption or decryption operation.
The protected key is designed to provide substantial throughput improvements for a large
volume of data encryption and low latency for encryption of small blocks of data. A
high-performance secure key solution, also known as a protected key solution, requires the
ICSF HCR7770 as a minimum release.
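Although the CPACF wrapping operation itself is performed by firmware (by using the PCKMO instruction) and is not visible to software, the general principle of protecting one key under another key can be illustrated with the standard AES key-wrap algorithm. The following C sketch uses the OpenSSL EVP interface purely as an illustration; it is not the CPACF or HSA mechanism that is described above:

#include <openssl/evp.h>
#include <openssl/rand.h>
#include <stdio.h>

int main(void)
{
    unsigned char kek[32], data_key[32];   /* wrapping key and the key to protect */
    RAND_bytes(kek, sizeof(kek));
    RAND_bytes(data_key, sizeof(data_key));

    unsigned char wrapped[40];             /* AES key wrap adds 8 bytes of overhead */
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    /* Key-wrap ciphers must be explicitly allowed in the EVP interface */
    EVP_CIPHER_CTX_set_flags(ctx, EVP_CIPHER_CTX_FLAG_WRAP_ALLOW);
    EVP_EncryptInit_ex(ctx, EVP_aes_256_wrap(), NULL, kek, NULL);
    EVP_EncryptUpdate(ctx, wrapped, &len, data_key, sizeof(data_key));
    total = len;
    EVP_EncryptFinal_ex(ctx, wrapped + len, &len);
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    /* Only the wrapped form of the key ever needs to leave the protected environment */
    printf("wrapped key length: %d bytes\n", total);
    return 0;
}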
CPACF Enhancements
Adding several HMAC algorithms to KMAC
– KMAC supports taking a clear key and converting it to a protected key using the
PCKMO instruction
Added new function codes for optimized AES-XTS functionality to KM
Each Crypto Express8S PCI Express adapter (HSM) is available in one of the following
configurations:
Secure IBM CCA coprocessor (CEX8C) for FIPS 140-2 Level 4 certification. This
configuration includes secure key functions. It is optionally programmable to deploy more
functions and algorithms by using UDX.
For more information, see 6.5.2, “Crypto Express8S as a CCA coprocessor” on page 240.
A TKE workstation is required to support the administration of the Crypto Express8S when
it is configured in CCA mode and is running in full Payment Card Industry (PCI)-compliant
mode, for the necessary certificate management in this mode. The TKE is optional in all
other use cases for CCA.
Secure IBM Enterprise PKCS #11 (EP11) coprocessor (CEX8P) implements an
industry-standardized set of services that adheres to the PKCS #11 specification V2.20
and more recent amendments. It was designed for extended FIPS and Common Criteria
evaluations to meet public sector requirements. This new cryptographic coprocessor
mode introduced the PKCS #11 secure key function.
For more information, see 6.5.3, “Crypto Express8S as an EP11 coprocessor” on
page 247.
A TKE workstation is always required to support the administration of the Crypto
Express8S when it is configured in EP11 mode.
Accelerator (CEX8A) for acceleration of public key and private key cryptographic
operations that are used with SSL/TLS processing.
For more information, see 6.5.4, “Crypto Express8S as an accelerator” on page 248.
These modes can be configured by using the SE. The PCIe adapter must be configured
offline to change the mode.
Attention: Switching between configuration modes erases all adapter secrets. The
exception is when you are switching from Secure CCA to accelerator, and vice versa.
The Crypto Express8S feature does not include external ports and does not use optical fiber
or other cables. It does not use channel path identifiers (CHPIDs), but requires one slot in the
PCIe I/O drawer and one physical channel ID (PCHID) for each PCIe cryptographic adapter.
Removal of the feature or adapter zeroizes its content. Access to the PCIe cryptographic
adapter is controlled through the setup in the image profiles on the SE.
Adapter: Although PCIe cryptographic adapters include no CHPID type and are not
identified as external channels, all logical partitions (LPARs) in all channel subsystems can
access the adapter. In IBM z17, up to 85 LPARs are supported per adapter (HSM).
Accessing the adapter requires a setup in the image profile for each partition. The adapter
must be in the candidate list.
Each IBM z17 ME1 supports up to 60 HSMs in total (a combination of Crypto Express8S with 1
or 2 HSMs and Crypto Express7S with 1 or 2 HSMs). Crypto Express7S (1 or 2 ports) is not
orderable for a new build IBM z17 ME1 system, but can be carried forward from an IBM z16 or
IBM z15 by using an MES. Configuration information for Crypto Express8S is listed in
Table 6-2.
The concept of dedicated processor does not apply to the PCIe cryptographic adapter.
Whether configured as a coprocessor or an accelerator, the PCIe cryptographic adapter is
made available to an LPAR. It is made available as directed by the domain assignment and
the candidate list in the LPAR image profile. This availability is not changed by the shared or
dedicated status that is given to the PUs in the partition.
The definition of domain indexes and PCIe cryptographic adapter numbers in the candidate
list for each LPAR must be planned to allow for nondisruptive changes. Consider the following
points:
Operational changes can be made by using the Change LPAR Cryptographic Controls
task from the SE, which reflects the cryptographic definitions in the image profile for the
partition. With this function, adding and removing the cryptographic feature without
stopping a running operating system can be done dynamically.
The same usage domain index can be defined more than once across multiple LPARs.
However, the PCIe cryptographic adapter number that is coupled with the usage domain
index that is specified must be unique across all active LPARs.
The same PCIe cryptographic adapter number and usage domain index combination can be
defined for more than one LPAR (up to 85 for IBM z17). For example, you might define a
configuration for backup situations. However, only one of the LPARs can be active at a time.
For more information, see 6.5.5, “Managing Crypto Express8S” on page 249.
4 SHA-3 was standardized by NIST in 2015. SHA-2 is still acceptable and no indication exists that SHA-2 is vulnerable or that SHA-3 is more or less vulnerable than SHA-2.
Several of these algorithms require a secure key and must run on an HSM. Some of these
algorithms can also run with a clear key on the CPACF. Many standards are supported only
when Crypto Express8S is running in CCA mode. Others are supported only when the
adapter is running in EP11 mode.
The three modes for Crypto Express8S are described next. For more information, see 6.7,
“Cryptographic functions comparison” on page 256.
UDX is supported under a special contract through an IBM or approved third-party service
offering. The Crypto Cards website directs your request to an IBM representative for your
geographic location. A special contract is negotiated between IBM and you for the
development of the UDX code by IBM according to your specifications and an agreed-upon
level of the UDX.
A UDX toolkit for IBM Z is tied to specific versions of the CCA code and the related host code.
UDX is available for the Crypto Express8S, and Crypto Express7S (Secure IBM CCA
coprocessor mode only) features. A UDX migration is no more disruptive than a normal
Microcode Change Level (MCL) or ICSF release migration.
In IBM z17, up to four UDX files can be imported. These files can be imported from a USB
media stick or an FTP server. The UDX configuration window is updated to include a Reset to
IBM Default button.
Consideration: CCA features a new code level starting with z13 systems, and the UDX
clients require a new UDX.
On IBM z17 ME1, Crypto Express8S is delivered with CCA 8.4 firmware and Crypto
Express7S with CCA 7.6 firmware. A new set of cryptographic functions and callable services
is provided by the IBM CCA LIC to enhance the functions that secure financial transactions
and keys. The Crypto Express8S includes the following features:
Greater than 16 domains support up to 85 LPARs on IBM z17 ME1.
PCI PIN Transaction Security (PTS) HSM Certification that is available to IBM z17 in
combination with CEX8S, or CEX7S, features.
VFPE support, which was introduced with z13/z13s systems.
AES PIN support for the German banking industry.
PKA Translate UDX function into CCA.
Verb Algorithm Currency.
CCA improvements
With CCA 8.0, the following CCA improvements for Quantum Safe algorithms were made:
CCA Quantum Safe Algorithm enhancements:
– Updated support for Dilithium signatures:
• Round 2: Level 2 (6,5) and 3 (8,7)
• Round 3: Level 3 (6,5) and 5 (8,7)
– Added support for Kyber key encapsulation in Round 2: Level 5 (1024)
Quantum Safe protected key support for CCA
Host Firmware and CCA now employ a hybrid scheme combining ECDH and Kyber to
accomplish a quantum safe transport key exchange for protected key import.
CCA 8.4
On z17 and Crypto Express8S, CCA 8.4 content adds updates for Quantum Safe Algorithms:
Standardized support conforming to official NIST specifications.
Support for ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism, NIST FIPS
203) and ML-DSA (Module-Lattice-Based Digital Signature Algorithm, NIST FIPS 204) key
generation and use with digital signatures (ML-DSA) and Key Encapsulation (ML-KEM) in
the CCA API.
ML-KEM Key Sizes 768 + 1024.
ML-DSA (6,5) & (8,7).
Also on z17, CCA releases 8.4 and 7.6 add support for RSA keys with modulus bit lengths of
up to 8192 bits.
RSA key length 8192 is above the safety margin for AES-192 bit key transport (ignoring
QC considerations)
RSA key length extension is NOT expected to add safety margin for Post Quantum
Computing considerations
Support available in various CCA services including:
– PKA Key token Build and Generate – CSNDPKB, CSNDPKG
– Digital Signature Generate & Verify – CSNDDSG, CSNDDSV
– RSA encryption/decryption with CSNDPKE, CSNDPKD
CCA 8.2
CCA 8.2 content, released in January 2024 for IBM z16, adds support for the following
features:
TR-31 Import / Export of AES K0-B and K1-B Key Blocks, enhancing key exchange with
popular payment networks;
Import RSA AES Key Wrapped Objects, enhancing key exchange with common cloud
environments;
New CCA service: Multi-MAC Scheme (CSNBMMS), adding a rich financial service
interface for verified PIN change;
RSA-OAEP v2.1 updates, adding flexibility with hash options for asymmetric encryption.
CCA 8.0
The following CCA 8.0 improvements were made:
Performance enhancement for mixed workloads: better performance when one partition
focuses on RSA/ECC and another partition focuses on AES/DES/TDES or financial
operations
Hardware accelerated key unwrap for AES wrapped keys
– Trusted Key Entry workstation (TKE) controlled selection of WRAPENH3 as the default
TDES key token wrapping method for easier management.
CCA 7.1
The following CCA 7.1 improvements were made:
Supported curves:
– NIST Prime Curves: P192, P224, P256, P384, and P521
– Brainpool Curves: 160, 192, 224, 256, 320, 384, and 512
Support in the CCA coprocessor for Edwards curves ED25519 (128-bit security strength)
and ED448 (224-bit security strength)
Although ED25519 is faster, ED448 is more secure. Practically though, 128-bit security
strength is very secure.
Edwards curves are used for digitally signing documents and verifying those signatures.
Edwards curves are less susceptible to side channel attacks when compared to Prime and
Brainpool curves.
ECC Protected Keys
Crypto Express8S and Crypto Express7S provide support in CCA coprocessors to take
advantage of fast DES and AES data encryption speeds in CPACF while maintaining high
levels of security for the secure key material. The key remains encrypted and the key
encrypting key never appears in host storage.
When CCA ECC services are used, ICSF can now take advantage of ECC support in
CPACF (protected key support) for the following curves:
– Prime: P256, P384, P521
– Edwards: ED25519, ED448
CPACF can achieve much faster crypto speeds compared to the coprocessor.
The translation to protected key happens automatically after the attribute is set in the key
token. No application change is required.
New signatures
Support for the Cryptographic Suite for Algebraic Lattices signatures algorithm with the
largest key sizes (MODE=3):
– Public Key size: 1760 bytes
– Private Key Size: 3856 bytes
– Signature Size: 3366 bytes
Lattice-based cryptographic keys are protected by the 256-bit AES MK. The lattice-based
key has a security strength of 128 bits.
TR-31 for Hash-based Message Authentication Code (HMAC)
HMAC keys are used to verify the integrity and authenticity of a message. This support
provides a standard method of exchanging HMAC keys with a partner that uses symmetric
key techniques. The key is exchanged in the standard TR-31 key block format, which can
be used by any crypto system that supports the standard.
Starting with IBM z14, the IBM Z crypto architecture can support up to 256 domains in an
adjunct processor (AP) with the AP extended addressing (APXA) facility that is installed. As
such, the Crypto Express adapters are enhanced to handle 256 domains.
The IBM Z firmware provides up to 85 domains for IBM z17 to customers (to match the
current LPAR maximum). Customers can map individual LPARs to unique crypto domains or
continue to share crypto domains across LPARs.
• When using new Quantum Safe Algorithms and sharing a KDS in a sysplex, ensure
all ICSF PTFs are installed on all systems.
Tip: All supported levels of ICSF automatically detect what hardware cryptographic
capabilities are available where it is running; then, it enables functions accordingly.
No toleration of new hardware is necessary. If you want to use new capabilities,
ICSF support is necessary.
Compliance with the PCI-HSM standard is valuable for customers, particularly those
customers who are in the banking and finance industry. This certification is important to
clients for the following fundamental reasons:
Compliance is increasingly becoming mandatory.
The requirements in PCI-HSM make the system more secure.
If you are a bank, acquirer, processor, or other participant in the payment card systems, the
card brands can impose requirements on you if you want to process their cards. One set of
requirements they are increasingly enforcing is the PCI standards.
The card brands work with PCI in developing these standards, and they focused first on the
standards they considered most important, particularly the PCI Data Security Standard
(PCI-DSS). Some of the other standards were written or required later, and PCI-HSM is one
of the last standards to be developed. In addition, the standards themselves were increasing
the strength of their requirements over time. Some requirements that were optional in earlier
versions of the standards are now mandatory.
In general, the trend is for the card brands to enforce more of the PCI standards and to
enforce them more rigorously. The trend in the standards is to impose more and stricter
requirements in each successive version. The net result is that companies subject to these
requirements can expect that they eventually must comply with all of the requirements.
The result of these requirements is that applications and procedures often must be updated
because they used some of the things that are now prohibited. Although this issue is
inconvenient and imposes some costs, it does increase the resistance of the systems to
attacks of various kinds. Updating a system to use PCI-HSM compliant HSMs is expected to
reduce the risk of loss for the institution and its clients.
One of the classic examples is a 16-digit credit card number. After VFPE is used to encrypt a
credit card number, the resulting cipher text is another 16-digit number. This process helps
older databases contain encrypted data of sensitive fields without having to restructure the
database or applications.
VFPE allows customers to add encryption to their applications in such a way that the
encrypted data can flow through their systems without requiring a massive redesign of their
application. In our example, if the credit card number is VFPE-encrypted at the point of entry,
the cipher text still behaves as a credit card number. It can flow through business logic until it
meets a back-end transaction server that can VFPE-decrypt it to get the original credit card
number to process the transaction.
Note: VFPE technology forms part of Visa, Inc.’s, Data Secure Platform (DSP). The use of
this function requires a service agreement with Visa. You must maintain a valid service
agreement with Visa when you use DSP/FPE.
This support includes PIN method APIs, PIN administration APIs, new key management
verbs, and new access control points support that is needed for DK-defined functions.
6 Always check the latest information about security certification status for your specific model.
UDX is integrated into the base CCA code to support translating an external RSA CRT key
into new formats. These formats use tags to identify key components. Depending on which
new rule array keyword is used with the PKA Key Translate callable service, the service TDES
encrypts those components in CBC or ECB mode. In addition, AES CMAC support is
delivered.
In EP11, keys can now be generated and securely wrapped under the EP11 Master Key. The
secure keys never leave the secure coprocessor boundary decrypted.
The secure IBM Enterprise PKCS #11 (EP11) coprocessor runs the following tasks:
Encrypt and decrypt (AES, DES, TDES, and RSA)
Sign and verify (DSA, RSA, and ECDSA)
Generate keys and key pairs (DES, AES, DSA, ECC, and RSA)
HMAC (SHA1, SHA2 or SHA3 [SHA224, SHA256, SHA384, and SHA512])
Digest (SHA1, SHA2 or SHA3 [SHA224, SHA256, SHA384, and SHA512])
Wrap and unwrap keys
Random number generation
Get mechanism list and information
Attribute values
Key Agreement (Diffie-Hellman)
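Applications typically reach an EP11 coprocessor through a PKCS #11 provider, for example openCryptoki with the EP11 token on Linux on IBM Z, or the PKCS #11 support of ICSF on z/OS. The following C sketch is a generic PKCS #11 illustration only; the header location and the provider that is loaded are assumptions that depend on the installed environment. It initializes the library and queries the number of mechanisms that the first token-present slot reports, which corresponds to the get mechanism list and information task in the previous list:

#include <stdio.h>
#include <opencryptoki/pkcs11.h>   /* assumption: openCryptoki header location */

int main(void)
{
    CK_RV rv = C_Initialize(NULL);
    if (rv != CKR_OK) {
        printf("C_Initialize failed: 0x%lx\n", rv);
        return 1;
    }

    CK_ULONG slot_count = 0;
    C_GetSlotList(CK_TRUE, NULL, &slot_count);      /* count slots with a token present */
    if (slot_count > 0) {
        CK_SLOT_ID slots[16];
        if (slot_count > 16)
            slot_count = 16;
        C_GetSlotList(CK_TRUE, slots, &slot_count);

        CK_ULONG mech_count = 0;
        C_GetMechanismList(slots[0], NULL, &mech_count);
        printf("slot %lu reports %lu mechanisms\n",
               (unsigned long)slots[0], (unsigned long)mech_count);
    }

    C_Finalize(NULL);
    return 0;
}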
Note: The function extension capability through UDX is not available to the EP11.
When defined in EP11 mode, the TKE workstation is required to manage the Crypto Express
features.
Enterprise PKCS #11 (EP11) with IBM z16 provided the following updates:
Quantum Safe Algorithm enhancements provide:
– Updated support for Dilithium signatures:
• Round 2: Level 2 (6,5) and 3 (8,7)
• Round 3: Level 2 (4,4), 3 (6,5) and 5 (8,7)
– Add support for Kyber key encapsulation: Round 2: Level 3 (768) and 5 (1024)
Quantum Safe protected key support for EP11
Host Firmware and EP11 now use a hybrid scheme that combines ECDH and Kyber to
accomplish a quantum safe transport key exchange for protected key import.
Quantum Safe host firmware management support for EP11
Host Firmware and EP11 now use a hybrid scheme for authenticating management
functions that are started from the SE/HMC.
EP11 for all of CEX8S (5.9.x) and CEX7S (4.9.x):
– Support for HSM backed Hierarchical Deterministic Wallets for Bitcoin (BIP 0032 and
SLIP 0010)
– Hash collision resistant Schnorr signature scheme BSI TR 03111, two variants:
• Plain BSI TR 03111
• With compressed keys and signing party’s public key as extra input
– Support for Edwards and Montgomery elliptic curves: EdDSA (Ed25519, Ed448) and
ECDH (X25519, X448) (8s and 7s)
– RSA OAEP with SHA 2 and SHA 3 (8s and 7s only)
– Extensive IBM Cloud Crypto support:
• Domains fully manageable by clients without cloud admins assistance
• Do not Disturb: Actively prohibit cloud administrators from domain management
– HSM internal re-encrypt support for block-based cipher modes
EP11 for CEX8S (5.9.x) only
– Three new compliance modes: FIPS2021, FIPS2024, and Administrative FIPS2021
(first of its kind)
– Enhanced concurrent update support now includes kernel modules
– Enhanced maximum performance for digest and random number generation
– Allow for regular extractable keys to be tagged as protected key exportable
Enterprise PKCS #11 (EP11) with IBM z17 provides the following updates for Quantum Safe
Algorithms:
Standardized support conforming to official NIST specifications
Support for Module-Lattice-Base (ML, aka CRYSTALS) cryptography:
– ML-KEM (Key-Encapsulation Mechanism, FIPS 203), Key Sizes: 512, 768, 1024
– ML-DSA (Digital Signature Algorithm, FIPS 204), Strength Levels: (4,4), (6,5), (8,7)
If required for compliance reasons, the FIPS 140-2 certified version of EP11 can be used for
selected Crypto Express8S and Express7S adapters.
For Ethereum support, three enabling enhancements for EP11 are available for Crypto
Express8S and Express7S adapters on z17:
Pairing-based BLS signature support
BLS12-381 pairing-friendly elliptic curve
EIP2333 for deterministic hierarchical wallets
FIPS 140-2 certification is not relevant to the accelerator because it operates with clear keys
only. The function extension capability through UDX is not available to the accelerator.
The functions that remain available when the Crypto Express8S feature is configured as an
accelerator are used for the acceleration of modular arithmetic operations. That is, the RSA
cryptographic operations are used with the SSL/TLS protocol. The following operations are
accelerated:
PKA Decrypt (CSNDPKD) with PKCS-1.2 formatting
PKA Encrypt (CSNDPKE) with zero-pad formatting
Digital Signature Verify
The RSA encryption and decryption functions support key lengths of 512 - 4,096 bits in the
Modulus-Exponent (ME) and Chinese Remainder Theorem (CRT) formats.
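The difference between the ME and CRT formats lies only in how the private-key operation is computed: the CRT form carries the prime factors of the modulus so that two half-length exponentiations and a recombination step replace one full-length exponentiation, which is why it is generally faster. The following toy C sketch uses the classic textbook values p=61, q=53, e=17, and d=2753 to show the arithmetic only; it is unrelated to real key sizes and to the accelerator implementation:

#include <stdio.h>
#include <stdint.h>

/* Modular exponentiation with small numbers (toy illustration, not secure) */
static uint64_t modpow(uint64_t b, uint64_t e, uint64_t m)
{
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1)
            r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

int main(void)
{
    /* Textbook RSA values: p=61, q=53, n=3233, e=17, d=2753 */
    uint64_t p = 61, q = 53, n = 3233, e = 17, d = 2753;
    uint64_t m = 65, c = modpow(m, e, n);        /* encrypt with the public key */

    /* CRT form of the private key */
    uint64_t dp = d % (p - 1), dq = d % (q - 1);
    uint64_t qinv = 38;                          /* q^-1 mod p, precomputed */

    /* Two half-size exponentiations plus recombination (Garner's formula) */
    uint64_t m1 = modpow(c, dp, p);
    uint64_t m2 = modpow(c, dq, q);
    uint64_t h = (qinv * ((m1 + p - m2 % p) % p)) % p;
    uint64_t recovered = m2 + h * q;

    printf("plaintext %llu -> ciphertext %llu -> recovered %llu\n",
           (unsigned long long)m, (unsigned long long)c,
           (unsigned long long)recovered);
    return 0;
}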
Each Crypto Express8S FC 0908 includes two IBM 4770 PCIe Cryptographic Coprocessors
(PCIeCC), each of which is a hardware security module (HSM); FC 0909 includes one IBM 4770
PCIeCC. The adapters are available in the following configurations:
IBM Enterprise Common Cryptographic Architecture (CCA) Coprocessor (CEX8C)
IBM Enterprise Public Key Cryptography Standards #11 (PKCS) Coprocessor (CEX8P)
IBM Crypto Express8S Accelerator (CEX8A)
During the feature installation, the PCIe adapter is configured by default as the CCA
coprocessor.
The configuration of the Crypto Express8S adapter as EP11 coprocessor requires a TKE
workstation (FC 0057/0058) with TKE 10.1 (FC 0883) LIC. The same requirement applies to
CCA mode for a full PCI-compliant environment.
The Crypto Express8S feature does not use CHPIDs from the channel subsystem pool.
However, the Crypto Express8S feature requires one slot in a PCIe I/O drawer, and one
PCHID for each PCIe cryptographic adapter.
For enabling an LPAR to use a Crypto Express8S adapter, the following cryptographic
resources in the image profile must be defined for each partition:
Usage domain index
Control domain index
PCI Cryptographic Coprocessor Candidate List
PCI Cryptographic Coprocessor Online List
This task is accomplished by using the Customize/Delete Activation Profile task, which is in
the Operational Customization Group, from the HMC or from the SE. Modify the
cryptographic initial definition from the Crypto option in the image profile, as shown in
Figure 6-6.
Important: After this definition is modified, any change to the image profile requires a
DEACTIVATE and ACTIVATE of the logical partition for the change to take effect.
Therefore, this cryptographic definition is disruptive to a running system.
Control Domain
Identifies the cryptographic coprocessor domains that can be administered from this
partition. Select the control domains that you want to access (including this partition's own
control domain) from this partition.
Control and Usage Domain
Identifies the cryptographic coprocessor domains that are assigned to the partition for all
cryptographic coprocessors that are configured on the partition. The usage domains
cannot be removed if they are online. The numbers that are selected must match the
domain numbers that are entered in the Options data set when you start this partition
instance of ICSF.
The same usage domain index can be used by multiple partitions, regardless of the CSS
to which they are defined. However, the combination of PCIe adapter number and usage
domain index number must be unique across all active partitions.
Cryptographic Candidate list
Identifies the cryptographic coprocessor numbers that can be accessed by this logical
partition. From the list, select the coprocessor numbers (in the range 0 - 15) that identify
the PCIe adapters to be accessed by this partition.
Cryptographic Online list
Identifies the cryptographic coprocessor numbers that are automatically brought online
during logical partition activation. The numbers that are selected in the online list must
also be part of the candidate list.
After they are activated, the active partition cryptographic definitions can be viewed from the
HMC. Select the CPC, and click View LPAR Cryptographic Controls in the CPC
Operational Customization window. The resulting window displays the definition of Usage and
Control domain indexes, and PCI Cryptographic candidate and online lists, as shown in
Figure 6-7 on page 252.
Operational changes can be made by using the Change LPAR Cryptographic Controls task,
which reflects the cryptographic definitions in the image profile for the partition. With this
function, the cryptographic feature can be added and removed dynamically, without stopping
a running operating system.
For more information about the management of Crypto Express8S, see IBM z17
Configuration Setup, SG24-8960.
The TKE contains a combination of hardware and software. A mouse, keyboard, flat panel
display, PCIe adapter, and a writable USB media to install the TKE Licensed Internal Code
(LIC) are included with the system unit. The TKE workstation requires an IBM 4770 crypto
adapter.
A TKE workstation is part of a customized solution for the use of the Integrated Cryptographic
Service Facility for z/OS (ICSF for z/OS) or Linux for IBM Z. This program provides a basic
key management system for the cryptographic keys of an IBM z17 system that has Crypto
Express features installed.
The TKE provides a secure, remote, and flexible method of providing Master Key Part Entry,
and to remotely manage PCIe cryptographic coprocessors. The cryptographic functions on
the TKE are run by one PCIe cryptographic coprocessor. The TKE workstation communicates
with the IBM Z system through a TCP/IP connection. The TKE workstation is available with
Ethernet LAN connectivity only. Up to 10 TKE workstations can be ordered.
TKE FCs 0057 and 0058 can be used to control any supported Crypto Express feature on
IBM z17, IBM z16, IBM z15 systems, and the Crypto adapters on older, still supported
systems.
The TKE 10.1 LIC (FC 0883) feature requires a 4770 HSM. The following features are
supported:
Managing the Crypto Express8S HSMs (CCA normal mode, CCA PCI mode, and EP11)
Quantum Safe Cryptography (QSC) used when:
– TKE authenticates Crypto Express8S HSMs
– Deriving a Transport Key between TKE’s HSM and target Crypto Express8S HSM
– On-demand HSM dual validation check.
Domain group limitations: all HSMs in a group must either:
– Support QSC (can include Crypto Express8S HSMs only)
– Not support QSC (cannot include Crypto Express8S HSMs)
Configuration migration tasks support:
– Can collect and apply data to a Crypto Express8S HSM
– Can apply data from a pre-Crypto Express8S HSM.
New default wrapping method for the Crypto Express8S HSM
New AES DUKPT key attribute on AES DKYGENKY parts
Tip: For more information about handling a TKE, see the TKE Introduction video.
Each LPAR in the same system that uses a domain that is managed through a TKE
workstation connection is a TKE host or TKE target. An LPAR with a TCP/IP connection to the
TKE is referred to as the TKE host; all other partitions are TKE targets.
The cryptographic controls that are set for an LPAR through the SE determine whether the
workstation is a TKE host or a TKE target.
Smart card readers from FC 0885 or FC 0891 can be carried forward. Smart cards can be
used on TKE 10.1 with these readers. Access to and use of confidential data on the smart
card are protected by a user-defined PIN. Up to 990 other smart cards can be ordered for
backup (the extra smart card feature code is FC 0900). When one feature code is ordered, 10
smart cards are included. The order increment is 1 - 99 (10 - 990 blank smart cards).
If smart cards with applets that are not supported by the new smart card reader are to be
reused, new smart cards must be created on TKE 8.1 or later and the content must be copied
from the old smart cards to the new smart cards. The new smart cards can be created and
copied on a TKE 8.1 system. If the copies are done on TKE 9.0, the source smart card must
be placed in an older smart card reader from feature code 0885 or 0891.
A new smart card for the Trusted Key Entry (TKE) allows stronger Elliptic Curve Cryptography
(ECC) levels. More TKE Smart Cards (FC 0900, packs of 10, FIPS certified blanks) require
TKE 9.1 LIC or later.
Note: Several options for ordering the TKE with or without ordering Keyboard, Mouse, and
Display are available. Ask your IBM Representative for more information about which
option is the best option for you.
The TKE 10.1 LIC requires the 4770 crypto adapter. The TKE 9.x and TKE 8.x workstations
can be upgraded to the TKE 10.1 tower workstation by purchasing a 4770 crypto adapter.
The Omnikey Cardman 3821 smart card readers can be carried forward to any TKE 10.0
workstation. Smart cards 45D3398, 74Y0551, and 00JA710 can be used on TKE 10.1.
When performing a MES upgrade from TKE 8.x, or TKE 9.x to a TKE 10.1 installation, the
following steps must be completed:
1. Save Upgrade Data on an old TKE to USB memory to save client data.
2. Replace the TKE crypto adapter with the 4770 crypto adapter.
3. Upgrade the firmware to TKE 10.1.
4. Install the Frame Roll to apply Save Upgrade Data (client data) to the TKE 10.1 system.
5. Run the TKE Workstation Setup wizard.
Note: If your IBM z17 includes only Crypto Express7S, you can use TKE V9.2, which
requires the 4768 cryptographic adapter.
For more information about TKE hardware support, see Table 6-3. For some functions,
requirements must be considered; for example, the characterization of a Crypto Express
adapter in EP11 mode always requires the use of a TKE.
Attention: The TKE is unaware of the CPC type where the host crypto module is installed.
That is, the TKE does not consider whether a Crypto Express is running on an IBM z17,
IBM z16, or IBM z15 system. Therefore, the LIC can support any CPC where the coprocessor is
supported, but the TKE LIC must support the specific crypto module.
From the cryptographic functions comparison (see 6.7, “Cryptographic functions comparison” on page 256): UDX is offered only by the CCA coprocessor configuration, and RSA functions are offered by the CCA coprocessor, EP11 coprocessor, and accelerator configurations, but not by CPACF.
The KVM hypervisor is offered with supported Linux distributions. For more information
about minimal and recommended distribution levels, see the Tested platforms for Linux web
page of the IBM IT infrastructure website.
Linux on IBM Z:
See 6.8.1, “Crypto Express8S Toleration” on page 258
– The support statements for IBM z17 also cover the KVM hypervisor on distribution
levels that have KVM support.
For more information about the minimum required and recommended distribution levels,
see the IBM Z website.
For more information about the software support levels for cryptographic functions, see
Chapter 7, “Operating systems support” on page 261.
Because this information is subject to continuous updating, for the most current information
check the hardware fix categories for IBM z17 Model ME1: IBM.Device.Server.z17-9175.*.
Support of IBM z17 functions depends on the operating system and version and release.
End of service operating systems: Operating system levels that are no longer in service
are not covered in this publication.
z/TPF V1R1
The use of specific features depends on the operating system. In all cases, program
temporary fixes (PTFs) might be required with the operating system level that is indicated.
Check the z/OS fix categories, or the subsets of the 9175DEVICE (IBM z17 ME1), and PSP
buckets for z/VM.
Important: Starting with the IBM z17 model, PSP buckets are no longer available. See the
following IBM Support web page:
https://www.ibm.com/support/pages/psp-bucket-information-ibm-z-products
The fix categories are continuously updated. They contain the latest information about
maintenance, and contain installation information, hardware and software service levels,
service guidelines, and cross-product dependencies.
For more information about Linux on IBM Z distributions and KVM hypervisor, see the Linux
distributor’s support information.
Features and functions that are available on previous servers, but no longer supported by IBM
z17 servers, are not documented here.
For more information about supported functions that are based on operating systems, see
7.3, “IBM z17 features and function support overview” on page 268. Tables are built by
function and feature classification to help you determine, by a quick scan, what is supported
and the minimum operating system level that is required.
7.2.1 z/OS
z/OS Version 2 Release 4 is the earliest release that supports IBM z17 servers. Consider the
following points:
Service support for z/OS Version 2 Release 4 is a fee-based extension for defect support
(for up to three years), and can be obtained by ordering IBM Software Support Services -
Service Extension for z/OS V2.R4.
IBM z17 capabilities differ depending on the z/OS release. Toleration support is provided
on z/OS V2R4. Basic exploitation is provided by z/OS V2R5 and exploitation support is
provided on z/OS V3R1 and later only1.
Fixes are grouped into the following categories (for more information about IBM Fix
Categories, see this IBM Support web page):
Base support is provided by PTFs identified by:
IBM.Device.Server.z17-9175.RequiredService
Fixes that are required to run z/OS on the IBM z17 servers and must be installed before
migration.
Use of many functions is provided by PTFs identified by:
IBM.Device.Server.z17-9175.Exploitation
Fixes that are required to use the capabilities of the IBM z17. They must be installed only
if you use the function.
Recommended service is identified by:
IBM.Device.Server.z17-9175.RecommendedService
Fixes that are recommended to run z/OS on the IBM z17. These fixes also are listed in the
Recommended Service section of the hardware PSP bucket. They represent fixes that
were recommended by IBM Service. It is recommended that you review and install these
PTFs.
1 Use support for select features by way of PTFs. Toleration support for new hardware might also require PTFs.
All information needed to upgrade z/OS to support the IBM z17 is provided in the z/OS IBM
z17 Upgrade Workflow.
For more information about supported functions and their minimum required support levels,
see 7.3, “IBM z17 features and function support overview” on page 268.
7.2.2 z/VM
IBM z17 support is provided with PTFs for z/VM 7.3 and z/VM 7.43, including PTFs for IOCP,
HCD, and HLASM2.
Compatibility:
– z/VM support for host / guests on IBM z17 at the z16 functional level with limited
exploitation of new functions (some transparent, where applicable)
– Support available as PTFs applicable concurrently at IBM z17 general availability
– Includes PTFs for IOCP, HCD, and HLASM
– IBM z17 support is provided with PTFs for z/VM 7.3 and 7.4
2 IBM Z HLASM - High Level Assembler Language
3 For z/VM 7.4 GA date - September 2024. See the IBM z/VM continuous delivery page.
4 Provides RoCE and TCP/IP support over the same Network Express adapter, no separate RoCE Express card
required
5 EQDIO - Enhanced Queued Direct Input/Output
z/VM Compatibility Support enables guest use for several additional facilities:
Embedded Artificial Intelligence Acceleration
– Designed to reduce the overall time required to execute CPU operations for neural
networking processing functions, and help support real-time applications, such as
fraud detection.
Compliance-ready CPACF Counters support
– Provides a means for guests to track crypto compliance and instruction usage.
Breaking Event Address Register (BEAR) Enhancement Facility;
– Facilitates debugging wild branches.
Vector Packed Decimal Enhancements 2
– New instructions intended to provide performance improvements.
Reset DAT protection Facility
– Provides a more efficient way to disable DAT protection, such as during copy-on-write
or page change tracking operations.
RoCE Express3 adapter
– Allows guests to use Routable RoCE, Zero Touch RoCE, and SMC-R V2 support.
Guest Enablement for the CEX8S crypto adapter and assorted crypto enhancements
– Includes Quantum Safe API Guest Exploitation Support that is available to dedicated
guests.
CPU/Core topology location information within z/VM monitor data
– Provides a better picture of the system for diagnostic and tuning purposes.
Consolidated Boot Loader for guest IPL from SCSI
6 A z/VM VSwitch supporting Network Express OSH does not currently support z/OS guests exploiting an EQDIO
uplink port. In the interim, clients will be required to use either a guest-attached OSH device or existing functionality
available with OSA-Express7S adapters.
7 https://www.ibm.com/docs/en/zvm/7.4?topic=performance-zvm-data-pump
For more information about supported functions and their minimum required support levels,
see 7.3, “IBM z17 features and function support overview” on page 268.
7.2.3 z/TPF
IBM z17 support is provided by z/TPF V1R1 with PTFs. For more information about
supported functions and their minimum required support levels, see 7.3, “IBM z17 features
and function support overview” on page 268.
7.2.4 VSEn
IBM z17 VSE support is provided by VSEn V6.3.1 (21st Century Software) and later, with the
following considerations:
VSE runs in z/Architecture mode only
VSE supports 64-bit real and virtual addressing
For more information about supported functions and their minimum required support levels,
see 7.3, “IBM z17 features and function support overview” on page 268.
The service levels of SUSE, Red Hat, and Ubuntu releases that are supported at the time of
this writing are listed in Table 7-2.
For more information about supported Linux distributions on IBM Z servers, see the Tested
platforms for Linux page of the IBM IT infrastructure website.
IBM is working with Linux distribution Business Partners to provide further use of selected
IBM z17 functions in future Linux on IBM Z distribution releases.
Information about Linux on IBM Z refers exclusively to the suitable distributions of SUSE,
Red Hat, and Ubuntu.
The tables in this section list the features, but they do not specifically mark the features for
which the corresponding operating system requires fixes for toleration or use.
Table 7-3 Supported base CPC functions for z/OS and z/VM
Functiona z/OS z/OS z/OS z/VM z/VM
V3R1 V2R5 V2R4 V7R4 V7R3
Maximum processor unit (PUs) per system image 208 200 200 80b 80b
Dynamic PU add Y Y Y Y Y
Program-directed re-IPL Y na na Y Y
Transactional Executiond Y Y Y Y Ye
Flexible Capacity Y Y Y Y Y
CF level 26 Enhancements Y Yh Yh na na
HiperDispatch Optimization Y Y Y Ye Ye
ICSF Enhancements Y Y Y na na
The supported base CPC functions for VSEn V6.3.1, z/TPF, and Linux on IBM Z are listed in
Table 7-4.
Table 7-4 Supported base CPC functions for z/VSE, z/TPF, and Linux on IBM Z
Functiona z/TPF VSEn Linux on
V1R1 V6.3.1b IBM Zc
Dynamic PU add N Y Y
Program-directed re-IPL N Y Y
HiperDispatch Y N Y
AI accelerator exploitation N N Yj
Note: z/OS V2R4 support ended as of September 2024. No new function is provided for
the use of the new hardware features (toleration support only). Extended (fee-based)
support for z/OS V2.R4 can be ordered from IBM.
Table 7-5 Supported coupling and clustering functions for z/OS and z/VM
Functiona z/OS z/OS z/OS z/VM z/VM
V3R1 V2R5 V2R4 V7R4 V7R3
In addition to operating system support that is listed in Table 7-5, Server Time Protocol is
supported on z/TPF V1R1 and Linux on IBM Z. Also, CFCC Level 23, Level 24, Level 25, and
Level 26 are supported for z/TPF V1R1.
Table 7-6 Supported storage connectivity functions for z/OS and z/VM
Functiona z/OS z/OS z/OS z/VM z/VM
V3R1 V2R5 V2R4 V7R4 V7R3
The supported storage connectivity functions for VSEn V6.3.1, z/TPF, and Linux on IBM Z
are listed in Table 7-7.
Table 7-7 Supported storage connectivity functions for VSEn V6.3.1, z/TPF, and Linux on IBM Z
Functiona z/TPF VSEn Linux on
V1R1 V6.3.1b IBM Zc
Table 7-8 Supported network connectivity functions for z/OS and z/VM
Functiona z/OS z/OS z/OS z/VM z/VM
V3R1 V2R5 V2R4 V7R4 V7R3
HiperSockets
HiperSocketsc Y Y Y Y Y
The supported network connectivity functions for VSEn V6.3.1, z/TPF, and Linux on IBM Z are
listed in Table 7-9.
Table 7-9 Supported network connectivity functions for VSEn V6.3.1, z/TPF, and Linux on IBM Z
Functiona z/TPF VSEn Linux on
V1R1 V6.3.1b IBM Zc
HiperSockets
HiperSocketse N Y Y
e. On IBM z17 as for z16, the CHPID statement of HiperSockets devices requires the keyword
VCHID. Therefore, the IOCP definitions must be migrated to support the HiperSockets
definitions (CHPID type IQD). VCHID specifies the virtual channel identification number that
is associated with the channel path (valid range is 7C0 - 7FF). VCHID is not valid on IBM Z
servers before IBM z13.
f. Applicable to guest operating systems.
g. Support pending Linux Distribution adoption
h. Shared Memory Communications - Direct Memory Access.
i. SMC-R and SMC-D are supported on Linux kernel; see:
https://linux-on-z.blogspot.com/p/smc-for-linux-on-ibm-z.html
j. Linux support for NETD Virtual Function pending distribution partners adoption and testing.
NETD FID supports both NETH and ETH (TCP/IP) Linux drivers. No need for OSA/QDIO
driver.
Note: No new function is provided for leveraging the new HW features (toleration support
only). Although extended (fee-based) support for z/OS V2.R4 can be obtained, support for
z/OS V2.R4 is not covered extensively in this document.
Functiona z/OS z/OS z/OS z/VM z/VM
V3R1 V2R5 V2R4 V7R4 V7R3
Crypto Express8S Y Y Y Yb Yb
Crypto Express7S Y Y Y Yb Yb
The IBM z17 supported cryptography functions for VSEn V6.3.1 - 21st Century Software,
z/TPF, and Linux on IBM Z are listed in Table 7-11.
Table 7-11 Supported cryptography functions for z/TPF, VSEn V6.3.1, and Linux on IBM Z
Functiona z/TPF VSEn Linux on
V1R1 V6.3.1b IBM Zc
Crypto Express8S Y Y Y
Crypto Express7S Y Y Y
VSEn V6.3.1g VSE Turbo Dispatcher can use up to 4 CPs, and tolerates up to
10-way LPARs
Linux on IBM Z SUSE Linux Enterprise Server 12 and later: 256 CPs or IFLs.
Red Hat RHEL 7 and later: 256 CPs or IFLs.
Ubuntu 20.04.1 LTS and later: 256 CPs or IFLs.
KVM Hypervisor The KVM hypervisor is offered with the following Linux distributions
(256 CPs or IFLs):
SLES 12 SP5 and later
RHEL 7.9 and later
Ubuntu 20.04.1 LTS and later
Dynamic PU add
Planning an LPAR configuration includes defining reserved PUs that can be brought online
when extra capacity is needed. Operating system support is required to use this capability
without an IPL; that is, nondisruptively. This support has been available in z/OS for some time.
The dynamic PU add function enhances this support by allowing you to dynamically define
and change the number and type of reserved PUs in an LPAR profile, which removes any
planning requirements. The new resources are immediately made available to the operating
system and in the case of z/VM, to its guests.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270.
z/OS can take advantage of this support and nondisruptively acquire and release memory
from the reserved area. z/VM V7R1 and later can acquire memory nondisruptively and
immediately make it available to guests.
z/VM virtualizes this support to its guests, which now also can increase their memory
nondisruptively if supported by the guest operating system. Currently, releasing memory from
z/VM is supported on z/VM V7.2 with PTFs8. Releasing memory from the z/VM guest
depends on the guest’s operating system support.
Linux on IBM Z also supports acquiring and releasing memory nondisruptively. This feature is
enabled for SUSE Linux Enterprise Server 12 and RHEL 7.9 and later releases.
The Capacity Provisioning Manager, which is a feature that was first available with z/OS
V1R9, interfaces with z/OS Workload Manager (WLM) and implements capacity provisioning
policies. Several implementation options are available, from an analysis mode that issues
only guidelines, to an autonomic mode that provides fully automated operations.
Program-directed re-IPL
Program directed re-IPL allows an operating system to re-IPL without operator intervention.
This function is supported for SCSI and IBM extended count key data (IBM ECKD) devices.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270.
IOCP
All IBM Z servers require a description of their I/O configuration. This description is stored in
I/O configuration data set (IOCDS) files. The I/O configuration program (IOCP) allows for the
8
z/VM Dynamic Memory Downgrade (releasing memory from z/VM LPAR) made available with PTFs for APAR
VM66271. For more information, see: http://www.vm.ibm.com/newfunction/#dmd
creation of the IOCDS file from a source file that is known as the I/O configuration source
(IOCS).
The IOCS file contains definitions of LPARs and channel subsystems. It also includes detailed
information for each channel and path assignment, control unit, and device in the
configuration.
IOCP for IBM z17 provides support for the following features:
IBM z17 Base machine definition
PCI function adapter for zHyperLink (HYL)
PCI function Network Express adapter (CX6)
New hardware (announced with Driver 61)
IOCP Dynamic I/O support for stand-alone CF, Linux on Z and z/TPF, running on IBM z16
and IBM z17 CPCs.
For more information, see IBM Dynamic Partition Manager (DPM) Guide, SB10-7188
z/OS
HiperDispatch
In z/OS, the IEAOPTxx keyword HIPERDISPATCH defaults to YES when it is running on an IBM
z17, IBM z16, or IBM z15 system.
The use of SMT on IBM z17 systems requires that HiperDispatch is enabled on the operating
system. For more information, see “Simultaneous multithreading” on page 288.
Also, any LPAR that is running with more than 64 logical processors is required to operate in
HiperDispatch Management Mode.
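As a sketch only, the following IEAOPTxx fragment makes this default explicit; the member suffix (01) is an assumption and all other options in the member are omitted:
   HIPERDISPATCH=YES
The member can be activated dynamically, without an IPL, by entering SET OPT=01 from the console.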
HiperDispatch on IBM z17 systems uses chip and CPC drawer configuration to improve the
cache access performance. It optimizes the system PU allocation with Chip/cluster/drawer
cache structure on IBM Z servers.
The base support for IBM z17 is provided by PTFs that are identified by:
IBM.Device.Server.z17-9175.RequiredService
PR/SM on IBM z17 servers preferentially assigns memory for a system in one CPC drawer
that is striped across the clusters of that drawer to take advantage of the lower latency
memory access in a drawer. Also, PR/SM tries to consolidate storage onto drawers with the
most processor entitlement.
With HiperDispatch enabled, PR/SM seeks to assign the logical processors of a partition to the
smallest number of PU chips within a drawer, in cooperation with the operating system, to
optimize shared cache usage.
PR/SM automatically keeps a partition’s memory and logical processors on the same CPC
drawer where possible. This arrangement looks simple for a partition, but it is a complex
optimization for multiple logical partitions because some must be split among processor
drawers.
All IBM z17 processor types can be dynamically reassigned except IFPs.
To use HiperDispatch effectively, WLM goal adjustment might be required. Review the WLM
policies and goals and update them as necessary.
WLM policies can be changed without turning off HiperDispatch. A health check is provided to
verify whether HiperDispatch is enabled on a system image.
z/TPF
z/TPF on IBM z17 can use more processors immediately without reactivating the LPAR or
IPLing the z/TPF system.
When z/TPF is running in a shared processor configuration, the achieved MIPS is higher
when z/TPF uses a minimum set of processors.
In low-use periods, z/TPF minimizes the processor footprint by compressing TPF workload
onto a minimal set of I-streams (engines), which reduces the effect on other LPARs and
allows the entire CPC to operate more efficiently.
As a consequence, z/OS and z/VM experience less contention from the z/TPF system when
the z/TPF system is operating at periods of low demand.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270.
zIIP support
zIIPs do not change the model capacity identifier of IBM z17 servers. IBM software product
license charges that are based on the model capacity identifier are not affected by the
addition of zIIPs.
No changes to applications are required to use zIIPs. They can be used by the following
applications:
Db2 V8 and later for z/OS data serving for applications that use Distributed Relational
Database Architecture (DRDA) over TCP/IP, such as data serving, data warehousing, and
selected utilities.
z/OS XML services.
z/OS CIM Server.
z/OS Communications Server for network encryption (Internet Protocol Security [IPsec])
and for large messages that are sent by HiperSockets.
IBM GBS Scalable Architecture for Financial Reporting.
IBM z/OS Global Mirror (formerly XRC) and System Data Mover.
IBM z/OS Container Extensions.
IBM OMEGAMON® XE on z/OS, OMEGAMON XE on Db2 Performance Expert, and Db2
Performance Monitor.
Any Java application that uses the current IBM SDK.
Java IBM Semeru Runtime offloading enablement for DLC models that use Integrated
Accelerator for AI.
WebSphere Application Server V5R1 and later, and products that are based on it, such as
WebSphere Portal, WebSphere Enterprise Service Bus (WebSphere ESB), and
WebSphere Business Integration (WBI) for z/OS.
CICS/TS V2R3 and later.
Db2 UDB for z/OS Version 8 and later.
IMS Version 8 and later.
zIIP Assisted HiperSockets for large messages.
z/OSMF (z/OS Management Facility).
IBM z/OS Platform for Apache Spark.
IBM Watson® Machine Learning for z/OS.
z/OS System Recovery Boost.
Approved third-party vendor products.
The use of a zIIP is transparent to application programs. The supported operating systems
are listed in Table 7-3 on page 269.
On IBM z17 servers, the zIIP processor is designed to run in SMT mode, with up to two
threads per processor. This function is designed to help improve throughput for zIIP
workloads and provide appropriate performance measurement, capacity planning, and SMF
accounting data. zIIP support is available on all currently supported z/OS versions.
Use the PROJECTCPU option of the IEAOPTxx parmlib member to help determine whether zIIPs
can be beneficial to the installation. Setting PROJECTCPU=YES directs z/OS to record the
amount of eligible work for zIIPs in SMF record type 72 subtype 3.
The field APPL% IIPCP of the Workload Activity Report listing by WLM service class
indicates the percentage of a processor that is zIIP eligible. Because of the zIIP’s lower price
as compared to a CP, even a utilization as low as 10% can provide cost benefits.
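As a sketch only (the member suffix 01 and the absence of other options are assumptions), the following IEAOPTxx fragment turns on the projection, after which the RMF Workload Activity report can be reviewed for the APPL% IIPCP values:
   PROJECTCPU=YES
The setting can be activated dynamically by entering SET OPT=01.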
This feature enables software to indicate to the hardware the beginning and end of a group of
instructions that must be treated in an atomic way. All of their results occur or none occur, in
true transactional style. The execution is optimistic.
The hardware provides a memory area to record the original contents of affected registers
and memory as instruction execution occurs. If the transactional execution group is canceled
or must be rolled back, the hardware transactional memory is used to reset the values.
Software can implement a fallback capability.
This capability increases the software’s efficiency by providing a way to avoid locks (lock
elision). This advantage is of special importance for speculative code generation and highly
parallelized applications.
TX is used by IBM Java virtual machine (JVM) and might be used by other software. The
supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on page 270.
Added or increased IBM software costs are not incurred by using System Recovery Boost.
With the IBM z17 system, more Recovery process boost scenarios are supported that allow
the customer to define some boost granularity (see Table 7-14 on page 288). For more
information, see IBM Z System Recovery Boost, REDP-5563.
9 Statement of Direction: In a future IBM Z hardware system family, the transactional execution and constrained
transactional execution facility will no longer be supported. Users of the facility on current servers should always
check the facility indications before use.
Simultaneous multithreading
SMT is the hardware capability to process up to two simultaneous threads in a single core,
which shares the resources of the core, such as cache, translation lookaside buffer (TLB),
and execution resources. This sharing improves system capacity and efficiency by reducing
processor delays, which increases the overall throughput of the system.
Note: For zIIPs and IFLs, SMT must be enabled on z/OS, z/VM, or Linux on IBM Z
instances. An operating system with SMT support can be configured to dispatch work to a
thread on a zIIP (for eligible workloads in z/OS) or an IFL (for z/VM) core in single-thread or
SMT mode.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270.
10 SMT is also enabled (not user configurable) by default for SAPs.
An operating system that uses SMT controls each core and is responsible for maximizing its
throughput and meeting workload goals with the smallest number of cores. In z/OS,
HiperDispatch cache optimization is considered when choosing the two threads to be
dispatched on the same processor.
HiperDispatch attempts to dispatch guest virtual CPUs on the same logical processor on
which they ran. PR/SM attempts to dispatch a vertical low logical processor in the same
physical processor. If that process is not possible, it attempts to dispatch it in the same node,
or then the same CPC drawer where it was dispatched before to maximize cache reuse.
From the perspective of an application, SMT is transparent and no changes are required in
the application for it to run in an SMT environment, as shown in Figure 7-1.
Figure 7-1 MT-ignorant software (z/OS, z/VM) running on the MT-aware PR/SM hypervisor
z/OS
The use of SMT on z/OS requires enabling HiperDispatch, and defining the processor view
(PROCVIEW) control statement in the LOADxx parmlib member and the MT_ZIIP_MODE
parameter in the IEAOPTxx parmlib member.
The PROCVIEW statement is defined for the life of IPL, and can have the following values:
CORE: This value specifies that z/OS should configure a processor view of core, in which a
core can include one or more threads. The number of threads is limited by IBM z17 to two
threads. If the underlying hardware does not support SMT, a core is limited to one thread.
CPU: This value is the default. It specifies that z/OS should configure a traditional processor
view of CPU and not use SMT.
CORE,CPU_OK: This value specifies that z/OS should configure a processor view of core (as
with the CORE value) but the CPU parameter is accepted as an alias for applicable
commands.
When PROCVIEW CORE or CORE,CPU_OK are specified in z/OS that is running on an IBM z17,
HiperDispatch is forced to run as enabled, and you cannot disable HiperDispatch. The
PROCVIEW statement cannot be changed dynamically; therefore, you must re-IPL after
changing it to make the new setting effective.
The MT_ZIIP_MODE parameter in the IEAOPTxx member controls the zIIP SMT mode. It can be
1 (the default), where only one thread can be running in a core, or 2, where up to two threads
can be running in a core. If PROCVIEW CPU is specified, the MT_ZIIP_MODE is always 1. Otherwise, the
use of SMT to dispatch two threads in a single zIIP logical processor (MT_ZIIP_MODE=2) can be
changed dynamically by using the SET OPT=xx setting in the IEAOPTxx parmlib. Changing the
MT mode for all cores can take some time to complete.
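The following fragments are a sketch that combines the statements that are described above; the member suffixes (shown as 01) are assumptions. PROCVIEW is set in LOADxx and requires a re-IPL to change, and MT_ZIIP_MODE is set in IEAOPTxx and can be switched dynamically:
LOADxx:
   PROCVIEW CORE,CPU_OK
IEAOPT01:
   MT_ZIIP_MODE=2
To switch the zIIP MT mode dynamically, enter SET OPT=01.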
When PROCVIEW CORE is in effect, the DISPLAY M=CORE and CONFIG CORE commands are used to
display the core states and to configure an entire core (see Example 7-1). With the
introduction of multithreading support for SAPs, a maximum of 88 logical SAPs can be used.
RMF is updated to support this change by implementing page break support in the I/O
Queuing Activity report that is generated by the RMF Postprocessor.
CPC ND = 009175.ME1.IBM.02.0000000310A9
CPC SI = 9175.710.IBM.02.00000000000310A9
Model: ME1
CPC ID = 00
CPC NAME = XXXXXX
LP NAME = XXXXXXXX LP ID = 1
CSS ID = 0
MIF ID = 1
In z/VM, multithreading is disabled by default. Dynamic SMT enables dynamically varying the
number of active threads per core. The number of active threads per core can be changed
without a system outage, so potential capacity gains from going from no SMT to SMT-2 (one
to two threads per core) can be achieved dynamically.
z/VM V7R4 and V7R3 support up to 40 multithreaded cores (80 threads) for IFLs, and each
thread is treated as an independent processor. z/VM dispatches virtual IFLs on the IFL logical
processor so that the same or different guests can share a core. Each core has a single
dispatch vector, and z/VM attempts to place virtual sibling IFLs on the same dispatch vector
to maximize cache reuses.
z/VM guests have no awareness of SMT, and cannot use it directly. z/VM SMT use does not
include guest support for multithreading. The value of this support for guests is that the
first-level z/VM host of the guests can achieve higher throughput from the multi-threaded IFL
cores.
The following minimum releases of Linux on IBM Z distributions are supported on IBM z17
(native SMT support):
SUSE:
– SLES 16
– SLES 15 SP6 with service
– SUSE SLES 12 SP5 with service
Red Hat:
– Red Hat RHEL 9.4
– Red Hat RHEL 8.1 with service
– Red Hat RHEL 7.9 with service
Ubuntu:
– Ubuntu 24.04 LTS
– Ubuntu 22.04 LTS
– Ubuntu 20.04.1 LTS with service
The KVM hypervisor is supported on the same Linux on IBM Z distributions in this list.
For more information about the most current support, see the Linux on IBM Z Tested
platforms website.
Single-instruction multiple-data
The SIMD feature introduces a new set of instructions to enable parallel computing that can
accelerate code with string, character, integer, and floating point data types. The SIMD
instructions allow many operands to be processed with a single complex instruction.
IBM z17 is equipped with a set of instructions to improve the performance of complex
mathematical models and analytic workloads through vector processing and complex
instructions, which can process numerous data with a single instruction. This set of
instructions, which is known as SIMD, enables more consolidation of analytic workloads and
business transactions on IBM Z servers.
SIMD on IBM z17 has support for enhanced math libraries that provide performance
improvements for analytical workloads by processing more information with a single CPU
instruction.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270. Operating System support includes the following features11:
Enablement of vector registers.
A math library with an optimized and tuned math function (Mathematical Acceleration
Subsystem [MASS]) that can be used in place of some of the C standard math functions. It
includes a SIMD vectorized and non-vectorized version.
A specialized math library, which is known as Automatically Tuned Linear Algebra
Software (ATLAS), that is optimized for the hardware.
IBM Language Environment® for C runtime function enablement for ATLAS.
DBX to support the disassembly of the new vector instructions, and to display and set
vector registers.
XML SS exploitation to use new vector processing instructions to improve performance.
MASS and ATLAS can reduce the time and effort for middleware and application developers.
IBM provides compiler built-in functions for SIMD that software applications can use as
needed, such as for using string instructions.
Code must be developed to use the SIMD functions. Applications with SIMD instructions
abend if they run on a lower hardware level system that does not support SIMD. Some
mathematical function replacement can be done without code changes by including the scalar
MASS library before the standard math library.
Because the MASS and standard math library include different accuracies, assess the
accuracy of the functions in the context of the user application before deciding whether to use
the MASS and ATLAS libraries.
The SIMD functions can be disabled in z/OS partitions at IPL time by using the MACHMIG
parameter in the LOADxx member. To disable SIMD code, specify MACHMIG VEF for the
hardware-based vector facility. If you do not specify a MACHMIG statement, which is the default,
the system is not limited in its use of the Vector Facility for z/Architecture (SIMD).
11 The features that are listed here might not be available on all operating systems that are listed in the tables.
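As a minimal sketch of the LOADxx MACHMIG statement that is described above (all other LOADxx statements are omitted), the following line keeps the image from using the vector facility, which can be useful when the image must remain able to run on hardware without SIMD support:
   MACHMIG VEF
Removing the statement and re-IPLing allows the system to use SIMD again.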
Important: These mnemonics may collide with the names of Assembler macro instructions
you have.
It is safer to assemble using an OPTABLE option which matches the current highest target
hardware level. If you code Assembly Language macros, you should compare the list of new
instructions to the names of your Assembler macros.
A tool will be available with the PTF availability from the IBM Support website. If a conflict is
identified, then either:
1. Rename your affected macros.
2. Specify a separate assembler OPCODE table by using PARM=, ASMAOPT, or a ‘*PROCESS
OPTABLE’ statement in the source.
Use a coding technique that permits both the use of a new instruction and a macro with the
same name in an assembly, such as HLASM’s mnemonic tags (:MAC and :ASM).
See the APAR documentation for a link to the new mnemonics and what to do in the case of
clashes.
Attention: Rather than using the default of OPTABLE(UNI), which immediately picks up any
new instructions, it is generally safer to assemble by using an OPTABLE option that matches
the current highest target hardware level. This approach first ensures that any accidental
attempt to use a newer instruction produces an error. Second, if support for a new hardware
level is added by maintenance, existing programs are not affected at all, so it is only
necessary to check for conflicting macros when new hardware is to be exploited by using a
new OPTABLE level.
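For illustration only, the option can be set in the source or on the invocation parameter. The level name ZSn is a placeholder, not an actual value; replace it with the OPTABLE level from the HLASM documentation that matches your highest target hardware:
   *PROCESS OPTABLE(ZSn)
or, in the assembly JCL:
   //ASM  EXEC PGM=ASMA90,PARM='OPTABLE(ZSn)'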
IBM z17 uses the COBOL optimizations for Hexadecimal Floating Point (HFP) to/from Binary
Coded Decimal (BCD) conversion, Numeric Editing, and Zoned Decimal operations that were
introduced with IBM z16.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270. For more information, see 7.5.8, “z/OS XL C/C++ considerations” on page 342.
Out-of-order execution
Out-of-order (OOO) execution yields significant performance benefits for compute-intensive
applications by reordering instruction execution, which allows later (newer) instructions to be
run ahead of a stalled instruction, and reordering storage accesses and parallel storage
accesses. OOO maintains good performance growth for traditional applications.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270. For more information, see 3.4.3, “Out-of-Order execution” on page 86.
z/OS DFSORT takes advantage of Z Sort and provides users with significant performance
boosts for their sort workloads. With z/OS DFSORT's Z Sort algorithm, clients can see batch
sort job elapsed time improvements of up to 20 - 30% (depending on record size) and CPU
time improvements of up to 40% compared to older IBM Z systems without the integrated
sort accelerator.
The function is used on z/OS V3R1, z/OS V2R5, and enabled on z/OS V2R4 with PTFs for
APAR PH03207.
The sort jobs must meet certain eligibility criteria. For the full list and other considerations,
see the DFSORT User Guide for PH03207.
If changed extended counter sets are being used, any programs that rely on those counters
are affected and must be updated accordingly.
If unchanged extended counter sets are used, no changes are necessary. This new support
is available for z/OS V2.4 and higher.
Restriction: z/OS does not collect CPU MF data when running as a guest of z/VM.
Capture CPU MF data on your pre-IBM z17 server to determine your current LSPR workload,
and capture it again on the IBM z17 server afterward.
Having both values allows you to validate your achieved IBM z17 processor performance and
provides insights for new features and functions.
Tip: Additional information is available on the Washington Systems Center website:
https://www.ibm.com/support/pages/node/6354583
The supported operating systems are listed in Table 7-3 on page 269.
For more information about this function, see this IBM Support web page.
For more information about the CPU Measurement Facility, see this IBM Support web page.
For more information, see 12.2, “IBM z17 Large System Performance Reference ratio” on
page 492.
This process correlates the z/OS user ID that issued the BCPii request with a Hardware
Management Console (HMC) user ID. The correlated HMC user ID is then used to determine
whether the user is authorized to issue the request.
Another V2 API, HWIREST2, will be introduced, which supports only the JWT authorization
scheme.
Restriction: This new function is supported only with the C and HLASM interfaces. No
REXX interface is available. Also, this support is available only on z/OS V3.1 and higher.
HWIREST2 has the advantage of enabling z/OS BCPii to register an ENF 68 listener on the
user’s behalf. HWIREST, the initial V2 API, makes the user application responsible for
registering an ENF 68 listener exit. HWIREST supports both the FACILITY class and JWT
authorization schemes for classical CEC non-HMC target operations, and it supports C,
HLASM, and REXX.
The local SE and any target SE or HMC must be at the IBM z17 level. The HMC that manages
the local SE must be at the IBM z17 level.
Workload instrumentation data can be collected with the new IEASYSxx WORKINST
parameter, which has the following values:
SYSTEM The default; data is collected if the system supports it.
YES Data collection is enabled.
NO Data collection is disabled.
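As a sketch, enabling the collection explicitly in IEASYSxx (the rest of the member is omitted) looks like the following statement:
   WORKINST=YES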
The SCRT has changed, and SMF 70.1 records have a new data section that reports on the
workload class usage.
Note: The support is available on z/OS V2.5 and higher via APAR OA66812, OA65240,
OA65242, OA66596, and OA66937. z/VM PTFs are required to enable workload
classification for z/OS as a guest.
IBM intends for reporting products to make use of service class/reporting class resource
consumption data in SMF 72.3 to provide workload-level granularity in reporting the CPU,
I/O, and memory power.
Note: The Exploitation support is available on z/OS V3.1 and higher via APAR OA63265
and OA66018.
The z/OS system command D M,CPU has been modified to support IBM z17 replacement
capacity.
The z/OS Data Gatherer stores new metrics in SMF 70.1 and in the Monitor III measurement
table ERBCPCDB. Also, the z/OS SCRT processes the enhanced SMF 70.1 records for TFP-HW.
Note: This enhancement is available for V2.5 and higher and requires exploitation support
via APARs OA66402, OA66054, OA63265, and OA66938.
The support is intended to assist with current and future regulations and demands to report
power metrics at a more granular level.
IBM intends for reporting products to make use of service class/reporting class resource
consumption data (in SMF 72.3) to provide workload-level granularity in reporting the CPU,
I/O, and memory power.
Restriction: This support is only available on V3.1 and higher and requires APARs
OA63265 and OA66018.
Starting with z/OS V2R5, the 2 GB LFAREA is allowed to exceed the 4 TB limit, up to 16 TB.
Attention: All online real storage over 4 TB is part of the 2 GB LFAREA, in addition to what
was specified in LFAREA. That means that only 4 TB is available for 4-KB and 1-MB frames;
real memory above 4 TB is available for 2 GB pages only. Applications that make use of
2 GB frames should be reviewed to use more frames if applicable (for example, Java and Db2).
Calculation Example
Let’s assume that 8 TB is defined to the LPAR. The IEASYSxx member contains the
following LFAREA definition: LFAREA=(1M=1024,2G=512)
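Based on the rule that is stated above, this definition results in approximately the following split: 1024 x 1 MB (1 GB) is reserved for 1 MB large pages, and 512 x 2 GB (1 TB) is explicitly requested for the 2 GB LFAREA. In addition, the 4 TB of online real storage above the 4 TB limit is added to the 2 GB LFAREA, for a total of about 5 TB of 2 GB LFAREA. The remaining storage below 4 TB stays available for 4-KB and 1-MB frames.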
Note: Any machine that has a single LPAR consuming more memory than the amount that
is plugged within a drawer inherently reduces the performance of that LPAR because of
cross-drawer communication.
This amount is 16 TB for IBM z17, 10 TB for z15/z16, and 8 TB for z14.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270.
ICSF Enhancements
The Clear key HMAC acceleration via CP Assist for Cryptographic Functions (CPACF) has
been implemented. ICSF changes clear key HMAC requests internally to use the new CPACF
instruction when running on IBM z17.
zDNN is an IBM standard C library that provides APIs to facilitate access to the zAIU, which is
an on-chip deep learning inference accelerator.
zDNN will be available on both z/OS and Linux on Z distributions. Linux on Z will also facilitate
zCX exploitation of AIU by z/OS clients using supported Linux AI frameworks.
For software fallback, the fallback scenario is an NN model that executes operations on the
AIU falling back to non-AIU hardware (standard CPU operations). Exploiting applications
should provide a software-based implementation of the DL primitives. Most z/OS exploitation
will be via the IBM Deep Learning Compiler, which generates an optimized NN model library
that can switch between AIU-based and normal CPU-based models.
Note: See IBM Z Deep Neural Network Library Programming Guide and Reference
The ONNX model feature does not require any changes from a data scientist’s perspective
(that is, no model changes). Action on the model deployment might be needed (for example,
to trigger a model recompile that targets the AIU). SQL Data Insights is a new feature and
requires explicit exploitation steps. These are brand new types of built-in SQL functions that
drive AI under the covers to uncover hidden relationships in Db2 data.
zDNN is that deep learning library. The zAIU has complex data layout requirements for the
tensor to enhance the performance characteristics of operations. zDNN formats the tensor
appropriately on behalf of the caller, using an optimized approach.
The zDNN library provides a set of APIs that an exploiter uses to drive the desired request.
zDNN is available on both z/OS and Linux on Z. Including support for Linux on Z is important
because acceleration can be enabled in frameworks for z/OS via zCX.
Important: zDNN is supported on IBM z16, but there are new APIs and IBM z17-aware
optimizations which require IBM z17.
IBM z17 has updated Neural Network Processor Assist (NNPA) instructions.
Restriction: V2.5 and higher is required via APAR OA66863. When using z/VM live guest
relocation, ensure that the relocation group members are at least at the z16 level to use
the IBM z17 functionality.
As in the past, the z/OS XL C/C++ compiler, included as a priced feature of z/OS, will not be
updated with the support mentioned above.
XL C/C++ supports up to z15 HW instructions with ARCH(13).
Programs compiled with XL C/C++ will run on IBM z17.
Tip: Watch for availability of the next release of Open XL C/C++ compiler at this website:
https://www.ibm.com/support/pages/ibm-open-xl-cc-and-xl-cc-zos-documentation-li
brary
AI accelerator exploitation
With the IBM z17 Integrated Accelerator for AI, customers can benefit from the acceleration of
AI operations, such as fraud detection, customer behavior predictions, and streamlining of
supply chain operations; all in real time. AI accelerators are designed to deliver AI inference in
real time, at large scale and rate, with no transaction left behind so that customers can
instantly derive the valuable insights from their data.
The AI capability is applied directly into the running transaction, shifting the traditional
paradigm of applying AI to the transactions that were completed. This innovative technology
can be used for intelligent IT workloads placement algorithms and contribute to better overall
system performance. The co-processor is driven by the new Neural Networks Processing
Assist (NNPA) instruction.
IBM Virtual Flash Memory is designed to help improve availability and handling of paging
workload spikes when running z/OS. With this support, z/OS is designed to help improve
system availability and responsiveness by using VFM across transitional workload events,
such as market openings, and diagnostic data collection. z/OS also helps improve processor
performance by supporting middleware use of pageable large (1 MB) pages.
VFM can help organizations meet their most demanding service level agreements and
compete more effectively. VFM is easily configurable, and provides rapid time to value.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270.
z/OS
GSF support allows an area of storage to be identified such that an Exit routine assumes
control if a reference is made to that storage. GSF is managed by new instructions that define
Guarded Storage Controls and system code to maintain that control information across
undispatch and redispatch.
z/VM
GSF is designed to improve the performance of garbage-collection processing by various
languages, in particular Java.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270.
Through enhanced hardware features (based on DAT table entry bit) and specific software
requests to obtain memory areas as nonexecutable, areas of memory can be protected from
unauthorized execution. A Protection Exception occurs if an attempt is made to fetch an
instruction from an address in such an element or if an address in such an element is the
target of an execute-type instruction.
z/OS
To use IEP, Real Storage Manager (RSM) is enhanced to request nonexecutable memory
allocation. Use new keyword EXECUTABLE=YES|NO on STORAGE OBTAIN or IARV64 to indicate
whether memory that is to be used contains executable code. Recovery Termination Manager
(RTM) writes LOGREC record of any program-check that results from IEP.
IEP support is for z/OS V2.R4 and later running on IBM z17 with APARs OA51030 and
OA51643 installed.
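A minimal assembler sketch follows; the length is an illustrative assumption only, and the actual request must match your storage attributes. The address of the obtained area is returned in register 1:
         STORAGE OBTAIN,LENGTH=4096,EXECUTABLE=NO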
z/VM
Guest exploitation support for the Instruction Execution Protection Facility is provided with
APAR VM65986.
The supported operating systems are listed in Table 7-3 on page 269 and Table 7-4 on
page 270.
Secure Boot
Secure Boot verification ensures that the Linux distribution kernel comes from an official
provider and was not compromised. If the signature of the distribution cannot be verified, the
process of booting the operating system is stopped.
For more information, see Appendix B, “IBM Integrated Accelerator for zEnterprise Data
Compression” on page 525.
Each PU chip includes one on-chip compression unit, which is designed to replace the
zEnterprise Data Compression (zEDC) Express PCIe feature that is available on IBM z14 and
earlier.
Note: The zEDC Express feature that is available on older systems is not carried forward
to IBM z17.
The IBM Integrated Accelerator for zEDC maintains software compatibility with zEDC
Express use cases. For more information, see Integrated Accelerator for zEnterprise Data
Compression.
All data interchange with zEDC compressed data remains compatible as IBM z17 and zEDC
capable machines coexist (accessing same data). Data that is compressed and written with
zEDC can be read and decompressed by IBM z17 well into the future.
Function support for the IBM Integrated Accelerator for zEDC is listed in Table 7-3 on
page 269 and Table 7-4 on page 270.
For more information, see Appendix B, “IBM Integrated Accelerator for zEnterprise Data
Compression” on page 525.
Important: This new long distance coupling link adapter (CL6) can only be connected to
CL6 links. CL5 can be used to communicate with pre-IBM z17 servers. CL5 can be used
as fallback from CL6, if necessary.
Note: The exploitation support is available via APAR OA64478, OA64591, OA64362,
OA64114.
CFCC Level 26
CFCC Level 26 is delivered on IBM z17 servers with driver level 61 and introduces the
following enhancements:
In z13 GA2, IBM delivered Asynchronous CF Duplexing for CF lock structures (with Db2
exploitation), which addresses the first problem, but for Db2 lock structures only (with Db2
V12/V13 exploitation only). It was successful in that it provided simplex-like service times for
a duplexed lock structure. However, a more general solution is needed for all of the critical
data-sharing lock structures, because no other exploiters have supported this Asynchronous
Duplexing mechanism (IMS IRLM, VSAM RLS, GRS). Also, nothing delivered so far
addresses the second problem of establishing duplexing (or doing a system-managed
rebuild) in a non-disruptive way.
The CF and z/OS make use of the existing PLSO “push” command for lock structures
(currently used in the Asynchronous Duplexing process) in a novel way to allow duplexing to
be established (or re-established after a failure), or a system-managed rebuild process to be
performed, in a non-disruptive way.
z/OS creates an empty secondary copy of the structure and binds it to the existing primary
in an asynchronous duplexing relationship
– All subsequent commands that update the primary structure instance will have those
updates “pushed” to the secondary structure instance
– At this point, the secondary copy is marked as not usable for failover purposes
z/OS then invokes a new long-running CF structure copy process to “push” all current
contents of the lock structure from primary to secondary; these pushes can interleave and
overlap with pushes generated by ongoing mainline locking commands, fully ordered and
serialized by normal primary structure latching and sequence number generation
processes
When the long-running CF structure copy process has completed all copy activity, the
structure transitions into the desired final state, which can either be simplex mode,
synchronous SM duplexing mode, or asynchronous duplexing mode
– In the case of duplexing, z/OS now marks the secondary copy as usable for failover
purposes
It is expected that the new PLSO-based copy process will be faster than the existing
software-based SM copy processes, but even if it is not, the non-disruptiveness of the copy
process is a major advantage:
– It avoids disruption of the client’s data-sharing workload while starting/restarting CF
Duplexing or SM rebuilding a structure (a long-standing client pain point)
APAR OA65820 (XCF/XES) – z/OS 2.5 and higher
CFCC Level 25
CFCC Level 25 is delivered on IBM z16 servers with driver level 51 and introduces the
following enhancements:
Scalability Improvements
Processing and dispatching enhancements that result in meaningful scaling of effective
throughput up to the limit of 16 ICF processors.
Request latency/performance improvements
CFCC and coupling link firmware and hardware improvements to reduce link latency.
Elimination of VSAM RLS orphaned cast-out lock problems and improved VSAM RLS
Structure Full recovery processing.
Addresses reoccurring problems that are encountered by installations running VSAM RLS
data sharing.
Retry Buffer support that is used on list and lock commands is extended to nonidempotent
cache commands and optimized lock commands.
The new support also allows connectors to lock structures to specify a percentage of
record data entries to be reserved. These reserved entries are off limits to normal
requests to the coupling facility and can be used only if a new keyword is used on lock
requests that generate record data entries.
Cache residency time metrics
The CF calculates in microseconds by way of a moving weighted average the elapsed
time a data area or directory entry resides in a storage class before it is reclaimed. XES
returns this information on an IXLCACHE REQUEST=READSTGSTATS and IXLMG
STRNAME=strname,STGCLASS=stgclass request.
DYNDISP=ON|OFF is deprecated
For CFCC Level 25, DYNDISP=THIN is the only available behavior for shared-engine CF
dispatching.
Specifying OFF or ON in CF commands and the CF configuration file is preserved for
compatibility, but a warning message is issued to indicate that these options are no longer
supported, and that DYNDISP=THIN behavior is to be used.
Before you begin the migration process, install the compatibility and coexistence PTFs. A
planned outage is required when you upgrade the CF or CF LPAR to CFCC Level 25.
CFCC Level 24
CFCC Level 24 is delivered on IBM z16 servers with driver level 41. CFCC Level 24
introduced the following enhancements:
CFCC Fair Latch Manager
This enhancement to the internals of the Coupling Facility (CFCC) dispatcher provides CF
work management efficiency and processor scalability improvements. It also improves the
“fairness” of arbitration for internal CF resource latches across tasks
CFCC Message Path Resiliency
CF Message Paths use a z/OS-provided system identifier (SYID) to uniquely identify
which z/OS system image (and instance of that system image) is sending requests over a
message path to the CF. With IBM z15, a new resiliency mechanism was provided that
transparently recovers from this “missing” message path deactivation (if and when that
deactivation ever occurs).
During path initialization, the CF provides more information to z/OS about every message
path that appears active, including the SYID for the path. Whenever z/OS interrogates the
state of the message paths to the CF, z/OS checks this SYID information for currency and
correctness and, if it is incorrect, gathers diagnostic information and reactivates the path to
correct the problem.
CF monopolization avoidance
z/OS takes advantage of current CF support in CFLEVEL 24 (IBM z15 T01/T02) to deliver
improved z/OS support for handling CF monopolization.
With IBM z15 T01/T02, the CF dispatcher monitors in real-time the number of CF tasks
that have a command assigned to them for a specific structure on a structure-by-structure
basis.
When the number of CF tasks that is used by any structure exceeds a model-dependent
CF threshold, and a global threshold on the number of active tasks also is exceeded, the
structure is considered to be “monopolizing” the CF. z/OS is informed of this
monopolization.
New support in z/OS observes the monopolization state for a structure, and starts to
selectively queue and throttle incoming requests to the CF on a structure-specific basis.
Other requests for other “non-monopolizing” structures and workloads are unaffected.
z/OS dynamically manages the queue of requests for the “monopolizing” structures to limit
the number of active CF requests (parallelism) to them, and monitors the CF’s
monopolization state information so as to observe the structure becoming
“non-monopolized” again, so that request processing can eventually revert back to a
non-throttled mode of operation.
The overall goal of z/OS anti-monopolization support is to protect the ability of ALL
well-behaved structures and workloads to access the CF, and get their requests
processed in the CF in a timely fashion. At the same time, it implements queuing and
throttling mechanisms in z/OS to hold back the specific abusive workloads that are
causing problems for other workloads.
z/OS XCF/XES use of APAR support is required to provide this function.
CFCC Change Shared-Engine CF Default to DYNDISP=THIN
Coupling Facility images can run with shared or dedicated processors. Shared processor
CFs can operate with different Dynamic Dispatching (DYNDISP) models:
– DYNDISP=OFF: LPAR timeslicing controls the CF processor.
– DYNDISP=ON: An optimization over pure LPAR timeslicing, in which the CFCC code
manages timer interrupts to share processors more efficiently.
– DYNDISP=THIN: An interrupt-driven model in which the CF processor is dispatched in
response to a set of events that generate Thin Interrupts.
Thin Interrupt support has been available since zEC12/zBC12. It is proven to be efficient and
well-performing in numerous different test and customer shared-engine coupling facility
configurations.
Therefore, IBM z15 made DYNDISP=THIN the default mode of operation for coupling facility
images that use shared processors.
For more information about CFCC code levels, see the Parallel Sysplex page of the IBM IT
infrastructure website.
For more information about the latest CFCC code levels, see the current exception letter that
is published on Resource Link website (login is required).
CF structure sizing changes are expected when upgrading from a previous CFCC Level to
CFCC Level 26. In fact, CFLEVEL 26 can have more noticeable CF structure size increases
associated with it, especially for smaller structures, because of task-related memory
increases that are associated with the increased number of CF tasks in CFLEVEL 26.
Alternatively, the batch SIZER utility also can be used for resizing your CF structures as
needed. Make sure to update the CFRM policy INITSIZE or SIZE values as needed.
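As a sketch with hypothetical policy name, structure name, sizes (specified here in units of 1 KB), and CF names, the values are updated in the CFRM policy source that is processed by the IXCMIAPU administrative data utility:
   DEFINE POLICY NAME(POLICY1) REPLACE(YES)
     STRUCTURE NAME(LOCK01)
               INITSIZE(131072)
               SIZE(262144)
               PREFLIST(CF01,CF02)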
For more information, see 4.6.4, “Parallel Sysplex connectivity” on page 199.
Coupling Express LR and Coupling Express2 LR are not available on IBM z17, neither as
carry-forward nor as new build.
The CL6 link connects only to CL6; use it only for connections to other IBM z17 CPCs that
use CL6.
Other than the higher bandwidth and new link type, CL6 looks and behaves much the
same as CL5, except for differences in the HCD definitions.
Also, because of its higher bandwidth, CL6 operates in a higher path selection “preference
tier” than CL5; other things being equal, selection of CL6 CHPIDs is preferred over CL5.
Important: To use CE3 LR, APARs OA64478 (DG), OA64362 (XCF/XES), OA64591
(IOS), OA65190 (IOCP), and OA64114 (HCD) are required (z/OS 2.4 and higher).
For more information, see “Coupling Thin Interrupts” on page 107. The supported operating
systems are listed in Table 7-5 on page 272.
Asynchronous CF Duplexing for lock structures requires the following software support:
z/OS V3R1
z/OS V2R5, V2R4
z/VM V7R4, V7R3
Db2 V12 with PTFs for APAR PI66689
IRLM V2.R3 with PTFs for APAR PI68378
The supported operating systems are listed in Table 7-5 on page 272.
Instead of performing XI signals synchronously on every cache update request that causes
them, data managers can “opt in” for the CF to perform these XIs asynchronously (and then
synchronize them with the CF at or before the transaction is completed). Data integrity is
maintained if all XI signals complete by the time transaction locks are released.
The feature enables faster completion of cache update CF requests, especially with the
cross-site distance that is involved. It also provides improved cache structure service times
and coupling efficiency. It requires specific data manager use or participation, which is not
transparent to the data manager. No SMF data changes were made for CF monitoring and
reporting.
This function refers exclusively to the z/VM dynamic I/O support of ICA coupling links.
Support is available for the CS5 CHPID type in the z/VM dynamic commands, including the
change channel path dynamic I/O command.
Specifying and changing the system name when entering and leaving configuration mode are
also supported. The supported operating systems are listed in Table 7-5 on page 272.
zHyperlink Express
IBM z14 introduced IBM zHyperLink Express, the first new IBM Z input/output (I/O) channel
link technology since FICON. The zHyperLink Express 1.1 feature is available with new IBM
z16 systems and is designed to help bring data close to processing power, increase the
scalability of transaction processing, and lower I/O latency.
zHyperLink Express is designed for up to 5x lower latency than High-Performance FICON for
IBM Z (zHPF) by directly connecting the IBM Z central processor complex (CPC) to the I/O
Bay of the DS8000 (DS8880 or later). This short distance (up to 150 m [492.1 feet]), direct
connection is intended to speed Db2 for z/OS transaction processing and improve active log
throughput.
The improved performance of zHyperLink Express allows the Processing Unit (PU) to make a
synchronous request for the data that is in the DS8000 cache. This process eliminates the
undispatch of the running request, the queuing delays to resume the request, and the PU
cache disruption.
Support for zHyperLink Writes can accelerate Db2 log writes to help deliver superior service
levels by processing high-volume Db2 transactions at speed. IBM zHyperLink Express
requires compatible levels of DS8000/F hardware, firmware R8.5.1 or later, and Db2 12 with
PTFs.
The supported operating systems are listed in Table 7-6 on page 272 and Table 7-7 on
page 273.
FICON Express32-4P
FICON Express32-4P four port feature is available on IBM z17. It has four PCHID/CHPIDs
(ports) and supports a link data rate of 32 gigabits per second (Gbps) and auto negotiation to
16 and 8 Gbps for synergy with switches, directors, and storage devices. With support for
native FICON, High-Performance FICON for Z (zHPF), and Fibre Channel Protocol (FCP),
the IBM z17 server enables you to position your SAN for even higher performance, which
helps you to prepare for an end-to-end 32 Gbps infrastructure to meet the lower latency and
increased bandwidth demands of your applications.
FICON Express32S
FICON Express32S (available for IBM z17 servers) supports a link data rate of 32 gigabits per
second (Gbps) and auto negotiation to 16 and 8 Gbps for synergy with switches, directors,
and storage devices. With support for native FICON, High-Performance FICON for Z (zHPF),
and Fibre Channel Protocol (FCP), the IBM z17 server enables you to position your SAN for
even higher performance, which helps you to prepare for an end-to-end 32 Gbps
infrastructure to meet the lower latency and increased bandwidth demands of your
applications.
The supported operating systems are listed in Table 7-6 on page 272 and Table 7-7 on
page 273.
FICON Express16SA
FICON Express16SA (carry forward to IBM z17 servers) supports a link data rate of
16 gigabits per second (Gbps) and auto negotiation to 8 Gbps for synergy with switches,
directors, and storage devices. With support for native FICON, High-Performance FICON for
IBM Z (zHPF), and Fibre Channel Protocol (FCP), the IBM z17 server enables you to position
your SAN for even higher performance, which helps you to prepare for an end-to-end 16 Gbps
infrastructure to meet the lower latency and increased bandwidth demands of your
applications.
The supported operating systems are listed in Table 7-6 on page 272 and Table 7-7 on
page 273.
Extended distance FICON
To use this enhancement, the control unit must support the new IU pacing protocol. IBM
System Storage DS8000 series supports extended distance FICON for IBM Z environments.
The channel defaults to current pacing values when it operates with control units that cannot
use extended distance FICON.
High-performance FICON
High-performance FICON (zHPF) was first provided on IBM System z10®, and is a FICON
architecture for protocol simplification and efficiency. It reduces the number of information
units (IUs) that are processed. Enhancements were made to the z/Architecture and the
FICON interface architecture to provide optimizations for online transaction processing
(OLTP) workloads.
As of this writing, the FICON Express32-4P, FICON Express32S, and FICON Express16SA
(CHPID type FC) features support the FICON protocol and the zHPF protocol in the server LIC.
When used by the FICON channel, the z/OS operating system, and the DS8000 control unit
or other subsystems, the FICON channel processor usage can be reduced and performance
improved. Suitable levels of Licensed Internal Code (LIC) are required.
Also, the changes to the architectures provide end-to-end system enhancements to improve
reliability, availability, and serviceability (RAS).
For example, the zHPF channel programs can be used by the z/OS OLTP I/O workloads,
Db2, VSAM, the partitioned data set extended (PDSE), and the z/OS file system (zFS).
At the zHPF announcement, zHPF supported the transfer of small blocks of fixed size data
(4 K) from a single track. This capability was extended, first to 64 KB, and then to multi-track
operations. The 64 KB data transfer limit on multi-track operations was removed by z196. This
improvement allows the channel to fully use the bandwidth of FICON channels, which results
in higher throughputs and lower response times.
zHPF is enhanced to allow all large write operations (greater than 64 KB) at distances up to
100 km (62.13 miles) to be run in a single round trip to the control unit. This process does not
elongate the I/O service time for these write operations at extended distances. This
enhancement to zHPF removes a key inhibitor for customers that are adopting zHPF over
extended distances, especially when the IBM HyperSwap capability of z/OS is used.
From the z/OS perspective, the FICON architecture is called command mode and the zHPF
architecture is called transport mode. During link initialization, the channel node and the
control unit node indicate whether they support zHPF.
Requirement: All FICON channel path identifiers (CHPIDs) that are defined to the same
LCU must support zHPF. The inclusion of any non-compliant zHPF features in the path
group causes the entire path group to support command mode only.
The mode that is used for an I/O operation depends on whether the control unit supports zHPF
and on the settings in the z/OS operating system. For z/OS use, a parameter is available in the
IECIOSxx member of SYS1.PARMLIB (ZHPF=YES or NO) and in the SETIOS system command
to control whether zHPF is enabled or disabled. The default is ZHPF=NO.
Support also is added for the D IOS,ZHPF system command to indicate whether zHPF is
enabled, disabled, or not supported on the server.
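As a brief illustration (statement placement and operational procedures vary by installation), zHPF can be enabled at IPL with the following statement in the IECIOSxx member of SYS1.PARMLIB:

ZHPF=YES

It can also be enabled dynamically and then verified from the console:

SETIOS ZHPF=YES
D IOS,ZHPF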
Similar to the existing FICON channel architecture, the application or access method provides
the channel program (CCWs). How zHPF (transport mode) manages channel program
operations is different from the CCW operation for the existing FICON architecture (command
mode).
While in command mode, each CCW is sent to the control unit for execution. In transport
mode, multiple channel commands are packaged together and sent over the link to the
control unit in a single control block. Less processor time is used compared to the existing
FICON architecture. Specific complex CCW chains are not supported by zHPF.
The supported operating systems are listed in Table 7-6 on page 272 and Table 7-7 on
page 273.
For more information about FICON channel performance, see the performance technical
papers that are available at the IBM Z I/O connectivity page of the IBM IT infrastructure
website.
MIDAW facility
The MIDAW facility is a system architecture and software feature that is designed to improve
FICON performance. This facility was first made available on IBM System z9® servers, and is
used by the Media Manager in z/OS.
The MIDAW facility provides a more efficient CCW/IDAW structure for certain categories of
data-chaining I/O operations. MIDAW can improve FICON performance for extended format
data sets. Nonextended data sets also can benefit from MIDAW.
MIDAW can improve channel use and I/O response time. It also reduces FICON channel
connect time, director ports, and control unit processor usage.
IBM laboratory tests indicate that applications that use extended format (EF) data sets, such
as Db2, or long chains of small blocks can gain significant performance benefits by using the
MIDAW facility.
MIDAW is supported on FICON channels that are configured as CHPID type FC. The
supported operating systems are listed in Table 7-6 on page 272 and Table 7-7 on page 273.
12 Exceptions are made to this statement, and many details are omitted in this description. In this section, we assume that you can merge this brief description with an understanding of I/O operations in a virtual memory environment.
Figure 7-2 shows a single CCW that controls the transfer of data that spans noncontiguous
4 K frames in main storage. When the IDAW flag is set, the data address in the CCW points to
a list of words (IDAWs). Each IDAW contains an address that designates a data area within
real storage.
The number of required IDAWs for a CCW is determined by the following factors:
IDAW format as specified in the operation request block (ORB)
Count field of the CCW
Data address in the initial IDAW
For example, three IDAWs are required when the following events occur:
The ORB specifies format-2 IDAWs with 4 KB blocks.
The CCW count field specifies 8 KB.
The first IDAW designates a location in the middle of a 4 KB block.
CCWs with data chaining can be used to process I/O data blocks that have a more complex
internal structure, in which portions of the data block are directed into separate buffer areas.
This process is sometimes known as scatter-read or scatter-write. However, as technology
evolves and link speed increases, data chaining techniques become less efficient because of
switch fabrics, control unit processing and exchanges, and other issues.
The MIDAW facility is a method of gathering and scattering data from and into discontinuous
storage locations during an I/O operation. The MIDAW format is shown in Figure 7-3. It is
16 bytes long and aligned on a quadword.
The use of MIDAWs is indicated by the MIDAW bit in the CCW. If this bit is set, the skip flag
cannot be set in the CCW. The skip flag in the MIDAW can be used instead. The data count in
the CCW must equal the sum of the data counts in the MIDAWs. The CCW operation ends
when the CCW count goes to zero or the last MIDAW (with the last flag) ends.
The combination of the address and count in a MIDAW cannot cross a page boundary.
Therefore, the largest possible count is 4 K. The maximum data count of all the MIDAWs in a
list cannot exceed 64 K, which is the maximum count of the associated CCW.
The scatter-read or scatter-write effect of the MIDAWs makes it possible to efficiently send
small control blocks that are embedded in a disk record to separate buffers from those that
are used for larger data areas within the record. MIDAW operations are on a single I/O block,
in the manner of data chaining. Do not confuse this operation with CCW command chaining.
VSAM and non-VSAM (DSORG=PS) data sets can be defined as EF data sets. For non-VSAM
data sets, a 32-byte suffix is appended to the end of every physical record (that is, block) on
disk. VSAM appends the suffix to the end of every control interval (CI), which normally
corresponds to a physical record.
A 32 K CI is split into two records to span tracks. This suffix is used to improve data reliability,
and facilitates other functions that are described next. For example, if the DCB BLKSIZE or
VSAM CI size is equal to 8192, the block on storage consists of 8224 bytes. The control unit
does not distinguish between suffixes and user data. The suffix is transparent to the access
method and database.
Extended addressability (EA) is useful for creating large Db2 partitions (larger than 4 GB). Striping can be used to
increase sequential throughput, or to spread random I/Os across multiple logical volumes.
DFSMS striping is useful for the use of multiple channels in parallel for one data set. The Db2
logs are often striped to optimize the performance of Db2 sequential inserts.
Processing an I/O operation to an EF data set normally requires at least two CCWs with data
chaining. One CCW is used for the 32-byte suffix of the EF data set. With MIDAW, the extra
CCW for the EF data set suffix is eliminated.
MIDAWs benefit EF and non-EF data sets. For example, to read 12 4 K records from a
non-EF data set on a 3390 track, Media Manager chains together 12 CCWs by using data
chaining. To read 12 4 K records from an EF data set, 24 CCWs are chained (two CCWs per
4 K record). By using Media Manager track-level command operations and MIDAWs, an
entire track can be transferred by using a single CCW.
Performance benefits
z/OS Media Manager includes I/O channel program support for implementing EF data sets,
and automatically uses MIDAWs when appropriate. Most disk I/Os in the system are
generated by using Media Manager.
Users of the Executing Fixed Channel Programs in Real Storage (EXCPVR) instruction can
construct channel programs that contain MIDAWs. However, doing so requires that they
construct an IOBE with the IOBEMIDA bit set. Users of the EXCP instruction cannot construct
channel programs that contain MIDAWs.
The MIDAW facility removes the 4 K boundary restrictions of IDAWs and, for EF data sets,
reduces the number of CCWs. Decreasing the number of CCWs helps to reduce the
FICON channel processor use. Media Manager and MIDAWs do not cause the bits to move
any faster across the FICON link. However, they reduce the number of frames and sequences
that flow across the link and use the channel resources more efficiently.
The performance of a specific workload can vary based on the conditions and hardware
configuration of the environment. IBM laboratory tests found that Db2 gains significant
performance benefits by using the MIDAW facility in the following areas:
Table scans
Logging
Utilities
Use of DFSMS striping for Db2 data sets
Media Manager with the MIDAW facility can provide significant performance benefits when
used in combination with applications that use EF data sets (such as Db2) or long chains of
small blocks.
For more information about FICON and MIDAW, see the following resources:
The I/O Connectivity page of the IBM IT infrastructure website includes information about
FICON channel performance
DS8000 Performance Monitoring and Tuning, SG24-8013
ICKDSF
Device Support Facilities, ICKDSF, Release 17 is required on all systems that share disk
subsystems with an IBM z17 processor.
ICKDSF supports a modified format of the CPU information field that contains a two-digit
LPAR identifier. ICKDSF uses the CPU information field instead of CCW reserve/release for
concurrent media maintenance. It prevents multiple systems from running ICKDSF on the
same volume, and at the same time allows user applications to run while ICKDSF is
processing. To prevent data corruption, ICKDSF must determine all sharing systems that
might run ICKDSF. Therefore, this support is required for IBM z17.
Remember: The need for ICKDSF Release 17 also applies to systems that are not part of
the same sysplex, or are running an operating system other than z/OS, such as z/VM.
z/OS discovery and auto-configuration (zDAC)
The zDAC function is integrated into the hardware configuration definition (HCD). Clients can
define a policy that can include preferences for availability and bandwidth that include parallel
access volume (PAV) definitions, control unit numbers, and device number ranges. When new
controllers are added to an I/O configuration or changes are made to existing controllers, the
system discovers them and proposes configuration changes that are based on that policy.
zDAC provides real-time discovery for the FICON fabric, subsystem, and I/O device resource
changes from z/OS. By exploring the discovered control units for defined logical control units
(LCUs) and devices, zDAC compares the discovered controller information with the current
system configuration. It then determines delta changes to the configuration for a proposed
configuration.
All added or changed logical control units and devices are added into the proposed
configuration. They are assigned proposed control unit and device numbers, and channel
paths that are based on the defined policy. zDAC uses channel path selection algorithms to
minimize single points of failure. The zDAC proposed configurations are created as work I/O
definition files (IODFs) that can be converted to production IODFs and activated.
zDAC is designed to run discovery for all systems in a sysplex that support the function.
Therefore, zDAC helps to simplify I/O configuration on IBM z17 systems that run z/OS, and
reduces complexity and setup time.
zDAC applies to all FICON features that are supported on IBM z17 when configured as
CHPID type FC. The supported operating systems are listed in Table 7-6 on page 272 and
Table 7-7 on page 273.
Information about the channels that are connected to a fabric (if registered) allows other
nodes or storage area network (SAN) managers to query the name server to determine what
is connected to the fabric.
The following attributes are registered for the IBM z17 systems:
Platform information
Channel information
Worldwide port name (WWPN)
Port type (N_Port_ID)
FC-4 types that are supported
Classes of service that are supported by the channel
The platform and name server registration service are defined in the Fibre Channel Generic
Services 4 (FC-GS-4) standard.
The informal name, 63.75-K subchannels, represents 65280 subchannels, as shown in the
following equation:
63 x 1024 + 0.75 x 1024 = 65280
This equation is applicable for subchannel set 0. For subchannel sets 1, 2 and 3, the available
subchannels are derived by using the following equation:
(64 x 1024) - 1 = 65535
The supported operating systems are listed in Table 7-6 on page 272 and Table 7-7 on
page 273.
Current z/VM versions provide MSS support for mirrored direct access storage device (DASD),
a subset of host support for the MSS facility that allows the use of an alternative
subchannel set for Peer-to-Peer Remote Copy (PPRC) secondary volumes.
The supported operating systems are listed in Table 7-6 on page 272 and Table 7-7 on
page 273. For more information about channel subsystem, see Chapter 5, “Central processor
complex channel subsystem” on page 209.
Subchannel sets
IBM z17 ME1 supports four subchannel sets (SS0, SS1, SS2, and SS3).
Subchannel sets SS1, SS2, and SS3 can be used for disk alias devices of primary and
secondary devices, and as Metro Mirror secondary devices. This set helps facilitate storage
growth and complements other functions, such as extended address volume (EAV) and
Hyper Parallel Access Volumes (HyperPAV).
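As an illustrative sketch (the device numbers, control unit number, and unit type are placeholders), alias devices can be placed in an alternative subchannel set by using the SCHSET keyword on the IOCP IODEVICE statement (the same specification is made through HCD when the devices are defined):

IODEVICE ADDRESS=(9000,64),CUNUMBR=(9000),UNIT=3390A,SCHSET=1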
See Table 7-6 on page 272 and Table 7-7 on page 273 for a list of supported operating
systems. For more information, see “IPL from an alternative subchannel set” on page 317.
32 K subchannels
To help facilitate growth and continue to enable server consolidation, the IBM z17 supports up
to 32 K subchannels per FICON Express32-4P, FICON Express32S, and FICON
Express16SA channels (CHPID). More devices can be defined per FICON channel, which
includes primary, secondary, and alias devices. The maximum number of subchannels across
all device types that are addressable within an LPAR remains at 63.75 K for subchannel set 0
and 64 K ((64 x 1024) - 1) for subchannel sets 1, 2, and 3.
This support is available on the IBM z17, IBM z16, IBM z15, IBM z14, IBM z13, and IBM z13s
servers. On IBM z17, it applies to the FICON Express32-4P, FICON Express32S, and
FICON Express16SA features (defined as CHPID type FC).
The supported operating systems are listed in Table 7-6 on page 272 and Table 7-7 on
page 273.
No action is required on z/OS to enable the health check; it is automatically enabled at IPL
and reacts to changes that might cause problems. The health check can be disabled by using
the PARMLIB or SDSF modify commands.
The supported operating systems are listed in Table 7-6 on page 272. For more information
about FICON Dynamic Routing (FIDR), see Chapter 4, “Central processor complex I/O
structure” on page 169.
The supported operating systems are listed in Table 7-6 on page 272.
For more information about FCP channel performance, see the performance technical papers
that are available at the IBM Z I/O connectivity page of the IBM IT infrastructure website.
The FCP protocol is supported by z/VM, z/VSE, and Linux on IBM Z. The supported
operating systems are listed in Table 7-6 on page 272 and Table 7-7 on page 273.
T10-DIF support
American National Standards Institute (ANSI) T10 Data Integrity Field (DIF) standard is
supported on IBM Z for SCSI end-to-end data protection on fixed block (FB) LUN volumes.
IBM Z provides added end-to-end data protection between the operating system and the
DS8870 unit. This support adds protection information that consists of Cyclic Redundancy
Checking (CRC), Logical Block Address (LBA), and host application tags to each sector of FB
data on a logical volume.
IBM Z support applies to FCP channels only. The supported operating systems are listed in
Table 7-6 on page 272 and Table 7-7 on page 273.
N_Port ID Virtualization
N_Port ID Virtualization (NPIV) allows multiple system images (in LPARs or z/VM guests) to
use a single FCP channel as though each were the sole user of the channel. First introduced
with z9 EC, this feature can be used with supported FICON features on IBM z17 servers. The
supported operating systems are listed in Table 7-6 on page 272 and Table 7-7 on page 273.
The capabilities of the WWPN tool are extended to calculate and show WWPNs for virtual and
physical ports ahead of system installation.
The tool assigns WWPNs to each virtual FCP channel or port by using the same WWPN
assignment algorithms that a system uses when assigning WWPNs for channels that use
NPIV. Therefore, the SAN can be set up in advance, which allows operations to proceed
much faster after the server is installed. In addition, the SAN configuration can be retained
instead of altered by assigning the WWPN to physical FCP ports when a FICON feature is
replaced.
The WWPN tool takes a .csv file that contains the FCP-specific I/O device definitions and
creates the WWPN assignments that are required to set up the SAN. A binary configuration
file that can be imported later by the system is also created. The .csv file can be created
manually or exported from the HCD/HCM. The supported operating systems are listed in
Table 7-6 on page 272 and Table 7-7 on page 273.
The WWPN tool is applicable to all FICON channels that are defined as CHPID type FCP (for
communication with SCSI devices) on IBM z17. It is available for download from the IBM
Resource Link website (log in is required).
Note: An optional feature can be ordered for WWPN persistency before shipment to keep
the same I/O serial number on the new CPC. Current information must be provided during
the ordering process.
The DPU I/O Complex moves functionality from the ASIC on the I/O adapters and integrates it
into Assist Processors on the PU chip.
The DPU design aims to build an I/O subsystem with similar or better qualities of service
than the existing I/O subsystem. It is designed to deliver improved performance, better power
efficiency, and decreased channel latencies. For more information, see 5.4, “IBM z17 Data
Processing Unit (DPU)” on page 219.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
Measurements
IBM z17 has an entirely new I/O hardware and architecture model for both storage and
networking. The new design moves the processor and memory closer together, which
transforms I/O operations to allow workloads to grow and scale. It adds new channel
measurement characteristics and channel utilization counts for the new Channel
Measurement Groups (CMG) 4 and 5. The channel utilization counts and channel
measurement characteristics for the new CMGs are provided in the Channel Path
Measurement Block (IRACPMB), which is populated by the Channel Path Measurement
Facility (CPMF) on IBM z17 hardware. z/OS WLM also provides support for the Channel
Path Measurement Block (IRACPMB).
z/OS Data Gatherer extends its data collection for Channel Measurement Groups 4 and 5 for
SMF 73 records and Monitor III channel data table (ERBCPDG3).
Note: The support is available for z/OS V2.5 or higher. The exploitation support is
available via APARs OA66014 and OA66054.
Restriction: The Channel Path Measurement Facility is not available under z/VM; no
channel measurement characteristics or utilization data are retrieved.
Similar to SMC-R, this protocol uses shared memory architectural concepts that eliminate
TCP/IP processing in the data path, yet preserve TCP/IP Qualities of Service for connection
management purposes.
Support in select Linux on IBM Z distributions is now provided for Shared Memory
Communications over Direct Memory Access (SMC-D). For more information, see this Linux
on IBM Z Blogspot web page.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
Because the initial version of SMC was limited to TCP/IP connections over the same layer 2
network, it was not routable across multiple IP subnets. The associated TCP/IP connection
was limited to hosts within a single IP subnet that required the hosts to have direct access to
the same physical layer 2 network (that is, the same Ethernet LAN over a single VLAN ID).
The scope of eligible TCP/IP connections for SMC was limited to and defined by the single IP
subnet.
SMC Version 2 (SMC v2) provides support for SMC over multiple IP subnets for both SMC-D
and SMC-R, referred to as SMC-D v2 and SMC-R v2. SMC v2 requires updates to the
underlying network technology: SMC-D v2 requires ISM v2, and SMC-R v2 requires RoCE v2.
The SMC-R v2 protocol is downward compatible, which allows SMC-R v2 hosts to continue to
communicate with previous SMC-R v1 hosts.
Although SMC-R v2 changes the SMC connection protocol to enable multiple IP subnet
support, SMC-R v2 does not change how user TCP socket data is transferred, which
preserves the benefits of SMC to TCP workloads.
TCP/IP connections that require IPsec are not eligible for any form of SMC.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
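As a minimal sketch (the PFID values and port numbers are placeholders, and a complete TCP/IP profile contains many more statements), SMC-D and SMC-R are enabled in the z/OS TCP/IP profile with the GLOBALCONFIG statement; SMC v2-specific attributes require additional configuration that is not shown here:

GLOBALCONFIG SMCD
GLOBALCONFIG SMCR PFID 0018 PORTNUM 1 PFID 0019 PORTNUM 2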
Multiple Write Facility with fewer I/O interrupts is designed to reduce processor use of the
sending and receiving partitions.
Support for this function is required by the sending operating system. For more information,
see “HiperSockets” on page 198. The supported operating systems are listed in Table 7-8 on
page 275.
Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network
configuration is simplified and intuitive, and LAN administrators can configure and maintain
the mainframe environment the same way as they do a non-mainframe environment.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
Linux on IBM Z tools can be used to format, edit, and process the trace records for analysis
by system programmers and network administrators.
The IBM z17 SMC-R v2 support requires that the OSH and NETH CHPID types be converged on
the same PCHID/port, with matching interface statements. The Network Express feature
supports the Enhanced QDIO (EQDIO) architecture, which allows z/OS Communications
Server to interact with the hardware by using optimized operations that support growing I/O rates.
EQDIO builds the foundation for the introduction of advanced Ethernet and networking
capabilities, which support IBM Z Hybrid Cloud Enterprise users. The z/OS Communications
Server SMF records are updated in order to support the new ports. z/OS IOS supports the
new OSA networking CHPID type, OSH (OSA Hybrid – Network Express for Ethernet).
Note: The support is available for z/OS V2.5 or higher. The exploitation support is
available via APARs OA63265, PH56528, OA64896, and PH54596.
Each port can be configured to provide support for a single host protocol (EQDIO or native
PCIe) or combination of host protocols. Adapters can be configured with both ports either as
OSH/NETH or as NETD.
Important: When z/OS runs as a guest of a z/VM system that supports IBM z17, there are two
options for network connectivity:
1. Dedicate a device on an OSH CHPID to the z/OS guest. The z/OS configuration will
operate the device as an OSH-type device.
2. Connect the z/OS guest to a z/VM virtual switch. The z/OS configuration will continue
to operate the virtual NIC as an OSD-type device. The virtual switch uplink will provide
physical connectivity via a device on an OSH CHPID.
A z/VM VSwitch supporting Network Express OSH does not currently support z/OS
guests exploiting an EQDIO uplink port. In the interim, clients will be required to use
either a guest-attached OSH device or existing functionality available with
OSA-Express7S adapters.
The new Converged Multi-Function Network Adapter (Network Express) is a two-port feature
that supports either 10 Gb or 25 Gb optics and has one PCHID/CHPID. Both ports must carry
the same speed optics. Single-mode (LR/LX) or multimode (SR/SX) fiber is used with small
form factor pluggable (SFP+) optics and an LC duplex connector. Network Express does not
auto-negotiate to a slower speed.
OSA-Express7S 1.2 25 GbE SR (FC 0459) and OSA-Express7S 1.2 25 GbE LR (FC 0460)
are installed in the PCIe+ I/O Drawer and have one 25 GbE physical port. New with this
generation is the Long Reach version, which uses single-mode fiber and can be connected
point-to-point at a distance of up to 10 km (6.2 miles). The features connect to a 25 GbE
switch and do not support auto-negotiation to a different speed.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
OSA-Express7S 1.2 10 GbE SR (FC 0457) and OSA-Express7S 1.2 10 GbE LR (FC 0456)
are installed in the PCIe+ I/O Drawer and have one 10 GbE physical port. The features connect
to a 10 GbE switch and do not support auto-negotiation to a different speed.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
OSA-Express7S 1.2 GbE SX (FC 0455) and OSA-Express7S 1.2 GbE LX (FC 0454) are
installed in the PCIe+ I/O Drawer and have two GbE physical ports. The features connect to a
GbE switch and do not support auto-negotiation to a different speed.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
With the OSA-ICC function, 3270 emulation for console session connections is integrated
through a port on the OSA-Express7S GbE.
Note: OSA-ICC supports up to 48 secure sessions per CHPID (the overall maximum of
120 connections is unchanged).
OSA-ICC Enhancements
With HMC 2.14.1 and newer the following enhancements are available:
The IPv6 communications protocol is supported by OSA-ICC 3270 so that clients can
comply with regulations that require all computer purchases to support IPv6.
TLS negotiation levels (the supported TLS protocol levels) for the OSA-ICC 3270 client
connection can now be specified:
– TLS 1.0 OSA-ICC 3270 server permits TLS 1.0, TLS 1.1, and TLS 1.2 client
connections.
– TLS 1.1 OSA-ICC 3270 server permits TLS 1.1 and TLS 1.2 client connections.
– TLS 1.2 OSA-ICC 3270 server permits only TLS 1.2 client connections.
Separate and unique OSA-ICC 3270 certificates are supported (for each PCHID) for the
benefit of customers who host workloads across multiple business units or data centers
where cross-site coordination is required. Customers can avoid interruption of all the TLS
connections at the same time when having to renew expired certificates. OSA-ICC also
continues to support a single certificate for all OSA-ICC PCHIDs in the system.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
Checksum offload provides checksum offload for several types of traffic and is supported by
the following features when configured as CHPID type OSD (QDIO mode only):
OSA-Express7S 1.2, OSA-Express7S 1.1, and OSA-Express7S 25 GbE
OSA-Express7S and OSA-Express7S 1.2 10 GbE
OSA-Express7S and OSA-Express7S 1.2 GbE
OSA-Express7S and OSA-Express7S 1.2 1000BASE-T Ethernet
When checksum is off-loaded, the OSA-Express feature runs the checksum calculations for
Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6) packets. The
checksum offload function applies to packets that go to or come from the LAN.
When multiple IP stacks share an OSA-Express, and an IP stack sends a packet to a next
hop address that is owned by another IP stack that is sharing the OSA-Express,
OSA-Express sends the IP packet directly to the other IP stack. The packet does not have to
be placed out on the LAN, which is termed LPAR-to-LPAR traffic. Checksum offload is
enhanced to support the LPAR-to-LPAR traffic, which was not originally available.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
The use of the DISPLAY OSAINFO command (z/OS) or NETSTAT OSAINFO (z/VM) allows the operator to monitor
and verify the current OSA configuration, and helps improve the overall management,
serviceability, and usability of OSA-Express cards.
These commands apply to CHPID type OSD. The supported operating systems are listed in
Table 7-8 on page 275.
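For example (the TCP/IP procedure name and port name are placeholders), the z/OS form of the command is:

D TCPIP,TCPIPA,OSAINFO,PORTNAME=OSAPORT1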
QDIO data connection isolation allows internal routing to be disabled for each QDIO connection. It
also provides a means for creating security zones and preventing network traffic between the
zones.
QDIO data connection isolation is supported by all OSA-Express features on IBM z16. The
supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on page 276.
QDIO interface isolation is supported on all OSA-Express features on IBM z16. The
supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on page 276.
The supported operating systems are listed in Table 7-8 on page 275.
Each input queue is a unique type of workload, and has unique service and processing
requirements. The IWQ function allows z/OS to preassign the appropriate processing
resources for each input queue. This approach allows multiple concurrent z/OS processing
threads to process each unique input queue (workload), which avoids traditional resource
contention.
IWQ reduces the conventional z/OS processing that is required to identify and separate
unique workloads. This advantage results in improved overall system performance and
scalability.
13 Only OSA-Express6S and OSA-Express7S cards are supported on IBM z16 as carry forward.
The following types of z/OS workloads are identified and assigned to unique input queues:
z/OS Sysplex Distributor traffic
Network traffic that is associated with a distributed virtual Internet Protocol address (VIPA)
is assigned to a unique input queue. This configuration allows the Sysplex Distributor
traffic to be immediately distributed to the target host.
z/OS bulk data traffic
Network traffic that is dynamically associated with a streaming (bulk data) TCP connection
is assigned to a unique input queue. This configuration allows the bulk data processing to
be assigned the suitable resources and isolated from critical interactive workloads.
EE (Enterprise Extender / SNA traffic)
IWQ for the OSA-Express features is enhanced to differentiate and separate inbound
Enterprise Extender traffic to a dedicated input queue.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
nondisruptive failover if a port becomes unavailable. The target links for aggregation must be
of the same type.
Link aggregation is applicable to CHPID type OSD (QDIO) and to OSH (EQDIO). The
supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on page 276.
Large send support for IPv6 packets applies to the OSA-Express7S 1.2, OSA-Express7S,
and OSA-Express6S13 features (CHPID type OSD) on IBM z16, IBM z15, and IBM z14.
OSA-Express6S added TCP checksum on large send, which reduces the cost (CPU time) of
error detection for large send.
The supported operating systems are listed in Table 7-8 on page 275 and Table 7-9 on
page 276.
In all cases, the TCP/IP stack determines the best setting that is based on the current system
and environmental conditions, such as inbound workload volume, processor use, and traffic
patterns. It can then dynamically update the settings.
Supported OSA-Express features adapt to the changes, which avoids thrashing and frequent
updates to the OAT. Based on the TCP/IP settings, OSA holds the packets before presenting
them to the host. A dynamic setting is designed to avoid or minimize host interrupts.
OSA Dynamic LAN idle is supported by all OSA-Express features on IBM z16 when in QDIO
mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on
page 275.
OSA Layer 3 VMAC is supported by all OSA-Express features on IBM z16 when in QDIO
mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on
page 275.
The Network Traffic Analyzer is supported by all OSA-Express features on IBM z16 when in
QDIO mode (CHPID type OSD). The supported operating systems are listed in Table 7-8 on
page 275.
The minimum software support levels are described in the following sections. Review the
current PSP buckets to ensure that the latest support levels are known and included as part
of the implementation plan.
Database administrators can use z/OS Dataset Encryption, z/OS Coupling Facility
Encryption, z/VM encrypted hypervisor paging, and z/TPF transparent database encryption,
which use the performance enhancements in the hardware.
In addition, the IBM z17, IBM z16, and IBM z15 cores implement a Modulo Arithmetic unit in
support of Elliptic Curve Cryptography.
CPACF is used by several IBM software product offerings for z/OS, such as IBM WebSphere
Application Server for z/OS. For more information, see 6.4, “CP Assist for Cryptographic
Functions” on page 231.
The supported operating systems are listed in Table 7-10 on page 279 and Table 7-11 on
page 280.
Crypto Express8S
Crypto Express8S includes a single- or dual- HSM adapter (single or dual IBM 4770 PCIe
Cryptographic Co-processor [PCIeCC]) and complies with the following Physical Security
Standards:
FIPS 140-3 level 4
Common Criteria EP11 EAL4+
Payment Card Industry (PCI) HSM
German Banking Industry Commission (GBIC, formerly DK)
AusPayNet (APN)
Support of Crypto Express8S functions varies by operating system and release and by the
way that the card is configured as a coprocessor or an accelerator. The supported operating
systems are listed in Table 7-10 on page 279 and Table 7-11 on page 280.
The supported operating systems are listed in Table 7-10 on page 279 and Table 7-11 on
page 280.
Web deliverables
For more information about web-deliverable code on z/OS, see the z/OS downloads website.
For Linux on IBM Z, support is delivered through IBM and the distribution partners. For more
information, see Linux on IBM Z on the IBM Developer website.
IBM categorized the following security functions according to International Organization for
Standardization (ISO) standard 7498-2:
Identification and authentication: Includes the ability to identify users to the system and
provide proof that they are who they claim to be.
Access control: Determines which users can access which resources.
Data confidentiality: Protects an organization’s sensitive data from being disclosed to
unauthorized persons.
Data integrity: Ensures that data is in its original form and that nothing altered it.
Security management: Administers, controls, and reviews a business security policy.
Nonrepudiation: Assures that the suitable individual sent the message.
Only cryptographic services can provide the data confidentiality and the identity
authentication that is required to protect business commerce on the internet15.
ICSF support for IBM z17 is provided with PTFs, not, as was previously the case, through web
deliverables.
Supported levels of ICSF automatically detect the hardware cryptographic capabilities that are
available on the system where ICSF is running, and enable functions accordingly. No
toleration support for the new hardware is necessary because it is “just there”. ICSF
maintenance is necessary if you want to use the new capabilities.
When new Quantum Safe Algorithms are used and a KDS is shared in a sysplex, ensure that
all ICSF PTFs are installed on all systems.
For more information about ICSF versions and FMID cross-references, see this IBM Support
web page.
Reporting can be done at an LPAR/domain level to provide more granular reports for capacity
planning and diagnosing problems. This feature requires fix for APAR OA54952.
The supported operating systems are listed in Table 7-10 on page 279.
Policy-driven z/OS Data Set Encryption enables users to perform the following tasks:
De-couple encryption from data classification; encrypt data automatically independent of
labor-intensive data classification work.
Encrypt data immediately and efficiently at the time that it is written.
Reduce risks that are associated with mis-classified or undiscovered sensitive data.
Help protect digital assets automatically.
Achieve application transparent encryption.
IBM Db2 for z/OS and IBM Information Management System (IMS) intend to use z/OS Data
Set Encryption.
With z/OS Data Set Encryption, DFSMS enhances data security with support for data set
level encryption by using DFSMS access methods. This function is designed to give users the
ability to encrypt their data sets without changing their application programs.
DFSMS users can identify which data sets require encryption by using JCL, Data Class, or
the RACF data set profile. Data set level encryption can allow the data to remain encrypted
during functions, such as backup and restore, migration and recall, and replication.
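As an illustrative sketch (the data set name and key label are placeholders), a key label can be assigned on the JCL DD statement with the DSKEYLBL keyword, or in the DFP segment of the RACF data set profile:

//ENCRDS   DD DSN=PROD.PAYROLL.DATA,DISP=(NEW,CATLG),
//            DSNTYPE=EXTREQ,SPACE=(CYL,(10,5)),
//            DSKEYLBL='DATASET.PAYROLL.ENCRKEY.001'

ALTDSD 'PROD.PAYROLL.**' DFP(DATAKEY(DATASET.PAYROLL.ENCRKEY.001))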
z/OS Data Set Encryption requires CP Assist for Cryptographic Functions (CPACF).
Considering the significant enhancements that were introduced with z14, the XTS encryption
mode is used by access method encryption to obtain the best performance possible. It
is not recommended to enable z/OS data set encryption until all sharing, fallback,
backup, and DR systems support encryption.
In addition to applying PTFs enabling the support, ICSF configuration is required. The
supported operating systems are listed in Table 7-10 on page 279.
The following Quantum-safe enhancements were introduced with IBM z16 to accomplish this
encryption:
Key generation
Hybrid key exchange schemes
Dual digital signature schemes
Included in this support is the ability to dynamically control whether a running z/VM system is
encrypting this data. This support protects guest paging data from administrators or users
with access to volumes. Enabled with AES encryption, z/VM Encrypted Paging includes low
overhead by using CPACF.
The supported operating systems are listed in Table 7-10 on page 279.
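As a brief illustration of this dynamic control, the CP SET ENCRYPT command changes the setting on a running system, and QUERY ENCRYPT displays the current state (the same setting can also be established at IPL through the system configuration file):

SET ENCRYPT PAGING ON
QUERY ENCRYPT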
Because of the potential costs and overhead, most organizations avoid the use of
host-based network encryption today. By using enhanced CPACF and SIMD on IBM z16, TLS
and IPsec can use hardware performance gains while benefiting from transparent
enablement. Reduced cost of encryption enables broad use of network encryption.
A new z/OSMF Compliance fact collection REST API sends an ENF86 signal to all systems.
Participating products and components collect and write compliance data to new SMF 1154
records that are associated with its unique subtype. These new SMF 1154 records can be
integrated into solutions, such as the IBM Z Security and Compliance Center.
This support requires PTFs for z/OS 2.4 and z/OS 2.5. The PTFs are identified by a fix
category that is designated specifically for Compliance data collection support named
IBM.Function.Compliance.DataCollection. For more information about how to use this fix
category to identify and install the specific PTFs that enable compliance data collection, see
“IBM Fix Category Values and Descriptions”.
For more information about z/OS collection sources and enablement, see the following
resources:
Software Announcement 222-005, IBM Z Security and Compliance Center.
Software Announcement 222-092, CICS Transaction Server for z/OS 6.1.
Software Announcement 222-003, Db2 13 for z/OS powered by AI innovations provides
industry scalability, business resiliency and intelligence.
The following prerequisite operating system versions are supported for the collection of
compliance data:
Red Hat Enterprise Linux 8.0 (RHEL) on IBM Z, or later
Although IBM z17 servers do not require any “functional” software, it is recommended to
install all IBM z17 service before upgrading to the new server. The support matrix for z/OS
releases and the IBM Z servers that support them are listed in Table 7-17, where “X” indicates
that the hardware model is supported.
New: ICSF support for IBM z17 is provided with PTFs, not Web deliverables
16 For example, the use of Crypto Express7S requires the Cryptographic Support for z/OS V2R2 - z/OS V2R3 web deliverable.
Keep members of the sysplex at the same software level, except during brief migration
periods.
Upgrade Coupling Facility LPARs to current levels (review all structure sizes by using the
CFSIZER tool before the CF is upgraded).
Review any restrictions and migration considerations before creating an upgrade plan.
Acknowledge that some hardware features cannot be ordered or carried forward for an
upgrade from an earlier server to IBM z17 and plan accordingly.
Determine the changes in IOCP, HCD, and HCM to support defining IBM z17 configuration
and the new features and functions it introduces.
Ensure that none of the new z/Architecture machine instructions (mnemonics) that were
introduced with IBM z17 collide with the names of Assembler macro instructions that you
use17.
Check the use of MACHMIG statements in LOADxx PARMLIB commands.
Contact software vendors to inform them of new machine model and request new license
keys, if applicable.
Review the z/OS Upgrade Workflow for z/OSMF that is provided as a ++APAR for z/OS
V2R4 and higher18. This Workflow also is available in the IBM Documentation library.
Fixes that are required to use the capabilities of the IBM z17 servers are identified by the
following fix category:
IBM.Device.Server.z17-9175.Exploitation
Use the SMP/E REPORT MISSINGFIX command to determine whether any FIXCAT APARs exist
that are applicable and are not yet installed, and whether any SYSMODs are available to
satisfy the missing FIXCAT APARs.
Before you install any required service, update your SMP/E HOLDDATA to the most current
level. Example 7-2 shows how to update your HOLDDATA information.
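The following JCL is a minimal sketch of such a job (the data set names are placeholders, and it assumes that the current full HOLDDATA file was already downloaded and stored in a data set):

//<Your Jobname> JOB (<Your Acct>),NOTIFY=&SYSUID
//RECHOLD  EXEC PGM=GIMSMP
//SMPCSI   DD DSN=<Your SMP/E Global CSI>,DISP=SHR
//SMPHOLD  DD DSN=<Your HOLDDATA data set>,DISP=SHR
//SMPOUT   DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SMPCNTL  DD *
  SET BOUNDARY (GLOBAL) .
  RECEIVE HOLDDATA .
/*
//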
In the workflow, all PTF FIXCATs are clearly documented, with steps to run the SMP/E REPORT
MISSINGFIX command that is shown in Example 7-3 to help you determine whether you are
prepared for IBM z17.
//<Your Jobname> JOB (<Your Acct>),NOTIFY=&SYSUID,TIME=1440,CLASS=1
//*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
//SMPREC EXEC PGM=GIMSMP
//SMPCSI DD DSN=<Your SMP/E Global CSI>,DISP=SHR
//SMPOUT DD SYSOUT=*
//SMPRPT DD SYSOUT=*
//SMPHRPT DD SYSOUT=*
//SYSPRINT DD SYSOUT=*
//SMPCNTL DD *
SET BOUNDARY (GLOBAL) .
REPORT MISSINGFIX ZONES(<Your Target Zone>)
FIXCAT(IBM.Device.Server.z17* ).
/*
//
If no further maintenance is required for your products to support IBM z17 hardware, you
see SMP/E output like the output shown in Example 7-4.
If additional maintenance is required, you see a detailed SMPRPT report that shows all
required software that is not currently installed. In Example 7-5, an additional SMPPUNCH
output is available that can be used in another SMP/E job to apply all missing PTFs that are
found in the MISSINGFIX report.
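As a minimal sketch (the target zone name and PTF numbers are placeholders taken from your MISSINGFIX report), the SMP/E control statements in such a job could look like the following example; remove the CHECK operand after the verification run is clean:

SET BOUNDARY (<Your Target Zone>) .
APPLY SELECT(<PTF1>,<PTF2>)
      GROUPEXTEND
      CHECK .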
For more information about IBM Fix Category Values and Descriptions, see this IBM Support
web page.
CPU Measurement Facility new extended counters <<APAR Not yet defined, please check
again>
For more information about this release, see this announcement letter.
Enhancements that continue to simplify and modernize the z/OS environment for a better
user experience and improved productivity by reducing the level of IBM Z specific skills
that are required to maintain z/OS.
Ongoing industry-wide simplification improvements to help companies install and
configure software by using a common and modern method. These installation
improvements range from the packaging of software through the configuration so that
faster time to value can be realized throughout the enterprise.
IBM Open Data Analytics for z/OS provides enhancements to simplify data analysis by
combining open source run times and libraries with analysis of z/OS data at its source.
Enhancements to security and data protection on the system with support for new industry
cryptography and continued enhancements driving pervasive encryption through the
ability to encrypt data without application changes. A new RACF capability improves
management of access and privileges.
The use of IBM z16 capabilities
System Recovery Boost reduces the time that z/OS is offline when the operating system is
offline for any reason. The use of IBM System Recovery Boost expedites planned
operating system shutdown processing, operating system Initial Program Load (IPL),
middleware and workload restart and recovery, and the client workload execution that
follows.
It enables businesses to return their systems to work faster, not just from catastrophes, but
after all kinds of disruptions (planned and unplanned). Another aspect of System
Recovery Boost is to expedite and streamline the execution of GDPS recovery scripts that
perform reconfiguration actions during various planned and unplanned operational
scenarios.
Configurations with a Coupling Facility on one of these servers can add an IBM z17 Server to
their Sysplex for a z/OS or a Coupling Facility image. IBM z17 does not support participating
in a Parallel Sysplex with IBM z14/IBM z14 ZR1 and earlier systems.
Each system can use, or not use, internal coupling links, CE LR links, or ICA SR coupling
links independently of what other systems are using.
Coupling connectivity is available only when other systems also support the same type of
coupling. For more information about supported coupling link technologies on IBM z16, see
4.6.4, “Parallel Sysplex connectivity” on page 199, and the Coupling Facility Configuration
Options white paper.
For more information about the ARCHITECTURE, TUNE, and VECTOR compiler options, see z/OS
XL C/C++ User’s Guide, SC14-7307-40.
z/OS XL C/C++ Web deliverables are available at no charge to z/OS XL C/C++ customers:
Based on open-source LLVM infrastructure; supports up to date C++ language standards
64-bit, UNIX System Services only
Statement of Direction: IBM will continue to adopt the LLVM and Clang compiler
infrastructure in future C/C++ offerings on IBM Za.
a. Any statements regarding IBM's future direction, intent, or product plans are subject to change
or withdrawal without notice.
– Network Express Adapter CHPID type OSH and PCI Function Types NETH and NETD
– AI Accelerator Adapter PCI Function Type PAIA
The new Guest Exploitation support for the following new features will be available with z17:
19 A z/VM VSwitch supporting Network Express OSH does not currently support z/OS guests exploiting an EQDIO uplink port. In the interim, clients will be required to use either a guest-attached OSH device or existing functionality available with OSA-Express7S adapters.
RoCE Network Express Adapter Hybrid (NETH) and Direct (NETD) Virtual Function support
allows guests to directly exploit the RoCE functionality of the Network Express adapter.
Networking Express Adapter EQDIO OSA Hybrid (OSH) CHPID support allows guests to
directly exploit OSH functionality of the Network Express adapter, and allows guests to
exploit OSD simulated devices via the z/VM VSwitch to a real OSH device.
The AI Accelerator Adapter allows guests to take advantage of and exploit the capabilities of
the enhanced AI Accelerator Adapter.
Changes to the current physical packaging and, in the case of networking, the architecture are
needed. Figure 7-7 shows how the I/O functionality has been moved to the z17 DPU in
order to accelerate I/O operations.
The entire IBM Z I/O abstraction layer (firmware), which was previously located within an
adapter-based RISC processor, is moved to a Z core. That:
– Eliminates the need for proprietary adapter hardware.
– Locates the adapter closer to memory and the nest (cores).
The next generation of the abstraction layer for networking, called Enhanced QDIO (EQDIO),
is designed to:
– Reduce protocol message exchanges.
– Reduce cache collisions.
– Increase memory efficiency.
– Increase bandwidth potential.
– Improve scalability for Z virtualization.
The adapters are now multi-function networking adapters and have two ports per I/O slot.
Initial z17 support is available for 10 GbE and 25 GbE transmission speeds. Each port on the
card is a unique CHPID. Multiple protocols can be shared on the same physical port. Each
port can be configured to provide a single function or a combination of functions.
Management, also known as Pin and Unpin SBs. Furthermore, z/VM maintains a set of
shadow queues in memory that are accessible only by z/VM and the adapter. z/VM’s
responsibility is to keep the guest and shadow queues synchronized.
Note: The z/VM VSWITCH provides customers the ability to define their own virtual network
that interconnects multiple virtual machines, by using simulated devices, to a physical network.
The DEFINE NIC TYPE QDIO command creates a network adapter that works on either
Layer 2 (Ethernet) or Layer 3 (IP).
Attention: New EQDIO supports Layer 2 mode only. Using Layer 3 requires traditional
QDIO uplink.
The VSWITCH configuration externals remain mostly unchanged with some existing parms
not applicable to EQDIO.
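As an illustrative sketch (the virtual switch name, real and virtual device numbers, and guest user ID are placeholders; with Network Express, the uplink device is on an OSH CHPID), the traditional CP definitions take the following form:

DEFINE VSWITCH VSWITCH1 RDEV 1000 ETHERNET
SET VSWITCH VSWITCH1 GRANT LINUX01
DEFINE NIC 0600 TYPE QDIO
COUPLE 0600 TO SYSTEM VSWITCH1

The DEFINE NIC and COUPLE commands are issued by, or on behalf of, the guest virtual machine; NICDEF statements in the user directory can achieve the same result.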
(Figure: z/VM connected to the physical network through EQDIO adapters)
7.6.3 Capacity
For the capacity of any z/VM logical partition (LPAR) and any z/VM guest, you might want to
adjust the number to accommodate the PU capacity of IBM z17 servers in terms of the
number of Integrated Facility for Linux (IFL) processors and central processors (CPs), real or
virtual.
Consider the following general guidelines when you are migrating VSEn environment to IBM
z17 servers:
Collect reference information before migration
This information includes baseline data that reflects the status of, for example,
performance data, CPU use of reference workload, I/O activity, and elapsed times.
This information is required to size IBM z17 and is the only way to compare workload
characteristics after migration.
This section provides an overview of the following IBM Z software licensing options that are
available for IBM z17 software, including MLC, zIPLA, subcapacity, sysplex, and Tailored Fit
Pricing:
Monthly license charge (MLC)
MLC is a recurring charge that is applied monthly. It includes the right to use the product
and provides access to IBM product support during the support period. Select an MLC
pricing metric that is based on your goals and environment.
The selected metric is used to price MLC products, such as z/OS, z/TPF, z/VSE,
middleware, compilers, and selected systems management tools and utilities:
– Key MLC metrics and offerings
MLC metrics include various offerings. The metrics and pricing schemes that are
available on IBM z15, IBM z16, and IBM z17 are listed in Table 7-19.
b. The Country Multiplex offering was withdrawn as of 1 January 2021. For existing CMP
customers, machines that are eligible to be included in a multiplex cannot be older than two
generations before the most recently available server.
c. Metric available with AWLC or CMLC only.
d. This metric is eligible for subcapacity charges or for aggregation in a qualified Parallel
Sysplex environment.
e. PSLCs are available only on IBM z16 A02 or IBM z16 AGZ, or IBM z15 T02, when that
machine is participating in a qualified Parallel Sysplex environment.
zIPLA licensing
International Program License Agreement (IPLA) programs include a one-time charge
(OTC) and an optional annual maintenance charge, called Subscription and Support. This
annual charge includes access to IBM technical support and enables you to obtain version
upgrades at no charge for products that generally fall under the zIPLA such as application
development tools, CICS tools, data management tools, WebSphere for IBM Z products,
Linux on IBM Z middleware and z/VM.
The following pricing metrics apply to IBM Z IPLA products:
– Value Unit pricing applies to most IPLA products that run on z/OS. Value Unit pricing is
typically based on a number of MSUs and allows for lower cost of incremental growth.
– z/VM and specific z/VM middleware include pricing that is based on the number of
engines. Engine-based Value Unit pricing allows for a lower cost of incremental growth
with more engine-based licenses that are purchased.
– Most Linux middleware also is priced based on the number of engines. The number of
engines is converted into Processor Value Units under the IBM Passport Advantage®
terms and conditions.
For more information, see this web page.
Subcapacity licensing
Subcapacity licensing includes software charges for specific IBM products that are based
on the use capacity of the logical partitions (LPARs) on which the product runs.
Subcapacity licensing removes the dependency between the software charges and CPC
(hardware) installed capacity.
The subcapacity licensed products are charged monthly based on the highest observed
4-hour rolling average use of the logical partitions in which the product runs.
The 4-hour rolling average use of the logical partition can be limited by a defined capacity
value on the image profile of the partition. This value activates the soft capping function of
PR/SM, which limits the 4-hour rolling average partition use to the defined capacity value.
Soft capping controls the maximum 4-hour rolling average usage (the last 4-hour average
value at every 5-minute interval), but does not limit the maximum instantaneous partition
use.
You can also use an LPAR group capacity limit, which sets soft capping by PR/SM for a
group of logical partitions that are running z/OS. Only the 4-hour rolling average use of the
LPAR group is tracked, which allows use peaks above the group capacity value.
Sysplex licensing
Sysplex licensing allows monthly software licenses to be aggregated across a qualified
Parallel Sysplex. To be eligible for Sysplex pricing aggregation, the customer environment
must meet hardware, software, operation, and verification criteria to be considered
“actively coupled”. For more information about Sysplex licensing, see this web page.
Tailored Fit Software Consumption
Tailored Fit Software Consumption Solution is a cloud-like, usage-based licensing model.
Usage is measured based on MSUs that are used, which removes the need for manual or
automated capping. It also allows customers to configure their systems to support optimal
response times and service level agreements.
Tailored Fit Pricing (TFP) requires z/OS V2.4, or later, with the applicable PTFs applied.
The requirements for TFP vary with the solution. The specific requirements for a solution
must be met before IBM can accept and process subcapacity reports in which TFP
solutions are reported. For more information about TFP, see this web page.
Technology Update Pricing for the IBM z17 extends the software price and performance that
is provided by AWLC for IBM z17 servers. The new and revised Transition Charges for
Sysplexes offerings provide a transition to Technology Update Pricing for the IBM z17 for
customers who have not fully migrated to IBM z17 servers. This transition ensures that
aggregation benefits are maintained and phases in the benefits of Technology Update Pricing
for the IBM z17 pricing as customers migrate.
When an IBM z17 server is in an actively coupled Parallel Sysplex or a Loosely Coupled
Complex, you might choose aggregated Advanced Workload License Charges (AWLC)
pricing or aggregated Parallel Sysplex License Charges (PSLC) pricing (subject to all
applicable terms and conditions).
When an IBM z17 server is part of a Multiplex under Country Multiplex Pricing (CMP) terms,
Country Multiplex License Charges (CMLC), Multiplex zNALC (MzNALC), and Flat Workload
License Charges (FWLC) are the only pricing metrics that are available (subject to all
applicable terms and conditions).
When an IBM z17 server is running z/VSE, you can choose Mid-Range Workload License
Charges (MWLC), which are subject to all applicable terms and conditions.
For more information about AWLC, CMLC, MzNALC, PSLC, MWLC, or the Technology
Update Pricing and Transition Charges for Sysplexes or Multiplexes TTO offerings, see the
IBM Z Software Pricing page of the IBM IT infrastructure website.
7.9 References
For more information about planning, see the home pages for the following operating
systems:
z/OS
z/VM
z/VSE
z/TPF
Linux on IBM Z
KVM for IBM Z
IBM z17 servers support dynamic provisioning features to give clients exceptional flexibility
and control over system capacity and costs.
8.1 Introduction
A key resource for managing client IBM Z servers is the IBM Resource Link website. Once
registered, a client can view product information by clicking Resource Link → Client
Initiated Upgrade Information, and selecting Education. Select your particular product
from the list of available systems.
8.2.1 Overview
Upgrades can be categorized as described in this section.
Tip: An MES provides system upgrades that can result in more enabled processors, a
different central processor (CP) capacity level, more processor drawers, memory,
PCIe+ I/O drawers, and I/O features (physical upgrade). Extra planning tasks are
required for nondisruptive logical upgrades. An MES is ordered through your IBM
representative and installed by IBM service support representatives (IBM SSRs).
Temporary upgrades
All temporary upgrades are LICCC-based. The one billable capacity offering is On/Off Capacity on Demand (CoD), which can be used for short-term capacity requirements and is pre-paid or post-paid.
The replacement capacity offerings that are available are Capacity Backup (CBU) and Flexible Capacity for Cyber Resiliency.
System Recovery Boost zIIP capacity is a pre-paid offering that is available on IBM z16
A01 and IBM z17 ME1. It is intended to provide temporary zIIP capacity to be used to
boost CPU performance for boost events. For more information, see Introducing IBM Z
System Recovery Boost, REDP-5563.
Flexible Capacity for Cyber Resiliency is a new type of temporary record that was introduced with IBM z16. This record holds the Flexible Capacity Entitlements for IBM z17 machines across two or more sites.
Activated capacity: Capacity that is purchased and activated. Purchased capacity can be greater than the activated capacity.
Billable capacity: Capacity that helps handle workload peaks (expected or unexpected). The one billable offering that is available is On/Off CoD.
Capacity: Hardware resources (processor and memory) that can process the workload can be added to the system through various capacity offerings.
Capacity Backup (CBU): This capacity allows you to place model capacity or specialty engines in a backup system. CBU is used in an unforeseen loss of system capacity because of an emergency or for Disaster Recovery testing.
Capacity for Planned Event (CPE)a: Used when temporary replacement capacity is needed for a short-term event. CPE activates processor capacity temporarily to facilitate moving systems between data centers, upgrades, and other routine management tasks. CPE is an offering of CoD.
Capacity levels: Can be full capacity or sub-capacity. For an IBM z17 ME1 system, capacity levels for the CP engine are 7, 6, 5, and 4.
Capacity setting: Derived from the capacity level and the number of processors. For the IBM z17 ME1 system, the capacity settings are 7nn, 6yy, 5yy, and 4xx, where xx, yy, or nn indicates the number of active CPs.
Customer Initiated Upgrade (CIU): A web-based facility where you can request processor and memory upgrades by using the IBM Resource Link and the system’s Remote Support Facility (RSF) connection.
Capacity on Demand (CoD): The ability of a system to increase or decrease its performance capacity as needed to meet fluctuations in demand.
Capacity Provisioning Manager (CPM): As a component of z/OS Capacity Provisioning, CPM monitors business-critical workloads that are running z/OS on IBM z17.
Customer profile: This information is on Resource Link and contains client and system information. A customer profile can contain information about systems that are related to their IBM customer numbers.
Flexible Capacity for Cyber Resiliency: Available on IBM z17 ME1 servers, the optional Flexible Capacity Record is an orderable feature that entitles a customer to active MIPS flexibility for all engine types between IBM z17 servers across two or more sites. It allows capacity swaps for an extended term.
Full capacity CP feature: For IBM z17 servers, feature CP7 provides full capacity. Capacity settings 7nn are full capacity settings with a range of 1 - 99 in decimal and A0 - K8, where A0 represents 100 and K8 represents 208, for capacity level 7nn.
Installed record: The LICCC record is downloaded, staged to the Support Element (SE), and is installed on the central processor complex (CPC). A maximum of eight different records can be concurrently installed.
Model capacity identifier (MCI): Shows the current active capacity on the system, including all replacement and billable capacity. For IBM z17 ME1 servers, the model capacity identifier is in the form of 4xx, 5yy, 6yy, or 7nn, where xx, yy, or nn indicates the number of active CPs:
– xx can have a range of 00 - 43. An all IFL or an all ICF system has a capacity level of 400.
– yy can have a range of 01 - 43.
– nn can have a range of 1 - 99 in decimal and A0 - K8, where A0 represents 100 and K8 represents 208, for capacity level 7nn.
Model Permanent Capacity Identifier (MPCI): Keeps information about the capacity settings that are active before any temporary capacity is activated.
Model Temporary Capacity Identifier (MTCI): Reflects the permanent capacity with billable capacity only, without replacement capacity. If no billable temporary capacity is active, MTCI equals the MPCI.
On/Off Capacity on Demand: Represents a function that allows spare capacity in a CPC to be made available to increase the total capacity of a CPC. For example, On/Off CoD can be used to acquire capacity for handling a workload peak.
Permanent capacity: The capacity that a client purchases and activates. This amount might be less capacity than the total capacity purchased.
Permanent upgrade: LICC that is licensed by IBM to enable the activation of applicable computing resources, such as processors or memory, for a specific CIU-eligible system on a permanent basis.
Purchased capacity: Capacity that is delivered to and owned by the client. It can be higher than the permanent capacity.
Permanent/Temporary entitlement record: The internal representation of a temporary (TER) or permanent (PER) capacity upgrade that is processed by the CIU facility. An entitlement record contains the encrypted representation of the upgrade configuration with the associated time limit conditions.
Replacement capacity: A temporary capacity that is used for situations in which processing capacity in other parts of the enterprise is lost. This loss can be a planned event or an unexpected disaster. The two replacement offerings available are Capacity for Planned Events and Capacity Backup.
Resource Link: The IBM Resource Link website is a technical support website that provides a comprehensive set of tools and resources (log in required).
Secondary approval: An option that is selected by the client that requires second approver control for each CoD order. When a secondary approval is required, the request is sent for approval or cancellation to the Resource Link secondary user ID.
Staged record: The point when a record that represents a temporary or permanent capacity upgrade is retrieved and loaded on the SE disk.
Subcapacity: For IBM z17 ME1 servers, CP features (CP4, CP5, and CP6) provide reduced capacity relative to the full capacity CP feature (CP7).
System Recovery Boost Upgrade record: Available on IBM z17 ME1 servers, the optional System Recovery Boost Upgrade is an orderable feature that provides more capacity for a limited time to enable speeding up shutdown, restart, and catchup processing for a limited event duration.
Temporary capacity: An optional capacity that is added to the current system capacity for a limited amount of time. It can be capacity that is owned or not owned by the client.
Vital product data (VPD): Information that uniquely defines system, hardware, software, and microcode elements of a processing system.
a. Capacity for Planned Event (CPE) is not supported on IBM z17.
Tip: The use of the CIU facility for a system requires that the online CoD buying feature
(FC 9900) is installed on the system. The CIU facility is enabled through the permanent
upgrade authorization feature code (FC 9898).
Considerations: Most of the MESs can be concurrently applied without disrupting the workload. For more information, see 8.3, “Concurrent upgrades” on page 360. However, specific MES changes might be disruptive, such as adding PCIe+ I/O drawers.
Memory upgrades that require dual inline memory module (DIMM) changes can be made nondisruptively if multiple CPC drawers are available and the flexible memory option is used.
Prepaid On/Off CoD tokens: Beginning with IBM z16, new prepaid On/Off CoD tokens
that are purchased do not carry forward to future systems.
CBU
This offering allows you to replace model capacity or specialty engines in a backup system
that is used in an unforeseen loss of system capacity because of a disaster.
System Recovery Boost record (FC 6802)
This offering allows you to add up to 20 zIIPs for use with the System Recovery Boost
facility. System Recovery Boost provides temporary extra capacity for CP workloads to
allow rapid shutdown, restart, and recovery of eligible systems. System Recovery Boost
records are prepaid, licensed for 1 - 5 years, and can be renewed at any time.
Flexible Capacity Record
This offering allows you to move CPU capacity between machines across two or more
sites. Capacity can be moved between sites a maximum of 12 times per year for a
maximum of 12 months per move.
Billable capacity
To handle a peak workload, you can activate up to double the purchased capacity of any
processor unit (PU) type temporarily. You are charged daily.
Replacement capacity
When processing capacity is lost in part of an enterprise, replacement capacity can be
activated. It allows you to activate any PU type up to your authorized limit.
This capability is based on the flexibility of the design and structure, which allows concurrent
hardware installation and Licensed Internal Control Code (LICC) configuration changes.
The sub-capacity models allow more configuration granularity within the family. The added
granularity is available for models that are configured with up to 43 CPs, and provides 129
extra capacity settings. Sub-capacity models provide for CP capacity increase in two
dimensions that can be used together to deliver configuration granularity. The first dimension
is adding CPs to the configuration. The second is changing the capacity setting of the CPs
currently installed to a higher model capacity identifier.
IBM z17 allows the concurrent and nondisruptive addition of processors to a running logical
partition (LPAR). As a result, you can have a flexible infrastructure to which you can add
capacity. This function is supported by z/OS, z/VM, and z/VSE. This addition is made by using
one of the following methods:
By planning ahead for the future need of extra processors: Reserved processors can be specified in the LPAR’s profile. When the extra processors are installed, the number of active processors for that LPAR can be increased without the need for a partition reactivation and initial program load (IPL).
Another (easier) way is to enable the dynamic addition of processors through the z/OS
LOADxx member. Set the DYNCPADD parameter in member LOADxx to ENABLE.
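For example, a LOADxx member that enables dynamic processor addition contains a statement similar to the following one (shown for illustration only; verify the exact syntax and the supported values for your z/OS release in the z/OS initialization and tuning documentation):

DYNCPADD ENABLE

A value of DISABLE prevents the dynamic addition of processors for that IPL.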
The model capacity identifier can be concurrently changed. Concurrent upgrades can be
performed for permanent and temporary upgrades.
Tip: A CPC drawer feature upgrade can be performed concurrently only for a Max43 or a Max90 machine if feature codes 2933 or 2934 were ordered with the base machine.
The concurrent I/O upgrade capability can be better used if a future target configuration is
considered during the initial configuration.
Important: The LICCC-based PU conversions require that at least one PU (CP, ICF, or IFL) remains unchanged. Otherwise, the conversion is disruptive. The PU conversion
generates a LICCC that can be installed concurrently in two steps:
1. Remove the assigned PU from the configuration.
2. Activate the newly available PU as the new PU type.
LPARs also might have to free the PUs to be converted. The operating systems must include
support to configure processors offline or online so that the PU conversion can be done nondisruptively.
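For example, on z/OS, a logical processor can be taken offline before the conversion and brought back online afterward by using the CONFIG (CF) operator command. The processor address that is shown here is an example only:

CF CPU(3),OFFLINE
CF CPU(3),ONLINE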
Considerations: Client planning and operator action are required to use concurrent PU
conversion. Consider the following points about PU conversion:
It is disruptive if all current PUs are converted to different types.
It might require individual LPAR outages if dedicated PUs are converted.
The CIU facility is controlled through the permanent upgrade authorization FC 9898. A
prerequisite to FC 9898 is the online CoD buying feature code (FC 9900). Although FC 9898
can be installed on your IBM z17 servers at any time, often it is added when ordering an
IBM z17.
After you place an order through the CIU facility, you receive a notice that the order is ready
for download. You can then download and apply the upgrade by using functions that are
available through the Hardware Management Console (HMC), along with the RSF. After all of
the prerequisites are met, the entire process (from ordering to activation of the upgrade) is
performed by the customer, and does not require any onsite presence of IBM SSRs.
CIU prerequisites
The CIU facility supports LICCC upgrades only. It does not support I/O upgrades. All other
capacity that is required for an upgrade must be previously installed. Extra processor drawers
or I/O cards cannot be installed as part of an order that is placed through the CIU facility. The
sum of CPs, unassigned CPs, ICFs, zIIPs, IFLs, and unassigned IFLs cannot exceed the
client PU count of the installed processor drawers. The total number of zIIPs can be twice the
number of purchased CPs.
As part of the setup, provide one resource link ID for configuring and placing CIU orders and,
if required, a second ID as an approver. The IDs are then set up for access to the CIU
support. The CIU facility allows upgrades to be ordered and delivered much faster than
through the regular MES process.
To order and activate the upgrade, log on to the IBM Resource Link website, and start the CIU
application to upgrade a system for processors or memory. You can request a client order
approval to conform to your operational policies. You also can allow the definition of more IDs
to be authorized to access the CIU. More IDs can be authorized to enter or approve CIU
orders, or only view orders.
Permanent upgrades
Permanent upgrades can be ordered by using the CIU facility. Through the CIU facility, you
can generate online permanent upgrade orders to concurrently add processors (CPs, ICFs,
zIIPs, and IFLs), and memory, or change the model capacity identifier. You can do so up to
the limits of the installed processor drawers on a system.
Temporary upgrades
The IBM z17 ME1 base model describes permanent and dormant capacity by using the
capacity marker and the number of PU features that are installed on the system. Up to eight
temporary offerings can be present. Each offering includes its own policies and controls, and
each can be activated or deactivated independently in any sequence and combination.
Although multiple offerings can be active at the same time, only one On/Off CoD offering can be active at any time, and only if enough resources are available to fulfill the offering specifications.
Temporary upgrades are represented in the system by a record. All temporary upgrade
records are on the SE hard disk drive (HDD). The records can be downloaded from the RSF
or installed from portable media. At the time of activation, you can control everything locally.
[Figure: Capacity offerings and temporary upgrade records. The base model and purchased capacity can be changed permanently through a CIU or MES order. Up to eight temporary records (R1 - R8) can be installed and active at the same time, with any remaining capacity staying dormant; the records are queried and activated through the HMC application or the API.]
The authorization layer enables administrative control over the temporary offerings. The
activation and deactivation can be driven manually or under the control of an application
through a documented application programming interface (API)2.
By using the API approach, you can customize at activation time the resources that are
necessary to respond to the current situation up to the maximum that is specified in the order
record. If the situation changes, you can add or remove resources without having to return to
the base configuration. This process eliminates the need for temporary upgrade
specifications for all possible scenarios.
This approach also enables you to update and replenish temporary upgrades, even in
situations where the upgrades are active. Likewise, depending on the configuration,
permanent upgrades can be performed while temporary upgrades are active. Examples of
the activation sequence of multiple temporary upgrades are shown in Figure 8-2.
[Figure 8-2: Examples of activating and deactivating multiple temporary upgrade records (R1 - R4, a mix of On/Off CoD and CBU records with their associated authorizations).]
As shown in Figure 8-2, if R2, R3, and R1 are active at the same time, only parts of R1 can be
activated because not enough resources are available to fulfill all of R1. When R2 is
deactivated, the remaining parts of R1 can be activated as shown.
Temporary capacity can be billable as On/Off CoD, or replacement capacity as CBU, Flexible
Capacity, or System Recovery Boost. Consider the following points:
On/Off CoD is a function that enables concurrent and temporary capacity growth of the
system.
On/Off CoD can be used for client peak workload requirements, for any length of time, and
includes a daily hardware and maintenance charge. The software charges can vary
according to the license agreement for the individual products. For more information,
contact your IBM Software Group representative.
2 API details can be found in z/OS MVS Programming: Callable Services for High-Level Languages, SA23-1377
On/Off CoD can concurrently add processors (CPs, ICFs, zIIPs, and IFLs), increase the
model capacity identifier, or both. It can do so up to the limit of the installed processor
drawers of a system. It is restricted to twice the installed capacity. On/Off CoD requires a
contractual agreement between you and IBM.
You decide whether to pre-pay or post-pay On/Off CoD. Capacity tokens that are inside the
records are used to control activation time and resources.
CBU is a concurrent and temporary activation of more CPs, ICFs, zIIPs, and IFLs; or an
increase of the model capacity identifier; or both.
Note: CBU cannot be used for peak workload management in any form.
On/Off CoD is the correct method to use for workload management. A CBU activation can
last up to 90 days when a disaster or recovery situation occurs.
CBU features are optional, and require unused capacity to be available on installed
processor drawers of the backup system. They can be available as unused PUs, an
increase in the model capacity identifier, or both.
A CBU contract must be in place before the special code that enables this capability can
be loaded on the system. The standard CBU contract provides for five 10-day tests (the
CBU test activation) and one 90-day activation over a five-year period. For more
information, contact your IBM representative.
You can run production workload on a CBU upgrade during a CBU test. At least an
equivalent amount of production capacity must be shut down during the CBU test. If you
signed CBU contracts, you also must sign an Amendment (US form #Z125-8145) with IBM
to allow you to run production workload on a CBU upgrade during your CBU tests. More
10-day tests can be purchased with the CBU record.
The System Recovery Boost Upgrade allows a concurrent activation of extra zIIPs.
The System Recovery Boost Upgrade record offering can be used to provide extra zIIP
capacity that can be used by the System Recovery Boost facility. You might want to
consider the use of this offering if your server is a full capacity model (7nn) and can benefit
from more CP capacity (running on zIIPs) for system shutdown and restart. The capacity
is delivered as zIIPs that can perform CP work during the boost periods for an LPAR.
A System Recovery Boost Record contract must be in place before the special code that
enables this capability can be loaded on the system. The standard contract provides for
one 6-hour activation for the specific purpose of System Recovery Boost only. For more
information, contact your IBM representative.
Activation of System Recovery Boost Record does not change the MCI of your system.
Permanent upgrades:
– MES (CPs, ICFs, zIIPs, IFLs, processor drawers, memory, and I/O): installed by IBM SSRs.
– Online permanent upgrade (CPs, ICFs, zIIPs, IFLs, and memory): performed through the CIU facility.
Temporary upgrades:
– On/Off CoD (CPs, ICFs, zIIPs, and IFLs): performed through the On/Off CoD facility.
– CBU (CPs, ICFs, zIIPs, and IFLs): activated through model conversion.
– Flexible Capacity Record (CPs, ICFs, zIIPs, and IFLs): activated through model conversion.
The MES upgrade can be performed by using LICCC only, installing more processor drawers,
adding PCIe+ I/O drawers, adding I/O3 features, or using the following combinations:
MES upgrades for processors are done by any of the following methods:
– LICCC assigning and activating unassigned PUs up to the limit of the installed
processor drawers.
– LICCC to adjust the number and types of PUs, to change the capacity setting, or both.
– Installing more processor drawers and LICCC assigning and activating unassigned
PUs on the installed processor drawers.
MES upgrades for memory are done by one of the following methods:
– By using LICCC to activate more memory capacity up to the limit of the memory cards
on the currently installed processor drawers. Flexible memory features enable you to
implement better control over future memory upgrades. For more information about the
memory features, see 2.5.7, “Flexible Memory Option” on page 53.
– Installing more processor drawers and the use of LICCC to activate more memory
capacity on installed processor drawers.
– By using the CPC Enhanced Drawer Availability (EDA), where possible, on
multi-drawer systems to add or change the memory cards.
3 Other adapter types, such as zHyperlink, Coupling Express LR, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE), also can be added to the PCIe+ I/O drawers through an MES.
MES upgrades for I/O are done by installing I/O features and supporting infrastructure (if
required) on PCIe drawers that are installed, or installing PCIe drawers to hold the new
cards.
An MES upgrade requires IBM SSRs for the installation. In most cases, the time that is
required for installing the LICCC and completing the upgrade is short, depending on how up
to date the machine microcode levels are.
To better use the MES upgrade function, carefully plan the initial configuration to allow a
concurrent upgrade to a target configuration. The availability of PCIe+ I/O drawers improves
the flexibility to perform unplanned I/O configuration changes concurrently.
The Store System Information (STSI) instruction gives more useful and detailed information
about the base configuration and temporary upgrades.
The model and model capacity identifiers that are returned by the STSI instruction are
updated to coincide with the upgrade. For more information, see “Store System Information
instruction” on page 396.
Upgrades: An MES provides the physical upgrade, which results in more enabled
processors, different capacity settings for the CPs, and more memory, I/O ports, I/O
adapters, and I/O drawers. Extra planning tasks are required for nondisruptive logical
upgrades. For more information, see “Guidelines to avoid disruptive upgrades” on
page 398.
Limits: The sum of CPs, inactive CPs, ICFs, unassigned ICFs, zIIPs, unassigned zIIPs,
IFLs, and unassigned IFLs, cannot exceed the maximum limit of PUs available for client
use.
An example of an MES upgrade for processors (with two upgrade steps) is shown in Figure 8-3 on page 368.
[Figure 8-3: MES upgrade example. A 9175 ME1 Max43 with MCI 708 (drawer 0, 52 PUs, ordered with feature code 2933) is upgraded first by an MES that adds 48 CPs and one processor drawer, and then by a second MES that adds 2 CPs and 2 IFLs.]
An IBM z17 model ME1 Max43 (one processor drawer), model capacity identifier 708 (eight
CPs), is concurrently upgraded to a model ME1 Max90 (two processor drawers), with MCI
756 (56 CPs). The model upgrade requires adding a processor drawer and assigning and
activating 48 PUs as CPs. Then, model Max90, MCI 756, is concurrently upgraded to a
capacity identifier 758 (58 CPs) with two IFLs. This process is done by assigning and
activating four more unassigned PUs (two as CP and two as IFLs). If needed, LPARs can be
created concurrently to use the newly added processors.
The example that is shown in Figure 8-3 shows how the addition of PUs as CPs and IFLs and
the addition of a processor drawer works. The addition of a processor drawer to an IBM z17
Max43 upgrades the machine to Max90.
After the second CPC drawer addition, CPC drawer 0 has 52 configurable PUs and CPC
drawer 1 has 52 configurable PUs, which allows 90 PUs to be characterized on the new
Max90 configuration.
The number of processors that are supported by the various operating system releases is listed in Table 8-3.
z/OS V2R4 and later: 200 PUs per z/OS LPAR in non-SMT mode and 128 PUs per z/OS LPAR in SMT mode. For both, the PU total is the sum of CPs and zIIPs.
z/VSEn V6.3: z/VSE Turbo Dispatcher can use up to 4 CPs, and tolerates up to 10-way LPARs.
z/TPF: 86 CPs.
Linux on IBM z17: The IBM z17 limit is 200 CPs, although Linuxa supports 256 cores without SMT and 128 cores with SMT (256 threads).
a. Supported Linux on IBM Z distributions (for more information, see 322).
Software charges, which are based on the total capacity of the system on which the software
is installed, are adjusted to the new capacity after the MES upgrade.
Software products that use Workload License Charges (WLC) or Tailored Fit Pricing (TFP) might not be affected by the system upgrade. Their charges are based on partition usage, not on the system total capacity. For more information about WLC, see 7.8, “Software licensing” on page 348.
The Flexible Memory Feature is available to allow better control over future memory
upgrades. For more information about flexible memory features, see 2.5.7, “Flexible Memory
Option” on page 53.
If the IBM z17 is a multiple processor drawer configuration, you can use the EDA feature to
remove a processor drawer and add DIMM memory cards. It also can be used to upgrade the
installed memory cards to a larger capacity size. You can then use LICCC to enable the extra
memory.
With suitable planning, memory can be added nondisruptively to z/OS partitions and z/VM partitions. If necessary, new LPARs can be created nondisruptively to use the newly added memory.
Concurrency: Upgrades that require DIMM changes can be concurrent by using the EDA feature. Planning is required to see whether this option is viable for your configuration. The use of the flexible memory option ensures that EDA can work with the least disruption.
You also can add memory by concurrently adding a second processor drawer with sufficient
memory into the configuration and then, using LICCC to enable that memory. Changing
DIMMs in a single CPC drawer system is disruptive.
An LPAR can dynamically take advantage of a memory upgrade if reserved storage is defined
to that LPAR. The reserved storage is defined to the LPAR as part of the image profile.
Reserved memory can be configured online to the LPAR by using the LPAR dynamic storage
reconfiguration (DSR) function. DSR allows a z/OS operating system image and z/VM
partitions to add reserved storage to their configuration if any unused storage exists.
The nondisruptive addition of storage to a z/OS and z/VM partition requires the correct
operating system parameters to be set. If reserved storage is not defined to the LPAR, the
LPAR must be deactivated, the image profile changed, and the LPAR reactivated. This
process allows the extra storage resources to be available to the operating system image.
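For example, on z/OS, after the extra memory is enabled through LICCC, an operator can display the storage layout and bring a reserved storage element online by using the following commands. The storage element number is an example only; verify the applicable syntax and options in the z/OS system commands documentation:

D M=STOR
CF STOR(E=1),ONLINE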
For more information about PCIe+ I/O drawers, see 4.2, “I/O system overview” on page 172.
The number of PCIe+ I/O drawers that can be present in an IBM z17 depends on how many
CPC drawers are present. It also depends on whether the CPC drawer reserve features are
present.
The number of drawers for IBM z17 configuration options is listed in Table 8-4.
Note: The maximum number of I/O drawers in the table is reduced by one for each CPC
drawer reserve feature that is present.
Table 8-4 PCIe+ I/O drawer limit for the IBM z17 systems
Number of frames   Max43   Max90   Max136   Max183/Max208
1                  3       2       1        -
2                  6       7       6        4
3                  -       12      11       9
4                  -       -       -        12
Depending on the number of I/O features, the configurator determines the number of PCIe+
I/O drawers that is required.
To better use the MES for I/O capability, carefully plan the initial configuration to allow
concurrent upgrades up to the target configuration.
If a PCIe+ I/O drawer is added to an IBM z17 and original features must be physically moved
to another PCIe+ I/O drawer, original card moves are disruptive.
z/VSEn, z/TPF, and Linux on Z do not provide dynamic I/O configuration support. Although installing the new hardware is done concurrently, defining the new hardware to these operating systems requires an IPL.
Tip: IBM z17 ME1 features a hardware system area (HSA) of 884 GB. The HSA is not part of the client-purchased memory.
A staged record can be removed without installing it. A FoD record can be installed only
completely; no selective feature or partial record installation is available. The features that are
installed are merged with the CPC LICCC after activation.
A FoD record can be installed only once. If it is removed, a new FoD record is needed to
reinstall. A remove action cannot be undone.
Tip: Accurate planning and the definition of the target configuration allows you to maximize
the value of these plan-ahead features.
Adding permanent upgrades to a system through the CIU facility requires that the permanent
upgrade enablement feature (FC 9898) is installed on the system. A permanent upgrade
might change the system model capacity identifier (4xx, 5yy, 6yy, or 7nn) if more CPs are
requested, or if the capacity identifier is changed as part of the permanent upgrade. If
necessary, more LPARs can be created concurrently to use the newly added processors.
Software charges that are based on the total capacity of the system on which the software is
installed are adjusted to the new capacity after the permanent upgrade is installed. Software
products that use WLC or customers with TFP might not be affected by the system upgrade
because their charges are based on LPAR usage rather than system total capacity.
For more information about WLC, see 7.8, “Software licensing” on page 348.
The CIU facility process on IBM Resource Link is shown in Figure 8-5.
[Figure 8-5: The CIU facility process on IBM Resource Link (https://www.ibm.com/support/resourcelink). The customer places an online permanent upgrade order over the internet, with optional secondary order approval, and the order is delivered to the system through the Remote Support Facility.]
The following sample sequence shows how to start an order on IBM Resource Link:
1. Sign on to Resource Link.
2. Select Customer Initiated Upgrade from the main Resource Link page. Client and
system information that is associated with the user ID are displayed.
3. Select the system to receive the upgrade. The current configuration (PU allocation and
memory) is shown for the selected system.
4. Select Order Permanent Upgrade. Resource Link limits the options to those options that
are valid or possible for the selected configuration (system).
5. After the target configuration is verified by the system, accept or cancel the order. An order
is created and verified against the pre-established agreement.
6. Accept or reject the price that is quoted. A secondary order approval is optional. Upon
confirmation, the order is processed. The LICCC for the upgrade is available within hours.
The order activation process for a permanent upgrade is shown in Figure 8-6. When the
LICCC is passed to the Remote Support Facility, you are notified through an e-mail that the
upgrade is ready to be downloaded.
8.5.1 Ordering
IBM Resource Link provides the interface that enables you to order a concurrent upgrade for
a system. You can create, cancel, or view the order, and view the history of orders that were
placed through this interface.
Configuration rules enforce that only valid configurations are generated within the limits of the
individual system. Warning messages are issued if you select invalid upgrade options. The
process allows only one permanent CIU-eligible order for each system to be placed at a time.
For more information, see the IBM Resource Link website (log in required).
The initial view of the Machine profile on Resource Link is shown in Figure 8-7.
The number of CPs, ICFs, zIIPs, IFLs, SAPs, memory size, and unassigned IFLs on the
current configuration are displayed on the left side of the page.
Resource Link retrieves and stores relevant data that is associated with the processor
configuration, such as the number of CPs and installed memory cards. It allows you to select
only those upgrade options that are deemed valid by the order process. It also allows
upgrades only within the bounds of the currently installed hardware.
When the order is available for download, you receive an e-mail that contains an activation
number. You can then retrieve the order by using the Perform Model Conversion task from the
SE, or through the Single Object Operation to the SE from an HMC.
In the Perform Model Conversion window, select Permanent upgrades to start the process,
as shown in Figure 8-8.
The window provides several possible options. If you select the Retrieve and apply data
option, you are prompted to enter the order activation number to start the permanent
upgrade, as shown in Figure 8-9.
Note: Details about On/Off Capacity on Demand are taken from the Capacity on
Demand User’s Guide (SC28-7025-01). Please make sure to download the latest copy
on IBM Documentation. Go to https://www.ibm.com/docs/en/systems-hardware, select
IBM Z or IBM LinuxONE, then select your configuration, and click Library Overview on
the navigation bar.
8.6.1 Overview
Before implementing any temporary capacity upgrades by using On/Off CoD, plan in advance to determine what configurations you might need based on workload projections. This planning is important because, when properly planned, you need to order only one On/Off CoD record, and this record should be able to handle any possible configuration that you want to activate.
When you order an On/Off CoD record, you can prepay for the upgrade or post-pay for the
upgrade.
When ordering a post-paid On/Off CoD record without spending limits, you select your
upgrade configuration. There is no cost incurred when you order or install this type of
record. You pay for what you activate during the activation time. You are charged on a
24-hour basis.
When ordering a prepaid On/Off CoD record, you can select one or more configurations
and identify the duration of each configuration. Then Resource Link calculates the total
number of tokens you will need. As resources are used, the tokens are decremented.
When ordering a post-paid On/Off CoD record with spending limits, you can select your
upgrade configuration and identify your maximum spending limit. Then, Resource Link
calculates the number of tokens that will not allow you to exceed that limit. As the
resources are used, the tokens are decremented.
For CP engines, a token represents an amount of processing capacity resulting in one MSU
of software cost for one day (an MSU day). For specialty engines, a token represents the
activation of one engine of that type for one day (a processor day).
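As a minimal illustration of this accounting (the configurations and durations are hypothetical, and Resource Link performs the actual calculation when you build the order), the token totals for a prepaid record can be estimated as follows:

# Illustrative sketch only: the configurations and durations are hypothetical.
# For CP capacity, one token is an MSU-day; for specialty engines, one token is
# one engine of that type active for one day (a processor day).
cp_configurations = [
    (100, 10),   # 100 additional MSUs planned for 10 days
    (50, 20),    # 50 additional MSUs planned for 20 days
]
ziip_configurations = [
    (2, 15),     # 2 additional zIIPs planned for 15 days
]

msu_day_tokens = sum(msu * days for msu, days in cp_configurations)
ziip_day_tokens = sum(engines * days for engines, days in ziip_configurations)

print(f"CP capacity tokens: {msu_day_tokens} MSU-days")            # 2000
print(f"zIIP capacity tokens: {ziip_day_tokens} processor days")   # 30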
Ensure that you enable your system well in advance of needing to place an order.
On/Off CoD allows you to temporarily turn on unowned PUs, unassigned CPs (or unassigned
CP capacity), and unassigned IFLs, zIIPs and ICFs available within the current model with the
following limitations:
Temporary model capacity with CPs and capacity level equal to or greater than the active
model capacity, up to 100% of the purchased capacity (active permanent capacity plus
unassigned permanent capacity)
As many temporary IFLs up to the total of purchased IFLs (permanently active IFLs plus
unassigned IFLs)
As many additional specialty engines of each type up to the total purchased specialty
engines of each type.
Note: On/Off CoD requires that the Online CoD Buying feature (FC 9900) is installed on
the system that you want to upgrade.
The temporary addition of memory and I/O ports or adapters is not supported.
An On/Off CoD upgrade cannot change the system capacity feature. The addition of
processor drawers is not supported. However, the activation of an On/Off CoD upgrade can
increase the model capacity identifier (4nn, 5nn, 6nn, or 7nn).
An On/Off CoD test record allows you to validate that the retrieve, install, activate, and deactivate On/Off CoD capacity upgrade process performs nondisruptively. Authorized users can train to activate an On/Off CoD record, test an LPAR configuration, and verify that they can change between CP activation levels.
Each On/Off CoD registered machine is entitled to one free On/Off CoD test. No IBM charges
are assessed for the test, including charges that are associated with temporary hardware
capacity, IBM software, and IBM maintenance.
This test can have a maximum duration of 24 hours, which commences upon the activation of
any capacity resource that is contained in the On/Off CoD record. Activation levels of capacity
can change during the 24-hour test period. The On/Off CoD test deactivates at the end of the
24-hour test period.
An administrative On/Off CoD test record allows you to test the Capacity on Demand
process for training and API testing without incurring hardware or software charges. An
administrative On/Off CoD test does not activate any additional capacity. The capacity level is
fixed at 0%.
8.6.3 Ordering
The enablement process for each Capacity on Demand offering begins when you order the associated enablement feature code and sign the associated IBM contract document(s). For online buying capability, the process completes when you receive an e-mail from Resource Link notifying you that your machine is enabled for ordering upgrade records.
gained by adding the engines. It is based on the published LSPR values for the configuration.
The maximum upgrade allowed for specialty engines is doubling the number of engines.
For example, for an increase in model capacity, if you have a 711, you can activate up to a
725. If you have a 711 with 2 unassigned engines (713 purchased), you would be able to
activate up to a 730. For an increase in specialty engines, if you have 6 ICFs, you can add up
to 6 more ICFs.
It is recommended that when you order a post-paid On/Off CoD record, you order the
maximum capacity and maximum number of specialty engines.
Note: Resource Link will not allow you to order beyond the maximum.
Although it is recommended that you order the maximum capacity and number of specialty engines when you order an On/Off CoD record, there might be reasons why you do not want to maximize. For example, you may:
Not want all engines available for use.
Want to prevent certain types of upgrades.
Want to reactivate just the unassigned capacity (order 0%).
Note: Even though Resource Link displays the high water mark model when you specify 0%
when ordering, a 0% On/Off CoD record on a downgraded machine allows you to activate any
supported On/Off CoD upgrades to unassigned model capacities between the active
permanent configuration and your high water mark.
By default, an On/Off CoD record is initially available for up to 180 days, starting on the date you place your order. After the 180 days, the record will expire unless you “replenish” the record.
Replenish allows you to use an existing configuration to either increase your capacity, add
specialty engines, or extend the expiration date rather than ordering a new On/Off CoD
record.
You can order a replenishment record to manually extend the expiration date or you can
enable the automatic renewal function to automatically extend the expiration date of installed
records. With the automatic renewal function, a replenishment record is automatically
generated 90 days before the record expires. The expiration date on the newly generated
replenishment record is set to 180 days from the date the record was automatically
generated, which extends the expiration date 90 days from the previous expiration date.
The automatic renewal function is available on post-paid On/Off CoD records. Automatic
renewal requires a Remote Support Facility (RSF) connection.
If you apply a permanent upgrade, by default, any active On/Off CoD resources of the same
type are converted to permanent upgrades. If all On/Off CoD resources are consumed by the
permanent upgrade, the On/Off CoD record remains active with zero resources allocated.
Therefore, after the permanent upgrade is complete, you should deactivate (or Undo) the
On/Off CoD record.
If your business process requires you to have a purchase order before placing an order, make
sure you have the purchase order number ready before placing your order.
On/Off CoD can be ordered as prepaid or postpaid. A prepaid On/Off CoD offering record
contains resource descriptions, MSUs, specialty engines, and tokens that describe the total
capacity that can be used. For CP capacity, the token contains MSU-days. For specialty
engines, the token contains specialty engine-days.
When resources on a prepaid offering are activated, they must have enough capacity tokens
to allow the activation for an entire billing window, which is 24 hours. The resources remain
active until you deactivate them or until one resource uses all of its capacity tokens. Then, all
activated resources from the record are deactivated.
A postpaid On/Off CoD offering record contains resource descriptions, MSUs, specialty
engines, and can contain capacity tokens that denote MSU-days and specialty engine-days.
When resources in a postpaid offering record without capacity tokens are activated, those
resources remain active until they are deactivated, or until the offering record expires. The
record normally expires 180 days after its installation.
When resources in a postpaid offering record with capacity tokens are activated, those resources must include enough capacity tokens to allow the activation for an entire billing window (24 hours). The resources remain active until they are deactivated, until all of the resource tokens are used, or until the record expires. The record usually expires 180 days after its installation. If one capacity token type is fully used, resources from the entire record are deactivated.
For example, for an IBM z17 with capacity identifier 502 (two CPs), a capacity upgrade
through On/Off CoD can be delivered in the following ways:
Add CPs of the same capacity setting. With this option, the model capacity identifier can
be changed to a 503, which adds another CP to make it a three-way CP. It also can be
changed to a 504, which adds two CPs and makes it a four-way CP.
Change to a different capacity level of the current CPs and change the model capacity
identifier to a 602 or 702. The capacity level of the CPs is increased, but no other CPs are
added. The 502 also can be temporarily upgraded to a 603, which increases the capacity
level and adds a processor. The capacity setting 439 does not have an upgrade path
through On/Off CoD because you cannot reduce the number of CPs and a 539 is more
than twice the capacity.
Use the Large System Performance Reference (LSPR) information to evaluate the capacity
requirements according to your workload type. For more information about LSPR data for
current IBM processors, see this web page.
The On/Off CoD hardware capacity is charged on a 24-hour basis. A grace period is granted
at the end of the On/Off CoD day. This grace period allows up to an hour after the 24-hour
billing period to change the On/Off CoD configuration for the next 24-hour billing period or
deactivate the current On/Off CoD configuration. The times when the capacity is activated
and deactivated are maintained in the IBM z17 and returned to the IBM support systems.
If On/Off capacity is active, On/Off capacity can be added without having to return the system
to its original capacity. If the capacity is increased multiple times within a 24-hour period, the
charges apply to the highest amount of capacity active in that period.
If capacity is added from an active record that contains capacity tokens, the system checks
whether the resource has enough capacity to be active for an entire billing window (24 hours).
If that criteria is not met, no extra resources are activated from the record.
If necessary, more LPARs can be activated concurrently to use the newly added processor
resources.
Consideration: On/Off CoD provides a concurrent hardware upgrade that results in more
capacity being made available to a system configuration. Extra planning tasks are required
for nondisruptive upgrades. For more information, see “Guidelines to avoid disruptive
upgrades” on page 398.
To participate in this offering, you must accept contractual terms for purchasing capacity
through Resource Link, establish a profile, and install an On/Off CoD enablement feature on
the system. Later, you can concurrently install temporary capacity up to the limits in On/Off
CoD and use it for up to 180 days.
Monitoring occurs through the system call-home facility. An invoice is generated if the
capacity is enabled during the calendar month. You are billed for the use of temporary
capacity until the system is returned to the original configuration. Remove the enablement
code if the On/Off CoD support is no longer needed.
Resource Link provides the interface to order a dynamic upgrade for a specific system. You
can create, cancel, and view the order. Configuration rules are enforced, and only valid
configurations are generated based on the configuration of the individual system. After you
complete the prerequisites, orders for the On/Off CoD can be placed. The order process uses
the CIU facility on Resource Link.
An example of an On/Off CoD order on the Resource Link web page is shown in Figure 8-10.
The example order that is shown in Figure 8-10 is an On/Off CoD order for 0% more CP
capacity (system is at capacity level 7), and for two more ICFs and two more zIIPs. The
maximum number of CPs, ICFs, zIIPs, and IFLs is limited by the current number of available
unused PUs of the installed processor drawers. The maximum number of SAPs is determined
by the model number and the number of available PUs on the already installed processor
drawers.
To finalize the order, you must accept Terms and Conditions for the order, as shown in
Figure 8-11.
Before an ordered On/Off CoD record can be activated, it has to be retrieved and installed. To
retrieve a temporary upgrade record, log onto the HMC in system programmer mode, find
Perform Model Conversion in the Configuration task list and click on Temporary
upgrades and Retrieve. The record is now placed in a staging area so it can be installed at a
later time.
To install the record, go to the Staged Records tab, select the record, and click Install. The Installed Records page opens and shows the newly installed record. The On/Off CoD record is now ready to be activated.
A temporary upgrade record can be activated using any of the following methods:
Manually, using the Support Element.
Setting up scheduled operations
Using APIs
z/OS Capacity Provisioning
Please refer to the Capacity on Demand User’s Guide for details about these activation
methods.
When you are finished using all or part of a capacity upgrade, you can take action to remove
processors or decrease model capacity using the Support Element. You can only remove
activated resources for the specific offering. You cannot remove dedicated engines or the last
of the engine type.
If you do not manually deactivate the added capacity, the activated resources are
automatically deactivated at expiration time (including any grace period). You will receive daily
warning messages (hardware messages) starting five days in advance of the expiration.
Once a temporary record enters the grace period, the only customer option is to deactivate all
resources from this record. You cannot change the activation level by increasing or
decreasing partial resources. If you attempt to partially increase or decrease resources, you
will receive an error indicating the temporary record has expired.
The Capacity on Demand User’s Guide explains in depth how to deactivate temporary
records and any considerations before deactivating.
Please refer to the Capacity on Demand User’s Guide for details about which actions are needed for your event.
When you no longer need your CIU machine profile you may also disable the profile on
Resource Link. Disabling a machine profile does not delete it. A disabled machine profile
remains listed on the CIU All machine profiles page on Resource Link so you can review its
order history, billing history, or other information if necessary.
z/OS Capacity Provisioning is delivered as part of the z/OS MVS Base Control Program
(BCP).
The Provisioning Manager monitors the workload on a set of z/OS systems and organizes the
provisioning of extra capacity to these systems when required. You define the systems to be
observed in a domain configuration file.
The details of extra capacity and the rules for its provisioning are stored in a policy file. These
two files are created and maintained through the Capacity Provisioning Management Console
(CPMC).
[Figure: The z/OS Capacity Provisioning infrastructure on IBM z17. WLM and RMF gatherers on each z/OS system feed the RMF Distributed Data Server (DDS) and CIM servers. The Capacity Provisioning Manager (CPM) uses these metrics together with the provisioning policy, is operated through the Capacity Provisioning Control Center (CPCC) and z/OS consoles, and communicates with the HMC and SE through SNMP.]
The z/OS WLM manages the workload by goals and business importance on each z/OS
system. WLM metrics are available through existing interfaces, and are reported through IBM
Resource Measurement Facility (RMF) Monitor III, with one RMF gatherer for each z/OS
system.
Sysplex-wide data aggregation and propagation occur in the RMF Distributed Data Server
(DDS). The RMF Common Information Model (CIM) providers and associated CIM models
publish the RMF Monitor III data.
CPM retrieves critical metrics from one or more z/OS systems by using CIM structures and protocols.
CPM communicates to local and remote SEs and HMCs by using the Simple Network
Management Protocol (SNMP).
CPM can see the resources in the individual offering records and the capacity tokens. When
CPM activates resources, a check is run to determine whether enough capacity tokens
remain for the specified resource to be activated for at least 24 hours. If insufficient tokens
remain, no resource from the On/Off CoD record is activated.
If a capacity token is fully used during an activation that is driven by the CPM, the corresponding On/Off CoD record is deactivated prematurely by the system. This process occurs even if the CPM activated this record or parts of it. However, you receive warning messages five days before a capacity token is fully used.
The five days are based on the assumption that the consumption is constant for the five days.
You must put operational procedures in place to handle these situations. You can deactivate
the record manually, allow it to occur automatically, or replenish the specified capacity token
by using the Resource Link application.
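A minimal sketch of the kind of check that such an operational procedure might automate is shown next. The figures are hypothetical, and the calculation simply assumes that consumption stays constant, which is the same assumption that the five-day warning is based on:

# Illustrative sketch only: estimate how long the remaining capacity tokens last
# if consumption stays constant.
tokens_remaining = 400        # hypothetical MSU-days left in the active record
average_daily_use = 80        # hypothetical MSU-days consumed per billing window

days_until_exhausted = tokens_remaining / average_daily_use
if days_until_exhausted <= 5:
    print(f"About {days_until_exhausted:.1f} days of tokens remain: "
          "replenish the record or plan its deactivation")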
[Figure: An example Capacity Provisioning Domain. The Capacity Provisioning Manager, operated through the z/OSMF browser interface and its console, manages z/OS images that belong to Sysplex A and Sysplex B across two CPCs with their SEs and HMC; the CPCs also host coupling facility, Linux on IBM Z, and z/VM partitions.]
The CPD configuration defines the CPCs and z/OS systems that are controlled by an
instance of the CPM. One or more CPCs, sysplexes, and z/OS systems can be defined into a
domain. Although sysplexes and CPCs do not have to be contained in a domain, they must
not belong to more than one domain.
CPM operates in the following modes, which allow four different levels of automation:
Manual
Use this command-driven mode when no CPM policy is active.
Analysis
In analysis mode, CPM processes capacity-provisioning policies and informs the operator
when a provisioning or de-provisioning action is required according to policy criteria.
Also, the operator determines whether to ignore the information or to manually upgrade or
downgrade the system by using the HMC, SE, or available CPM commands.
Confirmation
In this mode, CPM processes capacity provisioning policies and interrogates the installed
temporary offering records. Every action that is proposed by the CPM must be confirmed
by the operator.
Autonomic
This mode is similar to the confirmation mode, but no operator confirmation is required.
Several reports are available in all modes that contain information about the workload,
provisioning status, and the rationale for provisioning guidelines. User interfaces are provided
through the z/OS console and the CPMC application.
The provisioning policy defines the circumstances under which more capacity can be
provisioned (when, which, and how). The criteria features the following elements:
A time condition is when provisioning is allowed:
– Start time indicates when provisioning can begin.
– Deadline indicates that provisioning of more capacity is no longer allowed.
– End time indicates that deactivation of capacity must begin.
A workload condition is which work qualifies for provisioning. It can have the following
parameters:
– The z/OS systems that can run eligible work.
– The importance filter indicates eligible service class periods, which are identified by
WLM importance.
– Performance Index (PI) criteria:
• Activation threshold: PI of service class periods must exceed the activation
threshold for a specified duration before the work is considered to be suffering.
• Deactivation threshold: PI of service class periods must fall below the deactivation
threshold for a specified duration before the work is considered to no longer be
suffering.
– Included service classes are eligible service class periods.
– Excluded service classes are service class periods that must not be considered.
Tip: If no workload condition is specified, the full capacity that is described in the policy
is activated and deactivated at the start and end times that are specified in the policy.
Provisioning scope is how much more capacity can be activated, and is expressed in MSUs.
The number of zIIPs must be one specification per CPC that is part of the CPD.
The maximum provisioning scope is the maximum extra capacity that can be activated for all the rules in the CPD.
In the specified time interval, the provisioning rule is that up to the defined extra capacity can
be activated if the specified workload is behind its objective.
The rules and conditions are named and stored in the Capacity Provisioning Policy.
For more information about z/OS Capacity Provisioning functions, see z/OS MVS Capacity
Provisioning User’s Guide, SC33-8299.
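To make the interplay of the activation and deactivation thresholds concrete, the following Python fragment is a conceptual sketch only: it is not CPM code, and the threshold and duration values are hypothetical. It mimics how a PI-based workload condition decides whether work is suffering or has recovered.

from collections import deque

class PiCondition:
    # Conceptual sketch of a PI-based workload condition; not CPM code.
    # Threshold and duration values are hypothetical examples.
    def __init__(self, activation_pi=1.8, deactivation_pi=1.2, duration=6):
        self.activation_pi = activation_pi
        self.deactivation_pi = deactivation_pi
        self.duration = duration
        self.samples = deque(maxlen=duration)   # last 'duration' PI observations

    def observe(self, pi):
        # Record one PI sample for the monitored service class period.
        self.samples.append(pi)

    def work_is_suffering(self):
        # PI exceeded the activation threshold for the whole observation window.
        return (len(self.samples) == self.duration and
                all(pi > self.activation_pi for pi in self.samples))

    def work_has_recovered(self):
        # PI stayed below the deactivation threshold for the whole window.
        return (len(self.samples) == self.duration and
                all(pi < self.deactivation_pi for pi in self.samples))

cond = PiCondition()
for pi in (1.9, 2.1, 2.4, 2.0, 1.9, 2.2):   # six samples above the activation threshold
    cond.observe(pi)
print(cond.work_is_suffering())             # True: CPM would consider provisioning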
The provisioning management routines can interrogate the installed offerings, their content,
and the status of the content of the offering. To avoid the decrease in capacity, create only
one On/Off CoD offering on the system by specifying the maximum allowable capacity. Then,
when an activation is needed, the CPM can activate a subset of the contents of the offering
sufficient to satisfy the demand. If more capacity is needed later, the Provisioning Manager
can activate more capacity up to the maximum allowed increase.
Multiple offering records can be pre-staged on the SE HDD. Changing the content of the
offerings (if necessary) also is possible.
Remember: CPM controls capacity tokens for the On/Off CoD records. In a situation
where a capacity token is used, the system deactivates the corresponding offering record.
Therefore, you must prepare routines for catching the warning messages about capacity
tokens being used, and have administrative procedures in place for such a situation.
The messages from the system begin five days before a capacity token is fully used. To
avoid capacity records being deactivated in this situation, replenish the necessary capacity
tokens before they are used.
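As a simple illustration of the constant-consumption assumption behind the five-day warning, the following sketch uses hypothetical values (it is not a product interface) to estimate the days that remain before a capacity token is exhausted.

def days_until_token_exhausted(remaining_msu_days, msu_used_per_day):
    # Assumes constant daily consumption, as the warning logic does.
    if msu_used_per_day <= 0:
        return float("inf")                  # no consumption: token is never used up
    return remaining_msu_days / msu_used_per_day

# Hypothetical example: 400 MSU-days of tokens left, 100 MSU-days used per day
days_left = days_until_token_exhausted(400, 100)
if days_left <= 5:
    print(f"Replenish or deactivate: token exhausted in about {days_left:.1f} days")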
The CPM operates based on Workload Manager (WLM) indications, and the construct that is
used is the Performance Index (PI) of a service class period. It is important to select service
class periods that are suitable for the business application that needs more capacity.
For example, the application in question might be running through several service class
periods, where the first period is the important one. The application might be defined as
importance level 2 or 3, but might depend on other work that is running with importance
level 1. Therefore, it is important to consider which workloads to control and which service
class periods to specify.
In addition to boosting shutdown and IPL, System Recovery Boost can provide short-term
acceleration for specific system and sysplex recovery and diagnostic capture events in z/OS,
including, with the IBM z16, SVC dumps, HyperSwap configuration load, and middleware
region startup.
IBM z17 provides no new System Recovery Boost enhancements: all System Recovery Boost
enhancements that are available on IBM z16 are also available on IBM z17, with one
exception: the System Recovery Boost Upgrade is sunset on IBM z17.
There are three classes of boost: IPL (startup) boost, recovery process boost, and shutdown
boost. Each class has different capabilities.
1. IPL boost is enabled by default and delivers extra processor capacity after an IPL to get you
back up and running faster.
2. Recovery process boost provides increased short-duration processor capacity for the
acceleration of some sysplex recovery situations. Starting with IBM z16, additional
recovery events can be boosted, including stand-alone dumps, CF structure recovery,
SVC dumps, and other recovery events.
3. Shutdown boost enables a faster shutdown by delivering extra processor capacity upon
indication that a shutdown is in progress.
The increased capacity can be provided in one or more of the following ways:
Speed boost:
Speed boost is a capability of SRB that improves the recovery time of exploiting operating
systems when running on a subcapacity CPC. If you are running on a subcapacity CPC, then
while System Recovery Boost is active, z/OS will request that the CPC firmware increase the
CP speed of the image to full capacity model speed for the duration of the boost. After the
boost ends, the image will return to the subcapacity model speed.
Speed boost applies only to the image that is being boosted; all other images that are not being boosted continue to run at the subcapacity model speed.
zIIP boost
If your system has z Integrated Information Processors (zIIPs), then zIIP boost can improve
z/OS recovery time, assuming zIIP capacity is available to the image.
z/OS is the only operating system that can exploit the zIIP boost capability because it is the
only operating system that can natively exploit zIIPs. While zIIP boost is active, z/OS makes
most non-zIIP-eligible work zIIP eligible, which allows most work to run on zIIPs if there is not
sufficient CP capacity available. This approach provides additional capacity and parallelism to
accelerate processing during the boost periods. IBM refers to this as blurring the CPs and zIIPs together.
On z15 and z16 IBM also offered a third way to activate additional capacity: the priced feature
System Recovery Boost Upgrade. However, the System Recovery Boost Upgrade is sunset
on IBM z17:
New IBM z17 machines will not be able to order the SRB Upgrade Record
Current IBM z15 or z16 machines with the SRB Upgrade Record configured will not be
able to carry forward the record when upgrading to IBM z17
The SRB Upgrade Record will not be altered on IBM z15 or z16, so current z15 and z16
clients will not be impacted. Currently installed records may be extended, or new records
added.
Base SRB functionality will not be impacted
For more information and setup details, see the white paper “System Recovery Boost for the IBM z15 and z16” by Kevin McKenzie.
Note: System Recovery Boost Upgrade was introduced with IBM z15 and carried forward
with IBM z16.
System Recovery Boost Upgrade is not supported on IBM z17.
IBM Z Flexible Capacity for Cyber Resiliency supports a broad range of use case scenarios:
DR and DR Testing, compliance, facility maintenance, and pro-active avoidance.
12 annual capacity changes per serial number versus 4 annual capacity changes per serial number.
Stay-out period refers to the time that swapped capacity can remain on the backup server.
Inter-site means that capacity can be swapped only between servers in different data centers;
intra-site refers to servers in the same data center.
Capacity shifts can be done under full customer control without IBM intervention and can be
fully automated by using IBM GDPS automation tools.
Flexible Capacity for Cyber Resiliency supports a broad set of scenarios and can be
combined with other IBM On-Demand offerings.
Flexible Capacity Authorization (#9933)
Flexible Capacity Record (#0376)
Billing feature codes (FC 0317 - 0322, and FC 0378 - 0386)
Flexible Capacity for Cyber Resiliency can be ordered by contacting your IBM hardware sales
representative. The offering requires that an order is placed against each serial number (SN)
that is involved in capacity transfer with one record per SN.
Installation and setup: The new Flexible Capacity Record is installed and set up on each
participating IBM Z server.
Note that the preceding setup description is only one example. Flexible Capacity is highly
flexible, and many configurations, including complex ones with multiple IBM Z machines
across several data centers, are feasible.
For more information and implementation examples, see Appendix C, “Tailored Fit Pricing
and IBM Z Flexible Capacity for Cyber Resiliency” on page 531.
For more information, see the IBM Redpaper publication IBM Z Flexible Capacity for Cyber Resiliency, REDP-5702.
Important: CBU is for disaster and recovery purposes only. It cannot be used for peak
workload management or for a planned event.
8.11.1 Ordering
The CBU process allows for CBU to activate CPs, ICFs, zIIPs, IFLs, and SAPs. To use the
CBU process, a CBU enablement feature (FC 9910) must be ordered and installed. You must
order the quantity and type of PU that you require by using the following feature codes:
FC 6805: More CBU test activations
FC 6817: Total CBU years ordered
FC 6818: CBU records that are ordered
FC 6820: Single CBU CP-year
FC 6821: 25 CBU CP-year
FC 6822: Single CBU IFL-year
FC 6823: 25 CBU IFL-year
FC 6824: Single CBU ICF-year
FC 6825: 25 CBU ICF-year
FC 6828: Single CBU zIIP-year
FC 6829: 25 CBU zIIP-year
FC 6830: Single CBU SAP-year
FC 6831: 25 CBU SAP-year
FC 6832: CBU replenishment
The CBU entitlement record (FC 6818) contains an expiration date that is established at the
time of the order. This date depends on the quantity of CBU years (FC 6817). You can extend
your CBU entitlements through the purchase of more CBU years.
The number of FC 6817 per instance of FC 6818 remains limited to five. Fractional years are
rounded up to the nearest whole integer when calculating this limit.
For example, if two years and eight months remain before the expiration date at the time of
the order, the expiration date can be extended by no more than two years. One test activation
is provided for each CBU year that is added to the CBU entitlement record.
FC 6805 allows for ordering more tests in increments of one. The maximum number of tests
that is allowed is 15 for each FC 6818.
The processors that can be activated by CBU come from the available unassigned PUs on
any installed processor drawer. A maximum of 208 CBU CP features can be ordered on a
z17. The number of features that can be activated is limited by the number of unused PUs on
the system; for example:
An IBM z17 Max43 with Capacity Model Identifier 401 can activate up to 43 CBU features.
These CBU features can be used to change the capacity setting of the CPs, and to
activate unused PUs.
An IBM z17 Max90 with 15 CPs, 4 IFLs, and 1 ICF has 70 unused PUs available. It can
activate up to 70 CBU features.
The ordering system allows for over-configuration in the order. You can order up to 208 CBU
features, regardless of the current configuration. However, at activation, only the capacity that
is installed can be activated. At activation, you can decide to activate only a subset of the
CBU features that are ordered for the system.
Sub-capacity models change the way in which the required CBU features are determined. On the
full-capacity models, the CBU features indicate the amount of extra capacity that is needed. If
the amount of necessary CBU capacity is equal to four CPs, the CBU configuration is four
CBU CPs.
The sub-capacity models feature multiple capacity settings of 4xx, 5yy, or 6yy. The standard
models use the capacity setting 7nn. To change the capacity setting, the number of CBU CPs
must be equal to or greater than the number of CPs in the base configuration.
For example, if the base configuration is a two-way 402, two CBU feature codes are required
to provide a CBU configuration of a four-way of the same capacity setting (404). If the desired
CBU target configuration is a four-way 504, going from model capacity identifier 402 to a 504
requires four CBU feature codes (two CBU features to change from 402 to 502, plus two CBU
features to go from 502 to 504).
If the capacity setting of the CPs is changed, more CBU features are required, not more
physical PUs. Therefore, your CBU contract requires more CBU features when the capacity
setting of the CPs is changed.
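The arithmetic behind these examples can be summarized in a short sketch. The following Python function is derived only from the examples in this section (it is not an IBM configurator and ignores hardware limits such as the number of unused PUs): raising the capacity setting costs one feature per CP in the base configuration, and each added CP costs one more feature.

def cbu_cp_features_needed(base_cps, base_level, target_cps, target_level):
    # Derived from the examples in this section; always confirm the real
    # requirement with your IBM configuration output.
    features = 0
    if target_level > base_level:          # for example, 4xx -> 5yy or 7nn
        features += base_cps               # change the setting of the base CPs
    features += target_cps - base_cps      # activate the additional CPs
    return features

# Levels 4, 5, 6, and 7 correspond to capacity settings 4xx, 5yy, 6yy, and 7nn
print(cbu_cp_features_needed(2, 4, 4, 4))  # 402 -> 404: 2 features
print(cbu_cp_features_needed(2, 4, 4, 5))  # 402 -> 504: 4 features
print(cbu_cp_features_needed(4, 5, 7, 7))  # 504 -> 707: 7 features

With seven CP CBU features, the same arithmetic reproduces the Figure 8-14 example later in this section: a 504 can reach a 511 within the same MCI, or a 707 after changing the MCI.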
CBU can add CPs through LICCC only, and the IBM z17 ME1 must have the correct number
of installed processor drawers to allow the required upgrade. CBU can change the model
capacity identifier to a higher value than the base setting (4xx, 5yy, or 6yy), but the CBU
feature cannot decrease the capacity setting.
A CBU contract must be in place before the special code that enables this capability can be
installed on the system. CBU features can be added to an IBM z17 nondisruptively. For each
system enabled for CBU, the authorization to use CBU is available for 1 - 5 years.
The alternative configuration is activated temporarily, and provides more capacity than the
system’s original, permanent configuration. At activation time, determine the capacity that you
require for that situation. You can decide to activate only a subset of the capacity that is
specified in the CBU contract.
The base system configuration must have sufficient memory and channels to accommodate
the potential requirements of the large CBU target system. Ensure that all required functions
and resources are available on the backup systems. These functions include CF LEVELs for
coupling facility partitions, memory, and cryptographic functions, and connectivity capabilities.
When the emergency is over (or the CBU test is complete), the system must be returned to its
original configuration. The CBU features can be deactivated at any time before the expiration
date. Failure to deactivate the CBU feature before the expiration date can cause the system to
downgrade resources gracefully to the original configuration. The system does not deactivate
dedicated engines, or the last of in-use shared engines.
Planning: CBU for processors provides a concurrent upgrade. This upgrade can result in
more enabled processors, changed capacity settings that are available to a system
configuration, or both. You can activate a subset of the CBU features that are ordered for
the system. Therefore, more planning and tasks are required for nondisruptive logical
upgrades. For more information, see “Guidelines to avoid disruptive upgrades” on
page 398.
For more information, see the Capacity on Demand User’s Guide, SC28-7058-00.
CBU activation
CBU is activated from:
The SE, by using the HMC and Single Object Operations to the SE
By using the Perform Model Conversion task
Through automation, by using the API on the SE or the HMC
During a real disaster, use the Activate CBU option to activate the 90-day period.
Image upgrades
After CBU activation, the IBM z17 can have more capacity, more active PUs, or both. The
extra resources go into the resource pools and are available to the LPARs. If the LPARs must
increase their share of the resources, the LPAR weight can be changed or the number of
logical processors can be concurrently increased by configuring reserved processors online.
The operating system must concurrently configure more processors online. If necessary,
more LPARs can be created to use the newly added capacity.
CBU deactivation
To deactivate the CBU, the extra resources must be released from the LPARs by the
operating systems. In some cases, this process involves varying the resources offline. In
other cases, it can mean shutting down operating systems or deactivating LPARs. After the
resources are released, the same facility on the HMC/SE is used to turn off CBU. To
deactivate CBU, select the Undo temporary upgrade option from the Perform Model
Conversion task on the SE.
CBU testing
Test CBUs are provided as part of the CBU contract. CBU is activated from the SE by using
the Perform Model Conversion task. Select the test option to start a 10-day test period. A
standard contract allows one test per CBU year. However, you can order more tests in
increments of one up to a maximum of 15 for each CBU order.
Tip: The CBU test activation is done the same way as the real activation; that is, by using
the same SE Perform a Model Conversion window and selecting the Temporary
upgrades option. The HMC windows were changed to avoid accidental real CBU
activations by setting the test activation as the default option.
The test CBU must be deactivated in the same way as the regular CBU. Failure to deactivate
the CBU feature before the expiration date can cause the system to degrade gracefully back
to its original configuration. The system does not deactivate dedicated engines or the last
in-use shared engine.
CBU example
An example of a CBU operation is shown in Figure 8-14. The permanent configuration is a
504, and a record contains seven CP CBU features. During an activation, multiple target
configurations are available. With 7 CP CBU features, you can add up to 7 CPs within the
same MCI, which allows the activation of a 506, 507, through to a 511 (the blue path).
Alternatively, 4 CP CBU features can be used to change the MCI (in the example, from a 504
to a 704), and then the remaining 3 CP CBU features can be used to upgrade to a 707 (the red path).
The GDPS service is for z/OS only, or for z/OS in combination with Linux on Z.
IBM z17 allows concurrent upgrades, which means that dynamically adding capacity to the
system is possible. If the operating system images that run on the upgraded system do not
require disruptive tasks to use the new capacity, the upgrade is also nondisruptive. This
process avoids power-on resets (POR), LPAR deactivation, and IPLs.
If the concurrent upgrade is intended to satisfy an image upgrade to an LPAR, the operating
system that is running in this partition must concurrently configure more capacity online. z/OS
operating systems include this capability. z/VM can concurrently configure new processors
and I/O devices online, and memory can be dynamically added to z/VM partitions.
If the concurrent upgrade is intended to satisfy the need for more operating system images,
more LPARs can be created concurrently on the IBM z17 system. These LPARs include all
resources that are needed. These extra LPARs can be activated concurrently.
These enhanced configuration options are available through the HSA, which is an IBM
reserved area in system memory.
In general, Linux operating systems cannot add more resources concurrently. However, Linux
and other types of virtual machines that run under z/VM can benefit from the z/VM capability
to nondisruptively configure more resources online (processors and I/O).
With z/VM, Linux guests can manipulate their logical processors by using the Linux CPU
hotplug daemon. The daemon can start and stop logical processors that are based on the
Linux load average value. The daemon is available in Linux SLES 10 SP2 and later, and in
Red Hat Enterprise Linux (RHEL) V5R4 and up.
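As a conceptual illustration only (this is not the Linux CPU hotplug daemon itself, which is driven by its own configuration rules), the following sketch shows the kind of decision that load-based CPU hotplug makes: compare the 1-minute load average with the number of online logical CPUs, and online or offline a CPU through sysfs.

import os

def set_cpu_online(cpu_id, online):
    # Bring a logical CPU online (True) or offline (False) through sysfs.
    with open(f"/sys/devices/system/cpu/cpu{cpu_id}/online", "w") as f:
        f.write("1" if online else "0")

def hotplug_decision(online_count, max_cpus):
    # Toy policy: aim for roughly one online CPU per unit of 1-minute load average.
    load1, _, _ = os.getloadavg()
    if load1 > online_count and online_count < max_cpus:
        return "online one more CPU"
    if load1 < online_count - 1 and online_count > 1:
        return "offline one CPU"
    return "keep the current configuration"

print(hotplug_decision(online_count=2, max_cpus=4))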
8.12.1 Components
The following components can be added, depending on the considerations as described in
this section:
PUs
Memory
I/O
Cryptographic adapters
Special features
PUs
CPs, ICFs, zIIPs, and IFLs, can be added concurrently to an IBM z17 if unassigned PUs are
available on any installed processor drawer.
zIIPs require at least one PU that is characterized as a CP, so the total number of zIIPs can be
at most the number of characterizable PUs of the IBM z17 model minus one. For instance, an
IBM z17 Max90 can have up to 89 zIIPs, considering the one required CP. The IBM z17 allows
the concurrent addition of a second and third processor drawer if the CPC reserve features are installed.
If necessary, more LPARs can be created concurrently to use the newly added processors.
The Coupling Facility Control Code (CFCC) also can configure more processors online to
coupling facility LPARs by using the CFCC image operations window.
Memory
Memory can be added concurrently up to the physical installed memory limit. More processor
drawers can be installed concurrently, which allows further memory upgrades by LICCC, and
enables memory capacity on the new processor drawers.
By using the previously defined reserved memory, z/OS operating system images, and z/VM
partitions, you can dynamically configure more memory online. This process allows
nondisruptive memory upgrades. Linux on Z supports Dynamic Storage Reconfiguration.
I/O
I/O features can be added concurrently if all the required infrastructure (I/O slots and PCIe
Fan-outs) is present in the configuration. PCIe+ I/O drawers can be added concurrently
without planning if free space is available in one of the frames and the configuration permits.
Dynamic I/O configuration changes are supported by specific operating systems (z/OS and
z/VM), which allows for nondisruptive I/O upgrades. Dynamic I/O reconfiguration on a
stand-alone coupling facility system also is possible by using the Dynamic I/O activation for
stand-alone CF CPCs features.
Cryptographic adapters
Crypto Express8S features can be added concurrently if all the required infrastructure is in
the configuration.
Special features
Special features, such as zHyperlink, Coupling Express3 LR, and RoCE features can be
added concurrently if all infrastructure is available in the configuration.
Enabling and using the extra processor capacity is not apparent to most applications.
However, specific programs depend on processor model-related information, such as ISV
products. Consider the effect on the software that is running on an IBM z17 when you perform
any of these configuration upgrades.
Processor identification
The following instructions are used to obtain processor information:
Store System Information (STSI) instruction
The STSI instruction can be used to obtain information about the current execution
environment and any processing level that is below the current environment. It can be
used to obtain processor model and model capacity identifier information from the basic
machine configuration form of the system information block (SYSIB). It supports
concurrent upgrades and is the recommended way to request processor information.
Store CPU ID (STIDP) instruction
STIDP returns information that identifies the execution environment, system serial
number, and machine type.
Note: To ensure unique identification of the configuration of the issuing CPU, use the STSI
instruction specifying basic machine configuration (SYSIB 1.1.1).
The model capacity identifier contains the base capacity, On/Off CoD, and CBU. The Model
Permanent Capacity Identifier and the Model Permanent Capacity Rating contain the base
capacity of the system. The Model Temporary Capacity Identifier and Model Temporary
Capacity Rating contain the base capacity and On/Off CoD.
For more information about the STSI and STIDP instructions, see z/Architecture Principles of
Operation, SA22-7832.
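On Linux on IBM Z, much of the STSI information is surfaced through /proc/sysinfo, which offers a convenient way to inspect the model capacity identifiers from an operating system image. The following sketch is an illustration only; the field names in the filter are assumptions and can vary by kernel level.

# Conceptual sketch: show STSI-derived capacity fields on Linux on IBM Z.
# The field names in WANTED are assumptions and may differ by kernel level.
WANTED = ("Type", "Model", "Model Capacity",
          "Model Perm. Capacity", "Model Temp. Capacity")

def show_capacity_info(path="/proc/sysinfo"):
    # /proc/sysinfo exists only on Linux on IBM Z (s390x).
    with open(path) as f:
        for line in f:
            key = line.split(":", 1)[0].strip()
            if key.startswith(WANTED):
                print(line.rstrip())

show_capacity_info()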
In a multi-site high-availability configuration, another option is the use of Flexible Capacity for
Cyber Resiliency to move workload to another site while hardware maintenance is performed.
You can minimize the need for these outages by carefully planning and reviewing “Guidelines
to avoid disruptive upgrades” on page 398.
A fourth or fifth processor drawer can be installed at the IBM Manufacturing plant only.
One major client requirement was to eliminate the need for a client authorization connection
to the IBM Resource Link system when activating an offering. This requirement is met by the
IBM z15, IBM z16, and IBM z17 servers.
After the offerings are installed on the IBM z17 SE, they can be activated at any time at the
customer’s discretion. No intervention by IBM or IBM personnel is necessary.
The IBM z17 ME1 can have up to eight temporary upgrade records (On/Off CoD, CBU,
System Recovery Boost Upgrade) installed or active at any given time. However, you can only
have one On/Off CoD record active at any given time.
The installed offerings can be activated fully or partially, and in any sequence and any
combination. The offerings can be controlled manually through command interfaces on the
HMC, or programmatically through a number of APIs. IBM applications, ISV programs, and
client-written applications can control the use of the offerings.
Resource usage (and therefore, financial exposure) can be controlled by using capacity
tokens in the On/Off CoD offering records.
The CPM is an example of an application that uses the CoD APIs to provision On/Off CoD
capacity that is based on the requirements of the workload. The CPM cannot control other
offerings.
For more information about any of the topics in this chapter, see Capacity on Demand User’s
Guide, SC28-7058.
Note: Throughout this chapter, IBM z17 refers to IBM z17 Model ME1 (Machine Type
9175).
The key objectives, in order of priority, are to ensure data integrity and computational integrity,
reduce or eliminate unscheduled outages, reduce scheduled outages, reduce planned
outages, and reduce the number of Repair Actions.
RAS can be accomplished with improved concurrent replace, repair, and upgrade functions
for processors, memory, drawers, and I/O. RAS also extends to the nondisruptive capability
for installing Licensed Internal Code (LIC) updates. In most cases, a capacity upgrade can be
concurrent without a system outage. As an extension to the RAS capabilities, environmental
controls are implemented in the system to help reduce power consumption and meet cooling
requirements.
The following overriding RAS requirements serve as design principles, as shown in Figure 9-1:
Include existing (or equivalent) RAS characteristics from previous generations.
Learn from current field issues and address the deficiencies.
Understand the trend in technology reliability (hard and soft) and ensure that the RAS
design points are sufficiently robust.
Invest in RAS design enhancements (hardware and firmware) that provide IBM Z and
Customer valued differentiation.
9.2 Technology
This section introduces some of the RAS features that are incorporated in the IBM z17
design.
(Figure: IBM z17 PU chip layout: eight cores, each with a 36 MB L2 cache, together with the X-FBC, NXU, PBU, AIU, MC/MCU, PCI-0 and PCI-1 interfaces, and the A-Bus and M-Bus interconnects.)
Two PU chips are packaged on a dual chip module (DCM), as shown in Figure 9-3. PU
chip to PU chip communication is performed through the M-Bus, which is a high-speed
bus that acts as ring-to-ring extension communication at 160 Gbps data rate, with the
following RAS characteristics:
– ECC on data path and snoop bus
– Parity on miscellaneous control lines
– One spare per ~50 data bit bus
Figure 9-3 IBM z17 Dual Chip Module (M-Bus connects the two chips)
The processor recovery logic is on the DCM and includes the following RAS
characteristics:
– PU refresh for soft errors
The IBM z17 processor memory and cache structure is shown in Figure 9-4. The physical
on-chip L3 cache and the System Controller (SC) SCM (the physical L4 cache on IBM z15),
which were implemented in eDRAM, were removed and replaced on IBM z17 by a virtual L3
and L4 cache structure.
(Figure 9-4 shows per-core L1 instruction and data caches, a 36 MB L2 cache per core, a DPU per chip, the four DCMs of a drawer, a 2.8 GB virtual L4 cache, and main memory.)
Note: All of these features are available without Flexible Memory, but in most cases not all
customer-purchased memory is then available for use. Some work might need to be
shut down or not restarted.
9.3 Structure
The IBM z17 is designed in a 19-inch frame format. The IBM z17 ME1 can be delivered
only as an air-cooled system and fulfills the requirements for the ASHRAE A3 class environment.
The IBM z17 ME1 can have up to 12 PCIe I/O drawers delivered with Power Distribution Unit
(PDU). The structure of the IBM z17 is designed with the following goals:
Enhanced system modularity
Standardization to enable rapid integration
Platform simplification
Cables are keyed to ensure that correct lengths are plugged. Plug detection ensures correct
location, and custom latches ensure retention. Further improvements to the fabric bus include
symmetric multiprocessing (SMP-101) cables that connect the drawers.
The thermal RAS design also was improved for the field-replaceable water manifold for PU
cooling.
Independent channel recovery with replay buffers on all interfaces allows recovery of a single
DIMM channel, while other channels remain active. Further redundancies are incorporated in
I/O pins for clock lines to main memory, which eliminates the loss of memory clocks because
of connector (pin) failure. The following RAS enhancements reduce service complexity:
Continued use of RAIM ECC
RAIM logic was moved on DIMM
N+1 on-DIMM voltage regulation
Replay buffer for hardware retry on soft errors on the main memory interface
Redundant I/O pins for clock lines to main memory
Staggered refresh for performance enhancement
The new RAIM scheme achieves a higher ratio of data to ECC symbols, while also
providing another chip mark
Firmware was updated to improve filtering and resolution of errors that do not require action.
The following RAS enhancements reduce service touches:
Improved error resolution to enable filtering
Enhanced integrated sparing in processor cores
Cache relocates
N+1 SEEPROM
N+2 POL - (Point of Load)
DRAM marking
(Dynamic) Spare BUS lanes for PU-PU, PU-MEM, MEM-MEM fabric
N+1 Support Element (SE) with N+1 SE power supplies
Redundant temperature sensor (one SEEPROM and one temperature sensor per I2C
bus)
FICON forward error correction
A-Bus Lane Sparing
OMI (Open Memory Interface) Bus Lane Sparing
PU Core Sparing
This function is used when a PU fails and no spares are available. The state of the failing
PU is passed to another active PU, where the operating system uses it to successfully
resume the task (in most cases, without customer intervention).
Cooling change
The IBM z17 air-cooled configuration includes a front-to-rear radiator cooling system. The
radiator pumps, blowers, controls, and sensors are N+1 redundant. In normal operation,
one active pump supports the system. A second pump is turned on and the original pump
is turned off periodically, which improves reliability of the pumps. The replacement of
pumps or blowers is concurrent with no effect on performance.
FICON Express 32S and FICON Express 32-4P with Forward Error Correction (FEC)
The FICON Express32-4P and 32S features transmit data over 32 Gbps links by using
64b/66b encoding. This standard, which is defined by T11.org FC-FS-3, is more efficient
than the previous 8b/10b encoding.
FICON Express32-4P and 32S channels that are running at 32 Gbps can take advantage
of FEC capabilities when connected to devices that support FEC.
FEC allows FICON Express32-4P and 32S channels to operate at higher speeds, over
longer distances, with reduced power and higher throughput. They also retain the same
reliability and robustness for which FICON channels are traditionally known.
FEC is a technique that is used for controlling errors in data transmission over unreliable
or noisy communication channels. When running at 32 Gbps link speeds, customers often
see fewer I/O errors, which reduces the potential effect of those I/O errors on production
workloads.
Read Diagnostic Parameters (RDP) improve Fault Isolation. After a link error is detected
(for example, IFCC, CC3, reset event, or a link incident report), link data that is returned
from Read Diagnostic Parameters is used to differentiate between errors that result from
failures in the optics versus failures because of dirty or faulty links.
Key metrics can be displayed on the operator console. A display matrix command with the
LINKINFO=FIRST parameter collects information from each device in the path from the
channel to the I/O device and returns the following data (see the figure on page 411):
– Transmit (Tx) and Receive (Rx) optic power levels from the PCHID, Switch Input and
Output, and I/O device
– Capable and Operating speed between the devices
– Error counts
– Operating System requires new function APAR OA49089
(Figure: example LINKINFO=FIRST output. The path from an IBM z17 channel (PCHID 100, CHPID 40) runs through switch ports to a DS8900 control unit, showing the capable and operating speed (32G) and the Tx/Rx optic power levels at each point in the path.)
The new IBM Z Channel Subsystem Function performs periodic polling from the channel
to the end points for the logical paths that are established and reduces the number of
useless Repair Actions (RAs).
The RDP data history is used to validate Predictive Failure Algorithms and identify Fibre
Channel Links with degrading signal strength before errors start to occur. The new Fibre
Channel Extended Link Service (ELS) retrieves signal strength.
FICON Dynamic Routing
FICON Dynamic Routing (FIDR) enables the use of storage area network (SAN) dynamic
routing policies in the fabric. With the IBM z17, FICON channels are no longer restricted to
the use of static routing policies for inter-switch links (ISLs) for cascaded FICON directors.
FICON Dynamic Routing dynamically changes the routing between the channel and
control unit that is based on the Fibre Channel Exchange ID. Each I/O operation has a
unique exchange ID. FIDR supports static SAN routing policies and dynamic routing
policies.
FICON Dynamic Routing can help clients reduce costs by providing the following benefits:
– Share SANs between their FICON and FCP traffic.
– Improve performance, because SAN dynamic routing policies better use all of the
available ISL bandwidth through higher use of the ISLs.
– Simplify management of their SAN fabrics. With static routing policies, different ISL
routes are assigned with each power-on reset (POR), which makes SAN fabric
performance difficult to predict; dynamic routing avoids this issue.
Customers must ensure that all devices in their FICON SAN support FICON Dynamic
Routing before they implement this feature.
The difference between scheduled outages and planned outages might not be obvious. The
general consensus is that scheduled outages occur sometime soon. The time frame is
approximately two weeks.
Planned outages are outages that are planned well in advance and go beyond this
approximate two-week time frame. This chapter does not distinguish between scheduled and
planned outages.
Preventing unscheduled, scheduled, and planned outages was addressed by the IBM Z
system design for many years.
IBM z17 ME1 has a fixed-size HSA of 884 GB. This size helps eliminate planning
requirements for the HSA and provides the flexibility to dynamically update the configuration.
You can perform the following tasks dynamically:2
Add:
– Logical partition (LPAR)
– Logical channel subsystem (LCSS)
– Subchannel set
– Logical PU to an LPAR
– Cryptographic coprocessor
– Memory
– Physical processor
Remove a cryptographic coprocessor
2 Some planning considerations might exist. For more information, see Chapter 8, “System upgrades” on page 353.
By addressing the elimination of planned outages, the following tasks also are possible:
Concurrent driver upgrades
Concurrent and flexible customer-initiated upgrades
For more information about the flexible upgrades that are started by customers, see 8.3.2,
“Customer Initiated Upgrade facility” on page 362.
STP management of concurrent CTN Split and Merge
Dynamic I/O for stand-alone CF, Linux on Z, and z/TPF running on IBM z16 and IBM z17
Dynamic I/O configuration changes can be made to a stand-alone CF, Linux on Z, and
z/TPF without requiring a disruptive power-on reset. A firmware LPAR with a
firmware-based appliance version of an HCD instance is used to apply the new I/O
configuration changes. The firmware-based LPAR is driven by updates from an HCD
instance that is running in a z/OS LPAR on a different IBM z16 or IBM z17 CPC that is
connected to the same IBM z17 HMA.
System Recovery Boost Stage 3
System Recovery Boost enhancements for IBM z17 allow the possibility of significantly
reducing the effect of such disruptions by boosting a set of recovery processes that
create significant pain points for users.
These recovery processes include the following examples:
– IBM SAN Volume Controller memory dump boost
– Middleware restart or recycle boost
– HyperSwap configuration load boost
For more information about System Recovery Boost, see Introducing IBM Z System
Recovery Boost, REDP-5563.
Coupling Express3 LR (CE3 LR) coupling cards plug into the Gen4 PCIe+ I/O drawer,
which allows more connections with a faster bandwidth.
Coupling Express3 LR does not support the “going away signal”; however, ECAR can be
used to assist with recovery in the following configurations:
Design of pervasive infrastructure controls in processor chips in memory ASICs.
Improved error checking in the processor recovery unit (RU) to better protect against word
line failures in the RU arrays.
The EDA procedure and careful planning help ensure that all the resources are still available
to run critical applications in an (n-1) drawer configuration. This process allows you to avoid
planned outages. Consider the flexible memory option to provide more memory resources
when you are replacing a drawer.
For more information about flexible memory, see 2.5.7, “Flexible Memory Option” on page 53.
To minimize the effect on current workloads, ensure that sufficient inactive physical resources
exist on the remaining drawers to complete a drawer removal. Also, consider deactivating
noncritical system images, such as test or development LPARs. After you stop or deactivate
these noncritical LPARs and free their resources, you might find sufficient inactive resources
to contain critical workloads while completing a drawer replacement.
The following configurations especially enable the use of the EDA function. These IBM z17
features need enough spare capacity so that they can cover the resources of a fenced or
isolated drawer. This configuration imposes the following limits on the number of client-owned
PUs that can be activated when one drawer within a model is fenced:
A maximum of:
– 43 PUs are configured on the Max43
– 90 PUs are configured on the Max90
– 136 PUs are configured on the Max136
– 183 PUs are configured on the Max183
– 208 PUs are configured on the Max208
No special feature codes are required for PU and model configuration.
IBM z17 features Max43 and Max90 each have 5 SAPs in each drawer. Max136 has three
CPC drawers and a total of 16 standard SAPs, Max183 has four CPC drawers and a total of
21 SAPs, and Max208 also has four CPC drawers and 24 SAPs.
The flexible memory option delivers physical memory so that 100% of the purchased
memory increment can be activated even when one drawer is fenced.
The system configuration must have sufficient dormant resources on the remaining drawers
in the system for the evacuation of the drawer that is to be replaced or upgraded. Dormant
resources include the following possibilities:
Unused PUs or memory that is not enabled by LICCC
Inactive resources that are enabled by LICCC (memory that is not being used by any
activated LPARs)
Memory that is purchased with the flexible memory option
Extra drawers
The I/O connectivity also must support drawer removal. Most of the paths to the I/O features
have redundant I/O interconnect support in the I/O infrastructure (drawers), which enables
connections through multiple fan-out cards.
If sufficient resources are not present on the remaining drawers, specific noncritical LPARs
might need to be deactivated. One or more PUs or storage might need to be configured
offline to reach the required level of available resources. Plan to address these possibilities to
help reduce operational errors.
Include the planning process as part of the initial installation and any follow-on upgrade that
modifies the operating environment. A customer can use the IBM Call Home Connect Cloud
reports, tasks on the Support Element (Storage Information, View Hardware Configuration),
and CHPID Mapping Tool reports, to determine the number of drawers, active PUs, memory
configuration, and channel layout.
If the IBM z17 is installed, click Prepare for Enhanced Drawer Availability in the Perform
Model Conversion window of the EDA process on the Hardware Management Appliance
(HMA). This task helps you determine the resources that are required to support the removal
of a drawer with acceptable degradation to the operating system images.
The EDA process determines which resources (including memory, PUs, and I/O paths) are
free to allow for the removal of a drawer. You can run this preparation on each drawer to
determine which resource changes are necessary. Use the results as input in the planning
stage to help identify critical resources.
With this planning information, you can examine the LPAR configuration and workload
priorities to determine how resources might be reduced and still allow the drawer to be
concurrently removed.
When you perform the review, document the resources that can be made available if the EDA
is used. The resources on the drawers are allocated during a POR of the system and can
change after that process.
Perform a review when changes are made to your IBM z17, such as adding drawers, PUs,
memory, or channels. Also, perform a review when workloads are added or removed, or if the
HiperDispatch feature was enabled and disabled since the last time you performed a POR.
For the EDA process, this phase is the preparation phase. It is started from the SE, directly or
through the HMA by using the Single Object Operations option, on the Perform Model Conversion
window from the CPC configuration task list, as shown in Figure 9-6.
Processor availability
Processor resource availability for reallocation or deactivation is affected by the type and
quantity of the resources in use, such as the following examples:
Total number of PUs that are enabled through LICCC
PU definitions in the profiles that can be dedicated and dedicated reserved or shared
Active LPARs with dedicated resources at the time of the drawer repair or replacement
To maximize the PU availability option, ensure that sufficient inactive physical resources are
on the remaining drawers to complete a drawer removal.
Memory availability
Memory resource availability for reallocation or deactivation depends on the following factors:
Physically installed memory
Image profile memory allocations
Amount of memory that is enabled through LICCC
Flexible memory option
Virtual Flash Memory if enabled and configured
For more information, see 2.7.2, “Enhanced drawer availability” on page 60.
Preparation: The preparation step does not reallocate any resources. It is used only to
record customer choices and produce a configuration file on the SE that is used to run the
concurrent drawer replacement operation.
The preparation step can be done in advance. However, if any changes to the configuration
occur between the preparation and the physical removal of the drawer, you must rerun the
preparation phase.
The process can be run multiple times because it does not move any resources. To view the
results of the last preparation operation, click Display Previous Prepare Enhanced Drawer
Availability Results from the Perform Model Conversion window in the SE.
The preparation step can be run without performing a drawer replacement. You can use it to
dynamically adjust the operational configuration for drawer repair or replacement before IBM
SSR activity. The Perform Model Conversion window in which you click Prepare for Enhanced
Drawer Availability is shown in Figure 9-6 on page 419.
After you click Prepare for Enhanced Drawer Availability, the Enhanced Drawer Availability
window opens. Select the drawer that is to be repaired or upgraded; then, select OK, as
shown in Figure 9-7 on page 421. Only one target drawer can be selected at a time.
The system verifies the resources that are required for the removal, determines the required
actions, and presents the results for review. Depending on the configuration, the task can take
from a few seconds to several minutes.
The preparation step determines the readiness of the system for the removal of the targeted
drawer. The configured processors and the memory in the selected drawer are evaluated
against unused resources that are available across the remaining drawers. The system also
analyzes I/O connections that are associated with the removal of the targeted drawer for any
single path I/O connectivity.
If insufficient resources are available, the system identifies the conflicts so that you can free
other resources.
Preparation tabs
The results of the preparation are presented for review in a tabbed format. Each tab indicates
conditions that prevent the EDA option from being run. The following tab selections are
available:
Processors
Memory
Single I/O
Single Domain I/O
Single Alternative Path I/O
Only the tabs that feature conditions that prevent the drawer from being removed are
displayed. Each tab indicates the specific conditions and possible options to correct them.
For example, the preparation identifies single I/O paths that are associated with the removal
of the selected drawer. These paths must be varied offline to perform the drawer removal.
After you address the condition, rerun the preparation step to ensure that all the required
conditions are met.
Important: Consider the results of these changes relative to the operational environment.
Understand the potential effect of making such operational changes. Changes to the PU
assignment, although technically correct, can result in constraints for critical system
images. In specific cases, the solution might be to defer the reassignments to another time
that has less effect on the production system images.
After you review the reassignment results and make any necessary adjustments, click OK
(see Figure 9-9).
By understanding the system configuration and the LPAR allocation for memory, PUs, and
I/O, you can make the best decision about how to free the necessary resources to allow for
drawer removal.
Upon successful completion, the system is ready for the removal of the drawer.
The preparation process can be run multiple times to ensure that all conditions are met. It
does not reallocate any resources; instead, it produces only a report. The resources are not
reallocated until the Perform Drawer Removal process is started.
Review the results. The result of the preparation task is a list of resources that must be made
available before the drawer replacement can occur.
Reserved storage: If you plan to use the EDA function with z/OS LPARs, set up
reserved storage and an RSU value. Use the RSU value to specify the number of
storage units that are to be kept free of long-term fixed storage allocations. This
configuration allows for storage elements to be varied offline.
When correctly configured, IBM z17 supports concurrently activating a selected new LIC
Driver level. Concurrent activation of the selected new LIC Driver level is supported only at
specific released sync points. Concurrently activating a selected new LIC Driver level
anywhere in the maintenance stream is not possible. Specific LIC updates do not allow a
concurrent update or upgrade.
Concurrent crossover from Driver level N to Driver level N+1, then to Driver level N+2,
must be done serially. No composite moves are allowed.
Disruptive upgrades are permitted at any time, and allow for a composite upgrade (Driver
N to Driver N+2).
Concurrently backing up to the previous driver level is not possible. The driver level must
move forward to driver level N+1 after CDM is started. Unrecoverable errors during an
update might require a scheduled outage to recover.
The CDM function does not eliminate the need for planned outages for driver-level upgrades.
Upgrades might require a system level or a functional element scheduled outage to activate
the new LIC. The following circumstances require a scheduled outage:
Specific complex code changes might dictate a disruptive driver upgrade. You are alerted
in advance so that you can plan for the following changes:
– Design data or hardware initialization data fixes
– CFCC release level change
OSA CHPID code changes might require PCHID Vary OFF/ON to activate new code.
Crypto code changes might require PCHID Vary OFF/ON to activate new code.
Note: zUDX clients should contact their User Defined Extensions (UDX) provider
before installing Microcode Change Levels (MCLs). Any changes to Segments 2 and 3
from a previous MCL level might require a change to the customer’s UDX. Attempting to
install an incompatible UDX at this level results in a Crypto checkstop.
The IBM z17 Coupling Express3 LR FC 0498 is the only native PCIe feature managed by
Resource Group code.
Consider the following points for managing native PCIe adapters microcode levels:
Updates to the Resource Group require all native PCIe adapters that are installed in that
RG to be offline.
Updates to the native PCIe adapter require the adapter to be offline. If the adapter is not
defined, the MCL session automatically installs the maintenance that is related to the
adapter.
The PCIe native adapters are configured with Function IDs (FIDs) and might need to be
configured offline when changes to code are needed. To help reduce the number of
adapters (and FIDs) that are affected by a Resource Group code update, the IBM z17 has four
Resource Groups per system (CPC).
Note: Other adapter types, such as FICON Express, Network Express, OSA Express,
zHyperLink2.0, and Crypto Express that are installed in the PCIe+ I/O drawer are not
affected because they are not managed by the Resource Groups.
The front, rear, and top view of the PCIe+ I/O drawer and the Resource Group assignment by
card slot are shown in Figure 9-10. All PCIe+ I/O drawers that are installed in the system
feature the same Resource Group assignment.
SE servers include N+1 redundant power supplies. The SEs hardware also runs the
virtualized HMC code.
For more information, see 10.2.2, “HMC and SE server” on page 444.
4 If the HMA feature is installed on the system, a special upgrade procedure must be followed to ensure a nondisruptive SE upgrade.
This chapter describes the newest and most important elements for the HMC and SE.
Tip: The Help function is a good starting point to get more information about all of the
functions that can be used by the HMC and SE. The Help feature is available by clicking
Help from the drop-down menu that appears when you click the User menu in the
upper-right corner.
For more information, see IBM Z Documentation: select the applicable server and then
Library Overview, and select Hardware Management Console (HMC) Version 2.17.0 help system
content or Support Element (SE) Version 2.17.0 help system content.
10.1 Introduction
The HMC is a closed system (appliance), which means that no other applications can be
installed on it. The HMC runs a set of management applications.
The HMC code runs on the two integrated 1U rack-mounted servers on the top of the IBM z17
A-frame. Stand-alone HMCs (Tower or Rack Mount) can no longer be ordered and are not
supported.
HMC Driver 61/Version 2.17.0 is introduced with IBM z17. Driver 61 can be installed on z15
and z16 HMA HMCs.
With IBM z17, the HMA feature, installed on top of the A-frame, shares the integrated 1U
server hardware with the SE code. The SE code runs virtualized under the HMC on each of
the two integrated 1U rack-mounted servers. One SE is the Primary SE (active), and the other
is the Alternative SE (backup). As with the HMCs, the SEs are closed systems (appliances),
and other applications cannot be installed. The HMC is used to set up, manage, monitor, and
operate one or more IBM Z CPCs. It manages IBM Z hardware and its logical partitions
(LPARs), and provides support applications. At least one HMC is required to operate an IBM
Z. An HMC can manage multiple IBM Z CPCs.
When tasks are performed at the HMC, the commands are routed to the Primary SE of the
IBM Z. The SE then issues those commands to the target CPC.
Note: The new Driver level for HMC and SE for IBM z17 is Driver 61. Driver 61 is
equivalent to Version 2.17.0.
Since IBM z15, several “traditional” SE-only functions were moved to HMC tasks. On HMC
Driver 61/Version 2.17.0, these functions appear as native HMC tasks but run on the SE.
These HMC functions run in parallel with Single Object Operations (SOOs), simplifying and
streamlining system management. For more information about SOOs, see “Single Object
Operations” on page 456.
For more information about the HMC, see also the IBM Z Hardware Management Console
Videos.
Note: HMC 2.17.0 supports managing the IBM z15, IBM z16, and IBM z17 IBM Z server
generations (N-2).
For more information about HMC and SE functions, use the HMC and SE console help
system, or see IBM Resource Link (login required): select Library, select the applicable server,
and then select Hardware Management Console Operations Guide or Support Element
Operations Guide.
Further, in the HMC Dashboard, under Helpful Links, you can find links to Resource Link,
videos, APIs, and so on. Under About this HMC, you can find information such as the
installed Bundle level. If the HMC is part of an HMA, this window also shows whether the HMC
is currently running the Primary or Alternate SE, and the name of the peer HMC, where the
second SE is running, is displayed, as shown in Figure 10-1.
Figure 10-1 Example of “About this HMC”
Driver 61/Version 2.17.0 HMC and SE new features
Reassign HMC
Manage Firmware Features
System I/O Configuration Analyzer
Perform Model Conversion
The SSO capabilities are implemented for the different options in HMC User
Management. First, you must define your SSO servers in the SSO task, as shown in Figure
10-3 on page 381.
When you create a new User, Template, or Pattern, you can assign the corresponding SSO servers.
External Time Source (ETS) enhancements for STP and HMC NTP
The following enhancements are available in the ETS for STP:
Support for three Network Time Protocol (NTP) ETS servers
Support for Mixed Mode (use of both NTP and PTP ETS servers in parallel)
Use of certificates for secure Network Time Security (NTS) NTP communication
between the Central Processor Complex (CPC) and the configured ETSes
Use of advanced monitoring commands for NTP and PTP
NTS support for the STP ETS NTP and for the HMC NTP connection
For PTP, you can choose the domain number and select between Multicast and Unicast.
For more information, see the IBM Z Server Time Protocol Guide, SG24-8480.
Figure 10-4 shows an example of the new message if you import an HMC Certificate.
For more information see 10.6.1, “Remote Code Load (RCL)” on page 468.
Figure 10-5 on page 435 shows an example of a Dual Control request for deactivating an
LPAR.
The first release of Dual Control will support the following tasks:
Activate
Deactivate
Stop (DPM)
Reset
Load
Change LPAR Cryptographic controls
Perform Model Conversion
Dual Control is available on IBM z15 and IBM z16 systems when connected to an HMC
updated to Driver 61/Version 2.17.0. For SE versions below 2.17.0, it remains supported but
with certain limitations:
Users should remove the Single Object Operations permission from any role with Dual
Control enabled on an IBM z15 and IBM z16. These systems' SEs do not natively support
dual control, and users could bypass HMC dual control enforcement by performing
specific supported actions, such as activating the SE.
Dual Control is not supported for Change LPAR Cryptographic Controls.
Dual Control is not supported for Perform Model Conversion.
If the minimum TLS version is set to 1.2, TLS 1.3 is attempted first, falling back to TLS 1.2
if required.
The Customize Console Services task (for HMC and SE), Configure TLS Settings, allows you
to set both the minimum TLS protocol version and the enabled TLS cipher suites. This task
identifies the protocol versions each cipher suite is compatible with and only shows those that
can be used with the chosen minimum TLS protocol version.
Figure 10-6 on page 437 shows the HMC window that allows the selection of the wanted
minimum TLS version.
In the Audit Log of the HMC and SE, you can verify the configured settings.
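The effect of a minimum TLS protocol version can be illustrated with ordinary client-side Python (this is not HMC code): the context below rejects anything older than TLS 1.2 while still negotiating TLS 1.3 when the peer supports it.

import ssl

# Conceptual sketch only: a client context with a minimum protocol version.
# TLS 1.3 is negotiated when both sides support it; otherwise the connection
# falls back to TLS 1.2, and anything older is rejected.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
print(context.minimum_version, context.maximum_version)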
BCPii enhancements
BCPii enables HMC and SE functions to be driven in an automated way. BCPii v2 has been
available since IBM z15, and BCPii v1 and SNMP are being deprecated; we recommend that
you move to BCPii v2 if you have not already done so. The following sections describe most
of the new IBM z17 enhancements for BCPii v2.
More information can be found in Hardware Management Console Web Services API
(Version 2.17.0), SC27-2646, and in 10.5.9, “Automated operations via APIs” on page 464. The
HMC Dashboard -> Helpful links -> APIs entry is a shortcut to the appropriate
documentation.
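Because the Web Services API is a REST interface, any HTTP client can drive it. The following minimal sketch assumes network access to the HMC API port, a user that is permitted to use the API, and a CA certificate file; verify the exact request and response details against SC27-2646 before relying on them.

import requests

HMC = "https://hmc.example.com:6794"   # hypothetical HMC address

# Log on: POST /api/sessions returns an api-session token that is passed
# on later requests in the X-API-Session header.
logon = requests.post(f"{HMC}/api/sessions",
                      json={"userid": "apiuser", "password": "secret"},
                      verify="/path/to/hmc-ca.pem")
logon.raise_for_status()
session = logon.json()["api-session"]

# List the CPCs that this HMC manages.
cpcs = requests.get(f"{HMC}/api/cpcs",
                    headers={"X-API-Session": session},
                    verify="/path/to/hmc-ca.pem")
for cpc in cpcs.json()["cpcs"]:
    print(cpc["name"], cpc["object-uri"])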
The SE notifications are limited to those objects that are exposed to BCPii. For example,
the /api/users APIs are not exposed, so Property/Inventory Change notifications for users are
not generated.
The HMC notifications are only restricted by permissions of the associated HMC identity.
Registration is made available via a set of API operations only available to the BCPii interface,
which can be seen in Table 10-1.
There are no authorization requirements to register for notifications. All permission checking
is done when a notification is to be sent. Clients can only list and modify registrations
pertaining to their own session. In this context a session is related to the z/OS user ID making
the request.
When the HMC receives the request, it authenticates the JWT content: it validates that the JWT is not expired and has a valid signature, and it ensures that an HMC user mapping exists for the associated z/OS user ID. If the JWT is valid, an API session is established for the HMC identity that is mapped to the z/OS user ID, and the identity that is associated with the API session is used to authorize the request. The HMC must be configured accordingly in the task Customize Console Services -> BCPii authorization -> Change..., where you define the corresponding users.
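The following Python sketch illustrates the validation steps that are described above. It is not the HMC implementation; the claim name that carries the z/OS user ID is an assumption, and the user mapping dictionary stands in for the BCPii authorization definitions:

import jwt  # PyJWT

def authorize_bcpii_request(token, signer_public_key, user_map):
    """Illustrative checks only: signature, expiration, and z/OS user ID mapping."""
    try:
        # decode() verifies the signature and the expiration ("exp") claim
        claims = jwt.decode(token, signer_public_key, algorithms=["RS256"])
    except jwt.ExpiredSignatureError:
        raise PermissionError("JWT is expired")
    except jwt.InvalidTokenError as err:
        raise PermissionError(f"JWT is not valid: {err}")

    zos_user = claims.get("sub")           # claim name assumed for illustration
    hmc_identity = user_map.get(zos_user)  # stands in for the BCPii authorization list
    if hmc_identity is None:
        raise PermissionError(f"No HMC user mapping for z/OS user {zos_user}")
    return hmc_identity                    # identity used to authorize the API session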
The Data Replication and Save/Restore Customizable Console Data tasks have a new data type, BCPii Authorization Data, to replicate, save, and restore.
Note: We recommend that you do all HMC user data definitions (not just users) only on the HMC. User Management on the SE should no longer be necessary.
Things to consider:
Locally defined SE users/patterns/templates take priority over inherited HMC user
definitions
HMC user definitions do not become managed on the SE and will not show in User
Management
LDAP authentication is executed through an HMC
MFA enabled users are not supported when logging on locally on the SE, only HMC users
via Single Object Operations to SE
LOADDEV
DUMPDEV (DUMPDEV is not a valid operand for IPL directory statements)
In these cases, the IPL command or IPL directory statement operands do not provide all the
required information for the IPL.
List-directed IPL parameters (the LOADDEV and DUMPDEV parameters) provide the IPL information that the IPL command or IPL directory statement operands do not offer. The ability to customize the timeout value for this IPL was added. Until now, it was a hardcoded value of 60 seconds. Because most list-directed IPLs occur on FCP devices, this value was not always sufficient.
The new timeout value defaults to 300 seconds, but you can now choose from a range of 60
to 600 seconds.
The timeout value is supported through the LOAD and IMAGE profiles and if you are using
the Web-Services API (WSAPI).
Hardware message delete traps have the same format as hardware message traps; these will
also have the new Case Number field.
SNMP traps are defined in SNMP MIB file format. The hardware message trap has the following fields; each field has an SNMP object identifier (OID) and a value.
OID for the object the message is associated with, Value: 1 (hardware message)
OID for the message type, Value: 1 (hardware message)
OID for the message text, Value: Message text string
OID for the message refresh indicator, Value: Boolean
OID for the message time stamp, Value: Time stamp string
OID for the list of CPC images associated with the message, Value: Image list string
OID for the name of the object associated with the message, Value: Name of the
associated object
OID for the case number associated with the message, Value: case number string
This is a new field that will contain the Case Number if the message is related to a case or
the empty string if not related to a case.
If you are already sending SNMP Traps, then you have nothing to implement.
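If you write your own trap receiver, the Case Number field is simply one more varbind to map. The following Python sketch is illustrative only; the OID strings are placeholders, not the values from the MIB file:

HW_MSG_FIELDS = {                      # placeholder OIDs -> field names
    "OID.messageText": "text",
    "OID.refreshIndicator": "refresh",
    "OID.timeStamp": "timestamp",
    "OID.imageList": "images",
    "OID.objectName": "object_name",
    "OID.caseNumber": "case_number",   # new field; empty string when no case exists
}

def parse_hardware_message(varbinds):
    """varbinds: iterable of (oid, value) pairs received in the trap."""
    msg = {}
    for oid, value in varbinds:
        field = HW_MSG_FIELDS.get(oid)
        if field:
            msg[field] = value
    if msg.get("case_number"):
        print(f"Message relates to case {msg['case_number']}: {msg.get('text', '')}")
    return msg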
In Figure 10-8 you can see an example of these settings on the HMC.
Figure 10-8 HMC Change Remote Power Off and Restart Setting
Note: To change these settings, you have to log in locally on the HMC.
I/O Element Firmware Updates require that all associated adapters are Configured Off/On.
This may be a disruptive action to the I/O adapters depending on redundancy. We expect very
rare cases in which I/O Element Firmware Updates are required.
Figure 10-9 on page 442 shows an example of the task I/O Element Management.
Suppose that you start an Update Firmware action for an I/O Element ID. In that case, all adapters that are associated with this I/O Element ID may be taken offline via the operating system; if you continue, they are forced offline. A disruptive confirmation panel is displayed to confirm potentially disruptive changes to online I/O adapters. The action must be confirmed with a password.
These new and other possible pending conditions can also be checked via System
Information -> Query Additional Actions.
For more information on this topic see “IBM HMC Mobile” on page 457.
Now you can also perform these tasks with the Web Services API interface:
– Report a Console Problem can be invoked with the following URI:
• /api/console/operations/report-problem
– Report a CPC Problem can be invoked with the following URI:
• /api/cpcs/{cpc-id}/operations/report-problem
Where {cpc-id} is the Object ID of the CPC object.
– Report a Logical Partition Problem can be invoked with the following URI:
• /api/logical-partitions/{logical-partition-id}/operations/report-problem
Where {logical-partition-id} is the Object ID of the Logical Partition object.
– Report a Partition Problem can be invoked with the following URI:
• /api/partitions/{partition-id}/operations/report-problem
Where {partition-id} is the Object ID of the Partition object.
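A minimal sketch of such a call with Python requests follows. The host name, port, credentials, certificate path, and CPC Object ID are placeholders, and any request-body fields that the operation requires are described in Hardware Management Console Web Services API (Version 2.17.0), SC27-2646:

import requests  # illustrative sketch; SC27-2646 is the authoritative API reference

HMC = "https://hmc.example.com:6794"   # placeholder host; 6794 is the usual WSAPI port

# Log on to the Web Services API; the returned session token is passed in the
# X-API-Session header on later requests.
logon = requests.post(f"{HMC}/api/sessions",
                      json={"userid": "OPERATOR1", "password": "********"},
                      verify="/path/to/hmc-ca.pem")
logon.raise_for_status()
headers = {"X-API-Session": logon.json()["api-session"]}

# Report a problem against a CPC. The {cpc-id} value is a placeholder; obtain
# real Object IDs from a GET /api/cpcs inventory call. Required request-body
# fields, if any, are described in SC27-2646.
cpc_id = "replace-with-cpc-object-id"
resp = requests.post(f"{HMC}/api/cpcs/{cpc_id}/operations/report-problem",
                     headers=headers, verify="/path/to/hmc-ca.pem")
print(resp.status_code, resp.text)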
The SE interface can be accessed from the HMC using the Single Object Operation on the
HMC.
The HMA feature FC 0355 consists of the HMC code that is installed on the two 1U
rack-mounted servers on the top of the A-frame, which are collocated with the SE code. The
servers are configured with processor, memory, storage, and networking resources to support
all processing and security requirements for running HMC and SE code.
The two HMCs (HMC1 and HMC2; these names can be changed) from manufacturing are
configured as independent HMCs. They are not Primary or Alternative HMCs. HMC Data
Replication can be established, if wanted.
The SE runs as a guest of the HMC. The two SE code instances are clustered for high availability: one instance runs as the Primary SE and the other as the Alternative SE. These SEs perform data mirroring, and their roles can be switched for maintenance purposes.
Switching the Primary and Alternative SE roles is essential because HMC microcode
maintenance can be performed only on the server that runs the Alternative SE as a guest.
Important: With the IBM HMA, shutdown or restart of the HMC that includes the Primary
SE code as guest also restarts the Primary SE code. An application restart of the HMC is
not disruptive to the guest SE code.
If the HMC that is receiving microcode updates runs the Primary SE guest, an SE switchover must be performed. Figure 10-10 shows the HMA relation to the HMCs and SEs.
For more information about the KMM and how to attach it to the A frame, see 9175 Installation
Manual, GC28-7050.
Microcode load
Microcode can be loaded by using the following options:
USB
If the HMC and SE code is included with a USB drive when a new system is ordered, the load procedures can be done by using the USB drive.
Electronic
If USB load is not allowed or if FC 0846 (no physical media options) is ordered, an ISO image is used for a firmware load over a local area network (LAN). The ISO image can be downloaded through zRSF or from an FTP, FTPS, or SFTP server that is accessible over the LAN.
Important: The ISO image server must be in the same IP subnet as the target system to load the HMC or SE code.
10.2.4 SE driver and version support with the HMC Driver 61/Version 2.17.0
The driver of the HMC and SE is equivalent to a specific HMC and SE version:
Driver 41 is equivalent to Version 2.15.0
Driver 51 is equivalent to Version 2.16.0
Driver 61 is equivalent to Version 2.17.0
An HMC with Driver 61/Version 2.17.0 supports N-2 IBM Z server generations. Some
functions that are available on Driver 61/Version 2.17.0 and later are supported only when the
HMC is connected to an IBM z17 with Driver 61/Version 2.17.0.
The SE drivers and versions that are supported by the HMC Driver 61/Version 2.17.0 are
listed in Table 10-2.
Table 10-2 SEs that are supported by HMC Driver 61/Version 2.17.0
IBM Z product name Machine type SE driver SE version
For more information about the HMC settings related to access and security, see IBM
Resource Link. On that web page, select Library, IBM Z, IBM z17 then Library Overview.
With IBM z17 and HMA, the SE code runs virtualized on the two integrated HMCs on the two
integrated 1U rack-mounted servers on the top of the IBM z17 A-frame. One SE is the
Primary SE (active), and the other is the Alternative SE (backup).
Note: The HMA HMCs must be connected to the SEs using a customer-provided switch
infrastructure. Direct connection between the HMCs and the SEs is not supported.
The connectivity for environments with multiple IBM Z generations and HMAs (IBM z17 N-2 only) is shown in Figure 10-16.
Various methods are available to set up the network. Designing and planning the HMC and
SE connectivity is the customers’ responsibility, based on the environment’s connectivity and
security requirements.
The following functions are examples that depend on the HMC connectivity:
Lightweight Directory Access Protocol (LDAP) support, which can be used for HMC user
authentication
Network Time Protocol (NTP)
RSF through broadband
HMC remote access and HMC Mobile
RSA SecurID support
MFA with Time-based One Time Password (TOTP)
FTPS is based on Secure Sockets Layer (SSL) cryptographic protocol and requires
certificates to authenticate the servers. SFTP is based on Secure Shell protocol (SSH) and
requires SSH keys to authenticate the servers. Certificates and key pairs are hosted on the
IBM z17 HMC.
The HMC file transfer protocol choices for the backup task are shown in Figure 10-17.
For more information about HMC networks, see the following resources:
The HMC and SE Driver 61/Version 2.17.0 console help system
IBM Resource Link. On the web page, select Library, the applicable server, and then select Hardware Management Console Operations Guide or Support Element Operations Guide.
IBM Z 9175 Installation Manual for Physical Planning, GC28-7049
Ethernet switches
The customer provides Ethernet switches for HMC and SE connectivity. Existing supported switches can still be used.
For the ETS configuration, see IBM Z Server Time Protocol Guide, SG24-8480.
The HMC uses IPv4 and IPv6 multicasting to automatically discover the SEs; for a customer-supplied switch, multicast must be enabled at the switch level. The HMC Network Diagnostic Information task can be used to identify the IP addresses (IPv4 and IPv6) that the HMC uses to communicate with the SEs (of a CPC).
IPv6 addresses are easily identified. A fully qualified IPV6 address features 16 bytes. It is
written as eight 16-bit hexadecimal blocks that are separated by colons, as shown in the
following example:
2001:0db8:0000:0000:0202:b3ff:fe1e:8329
Because many IPv6 addresses are not fully qualified, shorthand notation can be used. In
shorthand notation, the leading zeros can be omitted, and a series of consecutive zeros can
be replaced with a double colon. The address in the previous example also can be written in
the following manner:
2001:db8::202:b3ff:fe1e:8329
If an IPv6 address is assigned to the HMC for remote operations that use a web browser,
browse to it by specifying that address. The address must be surrounded with square
brackets in the browser’s address field, as shown in the following example:
https://[fdab:1b89:fc07:1:201:6cff:fe72:ba7c]
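The Python standard library can be used to convert between the fully qualified and shorthand notations, as this small example shows:

import ipaddress

addr = ipaddress.ip_address("2001:0db8:0000:0000:0202:b3ff:fe1e:8329")
print(addr.compressed)                 # 2001:db8::202:b3ff:fe1e:8329 (shorthand)
print(addr.exploded)                   # 2001:0db8:0000:0000:0202:b3ff:fe1e:8329 (fully qualified)
print(f"https://[{addr.compressed}]")  # square brackets are required in a URL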
The MFA first factor is the combination of login ID and password; the second factor is a Time-based One-Time Password (TOTP) that is generated on your smartphone, desktop, or application (for example, Google Authenticator or IBM Verify). TOTP is defined in the RFC 6238 standard and uses a cryptographic hash function that combines a secret key with the current time to generate a one-time password.
The secret key is generated by HMC/SE/TKE while the user is performing the first-factor
log-on. The secret key is known only to HMC/SE/TKE and to the user’s device. For that
reason, it must be protected as much as your first-factor password.
MFA code that was generated as a second factor is time-sensitive. Therefore, it is essential to
remember that it must be used as soon as possible after it is generated.
The algorithm within the HMC that is responsible for MFA code generation changes the code
every 30 seconds. However, the HMC and SE console accepts current, previous, and next
MFA codes to ease the process.
Having HMC, SE, and device clocks synchronized is also important. If the clocks are not
synchronized, the MFA log-on attempt fails. Time zone differences are irrelevant because the
MFA code algorithm uses UTC.
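The following minimal Python sketch of the RFC 6238 algorithm shows why synchronized clocks matter and how a console can accept the previous, current, and next codes; the shared secret is an arbitrary example, not a real key:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Minimal RFC 6238 sketch: HMAC of the current 30-second time step,
    dynamically truncated to a 6-digit code (as authenticator apps do)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((timestamp if timestamp is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"            # arbitrary example shared secret (base32)
now = int(time.time())
# Previous, current, and next 30-second codes (the set a console might accept):
print({totp(secret, now + delta) for delta in (-30, 0, 30)})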
IBM z15, HMC Driver 41/Version 2.15.0 provided the integration of HMC authentication and
z/OS MFA support. Therefore, RSA SecurID authentication is achieved through centralized
support from IBM MFA for z/OS, with the MFA policy defined in RACF and the HMC IDs
assigned to RACF user IDs. The RSA authentication server verifies the RSA SecurID
passcode (from an RSA SecurID Token). This authentication is supported only on HMC, not
on SE.
The following support was added with IBM z16 Driver 51/Version 2.16.0:
Enhanced MFA functions
MFA via Time-based One-Time Password (TOTP) or IBM Z MFA (z/OS) and RSA SecurID is possible on the HMC.
New with Driver 51/Version 2.16.0, the following further MFA possibilities are supported:
– Certificates:
• Personal Identity Verification (PIV)
• Common Access Card (CAC)
• Certificates on USB keys
– Generic Remote Authentication Dial-In User Service (RADIUS), which allows support of various RADIUS factor types and involves a customer-provided RADIUS server.
Also, Driver 51/Version 2.16.0 provides support of IBM Z MFA for Red Hat Enterprise Linux Server or SUSE Linux Enterprise Server that runs on z/VM or natively in an LPAR.
RSF requests are always started from the HMC to IBM. An inbound connection is never
started from the IBM Service Support System.
All data that is transferred between the HMC and the IBM Service Support System is
encrypted with high-grade SSL/Transport Layer Security (TLS) encryption.
When starting the SSL/TLS-encrypted connection, the HMC validates the trusted host
with the digital signature that is issued for the IBM Service Support System.
Data sent to the IBM Service Support System consists of hardware problems and
configuration data.
For more information about the benefits of Broadband RSF and the SSL/TLS-secured
protocol, as well as a sample configuration for the Broadband RSF connection, see
Integrating IBM Remote Support into your Enterprise, SC28-7060.
10.4.2 RSF connections to IBM and Enhanced IBM Service Support System
To ensure the best availability and redundancy and to be prepared for the future, the HMC must access IBM over the internet through RSF in the following manner: transmission to the enhanced IBM Support System requires a domain name server (DNS). If a proxy for RSF is not used, the DNS must be configured on the HMC. If a proxy for RSF is used, the proxy must provide the DNS.
The following hostnames and IP addresses are used, and your network infrastructure
must allow the HMC to access RSF:
esupport.ibm.com on port 443
The use of IPv4 requires outbound connectivity to the following IP addresses:
– 129.42.19.70
– 129.42.18.70
– 129.42.54.189
– 192.148.6.11
The use of IPv6 requires outbound connectivity to the following IP addresses:
– 2620:1f7:c010:1:1:1:1:11
– 2607:f0d0:2601:13:129:42:19:70
– 2607:f0d0:1f01:9f:129:42:18:70
– 2620:0:6c0:200:129:42:56:189
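As a quick pre-installation check, outbound reachability to these endpoints can be verified from the HMC network zone (or adapted to your proxy setup) with a small script such as the following illustrative Python sketch:

import socket

# Verify that the RSF endpoints are reachable on port 443 from this network zone.
TARGETS = ["esupport.ibm.com",
           "129.42.19.70", "129.42.18.70", "129.42.54.189", "192.148.6.11"]

for host in TARGETS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host:20s} reachable on 443")
    except OSError as err:
        print(f"{host:20s} NOT reachable: {err}")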
For more information about these capabilities, see the HMC and SE Driver 61/Version 2.17.0
console help system or see the IBM Resource Link. At this web page, select Library, the
applicable server and then select Hardware Management Console Operations Guide or
Support Element Operations Guide.
With the introduction of the DPM mode, which is mainly for LinuxONE management, the user
interface and user interaction with the HMC for hardware configuration changed significantly.
The figures and descriptions in this section cover only the traditional Processor
Resource/Systems Manager (PR/SM) mode.
The HMC is used to start the system's power-on reset (POR). During the POR, processor
units (PUs) are characterized and placed into their respective pools, memory is put into a
single storage pool, and the IOCDS is loaded into and started in the hardware system area
(HSA).
The hardware messages task displays hardware-related messages at the CPC, LPAR, or SE
level. It also displays hardware messages that relate to the HMC.
PR/SM manages LPAR access to processors based on the initial weights of each partition; weights are used to prioritize partition access to processors.
You can use the Load task on the HMC to perform an IPL of an operating system. This task
causes a program to be read from a designated device and starts that program. You can
perform the IPL of the operating system from storage, the USB flash memory drive (UFD), or
an FTP server.
When an LPAR is active, and an operating system is running, you can use the HMC to
dynamically change specific LPAR parameters. The HMC provides an interface to change
partition weights, add logical processors to partitions, and add memory.
LPAR weights can also be changed through a scheduled operation. Use the Customize
Scheduled Operations task to define the weights set to LPARs at the scheduled time.
Channel paths can be dynamically configured on and off (as needed for each partition) from
an HMC.
Partition capping values can be scheduled and are specified on the Change LPAR Controls
scheduled operation support. More information about a Change LPAR Controls scheduled
operation is available on the SE.
One example of managing the LPAR settings is the absolute physical hardware LPAR
capacity setting. Driver 15 (zEC12/zBC12) introduced the capability to define (in the image
profile for shared processors) the absolute processor capacity that the image is allowed to
use (independent of the image weight or other cappings).
To indicate that the LPAR can use the non-dedicated processor absolute capping, select
Absolute Capping in the Image Profile Processor settings to specify an absolute number
of processors to cap the LPAR’s activity. The absolute capping value can be “None” or a
value for the number of processors (0.01 - 255.0).
Following on to LPAR absolute capping, LPAR group absolute capping uses a similar
method to enforce the following components:
Customer licensing
Non-z/OS partitions where group soft capping is not an option
z/OS partitions where ISV does not support software capping
A group name, processor capping value, and partition membership are specified at the
hardware console, along with the following properties:
Set an absolute capacity cap by CPU type on a group of LPARs.
Allow each partition to use capacity up to its individual limits if the group’s aggregate
consumption does not exceed the group's absolute capacity limit.
Include updated SysEvent QVS support (used by vendors who implement software
pricing).
Only shared partitions are managed in these groups.
Specify caps for one or more processor types in the group.
Specify the absolute processor capacity (for example, 2.5 processors).
Use Change LPAR Group Controls (as with windows that are used for software
group-defined capacity), as shown in Figure 10-18.
Absolute capping is specified as the absolute number of processors at which the group's activity is capped. The value is specified in hundredths of a processor's capacity (for example, 4.56 processors).
The value is not tied to the Licensed Internal Code (LIC) configuration code (LICCC). Any
value 0.01 - 255.00 can be specified. This configuration makes the profiles more portable, so
you do not encounter problems when profiles are migrated to new machines.
Although the absolute cap can be specified to hundredths of a processor, the exact amount might not be that precise. The same factors that influence the “machine capacity” also affect the precision with which the absolute capping works.
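The following Python sketch illustrates the group capping semantics that are described above. It is a simplified model only; PR/SM enforcement is more sophisticated, and the partition names and values are made up:

# Each partition may use capacity up to its own limit, provided the aggregate
# stays within the group's absolute cap (all values in processors, hundredths).
GROUP_CAP = 4.56                       # example group absolute cap

partitions = {                         # partition -> (individual cap, current demand)
    "LPAR1": (2.50, 1.80),
    "LPAR2": (2.00, 1.40),
    "LPAR3": (1.50, 1.50),
}

demand = sum(min(cap, want) for cap, want in partitions.values())
if demand <= GROUP_CAP:
    print(f"Aggregate demand {demand:.2f} fits under the group cap {GROUP_CAP}")
else:
    scale = GROUP_CAP / demand         # simple proportional scale-down for illustration
    for name, (cap, want) in partitions.items():
        print(f"{name}: capped to {min(cap, want) * scale:.2f} processors")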
Because the SE cannot be directly accessed by using a browser, the Single Object Operations (SOO) task on the HMC must be used.
Note: Remote web browser access is the default for the HMA HMCs.
The local HMC user provides log-on security for web browser log-on procedures. Certificates
for secure communications are provided and can be changed by you.
Web browser access can be limited by specifying an IP address from the Customize Console
Services task. To turn the Remote operation service on or off, click Change in the Customize
Console Services window, as shown in Figure 10-19.
Note: If the Change Remote Access Setting → IP Access Control is set to Allow
specific IP addresses, but none or incorrect IP addresses are in the list, a remote HMC
connection is not available by using a web browser.
Microsoft Edge, Mozilla Firefox, Safari, and Google Chrome were tested as remote web
browsers. For more information about web browser requirements, see the HMC and SE
console help system or see the IBM Resource Link. On this web page, select Library, the
applicable server and then, select Hardware Management Console help system content
or Support Element help system content.
Note: Beginning with HMC Driver 41/Version 2.15.0, specific tasks previously required to
access the SE in SOO mode were implemented as HMC tasks. With this enhancement,
the HMC runs the tasks on the SE directly without accessing the SE in SOO mode.
You also can start, stop, or change the activation profile for a partition.
The HMC provides a full set of granular security controls, including MFA. This mobile interface
is optional and disabled by default. More functions from the HMC are also planned to be
available on IBM HMC Mobile.
For more information about HMC Mobile, see the following resources:
This IBM video
The HMC Mobile website
The HMC also provides integrated 3270 and ASCII consoles. These consoles allow an operating system to be accessed without requiring other networks or network devices, such as TCP/IP or control units.
Use the Certificate Management feature if the certificates returned by the 3270 server are not
signed by a well-known trusted certificate authority (CA) certificate, such as VeriSign or
Geotrust. An advanced action within the Certificate Management task, Manage Trusted
Signing Certificates, adds trusted signing certificates.
For example, if the certificate that is associated with the 3270 server on the IBM host is
signed and issued by a corporate certificate, it must be imported, as shown in Figure 10-20.
The import from the remote server option can be used if the connection between the console
and the IBM host can be trusted when the certificate is imported, as shown in Figure 10-21.
Otherwise, import the certificate using removable media.
10.5.5 Monitoring
This section describes monitoring considerations.
Monitors Dashboard
The Monitors Dashboard in the Monitor group provides a tree-based view of resources.
Multiple graphical views are available for displaying data, including history charts. The
Monitors Dashboard monitors processor and channel usage. It also produces data that
includes power monitoring information, power consumption, and the air input temperature for
the system.
For more information please see “Monitors Dashboard task” on page 485.
Environmental Dashboard
The Environmental Efficiency Statistics task is part of the Monitor group. It provides the IBM Z CPC's historical power consumption and thermal information and is available on the HMC. For more information, please see “Environmental dashboard task” on page 486.
With IBM z17, you can generate system power consumption notifications if a certain power
consumption threshold is reached on the LPAR or System level.
An example of setting an Event Monitor for the power consumption on the LPAR level can be
seen in Figure 10-24.
Figure 10-24 Example of Event Monitor for power consumption on LPAR level
The HMC for IBM z17 features the following CoD capabilities:
SNMP API support:
– API interfaces for granular activation and deactivation
– API interfaces for enhanced CoD query information
– API event notification for any CoD change activity on the system
– CoD API interfaces, such as On/Off CoD and Capacity Back Up (CBU)
HMC / SE window features:
– Window controls for granular activation and deactivation
– History window for all CoD actions
– Description editing of CoD records
HMC/SE provides the following CoD information:
– Millions of service units (MSU) and processor tokens
– Last activation time
– Pending resources that are shown by processor type instead of only a total count
– Option to show more information about installed and staged permanent records
– More information for the Attention state by providing seven more flags
HMC and SE are a part of the z/OS Capacity Provisioning environment. The Capacity
Provisioning Manager (CPM) communicates with the HMC through IBM Z APIs, and enters
CoD requests.
Notes:
The HWMCA_ADD_CAPACITY_COMMAND and
HWMCA_REMOVE_CAPACITY_COMMAND APIs allow applications to add and
remove temporary capacity for defined CPC objects. You can use the
HWMCA_ACTIVATE_CBU_COMMAND and
HWMCA_ACTIVATE_OOCOD_COMMAND APIs to allow applications to activate a
CBU or On/Off CoD record for a defined CPC object.
When activating a CBU record, the API activates all the resources in the default CBU record. If no default CBU record is specified, the oldest CBU record is used. To
set a CBU record as the default CBU record, select the Set as Default CBU button
located at the bottom of the Record Details window.
For more information about the use of and setting up CPM, see the following publications:
z/OS MVS Capacity Provisioning User’s Guide, SC34-2661
Capacity on-Demand User’s Guide, SC28-7058
Important: The Sysplex Time task on the SE was discontinued with IBM z15.
Starting with IBM z16 and also with IBM z17, the following significant enhancements are
essential for the support of the Server Time Protocol function:
CPC direct Ethernet connectivity for the external time source (ETS)
In previous IBM Z generations, the ETS for STP was provided by connecting the Support Element to the client network.
Starting with IBM z16, the ETS (PTP (IEEE 1588) or NTP) network connectivity is provided by using the CPC oscillator (OSC) cards' dedicated network ports (RJ45), which connect to the client LAN for accessing the time synchronization information. ETS direct connectivity to the IBM z17 CPC is provided for both supported ETS protocols: NTP and PTP.
Pulse-per-second connectivity is also provided for the highest accuracy in timing information. Connection of the ETS directly to the IBM Z CPC provides less delay in accessing the time source than connection through the Support Element.
For more information, please see 2.2.2, “Oscillator (OSC) and Baseboard Management
Controller (BMC) cards” on page 31.
n-mode power sensing for STP recovery / Set time server power failover
On IBM Z, losing a Preferred Time Server (PTS) has significant consequences for the timing
network and the overall workload execution environment of the IBM Z sysplex.
Starting with IBM z16, because an integrated battery facility (IBF) is no longer available, support was added to allow you to configure an option to monitor for n-mode power conditions (wall power or power cord loss). If such a condition is detected, an automated failover occurs from the Preferred Time Server (PTS) to the Backup Time Server (BTS); that is, the Current Time Server (CTS) role changes from the PTS to the BTS.
In the task Manage System Time -> Advanced Action -> Set time server power failover,
you can turn automatic failover on or off when STP detects a loss of power on the PTS. STP
can detect various power losses, including a loss as small as a single line cord failure, which
could leave a portion of the system without power.
IBM supports the use of an external battery or Uninterruptible Power Supply (UPS) to ensure that the system remains running in the event of a power outage. The PTS and BTS should be running with an external power source and have automatic failover enabled to be completely safe from power interruptions. The failover to the BTS occurs within 1 minute after the power loss is detected and the external power supply takes over.
After resolving the power issue, you should inspect the system before returning it to
operation. For this reason, an option allows you to specify the time to wait before the system
reacts to the restored power. Then, after the specified waiting period has ended, the Current
Time Server (CTS) is automatically switched back to the PTS.
Figure 10-25 on page 462 shows your options to configure Set time server power failover.
Figure 10-25 Task Set time server power failover for STP
More enhancements specific to IBM z17 can be found in “External Time Source (ETS) enhancements for STP and HMC NTP” on page 433.
For more information about planning and understanding STP and ETS, see the following
publication: IBM Z Server Time Protocol Guide, SG24-8480
HDD encryption uses Trusted Platform Module (TPM) and Linux Unified Key Setup (LUKS)
technology.
The Monitor System Events task allows Security Logs to send e-mail notifications by using
the same type of filters and rules that are used for hardware and operating system messages.
With IBM z17, you can off load the following HMC and SE log files for customer audit:
Console event log
Console service history
Tasks performed log
Security logs
System log
Full log off load and delta log off load (since the last off load request) are provided. Off loading
to removable media and to remote locations by FTP is available. The off loading can be
manually started by the new Audit and Log Management task or scheduled by the Customize
Scheduled Operations task. The data can be off-loaded in the HTML and XML formats.
Each HMC user ID template defines the specific authorization levels for the tasks and objects
for the user who is mapped to that template. The HMC user is mapped to a specific user ID
template. The system then obtains the name of the user ID template from content in the
LDAP server schema data.
Any default user IDs that are part of a previous HMC level can be carried forward to new HMC
levels as part of a MES Upgrade or by way of selecting User Profile Data for the Save/Restore
Customizable Console Data or Configure Data Replication tasks.
We recommend that you create your own individual user IDs by using “New based on”. An exception is the local ACSADMIN user ID, or a similar concept of keeping a user ID with an administrator role that can change or create users in an emergency.
The Secure FTP infrastructure allows HMC and SE applications to query whether a public key
is associated with a host address and to use the Secure FTP interface with the suitable public
key for a host. Tasks that use FTP now provide a selection for the secure host connection.
When selected, the task verifies that a public key is associated with the specified hostname. If
a public key is not provided, a message window opens that points to the Manage SSH Keys
task to enter a public key. The following tasks provide this support:
Import/Export IOCDS
Advanced Facilities FTP IBM Content Collector Load
Audit and Log Management (Scheduled Operations only)
FCP Configuration Import/Export
OSA view Port Parameter Export
OSA-Integrated Console Configuration Import/Export
With IBM z17 Driver 61/Version 2.17.0, you have the option for Import/Export from Remote Browsing File System (see “Import/Export from Remote Browsing File System” on page 431), so secure FTP might no longer be needed.
On IBM z17, the APIs provide monitoring and control functions through SNMP, Web Services,
and BCPii.
Cryptographic hardware
IBM z17 systems include standard cryptographic hardware and optional cryptographic
features for flexibility and growth capability.
The Crypto Express8S, which is the new Peripheral Component Interconnect Express (PCIe)
cryptographic coprocessor, is an optional IBM z17 feature. Crypto Express8S provides a
secure programming and hardware environment on which crypto processes are run.
Each Crypto Express8S adapter can be configured by the installation as a Secure IBM CCA
coprocessor, a Secure IBM Enterprise Public Key Cryptography Standards (PKCS) #11
(EP11) coprocessor, or an accelerator.
When EP11 mode is selected, a unique Enterprise PKCS #11 firmware is loaded into the
cryptographic coprocessor. It is separate from the Common Cryptographic Architecture
(CCA) firmware that is loaded when a CCA coprocessor is selected. CCA firmware and
PKCS #11 firmware cannot coexist in a card.
An example of the Cryptographic Configuration window on the HMC is shown in Figure 10-26
on page 465.
The Usage Domain Zeroize feature is provided to clear the suitable partition crypto keys for a
usage domain when you remove a crypto card from a partition. Crypto Express8S or 7S in
EP11 mode is configured to the standby state after the Zeroize process.
For more information, see IBM z17 (9175) Configuration Setup, SG24-8960.
A system can be configured in DPM mode or in PR/SM mode (POR is required to switch
modes). In general, DPM supports the following functions:
Create, provision, and manage partitions (processor, memory, and adapters) and storage
Monitor and troubleshoot the environment
If DPM is enabled, the IBM z17 system cannot run z/OS, IBM z/VSE, or z/TPF LPARs.
The IBM z17 can be initialized in PR/SM mode or in DPM mode, but not both.
DPM provides a GUI for PR/SM (to manage resources). Tools such as HCD are not necessary in DPM.
This IBM Redbooks publication does not cover scenarios that use DPM.
For more information about the use of DPM, see IBM Dynamic Partition Manager (DPM)
Guide, SB10-7188.
When a driver is upgraded, always check the Driver (61) Customer Exception Letter option in
the Fixes section at the IBM Resource Link.
Microcode terms
The microcode features the following characteristics:
The driver contains engineering change (EC) streams.
Each EC stream covers the code for a specific component of IBM z17. It includes a
specific name and an ascending number.
The EC stream name and a specific number are one MCL.
MCLs from the same EC stream must be installed in sequence.
MCLs can include installation dependencies on other MCLs.
Combined MCLs from one or more EC streams are in one Bundle.
An MCL contains one or more Microcode Fixes (MCFs).
How the driver, bundle, EC stream, MCL, and MCFs interact with each other, is shown in
Figure 10-27.
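As an illustration of these relationships (not an IBM tool), the following Python sketch models the hierarchy and the rule that MCLs from the same EC stream are applied in ascending order; the bundle and EC stream names are made up:

from dataclasses import dataclass, field
from typing import List

@dataclass
class MCF:
    fix_id: str                       # a single Microcode Fix

@dataclass
class MCL:
    ec_stream: str                    # EC stream name
    number: int                       # ascending number within the EC stream
    fixes: List[MCF] = field(default_factory=list)

@dataclass
class Bundle:
    name: str
    mcls: List[MCL] = field(default_factory=list)

    def install_order(self):
        # MCLs from the same EC stream must be applied in ascending order
        return sorted(self.mcls, key=lambda m: (m.ec_stream, m.number))

bundle = Bundle("S99", [MCL("P46601", 120, [MCF("F001")]),
                        MCL("P46601", 119), MCL("P46598", 57)])
print([(m.ec_stream, m.number) for m in bundle.install_order()])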
MCL Activation
By design and with planning, MCLs can be activated concurrently. Some MCLs need a disruptive configure off/on of resources (PCHIDs, cards, I/O elements, and so on) to activate the newly loaded microcode.
To check for pending conditions, you can go to System Information -> Query Additional Actions..., or you can check the Call Home Cloud Connect (CHCC) Information Reports (Reports -> Pending firmware updates) at https://www.ibm.com/support/call-home-connect/cloud/.
The System Information window is enhanced to show a summary Bundle level for the
activated level, as shown in Figure 10-28.
This feature allows an authorized client user to remotely schedule an automated firmware
update operation. This secure firmware update operation will call home with live progress
updates that are monitored by IBM support. RCL eliminates the need to schedule on-site access for an IBM System Services Representative (SSR) for the duration of the firmware updates.
For a comprehensive guide on how to use RCL, see the Remote Code Load for IBM Z Firmware publication (SC28-7044-02), which is available on IBM Resource Link: Remote Code Load for IBM Z.
The client generates a token on the HMC/HMA and then uses the IBM Resource Link site to create a scheduling request. The process flow is shown in Figure 10-29.
Both RCL and an on-site IBM SSR are options for the installation of firmware updates of MCL Bundles.
The IBM z17 RCL health checks are categorized as blocking conditions and warning conditions. A blocking condition terminates the scheduled RCL. A warning condition allows the scheduled RCL to go ahead, but potential issues can arise if it is not addressed before the RCL. An example of such a condition is that one or more channels have not completed a concurrent patch from a previous Bundle installation.
Some of the enhancements to RCL with the IBM z17 can be found in “Remote Code Load (RCL) enhancements” on page 434.
11
Naming: Throughout this chapter, IBM z17 refers to IBM z17 Model ME1 (Machine Type
9175), unless otherwise specified.
11.1 Introduction
The following options are available for physically installing the server:
Radiator cooling
Power Distribution Unit (PDU)
On a raised floor or nonraised floor
I/O and power cables can exit under the raised floor or off the top of the server frames
Bulk Power Assembly (BPA): IBM z17 does not support the BPA.
For more information about physical planning, see 9175 Installation Manual for Physical
Planning, GC28-7049.
IBM z17 servers support installation on a raised floor or nonraised floor and are only available
with the following power and cooling options:
Intelligent Power Distribution Unit-based power (iPDU) or PDU
All IBM z17 models include radiator-based cooling (air cooled system).
A PDU-based system can have 2 - 8 power cords, depending on the configuration. The use of iPDU on IBM z17 might enable fewer frames, which allows for more I/O slots to be available and improves power efficiency to lower overall energy costs. It also offers some standardization and ease of data center installation planning, which allows the IBM z17 to easily coexist with other platforms within the data center.
Power requirements
The IBM z17 operates from 2 or 4 fully redundant power supplies. These redundant power
supplies each have their own power cords, or pair of power cords, which allows the system to
survive the loss of customer power to either power cord or power cord pair.
The IBM z17 is designed with a fully redundant power system. To make full use of the
redundancy that is built into the server, the PDUs within one pair must be powered from
different power distribution panels. In that case, if one PDU in a pair fails, the second PDU
ensures continued operation of the server without interruption.
The second, third, and fourth PDU pairs are installed dependent on other CPC or PCIe+ I/O
drawers installed. The locations of the PDU pairs and frames are listed in Table 11-1.
Power cords for the PDUs are attached to the options that are listed in Table 11-2.
A rear view of a maximum configured PDU-powered system with four CPC drawers and 12
PCIe+ I/O drawers is shown in Figure 11-1 on page 474.
The number of PDUs and power cords that are required based on the number of CPC
drawers and PCIe+ I/O drawers are in Table 11-3.
Table 11-3 lists one row per number of CPC drawers; the values give the power cords required for increasing numbers of PCIe+ I/O drawers:
1 CPC drawer: 2, 2, 2, 2, 6, 6, 6, 6
2 CPC drawers: 4, 4, 4, 4, 4, 4, 4, 4, 6, 6, 6, 6, 6
3 CPC drawers: 4, 4, 4, 4, 4, 4, 4, 6, 6, 6, 6, 6, N/A
4 CPC drawers: 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 8, 8, 8
Power consumption
The utility power consumption for the IBM z17 for PDU option is listed in Table 11-4 on
page 475.
Max90 (FC 0572): 8.8, 9.8, 10.8, 11.7, 12.6, 13.5, 14.5, 15.4, 16.4, 17.3, 18.2, 19.1, 20.0 kW
Max136 (FC 0573): 12.7, 13.7, 14.6, 15.6, 16.5, 17.4, 18.3, 19.3, 20.2, 21.1, 22.0, 22.9 kW, N/A
Max183 (FC 0574): 13.5, 14.5, 15.6, 16.6, 17.7, 18.7, 19.8, 20.8, 21.9, 22.9, 24.0, 25.0, 25.8 kW
Max208 (FC 0575): 17.1, 18.1, 19.0, 20.0, 20.9, 21.8, 22.7, 23.6, 24.5, 25.4, 26.3, 27.2, 28.1 kW
Power estimation for any configuration, power source, and room condition can be obtained
by using the power estimation tool that is available at the IBM Resource Link website (login
required).
On the Resource Link page, click Tools → Power and weight estimation.
Power requirements
The 9175 operates from 2 or 4 fully redundant power supplies. These redundant power
supplies each have their own power cords, or pair of power cords, which allows the system to
survive the loss of customer power to either power cord or power cord pair.
If power is interrupted to one of the power supplies, the other power supply assumes the
entire load and the system continues to operate without interruption. Therefore, the power
cords for each power supply must be wired to support the entire power load of the system.
For the most reliable availability, the power cords in the rear of the frame should be powered
from different PDUs. All power cords exit through the rear of the frame. The utility current
distribution across the phase conductors (phase current balance) depends on the system
configuration. Each front-end power supply is provided with phase switching redundancy.
The loss of an input phase is detected and the total input current is switched to the remaining
phase pair without any power interruption. Depending on the configuration's input power draw, the system can run from several minutes to indefinitely in this condition. Because most single
phase losses are transients that recover in seconds, this redundancy provides protection
against virtually all single phase outages.
The IBM z17 servers include a recommended (long-term) ambient temperature range of
18°C (64.4°F) - 27°C (80.6°F). The minimum allowed ambient temperature is 5°C (41°F); the
maximum allowed temperature is 40°C (104°F).
For more information about cooling requirements, see 9175 Installation Manual for Physical
Planning, GC28-7049.
The radiator cooling system requires chilled air to fulfill the air-cooling requirements. IBM z17
system airflow is from the front (intake, chilled air) to the rear (exhausts, warm air) of the
frames. The chilled air is provided through perforated floor panels in front of the system.
The hot and cold airflow and the arrangement of server aisles are shown in Figure 11-3 on
page 477.
Cooling design
The IBM z17 continues to offer a client air cooled (RCU, internal radiator) system for cooling
the high performance processor modules.
Processor heat is picked up by an internal liquid cooling loop and transferred to data center
air. The cooling loop contains a 40% Propylene Glycol - 60% water solution. This solution
allows IBM to ship IBM z17 systems filled with liquid. IBM z17 does not have an IBM Fill and Drain Tool (FDT) and does not require the handling or storage of any liquid in the field.
The IBM z17 is a closed loop, preassembled and pretested enterprise-class cooling solution
that once filled requires no scheduled maintenance.
All liquid-carrying components are robust, and are designed, qualified, and tested not to leak. IBM z17 is designed to detect, report, and contain a leak in the extremely unlikely case that one occurs. With this design, no additional data center infrastructure is needed to support the IBM z17 radiator-based cooling system.
As shown in Figure 11-3, rows of servers must be placed front-to-front. Chilled air is provided
through perforated floor panels that are placed in rows between the fronts of servers (the cold
aisles). Perforated tiles generally are not placed in the hot aisles.
If your computer room causes the temperature in the hot aisles to exceed a comfortable
temperature, add as many perforated tiles as necessary to create a satisfactory comfort level.
Heated exhaust air exits the computer room above the computing equipment.
For more information about the requirements for air-cooling options, see 9175 Installation
Manual for Physical Planning, GC28-7049.
The IBM z17 can be installed on a raised or nonraised floor. For more information about weight distribution and floor loading tables, see IBM 9175 Installation Manual for Physical Planning, GC28-7049. This data is used with the maximum frame weight, frame width, and frame depth to calculate the floor loading.
Weight estimates for the maximum system configurations on the 9175 PDU-based system
are listed in Figure 11-10 on page 485.
A - 795 kg - - 795 kg
(1753 lbs) (1753 lbs)
The power and weight estimation tool for IBM Z servers on IBM Resource Link (log in
required) covers the estimated weight for your designated configuration.
On the Resource Link web page, click Tools → Power and weight estimation.
Note: On the IBM z17, all I/O cabling and power cords enter the rear of the machine;
therefore, all related features for Bottom and Top Exit cabling are in the rear of the frame.
IBM z17 servers can be installed on either a raised or a nonraised floor. Figure 11-4 shows all Feature Code combinations for Raised and Non-Raised Floor installations.
Raised Floor: No | Yes, with Top Exit Enclosure only (FC 5823 & 7803) | Ships with Bottom Seal Plate and supports FQC FC 5824 & 5826
Non-Raised Floor: No (not supported) | Yes, with Top Exit Enclosure (FC 7998* & 5823) | Ships with Bottom Seal Plate and supports FQC FCs 5824 & 5826
*FC 7998: Non-Raised Floor Support (flag)
– Important: The Bottom Exit Cabling Feature Code 7804 is REQUIRED for routing any
external cables through the bottom of the system frame(s). By itself, Feature Code
7804 supports routing of power line cords and point-to-point (pass-through) customer
cables. To support structured cabling (fiber quick-connect) with Key Up / Key Down
interconnect polarity, Feature Code 5827 must also be ordered. Key Up / Key Up
interconnect polarity is not supported in bottom exit cabling.
To help manage the cabling when using the Top Exit Enclosure or the Bottom Exit Cabling
features, the following optional Fiber Quick Connect (FQC) features are available:
FC 5824: key-up/key-down cabling polarity
FC 5826: key-up/key-up cabling polarity
Important: The Fiber Quick-Connect Bracket Feature Code 5824 is an optional addition to
Top Exit Enclosure Feature Code 5823, supporting structured cabling with only Key Up / Key
Down interconnect polarity. These Brackets must be ordered in quantities of 2, 4, or 6 per
server frame, and the customer must specify the total order quantity for their entire IBM Z
system. Each Bracket provides 16 fiber quick-connect ports, for a maximum of 96 ports per
system frame.
If this Top Exit Cabling feature is not ordered in conjunction with the Bottom Exit Cabling feature, a bottom seal plate is shipped to seal off the bottom side of the frame.
Note: The optional Top Exit Cabling feature is mandatory when ordering one of the Top
Exit Fiber Quick Connect (FQC) features FC 5824 and FC 5826.
Note: This feature is mandatory when ordering the Bottom Exit Fiber Quick Connect
(FQC) feature FC 5827.
Notes:
The same extra hardware is provided for every frame in the configuration.
This feature is mandatory when ordering one of the Top Exit Fiber Quick Connect
(FQC) features FC 5824 and FC 5826.
The Top Exit Enclosure feature adds 177.5 mm (6.98 in.) to the height of the frame and
approximately 12.2 kg (27 lbs) to the weight.
If the Top Exit Enclosure feature is not ordered, two sliding plates are available on the top of
the frame (one on each side of the rear of the frame) that can be partially opened. By opening
these plates, I/O cabling and power cords can exit the frame. The plates should be removed
to install the Top Exit Enclosure feature as shown in Figure 11-8 on page 483.
The frame tie-down kit can be used on a nonraised floor (FC 8015) where the frame is
secured directly to a concrete floor, or on a raised floor (FC 8014) where the frame is secured
to the concrete floor underneath the raised floor.
Raised floors 241.3 mm (9.5 inches) - 1270 mm (50 inches) are supported.
The kits help secure the frames and their contents from damage when they are exposed to
shocks and vibrations, such as in a seismic event. The frame tie-downs are intended for
securing a frame that weighs up to 1308 kg (2885 lbs).
For more information, see IBM 9175 Installation Manual for Physical Planning, GC28-7049.
The hardware components in the IBM z17 are monitored and managed by the energy
management component in the Support Element (SE) and Hardware Management Console
(HMC). The user interfaces (UIs) of the SE and HMC provide views, such as the Monitors
Dashboard, Environmental Dashboard and Energy Optimization Advisor.
This information is also available via API. For more information, see 10.5.9, “Automated operations via APIs” on page 464.
The following tools are available to plan and monitor the energy consumption of IBM z17
servers:
Power and Weight estimation tool on Resource Link.
Management -> Energy Optimization Advisor task for maximum potential power on
HMC
Monitor -> Monitors Dashboard and Environmental Dashboard tasks on HMC
Select the advice hyperlinks to see specific recommendations for your system, as shown in Figure 11-10.
An example of the Monitors Dashboard task is shown in Figure 11-11 on page 486.
The data is presented in table format and graphical “histogram” format. The data also can be
exported to a .xlsx-formatted file so that the data can be imported into a spreadsheet. For
this task, you must use a web browser to connect to an HMC.
12
Note: Throughout this chapter, IBM z17 refers to IBM z17 Model ME1 (Machine Type
9175) unless otherwise specified.
Single processor capacity of IBM z17 for equal n-way at common client configurations is
approximately 11% greater than on IBM z16 with some variation based on workload and
configuration.
(Figure: growth in the maximum number of configurable engines/n-way and in the 1-way PCI (Processor Capacity Index) across IBM Z generations, from z9 EC through IBM z17. The chart lists the z/OS level used for each generation's measurement, from z/OS 1.6 for z9 EC to z/OS 3.1 for IBM z17, with 1-way PCI values ranging from a minimum of 581 to 2477.)
Note*: IBM z17 Max208 has 208 configurable PUs, 24 SAPs, 2 IFPs, and 2 spares.
Operating system support varies for the number of “engines” that are supported.
The IBM z17 processor chip runs at a 5.5 GHz clock speed, which is a 5.8% increase over the 5.2 GHz IBM z16 processor chip, with a corresponding performance increase. At an equal N-way configuration, capacity increases by approximately 1.11x (+/- 2.0%) on average. These numbers differ depending on the workload type and LPAR configuration.
The zEDC Express feature was adopted by enterprises because it helps with software costs
for compression and decompression operations (by offloading these operations), and
increases data encryption (compression before encryption) efficiency.
With IBM z15, the zEDC Express functions were moved from the PCIe infrastructure into the processor chip. By moving compression and decompression on-chip, the IBM z15, IBM z16, and IBM z17 processors provide a new level of performance for these tasks and eliminate the need for zEDC Express feature virtualization. It also brings new use cases to the platform.
The IBM z17 continues to support the IBM Integrated Accelerator for zEDC. For more information, see Appendix B, “IBM Integrated Accelerator for zEnterprise Data Compression” on page 525.
Memory:
– DDR5 DIMMs (DDR4 Carry-forward)
For z/OS studies, the capacity scaling factor that is commonly associated with the reference
processor is set to a 2094-701 with a Processor Capacity Index (PCI) value of 593. This value
is unchanged since z/OS V1R11 LSPR. The use of the same scaling factor across LSPR
releases minimizes the changes in capacity results for an older study and provides a more accurate capacity view for a new study.
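As a simple illustration of that scaling (the capacity ratio below is hypothetical; use zPCR or the published LSPR data for real studies), a processor's PCI is its measured capacity ratio to the 2094-701 reference multiplied by 593:

REFERENCE_PCI = 593.0   # 2094-701 reference, unchanged since the z/OS V1R11 LSPR

def pci_from_ratio(capacity_ratio_vs_2094_701):
    return capacity_ratio_vs_2094_701 * REFERENCE_PCI

print(round(pci_from_ratio(4.18)))   # hypothetical ratio, for illustration only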
Performance data for IBM z17 servers was obtained with z/OS 3.1 (running Db2 for z/OS V13, CICS TS 6.1, Enterprise COBOL V6.4, and WebSphere Application Server for z/OS V9.0.5.14). All IBM Z server generations are measured in the same environment with the same workloads at high usage.
Note: If your software configuration is different from what is described here, the
performance results might vary.
The largest IBM z17 208 way configuration (9175-7K8) is expected to provide approximately
15% more capacity than the largest IBM z16 200 way (3931-7K0). However, the observed
performance increase varies depending on the workload type.
Consult the LSPR when you consider performance on the IBM z17. The range of
performance ratings across the individual LSPR workloads is likely to include a large spread.
Performance of the individual logical partitions (LPARs) varies depending on the fluctuating
resource requirements of other partitions and the availability of processor units (PUs).
Therefore, it is important to know which LSPR workload type suits your production environment. For more information, see 12.8, “Workload performance variation” on page 499.
For more information about performance, see this web page of the IBM Support Docs
website.
For more information about millions of service units (MSU) ratings, see this IBM Z resources
web page.
The CPU Measurement Facility (CPU MF) data that was introduced on the z10 provides
insight into the interaction of workload and hardware design in production workloads. CPU
MF data helps LSPR to adjust workload capacity curves that are based on the underlying
hardware sensitivities; in particular, the processor access to caches and memory.
This processor access to caches and memory is called Relative Nest Intensity (for more
information, see 12.4, “Relative Nest Intensity” on page 495). By using this data, LSPR
introduces three workload capacity categories that replace all older primitives and mixes.
LSPR contains the internal throughput rate ratios (ITRRs) for the IBM z17 and the previous
generation processor families. These ratios are based on measurements and projections that
use standard IBM benchmarks in a controlled environment.
Note: The throughput that any user experiences can vary depending on the amount of
multiprogramming in the user’s job stream, the I/O configuration, and the workload that is
processed. Therefore, no assurance can be given that an individual user can achieve
throughput improvements that are equivalent to the performance ratios that are stated.
However, the path length that is associated with the operating system or subsystem can vary
based on the following factors:
Competition with other tasks in the system for shared resources. As the total number of
tasks grows, more instructions are needed to manage the resources.
The number of logical processors (n-way) of the image or LPAR. As the number of logical
processors grows, more instructions are needed to manage resources that are serialized
by latches and locks.
As workloads are moved between microprocessors with various designs, performance varies.
However, when on a processor, this component tends to be similar across all models of that
processor.
With IBM z17, physical L3/L4 caches no longer exist. L2 caches that are on each processor
core are virtual L3/L4 caches on IBM z17. For more information, see Chapter 3, “Central
processor complex design” on page 71.
Figure 12-2 IBM z17 physical and virtual single drawer memory hierarchy
Workload performance is sensitive to how deep into the memory hierarchy the processor
must go to retrieve the workload instructions and data for running. The best performance
occurs when the instructions and data are in the caches that are nearest the processor
because little time is spent waiting before running. If the instructions and data must be
retrieved from farther out in the hierarchy, the processor spends more time waiting for their
arrival.
As workloads are moved between processors with various memory hierarchy designs,
performance varies because the average time to retrieve instructions and data from within the
memory hierarchy varies. Also, when on a processor, this component continues to vary
because the location of a workload’s instructions and data within the memory hierarchy is
affected by several factors that include, but are not limited to, the following factors:
Locality of reference
I/O rate
Competition from other applications and LPARs
The term Relative Nest Intensity (RNI) indicates the level of activity to this part of the memory
hierarchy. By using data from CPU MF, the RNI of the workload that is running in an LPAR can
be calculated. The higher the RNI, the deeper into the memory hierarchy the processor must
go to retrieve the instructions and data for that workload.
RNI reflects the distribution and latency of sourcing data from shared caches and memory, as
shown in Figure 12-3.
(Figure 12-3: RNI components. L1MP indicates how often the processor must go beyond the L1 cache; the “nest” sourcing terms L2LP, L2RP, L3P, L4LP, L4RP, and MEMP indicate how far into the shared cache and memory hierarchy the data is sourced and how intensely that part of the architecture is utilized.)
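Conceptually, RNI is a weighted combination of how often data is sourced from each level beyond L1. The following Python sketch uses placeholder weights that are not the published LSPR coefficients; it only illustrates the shape of the calculation:

# Placeholder weights only -- see the LSPR documentation for the real coefficients.
WEIGHTS = {"L2LP": 1.0, "L2RP": 2.0, "L3P": 3.0, "L4LP": 4.0, "L4RP": 5.0, "MEMP": 7.0}

def relative_nest_intensity(sourcing_pct):
    """sourcing_pct: percentage of L1 misses sourced from each nest level."""
    return sum(WEIGHTS[level] * sourcing_pct.get(level, 0.0)
               for level in WEIGHTS) / 100.0

sample = {"L2LP": 60.0, "L2RP": 15.0, "L3P": 10.0, "L4LP": 8.0, "L4RP": 4.0, "MEMP": 3.0}
print(f"RNI (with placeholder weights): {relative_nest_intensity(sample):.2f}")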
Many factors influence the performance of a workload. However, these factors often are
influencing the RNI of the workload. The interaction of all these factors results in a net RNI for
the workload, which in turn directly relates to the performance of the workload.
These factors are tendencies, not absolutes. For example, a workload might have a low I/O
rate, intensive processor use, and a high locality of reference, which all suggest a low RNI.
However, it might be competing with many other applications within the same LPAR and many
other LPARs on the processor, which tends to create a higher RNI. It is the net effect of the
interaction of all these factors that determines the RNI.
The traditional factors that were used to categorize workloads in the past are shown with their
RNI tendency in Figure 12-4.
Little can be done to affect most of these factors. An application type is whatever is necessary
to do the job. The data reference pattern and processor usage tend to be inherent to the
nature of the application. The LPAR configuration and application mix are mostly a function of
what must be supported on a system. The I/O rate can be influenced somewhat through
buffer pool tuning.
However, one factor, software configuration tuning, is often overlooked but can have a direct
effect on RNI. This term refers to the number of address spaces (such as CICS
application-owning regions [AORs] or batch initiators) that are needed to support a workload.
This factor always existed, but its sensitivity is higher with the current high frequency
microprocessors. Spreading the same workload over more address spaces than necessary
can raise a workload’s RNI. This increase occurs because the working set of instructions and
data from each address space increases the competition for the processor caches.
Tuning to reduce the number of simultaneously active address spaces to the optimum number
that is needed to support a workload can reduce RNI and improve performance. In the LSPR,
the number of address spaces for each processor type and n-way configuration is tuned to be
consistent with what is needed to support the workload. Therefore, the LSPR workload
capacity ratios reflect a presumed level of software configuration tuning. Retuning the
software configuration of a production workload as it moves to a larger or faster processor
might be needed to achieve the published LSPR ratios.
The LSPR now runs various combinations of former workload primitives, such as CICS, Db2,
IMS, OSAM, VSAM, WebSphere, COBOL, and utilities, to produce capacity curves that span
the typical range of RNI.
These categories are based on the L1MP and the RNI. The RNI is influenced by many
variables, such as application type, I/O rate, application mix, processor usage, data reference
patterns, LPAR configuration, and the software configuration that is running. CPU MF data
can be collected by z/OS System Measurement Facility on SMF 113 records or by z/VM
Monitor starting with z/VM V5R4.
For more information about how z/VM Monitor captures CPU MF records visit the following
link: https://www.vm.ibm.com/perf/tips/cpumf.html
For example, the low category reflects a mix of low I/O rate LSPR workloads.
The IBM z Processor Capacity Reference (IBM zPCR) tool supports the following workload
categories:
Low
Low-Average
Average
Average-high
High
For more information about the no-charge IBM zPCR tool (which reflects the latest IBM LSPR
measurements), see the Getting Started with IBM z Processor Capacity Reference.
As described in 12.5, “LSPR workload categories based on L1MP and RNI” on page 497, the
underlying performance sensitive factor is how a workload interacts with the processor
hardware.
For more information about RNI, see 12.5, “LSPR workload categories based on L1MP and
RNI” on page 497.
The AVERAGE RNI LSPR workload is intended to match most client workloads. When no
other data is available, use the AVERAGE RNI LSPR workload for capacity analysis.
The Low-Average and Average-High categories allow better granularity for workload
characterization, but these categories apply only in IBM zPCR.
The CPU MF data can be used to determine the workload type. When available, this data allows the
RNI of a production workload to be calculated.
By using the RNI and another factor from CPU MF, the L1MP (Level 1 misses per 100
instructions, that is, the percentage of data and instruction references that miss the L1 cache), a
workload can be classified as LOW, AVERAGE, or HIGH RNI. This classification and the resulting
workload match are automated in the IBM zPCR tool, which is the preferred tool for capacity sizing.
Refer to Table 12-1 for the LSPR Workload Decision Table, which is based on L1MP and RNI (a structural sketch of this decision follows the reminder below).
Reminder:
RNI is not a performance metric.
RNI and L1MP allow you to match your workload to an LSPR workload.
– Any other use of RNI is not valid.
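Table 12-1 is not reproduced here, but the shape of the decision it captures can be sketched as follows. The numeric thresholds in this example are hypothetical placeholders, not the values from Table 12-1; use the table or, preferably, IBM zPCR for the actual boundaries:

# Hypothetical sketch of an L1MP/RNI decision table. The numeric thresholds are
# placeholders, not the values from Table 12-1; IBM zPCR automates the real match.
def lspr_workload_category(l1mp: float, rni: float) -> str:
    """l1mp: L1 misses per 100 instructions; rni: Relative Nest Intensity."""
    if l1mp < 3.0:                      # few L1 misses: RNI decides LOW vs AVERAGE
        return "AVERAGE" if rni >= 0.75 else "LOW"
    if l1mp <= 6.0:                     # moderate miss rate: all three outcomes possible
        if rni >= 1.0:
            return "HIGH"
        return "AVERAGE" if rni >= 0.6 else "LOW"
    return "HIGH" if rni >= 0.75 else "AVERAGE"   # many L1 misses

print(lspr_workload_category(l1mp=4.2, rni=0.9))   # -> AVERAGE (with these placeholder thresholds)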
Starting with z/OS V2R1 with APAR OA43366, a zFS file is no longer required for CPU MF and
Hardware Instrumentation Services (HIS). HIS is a z/OS function that collects hardware event
data for processors in SMF type 113 records and in z/OS UNIX System Services output files.
Only the SMF 113 records are required to determine the proper workload type from CPU MF counter
data. The CPU overhead of CPU MF is minimal, and the volume of SMF 113 records is approximately 1% of the
typical SMF 70 and 72 records that RMF writes.
CPU MF and HIS can be used to determine the workload type and for other purposes. For example,
starting with z/OS V2R1, you can record instruction counts in SMF type 30 records when you
activate CPU MF. Therefore, we strongly recommend that you always activate CPU MF.
For more information about getting CPU MF counter data, see CPU MF - 2022 Update and
WSC Experiences at the IBM Techdoc Library website.
Because frequency increases alone no longer deliver most of the performance gains, a holistic
performance approach is required: hardware and software synergy becomes an absolute
requirement.
Starting with z13, Instructions Per Cycle (IPC) improvements in core and cache became the
driving factor for performance gains. As these microarchitectural features increase (which
contributes to instruction parallelism), overall workload performance variability also increases
because not all workloads react the same way to these enhancements.
Because of the nature of the IBM z17 multi-CPC drawer system and resource management
across those drawers, performance variability from application to application is expected.
Also, the memory and cache designs affect various workloads in many ways. All workloads
are improved, with cache-intensive workloads benefiting the most. For example, because IBM z17 has more PUs
per CPC drawer, each with higher capacity than on IBM z16, more work can fit on a single IBM
z17 CPC drawer, which can result in better performance. As a comparison, the IBM z16
two-drawer system model A01 can populate a maximum of 82 PUs (Max82), whereas the IBM z17
two-drawer system (Max90) can populate a maximum of 90 PUs. Therefore, eight more PUs can share
the caches and memory within the first and second drawers, and a performance improvement is expected.
The workload variability for moving from IBM z16 to IBM z17 is expected to be stable.
Workloads that are migrating from z10 EC, z196, and zEC12 to IBM z17 can expect to see
similar results, with slightly less variability than the migration from IBM z16 to IBM z17.
Experience demonstrates that IBM Z servers can be run at sustained utilization levels of up to
100%. However, most customers prefer to leave some headroom and run at 90% or slightly below.
Do not use MIPS or MSUs for capacity planning: Do not use “one number” capacity
comparisons, such as MIPS or MSUs. IBM does not officially announce processor
performance as “MIPS”, and MSU is only a number for software license charging; it does not
represent the processor’s performance.
Note: You should create an EDF file for each z/OS SYSID and load all the EDFs for the
same CPC into IBM zPCR at the same time to ensure that the correct LSPR workload
is assigned to each LPAR. IBM zPCR supports drag-and-drop for multiple EDF files.
CP3KEXTR is offered as a no-charge application. It can also create the EDF files for
IBM zCP3000. IBM zCP3000 is an IBM internal tool, but you can create the EDF files for it on
your system.
For more information about CP3KEXTR, see the IBM Techdoc z/OS Data Extraction Program
(CP3KEXTR) for IBM zPCR and IBM zBNA.
Note: You should create an EDF file for each z/VM system and load all the EDFs for the
same CPC into IBM zPCR at the same time to ensure that the correct LSPR workload
is assigned to each LPAR. IBM zPCR supports drag-and-drop for multiple EDF files.
For additional information, see CP3KVMXT - VM Extract Utility for zCP3000 and zPCR
Capacity Planning Support Tools.
Figure 12-5 on page 502 shows a sample IBM zPCR window with workload types. In this
example, the workload type is displayed in the “Assigned Workload” column. The example shows
only one partition, PX11, selected; note that all active partitions can be selected on this
panel. When you load the EDF file into IBM zPCR, it automatically sets your LPAR
configuration, which makes it easy to define the LPAR configuration in IBM zPCR.
Set your weights and logicals for all your partitions to match your business needs.
Set an efficient number of logicals to support the engines by weight (see the sketch after this list):
– Assign one or two more logicals than engines by weight.
• For instance, for an LPAR with a 50% weight in a pool of 20 GCPs (10 engines by weight),
set the logicals to 11 or 12. The same approach to assigning weights and logicals also
applies to specialty engines.
Assign a suitable number of logical CPs. If you assign too many logical CPs to an LPAR,
PR/SM might place them at greater distances, which reduces cache efficiency and increases
unnecessary LPAR management. For more information about the number of logical CPs defined
for an LPAR, see Number of Logical CPs Defined for an LPAR.
Server capacity declines as the LCP:RCP ratio (the sum of logical CPs that are
defined in all LPARs to the number of physical CPs in your configuration) grows. Therefore,
assigning the correct number of logical CPs to each LPAR is important.
Use IBM zPCR to size IBM Z processors; do not use MIPS tables for capacity sizing.
Design your LPARs to “fit” in a single drawer with room to grow.
– When the number of logicals exceeds the drawer boundary, cross-drawer distances come into play,
slowing things down, and that extra CPU time is clocked to your applications and to your bill.
For larger partitions, consider a strategy of splitting them into smaller partitions.
– Use IBM zPCR to show the potential capacity savings of more, smaller LPARs.
Use HiperDispatch in every z/OS and z/VM LPAR. HiperDispatch optimizes processor
cache usage by creating an affinity between a PU and the workload.
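The arithmetic behind the “engines by weight” guideline in the list above can be sketched as follows. This is only an illustration of the rule of thumb stated above (logicals = engines by weight plus one or two); the function names and the margin parameter are illustrative, not part of any IBM tool:

# Rule-of-thumb sketch for sizing logical CPs from LPAR weights, following the
# guideline above: logicals = engines by weight + 1 or 2. Illustrative only.
import math

def engines_by_weight(partition_weight: float, total_pool_weight: float,
                      physical_cps_in_pool: int) -> float:
    """Share of the shared physical CP pool that the weight guarantees."""
    return (partition_weight / total_pool_weight) * physical_cps_in_pool

def suggested_logicals(entitlement: float, margin: int = 2) -> int:
    """One or two more logicals than the guaranteed engines (margin = 1 or 2)."""
    return math.ceil(entitlement) + margin

# The example from the list above: 20 GCPs in the pool, LPAR weight of 50%.
ebw = engines_by_weight(partition_weight=50, total_pool_weight=100,
                        physical_cps_in_pool=20)
print(ebw, suggested_logicals(ebw))   # 10.0 engines by weight -> 11 or 12 logicals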
Figure 12-6 Selecting HiperDispatch Assignment for Shared Partitions from the Partition Detail Report
Both windows that are shown in Figure 12-6 remain visible. Changes to the Partition Detail Report
are reflected in the HiperDispatch window.
Figure 12-7 on page 504 shows all defined partitions and how HiperDispatch is expected to
manage their logical CPs.
HiperDispatch supports logical CPs running z/OS v1.7 and later and z/VM v6.3 and later. For
z/OS partitions, zIIPs and shared CPs are affected similarly. For z/VM partitions, IFLs and
associated logical CPs are also affected similarly.
Figure 12-8 HiperDispatch Graph for the GP LCP HD Assignments Processor Topology Support
The HiperDispatch window shown in Figure 12-7 on page 504, contains most of the
information from the Partition Detail Report, plus:
Engines by Weight: Partition Weight% times the number of real CPs in the pool
VHs: Number of LCPs categorized as Vertical High
VMs: Number of LCPs categorized as Vertical Medium.
VM%: Percent of time the partition’s Vertical Medium LCPs are committed
VLs: Number of LCPs categorized as Vertical Low
VL Nvr Pk: Number of LCPs categorized as Vertical Low Never Parked
VL Nvr Pk%: Percent of time the partition’s Vertical Low Never Parked LCPs are
committed.
When input fields are modified in the Partition Detail Report window, the results in the
HiperDispatch window are also updated. Note that when you exit the HiperDispatch window,
any changes that were made in the Partition Detail Report window are not automatically reset.
Note: For GP or IFL partitions where HiperDispatch is not supported, only the VMs and
VM% columns apply. For ICF partitions, none of the HiperDispatch columns apply.
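The VH/VM/VL split that the HiperDispatch window reports can be approximated with the following simplified sketch. This is not PR/SM's actual algorithm (which also considers pool topology and can use one or two vertical mediums to carry the fractional entitlement); it only illustrates how a weight-based entitlement is spread across the defined logical CPs:

# Simplified illustration of vertical polarization (NOT PR/SM's exact algorithm):
# whole engines of entitlement become Vertical Highs, the fractional remainder is
# carried by a Vertical Medium, and any remaining logical CPs are Vertical Lows.
import math

def vertical_polarity(entitlement: float, defined_logicals: int):
    vh = min(math.floor(entitlement), defined_logicals)
    vm = 1 if (entitlement - vh) > 0 and defined_logicals > vh else 0
    vl = max(defined_logicals - vh - vm, 0)
    return {"VH": vh, "VM": vm, "VL": vl}

# LPAR entitled to 10.5 engines by weight, with 12 logical CPs defined.
print(vertical_polarity(entitlement=10.5, defined_logicals=12))
# -> {'VH': 10, 'VM': 1, 'VL': 1} under this simplified rule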
The IBM zPCR topology report is shown in Figure 12-10 on page 507, where:
LPARs are identified by row
IBM z17 Drawer/DCM/CHIP appears at the top lines
Topology report displays warning messages
LPARs Totals by Pool table is displayed at the bottom with support to filter by partition
Report is accessed from the Partition Detail Window
The latest versions of the extract programs are required:
– They are available at the IBM Support page.
Note: The Topology report in Figure 12-10 on page 507 is showing all active partitions.
Information about a specific partition can be obtained by clicking on the “Remove all”
button to the left of the Partition Totals by Pool table at the bottom right and then clicking
on the “View” check-box for a specific partition.
The IBM zPCR Topology Report is based on the new z/OS Data Gatherer functionality that is delivered
with APAR A62064; PTFs are available for z/OS 2.5 and z/OS 2.4. The z/OS data is in the SMF 70.1
record.
z/VM support is provided in version 7.3 and later, and APARs are available for z/VM 7.1
and 7.2.
Additionally, consider collecting the z/OS SMF 99 subtype 14 records for all LPARs in the IBM
z17. Each record has a single-LPAR scope, so records from all LPARs are needed to get the total picture.
LPAR topology may have a very significant impact on processor CPU efficiency. Remote
cache accesses may take hundreds of machine cycles. SMF 99.14 records are produced
every 5 minutes and capture drawer/DCM/chip location data for each logical CP.
In the Topology Reports, IBM zPCR reports measured data: it shows the “what-is” topology, not
“what-if” topology scenarios.
Note: To access the HMC Partition Resource Assignments, you must use the “Service” logon
ID or another user ID with the same authority.
At the HMC “Home” screen, click Systems Management under the task bar, and then select
your target system.
Under “Tasks” at the bottom right, click the Configuration (+) sign, and then click “View Partition
Resource Assignments”. The panel that is shown in Figure 12-11 is displayed:
Figure 12-11 HMC - View Partition Resource Assignments for the IBM z17
Use the Partition Resource Assignments task to view processor allocations to partitions in your
system. The active logical partitions are identified at the top of the table. The Node and Chip
numbers that are associated with each active logical partition are identified on the left. You can view
or hide sections of the Node and Chip assignments by using the Expand All and Collapse All options under the
“Actions” pull-down menu.
Partition:
– Displays the active logical partition and whether HiperDispatch is enabled (indicated by the image icon next to the logical
partition name entry).
Node:
– Displays the processor Node number in your system.
Chip:
– Displays the processor Chip number that is associated with the Node and lists the processor
types that are associated with each active logical partition. The Chip Collapse All icon displays
a summary view. The following physical processor types are identified:
• General processors: (G)
• Coupling facility processors: (C)
• Integrated Facilities for Linux (IFLs): (I)
• z Integrated Information Processors (zIIPs): (Z)
• Integrated Firmware Processors (IFPs): (F)
The physical processor types may have some of the following conditions:
Indicates the physical processor types are shared: (SH)
Indicates the physical processor is dedicated: (D)
Indicates the vertical polarity for the physical processor types (H / M / L)
As shown in Figure 12-13 on page 511, the subject IBM z16 has 165 CPs. The PU resources
are distributed according to Figure 12-12 on page 510 and are also shown in the
Estimated Distribution of RCPs Across Drawers graph in Figure 12-14 on page 512.
8. Your currently installed version of IBM zPCR must be uninstalled before installing IBM zPCR 9.7.1. This step is necessary to facilitate conversion to the latest IBM Java 17 Semeru 64-bit runtime environment that is included with IBM zPCR.
Figure 12-12 IBM z16 configurable PU distribution per Model (feature) and CPC drawer
Note: Screen Captures from Figures 12-12 to Figure 12-15 were taken from an IBM z16
server zPCR study. All the examples shown are valid and compatible with IBM z17.
Additionally, IBM zPCR version 9.6.4 and newer implements a new notice for partitions
that are approaching, within 10%, the maximum drawer size. This critical notice indicates that one or
more partitions are getting close to a drawer boundary. When that happens, capacity
growth by adding LCPs is very limited.
The new notice appears as a “Note” message in the Partition Detail Report. The “Note” and
the partition LCPs are shaded with the same violet color, as shown for partition IFL-01 in
Figure 12-12 above and for partition GP-02 in Figure 12-13 on page 511.
Figure 12-13 on page 511 shows an example of a Partition Detail Report, this time for an IBM
z16, 3931-A01, (Max200)/700 with 165 CPs (45 GPs, 44 zIIPs, 60 IFLs, and 16 ICFs) and six
active partitions (2 GP, 2 zIIP, 1 IFL, and 1 ICF). Resources are allocated as shown in the
Partition Identification and Partition Configuration fields.
As the IBM z16 has 165 CPs, the PU resources are distributed according to the Figure 12-12,
and are shown in the Estimated Distribution of RCPs Across Drawers graph in Figure 12-14
on page 512.
Important: Please pay special attention to the colors assigned to the Logical CPs column
and relate them to the “warnings” and “notes” at the bottom of the report. See Figure 12-13
on page 511.
Figure 12-13 Partition Detail Report example (Max200)/700 with 165 CPs
The Partition Detail Report in Figure 12-13 above highlights the partition GP-02 to indicate it
is within 10% of the maximum drawer size in the number of CPs. The GP-02 partition and the
“Note” at the bottom are shaded with the same violet color.
Figure 12-15 on page 512 expands the information about the highlighted
messages that are shown at the bottom of Figure 12-13.
On IBM z16 and IBM z17 models, the best practice is that a partition’s logical CP count should
not exceed the number of RCPs (real CPs) in the largest drawer. Partitions that exceed a
drawer boundary have special capacity considerations. For IBM z16, see Figure 12-12 on
page 510; for IBM z17, see Figure 12-16 on page 513.
IBM z17 continues the IBM z15 and IBM z16 NUMA9 design. IBM z17 has two clusters and
four DCMs per drawer (refer to Figure 12-16 for the number of configurable PUs per drawer).
Figure 12-16 IBM z17 configurable PU distribution per Model (feature) and CPC drawer
In the case where a single partition spans from one drawer into a second, the cross-drawer
penalty has increased on IBM z17. However, this is offset by more cores per drawer and
higher capacity than IBM z15, which allows more work to “fit” on a single drawer.
As discussed in 3.5.9, “Processor unit assignment” on page 115, and under “Memory
allocation” on page 117, PR/SM memory and logical processor allocation goal is to place all
logical partition resources on a single CPC drawer, if possible. There can be negative impacts
on a logical partition’s performance when CPs are allocated in different drawers.
IBM zPCR implements a warning notice when a logical partition’s number of defined logical CPs
is larger than the number of real CPs in a single drawer. When that scenario occurs,
it is advisable to contact IBM to review your configuration. See Figure 12-15 on page 512.
9. NUMA (Non-Uniform Memory Access) is a computer memory design that is used in multiprocessing, where the access time varies according to the memory location relative to the processor.
A.1 Overview
Each generation of the IBM Z processing chip is enhanced with new on-chip functions, such
as compression, sort, cryptography, and vector processing. The purpose-built accelerators
that provide these functions mean lower latency and higher throughput for specialized
operations. These accelerators work together with advanced chip design features such as
data prefetch, high capacity L1 and L2 caches, branch prediction, and other innovations.
The on-chip accelerators provide support for and enable compliance with security policies
because the data is not leaving the platform to be processed. The hardware, firmware, and
software are vertically integrated to seamlessly deliver this function to the applications.
In August 2021, IBM announced a new generation of IBM Z processor, Telum, with a new
Artificial Intelligence (AI) accelerator (see Figure A-1). This innovation brings incredible value
to the applications and workloads that are running on IBM Z.
With the new IBM Z Integrated Accelerator for AI, customers can benefit from the acceleration
of AI operations, such as fraud detection, customer behavior predictions, and streamlining of
supply chain operations, all in real time. Customers can derive the valuable insights from their
transactional data instantly.
The Integrated Accelerator for AI delivers AI inference in real time, at large scale and rate,
with no transaction left behind, all without needing to offload data from IBM Z to perform AI
inference.
The AI capability is applied directly into the running transaction, shifting the traditional
paradigm of applying AI to the transactions that were completed. This innovative technology
can be used for intelligent IT workloads placement algorithms, which contributes to the better
overall system performance. The accelerator is driven by new Neural Networks Processing
Assist (NNPA) instructions.
Figure A-2 shows the IBM z16 AI accelerator and its components: the data movers that
surround the compute arrays are composed of the Processor Tiles (PT), Processing
Elements (PE), and Special Function Processors (SFP).
Intelligent data movers and prefetchers are connected to the chip by way of a ring interface for
high-speed, low-latency, read/write cache operations (200+ GBps read/store bandwidth, and
600+GBps bandwidth between engines).
The Compute Arrays consist of 128 processor tiles with 8-way FP-16 FMA SIMD, which are
optimized for matrix multiplication and convolution, and 32 processor tiles with 8-way
FP-16/FP-32 SIMD, which are optimized for activation functions and complex functions.
On the IBM z16, the integrated AI accelerator delivered more than 6 TOPS (trillion operations
per second) per chip and over 192 TOPS in the 32 chip system (a fully configured IBM z16
with four CPC drawers).
The AI accelerator on the IBM z16 is shared by all cores on the chip. The firmware, which is
running on the cores and accelerator, orchestrates and synchronizes the execution on the
accelerator.
Acknowledging the diverse AI training frameworks, customers can train their models on
platforms of their choice, including IBM Z (on-premises and in hybrid cloud) and then, deploy
it efficiently on IBM Z in colocation with the transactional workloads. No other development
effort is needed to enable this strategy.
To allow this flexible “Train anywhere, Deploy on IBM Z” approach, IBM invests in the Open
Neural Network Exchange (ONNX) technology (see Figure A-3).
This standard format represents AI models, with which a data scientist can build and train a
model in the framework of choice without worrying about the downstream inference
implications. To enable deployment of ONNX models, IBM provides an ONNX model compiler
that is optimized for IBM Z. In addition, IBM is optimizing key open source AI and model
serving frameworks, such as PyTorch, TensorFlow, TensorFlow Serving, and Nvidia Triton
Inference Server for use on IBM Z.
IBM’s open source zDNN library provides common APIs for the functions that enable
conversions from the Tensor format to the accelerator-required format. Customers can run
zDNN under z/OS and Linux on IBM Z. A Deep Learning Compiler (DLC) for z/OS and Linux
also is available to compile ONNX deep learning models into shared libraries, which can be
integrated into C, C++, Java, or Python applications. In addition to leveraging the Integrated
Accelerator for AI, the compiled models take advantage of other hardware acceleration
capabilities, such as SIMD (Single Instruction Multiple Data).
Developed using Samsung 5nm technology, the new IBM Telum II processor features eight
high-performance cores running at 5.5GHz. Telum II will include a 40% increase in on-chip
cache capacity, with the virtual L3 and virtual L4 growing to 360MB and 2.88GB respectively.
The processor integrates a new data processing unit (DPU) specialized for IO acceleration
and the next generation of on-chip AI acceleration. These hardware enhancements are
designed to provide significant performance improvements for clients over previous
generations.
Infusing AI into enterprise transactions has become essential for many clients’ workloads. For
instance, AI-driven fraud detection solutions save clients millions of dollars annually. With the
introduction of the AI accelerator on the Telum processor, there has been active AI adoption
on the z16 platform with the implementation of unique use cases by clients belonging to a
wide variety of industries. Building on this success and to meet growing performance
demands, IBM has significantly enhanced the Integrated AI accelerator on the Telum II
processor.
The compute power of each accelerator is improved by 4x, reaching 24 trillion operations per
second (TOPS). However, TOPS alone do not tell the whole story: additional enhancements have
been made to the accelerator’s architectural design, plus optimizations to the AI ecosystem
that sits on top of the accelerator. Additionally, support for INT8 as a data type has been
added to enhance compute capacity and efficiency for applications where INT8 is preferred,
thereby enabling the use of newer models.
New NNPA instructions have also been incorporated to better support large language models
(LLMs) within the accelerator. They are designed to support an increasingly broader range of
AI models for a comprehensive analysis of both structured and textual data.
Figure: Telum II processor chip layout, showing eight cores, each paired with a 36 MB L2 cache, the data processing unit (DPU), and the on-chip AI accelerator
On Telum, the cores on each processor chip only had access to their local AI accelerator, and
in the event where two cores on the same chip issued an NNPA instruction at the same time,
access to the AI accelerator was time-sliced, resulting in a wait-time to get access to the AI
accelerator.
Telum II was designed so that a processor core can offload AI operations to any of the other
integrated AI accelerators in the 7 adjacent processor chips in the drawer. This architectural
design provides each core access to a much larger pool of AI compute resources, and
reduces the contention for an Integrated Accelerator for AI. This represents a significant
enhancement to the on-processor AI capabilities.
On the IBM z17, the integrated AI accelerator delivers more than 24 TOPS (trillion
operations per second) per chip, over 192 TOPS per drawer, and 768 TOPS in the 32-chip
system (a fully configured IBM z17 with four CPC drawers).
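Using the figures quoted above, the aggregate AI compute scales roughly as follows. This is a simple back-of-the-envelope calculation; the eight-chips-per-drawer figure is inferred from the quoted per-chip and per-drawer numbers (and from the four-DCM, two-chips-per-DCM drawer design described earlier):

# Back-of-the-envelope scaling of the quoted Integrated Accelerator for AI figures.
tops_per_chip = 24          # Telum II on-chip AI accelerator, as quoted above
chips_per_drawer = 8        # inferred: 4 DCMs x 2 chips per IBM z17 CPC drawer
drawers_fully_configured = 4

print(tops_per_chip * chips_per_drawer)                             # ~192 TOPS per drawer
print(tops_per_chip * chips_per_drawer * drawers_fully_configured)  # ~768 TOPS for 32 chips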
The Spyre card has 32 accelerator cores, which share a similar architecture with the Telum II AI
accelerator. Multiple Spyre Accelerator cards can be clustered together; for example, a cluster
of eight cards enables workloads to transparently leverage 256 accelerator cores. Because the
solution is scalable by card and drawer, clients can take advantage of this efficient solution for
next-generation AI workloads.
The IBM Spyre accelerator assembly is a PCIe adapter that is plugged into the I/O drawer. Up to 48
features are offered, with a maximum of eight per I/O drawer. The Spyre accelerator is an
enterprise-grade AI chip that enables generative AI capabilities on the IBM z17.
The adapter has 1 TB of memory with 1.6 TB per second aggregate memory bandwidth.
Figure 12-17 shows the Spyre adapter and the IBM Spyre chip.
Telum II and the Spyre Accelerator card together support a broader set of models, enabling
composite AI use cases. Composite AI refers to leveraging the strengths of multiple AI
models, traditional and foundational, to improve the overall accuracy of a prediction, as
compared to just using one. Composite AI can be used in insurance claim fraud detection,
with a traditional neural network model being used to analyze and make a prediction based
on structured data, while a Large Language Model (LLM) could then be used to extract
information from the unstructured textual data that is associated with a claim. This composite AI
method can be applied to advance the detection of suspicious financial activities, support
compliance with regulatory requirements, and mitigate the risk of financial crimes.
Two Spyre Support Appliances will be necessary to ensure that IBM Spyre Accelerator cards
provide the expected IBM Z level of enterprise grade reliability, availability, and security. A pair
of Spyre Support Appliances can manage multiple Spyre Accelerator cards on the CEC. An
organization may consider provisioning more Spyre Support Appliances if there are isolation
requirements for development, test, and production environments.
Spyre Support Appliances will need to run in dedicated Secure Service Container type
LPARs. The IBM Secure Service Container is a container technology through which you can
quickly and securely deploy firmware and software appliances on the server. Unlike most
other types of partitions, a Secure Service Container partition contains its own embedded
operating system, security mechanisms, and other features that are specifically designed for
simplifying the installation of appliances, and for securely hosting them.
A Secure Service Container partition is a specialized container for installing and running
specific software appliances. An appliance is an integration of operating system, middleware,
and software components that works autonomously and provides core services and
infrastructure that focus on consumability and security.
In addition to the Spyre Support Appliances, an additional appliance will be needed, the
Appliance Control Center, a new component responsible for managing all appliances under
an HMC, even those that span CECs. The Appliance Control Center will be responsible for
appliance management tasks such as performing updates to the Spyre Support Appliance
and pulling code dumps. The Appliance Control Center will need to run in a dedicated Secure
Service Container type LPAR. Only one Appliance Control Center is needed as long as there
is network connectivity to where the Spyre Support Appliances will be deployed.
The table below summarizes the LPAR hardware requirements for the Spyre Support
Appliances and the Appliance Control Center. Note that these values could change, and are
provided to assist in the planning process for Spyre Accelerator Cards.
In addition to assigning the necessary resources to the LPARs, the LPARs will also need to
have network connections defined.
I/O Configuration
One physical function and one virtual function will need to be assigned to each Spyre Support
Appliance LPAR. In addition, a virtual function will need to be assigned to the z/OS or Linux
LPAR which will host the workload which uses Spyre Accelerator Cards.
Spyre Accelerator cards can be added to the system in sets of eight, from one to six sets.
This means that a minimum of 8 Spyre Accelerator cards can be installed on a z17 system. A
maximum of 8 Spyre Accelerator cards can be placed in a single I/O drawer. A z17 system
supports a maximum of 48 Spyre Accelerator cards in total, which can be installed in as
few as six I/O drawers.
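A quick sketch of the ordering and placement arithmetic described above (sets of eight cards, one to six sets, at most eight cards per I/O drawer). The function name and structure are illustrative only:

# Placement arithmetic for Spyre Accelerator cards as described above.
import math

CARDS_PER_SET = 8
MAX_SETS = 6
MAX_CARDS_PER_IO_DRAWER = 8

def spyre_plan(sets: int):
    if not 1 <= sets <= MAX_SETS:
        raise ValueError("Spyre cards are ordered in one to six sets of eight")
    cards = sets * CARDS_PER_SET
    min_io_drawers = math.ceil(cards / MAX_CARDS_PER_IO_DRAWER)
    return cards, min_io_drawers

print(spyre_plan(1))   # (8, 1): minimum configuration
print(spyre_plan(6))   # (48, 6): maximum of 48 cards in as few as six I/O drawers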
IBM Spyre Accelerator Cards will need to be placed in certain slots due to power and cooling
impacts, with the goal of spreading across the fewest I/O drawers. IBM will provide guidance
regarding which slots to use for Spyre Accelerator Cards.
When ordering a z17, there is an option to select a feature code which will allow the
reservation of I/O capacity for a variable number of Spyre Accelerator Card sets, from one to
six sets. Note that this process is necessary for ensuring an adequate number of I/O drawers
and enough capacity in an I/O drawer(s) to accommodate the cards.
To assist with planning, IBM will provide guidance regarding the number of Spyre Accelerator
Cards needed to support certain use cases.
Clients need to refrain from applying hardware Miscellaneous Equipment
Specification (MES) orders that add any adapters to the I/O drawers until the Spyre Accelerator
cards become generally available. This restriction prevents interference with the I/O slots that are reserved for
Spyre Accelerator cards, because any unrelated I/O MES defaults to placement in the first available
I/O slots.
An optional PCIe feature that was available for IBM z14 servers, zEDC Express, addressed
customer requirements by providing hardware-based acceleration for data compression and
decompression. zEDC provided data compression with lower CPU consumption than the
software compression technology that was previously available on IBM Z servers.
Although the zEDC PCIe feature provided CPU savings by offloading the select compression
and decompression operations, it included the drawback of limited virtualization capabilities
(one zEDC PCIe feature can be shared across a maximum of 15 LPARs) and limited
bandwidth.
IBM z15 introduced an on-chip accelerator (implemented in the PU chip) for compression and
decompression operations, which is tied directly into the processor’s L3 cache. As a result, it
provides much higher bandwidth and removes the virtualization limitations of a PCIe feature.
The IBM z17 further addresses the growth of data compression requirements with the
integrated on-chip compression unit (implemented in processor Nest, one per PU chip) that
significantly increases compression throughput and speed compared to previous zEDC
deployments.
IBM z17 compression and decompression are both implemented in the Nest Accelerator Unit
(NXU, see Figure 3-12 on page 91) on each processor chip.
One NXU is available per processor chip, which is shared by all cores on the chip and
features the following benefits:
New concept of sharing and operating an accelerator function in the nest
Supports DEFLATE-compliant compression and decompression and GZIP CRC/ZLIB
Adler
Low latency
High bandwidth
Problem state execution
Hardware and firmware interlocks to ensure system responsiveness
Designed instruction
Run in millicode
The IBM z17 Integrated Accelerator for zEDC has an Improved Gasp hashing algorithm
which increases the compression ratio.
1. Virtual L3 (shared victim) cache for IBM z17. For more information, see Chapter 2, “Central processor complex hardware components” on page 19.
The IFAPRDxx chargeable feature is still required for authorized services. It is not required for problem state
services, such as zlib use by Java.
The current zlib and the new zlib function are available for IBM z15, z16, and z17
hardware. The function works with or without the IBM z15 / IBM z16 / IBM z17 z/OS PTFs on IBM z14
and earlier servers.
A specific fix category that is named IBM.Function.zEDC identifies the fixes that enable or use
the zEDC and On-Chip Compression function.
z/OS guests that run under z/VM V7R3 with PTFs and later can use the zEDC Express
feature and IBM z17 On-Chip Compression.
For more information about how to implement and use the IBM Z compression features, see
Reduce Storage Occupancy and Increase Operations Efficiency with IBM zEnterprise Data
Compression, SG24-8259.
zBNA can run on Microsoft Windows or Apple MacOS. It provides graphical and text reports,
including Gantt charts, and support for alternative processors.
zBNA can be used to analyze client-provided System Management Facilities (SMF) records
to identify jobs and data sets that are candidates for zEDC and IBM z17 On-Chip
Compression across a specified time window (often a batch window).
Therefore, zBNA can help you estimate the use of On-Chip Compression features and help
identify savings.
IBM z17 On-Chip Compression is available to open source applications by way of zlib.
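Because the acceleration is surfaced through standard zlib (DEFLATE) interfaces, applications typically need no code changes. The following generic Python sketch shows ordinary zlib usage; whether the DEFLATE work is dispatched to the on-chip Nest Accelerator Unit depends on the system zlib build and platform configuration on z/OS or Linux on IBM Z, not on the application code:

# Generic zlib (DEFLATE) usage. On IBM z15 and later, a suitably enabled system
# zlib can transparently offload this work to the on-chip compression accelerator;
# the application code is identical either way.
import zlib

data = b"transaction record " * 10_000

compressed = zlib.compress(data, level=6)       # DEFLATE-compliant compression
restored = zlib.decompress(compressed)

assert restored == data
print(f"original {len(data)} bytes -> compressed {len(compressed)} bytes")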
Although these are two different offerings, Tailored Fit Pricing for Software is a prerequisite for
Flexible Capacity for Cyber Resiliency (or “Flexible Capacity” for short).
Note: For more information about Tailored Fit Pricing, see this IBM Z Resources web page.
The information that is included in this section about TFP is taken in part from the IBM
publication Tailored Fit Pricing - A sensible pricing model, 75034875USEN-01.
IBM Z platform users were accustomed to paying for IBM Z software and hardware at the peak capacity
that is required, and to managing their software costs by capping machine usage. Traditionally,
they capped machine usage by using one or more of the following methods:
Running batch workloads during off-shift hours
Reducing the machine resources that are accessed by development and test workloads
Not introducing new workloads or applications onto the platform, even when it was the
most logical technology for such workloads
Investing in tools and resources to manage subcapacity capping
IBM introduced Tailored Fit Pricing (originally for IBM Z software) as a simpler pricing model
to allow Z customers to better use their platform investments as their business demands, and
in a more cost competitive way. Building on the success of this program, a variable IBM Z
hardware model was introduced to extend the value of Tailored Fit Pricing for IBM Z.
With Tailored Fit Pricing models now available across hardware and software, customers can
gain more flexibility and control with pricing solutions that can be tailored for business
demands, which helps to balance costs while deriving even more value from hybrid cloud.
IBM’s Tailored Fit Pricing model can address the following key concerns:
Complexity of the subcapacity pricing model, which leads to IBM Z being managed as a cost
center
Difficulty in establishing the cost of new workload deployment and the effect on cost of
existing workloads
Investment in tools and resources to manage subcapacity that can inflate costs
Lack of development and test resources
Purchasing hardware for peak capacity to handle short term spikes
The software and hardware pricing models provide customers an opportunity to grow and
more fully use their IBM Z investment for new opportunities.
IBM originally introduced the DevTest Solution and the New Application Solution in 2017. These
solutions evolved further when, in May 2019, IBM announced two other significant solutions:
Enterprise Consumption (since renamed to the Software Consumption Solution) and the
Enterprise Capacity Solution.
In May 2021, IBM announced a new hardware solution, called Hardware Consumption
Solution. All of these options were gathered into a new family of IBM Z pricing called Tailored
Fit Pricing (TFP).
The Software Consumption Solution and the Hardware Consumption Solution are discussed
next.
Customers typically transition onto TFP Consumption for the following reasons:
It is a software pricing model that is better suited to today’s workload profiles (which typically
are increasingly spiky). Also, it is a pricing model that is better suited to future
uses; for example, inclusion in hybrid cloud architectures.
A customer on TFP Consumption can confidently remove all forms of capping and expose
all their workloads to all of the hardware infrastructure they own.
Any form of growth (from a new workload to a 30-year-old COBOL application that is used
more often) qualifies for a much-improved Price per MSU.
One key concept in the Software Consumption Model is the customer baseline. The IBM team
works with the customer to review their previous 12 months of production MSU consumption
and billing to determine an effective price per MSU and to establish a predictable price for all
growth at a discounted rate (see Figure C-1).
With the Software Consumption model, the concepts of peaks and white space, which previously
were integral to the subcapacity-based model, are not used. Therefore, customers are free to
remove capping and can use all of their owned capacity without worrying about penalties for
peaking or spiking.
Although a customer does commit to an MSU baseline, if the MSUs for a specific year are not
fully used, the customer can carry over any unused MSUs for use in the following year for the
life of the contract. TFP consumption encourages and rewards growth; that is, MSUs that are
processed above the baseline are charged at an aggressive Growth Price per MSU.
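A simplified sketch of the consumption mechanics described above: a committed baseline of MSUs at an agreed effective price, unused baseline MSUs carried over to the following year, and MSUs above the baseline charged at the discounted growth price. The prices and the carry-over handling here are placeholders; the actual terms are defined in the customer's contract with IBM:

# Simplified sketch of the Software Consumption model mechanics described above.
# All prices are placeholders; real terms are defined in the customer contract.
def yearly_charge(consumed_msus: float, baseline_msus: float,
                  baseline_price: float, growth_price: float,
                  carried_over_msus: float = 0.0):
    """Returns (charge, msus_carried_to_next_year)."""
    available = baseline_msus + carried_over_msus
    if consumed_msus <= available:
        # The baseline is committed; unused MSUs carry over to the following year.
        return baseline_msus * baseline_price, available - consumed_msus
    growth = consumed_msus - available
    return baseline_msus * baseline_price + growth * growth_price, 0.0

# Year 1: under the baseline, so 500 MSUs carry over. Year 2: growth above the
# baseline plus carry-over is billed at the (lower) growth price per MSU.
charge1, carry = yearly_charge(9_500, 10_000, baseline_price=100.0, growth_price=40.0)
charge2, _ = yearly_charge(11_000, 10_000, baseline_price=100.0, growth_price=40.0,
                           carried_over_msus=carry)
print(charge1, carry, charge2)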
All workload processing can benefit by running with no capping and having all owned
infrastructure available. Batch processing can also take advantage and reduce batch
windows dramatically.
Without capping in place, customers can expect jobs to finish faster, yet at the same cost.
Online processing can process more transactions simultaneously, which improves response
times. This result is a function of the new billing approach, which is based on the actual
amount of work that is performed, rather than the peak capacity consumption that is reached.
To provide improved economics for growth, TFP Consumption customers pay preferential
pricing on the MSUs that are used above their baseline, regardless of whether that growth
came from existing or new workloads. No other approval, qualification, or processing is
required to use the growth pricing rate.
With coverage of both MLC and capacity-based IPLA products, the Software Consumption
Model offers a single and comprehensive software solution to IBM Z customers.
To meet the demands of modern workloads, IBM Z hardware can now include, in addition to
the base capacity, a subscription-based corridor of pay-for-use capacity. This always-on
corridor of consumption-priced capacity helps alleviate the effect of short, unpredictable
spikes in workload that are becoming more common in today’s digital world.
The usage charges have a granularity of one hour and are based on the MSUs that are used, as
measured by the Sub-Capacity Reporting Tool (SCRT), not on full engine capacity.
Tailored Fit Pricing for IBM Z hardware enables customers to be ready for the unknown and
unexpected. The presence of the always-on capacity contributes to better efficiency, reduced
overhead, and shorter response times. This offering is available to IBM Z customers, who are
using Tailored Fit Pricing for IBM Z software, for machines starting with the IBM z15, for z/OS
general-purpose CP capacity. Starting from the z17 announced in 2025, Tailored Fit Pricing
capacity will also become available for Integrated Facility for Linux (IFL) and IBM Z Integrated
Information Processor (zIIP) specialty engines.
One of the most important facts about TFP Hardware is that a single system or central
processing complex (CPC) is always considered. It is on this level that usage is measured.
The blue bars that are shown in Figure C-2 represent individual 15-minute intervals, and the
usage is measured in each one.
In Figure C-2, the dark green line shows the average use of the machine over the entire
period that is measured (normally a month). The black line shows the Rolling 4-Hour Average
(R4HA); it is displayed for information purposes only because the R4HA is not used for
TFP-Hardware calculations.
The red line is the customer owned capacity (or Base capacity). The light green line shows
the activated capacity of the machine, as reported by Sub-Capacity Reporting Tool (SCRT).
As shown in Figure C-2, some of the blue bars reach above the red line into the corridor
between the red and light green lines, which represents the TFP-Hardware capacity.
Therefore, use over the customer’s owned (purchased) capacity is measured, and a usage
charge is incurred. If no use is measured over the customer’s owned (purchased) capacity, no
usage charge is incurred.
In addition to the Usage charge, which might not be relevant, a Subscription charge also can
be assessed. The Subscription charge is a flat, per system, per month payment that is based
on the amount of TFP-Hardware capacity that is provided on the system. The Subscription
charge is invoiced prepaid for the contract term, or postpaid monthly.
Because only entire engines can be activated, it is always full engine sizes that are based on
IBM Large System Performance Reference (LSPR) capacity levels. The Subscription charge
covers the value that the extra activated capacity brings, even if no measured usage exists
(see Figure C-3).
The green bar in Figure C-3 represents the capacity that the customer owns (the so-called
Base capacity). The transparent bars represent activated TFP-Hardware corridor.
The blue bars show the measured usage within the 15-minute intervals that are greater than
the customer-owned capacity. The yellow bar is the highest measured TFP-Hardware use
within the defined hour.
The spikes that are counted for invoicing are the yellow bars (the highest within each hour).
Each yellow bar holds a measured millions of service units (MSU) value. If the total MSU use of
the yellow bars is 250 MSU, the customer receives an invoice for 250 times the hourly usage
charge per MSU.
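The measurement logic described above can be sketched as follows: within each hour, only the highest 15-minute overage above the owned (Base) capacity counts, and those hourly peaks are summed and multiplied by the usage charge per MSU. The price and sample values here are placeholders:

# Sketch of the TFP-Hardware usage measurement described above: per hour, only the
# highest 15-minute overage above the owned capacity is counted (the "yellow bar").
# The hourly price is a placeholder; actual rates are contract terms.
def tfp_hw_usage_charge(intervals_msu, owned_capacity_msu, price_per_msu):
    """intervals_msu: 15-minute utilization samples in MSU, in time order."""
    billable_msus = 0.0
    for hour_start in range(0, len(intervals_msu), 4):      # 4 samples per hour
        hour = intervals_msu[hour_start:hour_start + 4]
        overages = [max(sample - owned_capacity_msu, 0) for sample in hour]
        billable_msus += max(overages)                       # highest spike in the hour
    return billable_msus, billable_msus * price_per_msu

samples = [780, 810, 790, 830,    # hour 1: peak overage 30 MSU above an 800 MSU base
           795, 800, 850, 820]    # hour 2: peak overage 50 MSU
print(tfp_hw_usage_charge(samples, owned_capacity_msu=800, price_per_msu=10.0))
# -> (80.0, 800.0) with these placeholder numbers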
A higher number of active processor engines has a positive n-way effect (higher
parallelization) and delivers more cache, less contention, and less overhead
Optimized workload handling under customer-defined use thresholds
Improved insight for future capacity planning
Improved balance between physical and logical Central Processors (CPs)
Reduced Processor Resource/Systems Manager (PR/SM) logical partition (LPAR)
management and less overhead
C.4 Conclusion
The combination of TFP for the hardware and software is a powerful one for our customers to
maximize their investment in IBM Z. For eligible customers, it allows them to use their
hardware for their general capacity requirements with a consumption-based corridor on top.
When this TFP for hardware is combined with the TFP-SW solutions, it allows customers to
unleash the full power of their IBM Z machine.
Flexible Capacity (Flex Cap) is designed to provide increased flexibility and control to shift
workloads between participating IBM z16 and IBM z17 machines at different locations and
keep them there for up to 12 months. This capability helps organizations to improve their resiliency and to continue to
comply with evolving regulatory mandates by running a real failover scenario for workloads that
demonstrates business continuity.
By using IBM GDPS, scripts can be run to fully automate Flexible Capacity for Cyber
Resiliency workload shifts between participating IBM z16 and z17 machines at different
locations.
The following Flexible Capacity features were introduced with IBM z16:
Flexible Capacity for Cyber Resiliency Enterprise Edition
Allows active capacity flexibility for all engine types to allow reallocating workloads
(including production) for a maximum period of 12 months and a maximum of 12 events
(activations or deactivations) per contract year to facilitate intra- and inter-site
workload/role swaps. A migration period of 24 hours is permitted when the flexible
capacity record is activated on both machines.
Flexible Capacity for Cyber Resiliency Limited Term
Allows active capacity flexibility for all engine types to allow reallocating workloads
(including production) for a maximum period of 30 days to comply with regulatory requirements,
for pro-active DR, and for facility maintenance, and for 90 days in the case of a real DR. A maximum of 4
events (activations or deactivations) per contract year is allowed, but only across different data
centers (inter-site). A migration period of 24 hours is permitted when the flexible capacity
record is activated on both machines.
With IBM z17 the Flexible Capacity capabilities were extended with the following features:
Flexible Capacity Infrastructure Testing
Enabling clients to perform infrastructure testing only, with a copy of their workload, within
an isolated (network) environment; for example, testing whether processes, connections,
and automation work (SAN, network, distributed systems, third-party license keys,
lights-on, GDPS scenarios, and so on). This testing can be done for a maximum of 10 days, with a
minimum of 72 hours between tests. This feature can be ordered in increments of 1 per
machine, with a maximum of 10.
Additional Flexible Capacity Events
Enabling clients to optionally acquire additional events (activations or deactivations)
beyond the standard 4 or 12 per contract year that Flexible Capacity for Cyber Resiliency
offers by default. This can be achieved by purchasing prepaid features. The prepaid
feature is available for the lifespan of the Flex Cap record.
Overrun features (for Flex Cap, Tailored Fit Pricing HW, and Test and Stress Testing)
These are billing features that will be used in case a client:
– exceeds the migration time of 24 hours (Flex Cap)
– has more capacity active after the migration period than their owned or contracted
capacity (Flex Cap / TFP HW / TST)
– runs an infrastructure test for longer than 10 days (Flex Cap)
– consumes more Stress Test days than contracted (TST)
– is running TST during the migration period of Flex Cap
Note: TST (Test and Stress Testing) capacity must be deactivated in case the TST Logical
Partitions (LPARs) are being moved to a different machine as part of the migrations.
In the first step of the setup process (see Figure C-4), the active capacity of the participating
IBM z16 and IBM z17 machines is changed to a base machine plus the temporary entitlement
record (TER) up to the High Water Mark (HWM) of the machine. The base capacity is defined
by the customer. The machines’ HWM remains unchanged.
In the next step (see Figure C-5), the now unassigned capacity is restored with a new Flexible
Capacity record. Another Flexible Capacity record is installed on System B to increase the
capacity to the amount of MIPS that the customer licensed.
The IBM Flex Capacity record on System A then is activated until the machine’s HWM. On
System B, the Flexible Capacity record is installed, but not activated. The setup process is
now complete (see Figure C-6).
After the Flexible Capacity record is active on both sites, a time limit of 24 hours begins in
which workloads can be transferred from Site A to Site B without leading to more charges.
During this time window, Flexible Capacity can remain active on both sites, as shown in
Figure C-7
After 24 hours, the Flexible Capacity in Site A must be deactivated and System A reduced to
base capacity (see Figure C-8).
Flexible Capacity from several systems can be consolidated to single systems in Site B or
vice-versa (see Figure C-9).
Partial activations and deactivations of the Flexible Capacity record can be done.
However, the total capacity that is active on all participating systems after the migration period
cannot exceed the total capacity owned by the client on those participating systems.
Flexible Capacity is available for all engine types; exceeding the purchased capacity incurs
charges. Monitoring to ensure compliance is done by way of Call Home data that IBM
receives periodically. The new Flexible Capacity Overrun features are used to invoice the
additional costs that are associated with the customer overrunning the agreed terms and conditions:
Migration period longer than 24 hours
More capacity active than the customer owned capacity after the migration period
FlexCap for Infrastructure testing is active longer than 10 days
The Flexible Capacity Transfer record always is considered to be the first record that was
activated, regardless of the order in which temporary records or TFP-Hardware were
activated.
Figure C-10 shows an example of an active Sysplex across Site A and Site B.
System A has a Base capacity of 401, and the Flexible Capacity record is activated up to a
710. On top of the active Flexible Capacity sits TFP-Hardware capacity of two extra CPs, up to a
maximum of 712.
Now, the data-center operator decides to perform maintenance on System A and activates
the Flexible Capacity and TFP-HW capacity on System B. After migration of the workloads,
the active Flexible Capacity and TFP-HW capacity on System A needs to be deactivated. This
operation needs to be completed within the 24 hours of the migration period.
After the migration period System A has deactivated all of its Flexible Capacity and TFP for
HW capacity and is left with only the base capacity of 401. On System B all Flexible Capacity
is active and on top of the Flexible Capacity record sits the TFP Hardware capacity.
System B now has the entire Flexible Capacity record activated and shows an active capacity
of 710. The TFP-Hardware capacity again sits on top of the active capacity and now adds
another two CPs, up to the 712.
This shows that the presence or activation of TFP-Hardware does not impact the amount of
capacity that can be activated by a Flexible Capacity Transfer record.
The TFP-Hardware capacity always “floats on top” of any other activated capacity for
TFP-Hardware usage charging. Therefore, no double charging can occur.
FC 9933 and 0376 must be ordered on each machine that is participating in Flexible Capacity
for Cyber Resiliency.
FC 0824 (10 days test) must be purchased on each machine where Infrastructure Testing is
required. This feature can be ordered at increments of 1 per machine, with a maximum of 10,
and requires FC 9933 and FC 0376.
A new Capacity on-Demand contract attachment also must be signed by the customer.
Cross machine (de)activation: Enterprise Edition: Inter- and intra-site workload moves can be
done, regardless of distance, mirroring, or coupling technology.
Entitlement: The owner of the machine holds a title to the physical hardware.
The capacity of that machine is enabled and controlled by way of
the LIC of the machine, which is licensed, not sold.
Overlap period: A 24-hour period in which the temporary record can be active on
both systems.
Activation period: Keep the flexible capacity record active on your alternative site for
up to 12 months.
License transfer: LIC is licensed only to one serial-numbered machine, and its
transfer to another machine is not permitted (but it can be carried
forward in the case of a generation upgrade).
License expiration: The record that is associated with the Flexible Capacity license can be
ordered for a maximum period of 5 years. Up until Withdrawal
from Marketing, it can be extended, but always to a maximum of 5
years in total.
TFP for software: The offering requires TFP for software; CMP is grandfathered in.
Microcode only: IBM Z Flexible Capacity for Cyber Resiliency is microcode only.
Additional memory, I/O cards, drawers, and other infrastructure-related
components are not part of this solution.
Call home: The customer agrees to use Call Home data to monitor capacity
usage.
Charges for capacity exceeding the temp record: Capacity that is used beyond the purchased capacity is charged at
previously defined overrun prices.
Contracts and features:
– IBM Z Flexible Capacity for Cyber Resiliency: LIC is licensed to only one specific
serial-numbered machine, and its transfer to another machine is not permitted. The offering
requires TFP for software. The LIC license expires up to 5 years past WFM, depending on the
renewal date. An LIC license can be carried forward on the condition that there is an announced
upgrade path to the new machine and the contract is not expired.
– IBM Z Flexible Capacity for Infrastructure Testing: A supplement to the Flexible Capacity
attachment is needed.
– Capacity Back Up: A Capacity Back Up Agreement is required. CBU capacity is provided in
annual increments, and CBU tests are provided in individual increments.
The MTP connectors of the zHyperLink and ICA SR connections feature two rows of 12 fibers and are interchangeable.
The electrical Ethernet cable for Open Systems Adapter (OSA) connectivity is connected through an RJ45 jack.
The attributes of the channel options that are supported on IBM z17 are listed in Table D-1.
Channel feature | Feature code | Link data rate | Cable type | Maximum unrepeated distance | Availability
zHyperLink Express2.0 | 0351 | 8 GBps | OM3, OM4 | See Table D-2 on page 550 | New build
FICON Express32-4P SX | 0388 | 8, 16, or 32 Gbps | OM1, OM2, OM3, OM4 | See Table D-3 on page 550 | New build
FICON Express32S SX | 0462 | 8, 16, or 32 Gbps | OM1, OM2, OM3, or OM4 | See Table D-3 on page 550 | Carry forward
Network Express, Open Systems Adapter (OSA), and Remote Direct Memory Access over Converged Ethernet (RoCE)
OSA-Express7S 1.2 10GbE SR | 0457 | 10 Gbps | MM 62.5 µm, MM 50 µm | OM1: 33 m (108 feet), OM2: 82 m (269 feet), OM3: 300 m (984 feet), OM4: 400 m (1312 feet) | New build, Carry forward
OSA-Express7S 1.2 GbE SX | 0455 | 1.25 Gbps | MM 62.5 µm, MM 50 µm | OM1: 275 m (902 feet), OM2: 550 m (1804 feet) | New build, Carry forward
OSA-Express7S GbE SX | 0443 | 1.25 Gbps | MM 62.5 µm, MM 50 µm | OM1: 275 m (902 feet), OM2: 550 m (1804 feet) | Carry forward from z15 only
OSA-Express7S 1000BASE-T | 0446 | 1000 Mbps | Cat 5, Cat 6 unshielded twisted pair (UTP) | 100 m (328 feet) | Carry forward from z15 only
Parallel Sysplex
ICA SR2.0 | 0216 | 8 GBps | OM3, OM4 | See Table D-2 on page 550 | New build
a. The link data rate does not represent the performance of the link. The performance depends on many factors, including latency through the adapters, cable lengths, and the type of workload.
b. Where applicable, the minimum fiber bandwidth distance in MHz-km for multi-mode fiber optic links is included in parentheses.
c. For 32 Gbps links, the point-to-point distance (to another switch, director, DWDM equipment, or another FICON Express32S or FICON Express32-4P) is limited to 5 km (3.1 miles).
The unrepeated distances for different multimode (MM) fiber optic types for zHyperLink
Express and ICA SR are listed in Table D-2 on page 550.
The maximum unrepeated distances for FICON SX features are listed in Table D-3.
The common building blocks are shown and range from one to four frames, with various numbers of CPC drawers and PCIe+ I/O drawers.
For this feature, the configuration needs at least two frames (ZA).
Due to Z-First, PCHID numbers are now assigned to fixed I/O drawer locations rather than being associated with the logical plug sequence of the I/O drawers. As a result, a Z-First and a non-Z-First configuration can have the same PCHIDs for the same I/O drawers.
Appendix F. Sustainability
This appendix discusses sustainability aspects. It includes information on the following topics:
Sustainability improvements
Sustainability instrumentation
The VCL algorithm monitors power and thermal sensors within each processor core to detect when voltage-level margins are approached. In rare instances, this can result in processor pipeline throttling; the algorithm then adjusts the voltage level to alleviate the condition.
This instrumentation enables performance staff to have informed discussions with business stakeholders, such as their organization's Chief Sustainability Officer.
For machines with multiple LPARs, a comprehensive breakdown of power usage requires collecting data from each LPAR and aggregating it. (The consumption of individual LPARs can be subtracted from the machine's "partition power" consumption to derive the power usage of those LPARs that are not enabled for data collection, such as Internal Coupling Facility LPARs.)
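The following Python fragment is a minimal sketch of this apportionment arithmetic, assuming the per-LPAR and machine-level figures for one interval have already been extracted (for example, from SMF or HMC data); the LPAR names and wattage values are illustrative only and are not taken from any real configuration:

# Power, in watts, reported for one interval by the LPARs that are
# enabled for data collection (illustrative values).
reporting_lpars = {
    "PRODA": 5200.0,
    "PRODB": 3100.0,
    "TEST1": 800.0,
}

# Machine-level "partition power" figure for the same interval.
total_partition_power = 9900.0

# Whatever is left is attributable to LPARs that do not report data
# themselves, such as Internal Coupling Facility LPARs.
unreported_lpar_power = total_partition_power - sum(reporting_lpars.values())

print(f"Reported by enabled LPARs: {sum(reporting_lpars.values()):.0f} W")
print(f"Remaining (non-reporting) LPARs: {unreported_lpar_power:.0f} W")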
Similarly, new for z17 is the addition of System and Partition event monitors.
Events are triggered when power levels exceed customer-defined thresholds over a specified period of time.
The system monitor supports total system power, total partition power, infrastructure power,
and unassigned power.
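As a conceptual sketch only (not the actual firmware or HMC implementation), the following Python fragment shows one plausible interpretation of these threshold semantics, in which an event fires when readings stay above a customer-defined threshold for an entire monitoring period; the threshold, period, and sample values are assumptions for illustration:

from collections import deque

def make_power_monitor(threshold_watts: float, period_samples: int):
    """Return a checker that reports True once readings have stayed above
    the threshold for the whole rolling period (illustrative logic only)."""
    window = deque(maxlen=period_samples)

    def check(reading_watts: float) -> bool:
        window.append(reading_watts)
        return (len(window) == period_samples
                and all(r > threshold_watts for r in window))

    return check

# Example: flag an event if total system power stays above 25 kW
# for five consecutive samples.
monitor = make_power_monitor(threshold_watts=25_000, period_samples=5)
for sample in (24_900, 25_200, 25_400, 25_300, 25_600, 25_700):
    if monitor(sample):
        print(f"Power event triggered at {sample} W")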
F.4.1 z/OS
For z/OS, RMF support adds fields to the SMF 70 Subtype 1 CPU Control Section in four categories:
Total machine power
Partition power
Infrastructure power, for components (including infrastructure switches, SE/HMAs, and PDUs) that should not be accounted to individual partitions
Unassigned power, for unused I/O adapters and components that are not assigned to any partition (including standby components)
Total machine power is the sum of the other three categories.
Machine power and partition power are further broken down into CPU, memory, and I/O.
It is important to enable SMF 70-1 recording on all z/OS LPARs on a machine so that each LPAR can report its own power consumption.
RMF support does not explicitly provide workload-level reporting at the Service Class or Report Class level. However, you can pro-rate the LPAR's consumption, as reported in SMF 70 Subtype 1, by using SMF Type 72 Subtype 3 records.
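The following Python fragment is a hedged sketch of one way such pro-rating could be done, assuming the service class CPU times (from SMF 72-3) and the LPAR's CPU power (from SMF 70-1) have already been summarized for the same interval; the class names and figures are illustrative assumptions:

# LPAR CPU power for the interval, derived from SMF 70 Subtype 1 (watts).
lpar_cpu_power_watts = 4200.0

# CPU seconds consumed per service class in the same interval (SMF 72-3).
service_class_cpu_seconds = {
    "ONLINE": 1800.0,
    "BATCH": 900.0,
    "STC": 300.0,
}

# Apportion the LPAR's CPU power by each class's share of CPU time.
total_cpu_seconds = sum(service_class_cpu_seconds.values())
for sclass, seconds in service_class_cpu_seconds.items():
    share = seconds / total_cpu_seconds
    print(f"{sclass:<8} ~{share * lpar_cpu_power_watts:.0f} W "
          f"({share:.0%} of LPAR CPU power)")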
If the record level (SMF70SRL in the RMF Product Section) is X'93' or higher, the following fields are added to the CPU Control Section:
432 (X'1B0') SMF70_CPUPower: Accumulated microwatt readings taken for all CPUs of the LPAR during the interval. Divide by SMF70_PowerReadCount to retrieve the average power measurement for the interval.
440 (X'1B8') SMF70_StoragePower: Accumulated microwatt readings taken for storage of the LPAR during the interval. Divide by SMF70_PowerReadCount to retrieve the average power measurement for the interval.
448 (X'1C0') SMF70_IOPower: Accumulated microwatt readings for I/O of the LPAR during the interval. Divide by SMF70_PowerReadCount to retrieve the average power measurement for the interval.
456 (X'1C8') SMF70_CPCTotalPower: Accumulated microwatt readings for all electrical and mechanical components in the CPC. Divide by SMF70_PowerReadCount to retrieve the average power measurement for the interval.
480 (X'1E0') SMF70_PowerReadCount: Number of power readings for the LPAR during the interval (2-byte integer).
488 (X'1E8') SMF70_PowerPartitionName: The name of the LPAR to which the LPAR-specific power fields apply (8 EBCDIC characters).
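For example, converting one of the accumulated fields into an average wattage is a simple division. This Python helper is a minimal sketch, assuming the two values have already been pulled from the same SMF 70-1 record; the sample numbers are made up:

def average_power_watts(accumulated_microwatts: int, read_count: int) -> float:
    """Average power over the RMF interval, in watts.

    accumulated_microwatts: a field such as SMF70_CPUPower,
    SMF70_StoragePower, SMF70_IOPower, or SMF70_CPCTotalPower.
    read_count: the SMF70_PowerReadCount value for the same interval.
    """
    if read_count == 0:
        return 0.0
    return accumulated_microwatts / read_count / 1_000_000

# Example with made-up values: 12 readings that accumulate to
# 50,400,000,000 microwatts give an average of 4200 W.
print(f"{average_power_watts(50_400_000_000, 12):.1f} W")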
F.4.2 z/VM
The z/VM Performance Data Pump will include power usage for the z/VM host on z/VM 7.3 or 7.4. The z/VM Performance Data Pump Power Monitor Grafana dashboard will include:
LPAR-level CPU, memory, and I/O power information
Guest-level apportionment approximation details
CPC-level information when global performance data has been enabled on at least one LPAR
F.4.3 Linux on Z
Linux will obtain power consumption readings for the CEC and the LPAR in which a Linux image is running, and expose this information in raw (that is, unmodified) binary format through a file in sysfs.
In addition, a tool will be provided through the s390-tools package to display the information in a human-readable format.
The information to be provided includes LPAR-level CPU, memory, and I/O power information.
Guest-level apportionment will not be provided, but instructions will be provided on how to complete the calculation.
The information will also include CPC-level details when global performance data has been enabled on the respective LPAR.
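A minimal, hedged Python sketch of consuming such a raw sysfs attribute follows; the actual attribute path, record layout, and field meanings come from the Linux on Z support and the s390-tools documentation, so the path argument and the big-endian 64-bit layout used here are assumptions for illustration only:

import struct
import sys

def read_raw_power_words(sysfs_path: str) -> tuple[int, ...]:
    """Read a raw binary sysfs attribute and unpack it as big-endian
    64-bit unsigned integers (an assumed layout, for illustration)."""
    with open(sysfs_path, "rb") as f:
        raw = f.read()
    usable = len(raw) - (len(raw) % 8)   # ignore any trailing partial word
    return struct.unpack(f">{usable // 8}Q", raw[:usable])

if __name__ == "__main__":
    # Pass the sysfs attribute path documented for your distribution,
    # for example: python3 read_power.py /sys/firmware/<power-attribute>
    for index, value in enumerate(read_raw_power_words(sys.argv[1])):
        print(f"word {index}: {value}")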
Related publications
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this document. Note that some publications that are referenced in this list might be available in softcopy only:
IBM z16 Technical Introduction, SG24-8950
IBM Z Connectivity Handbook, SG24-5444
IBM z16 (3931) Configuration Setup, SG24-8960
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Other publications
The publication IBM Z 8561 Installation Manual for Physical Planning, GC28-7002, is also relevant as a further information source.
Online resources
The following online resources are available:
The IBM Resource Link for documentation and tools website:
http://www.ibm.com/servers/resourcelink
IBM Telum II Processor: The next-gen microprocessor for IBM Z and IBM LinuxONE:
https://www.ibm.com/blogs/systems/ibm-telum-processor-the-next-gen-microprocessor-for-ibm-z-and-ibm-linuxone/
Leveraging ONNX Models on IBM Z and LinuxONE:
https://community.ibm.com/community/user/ibmz-and-linuxone/blogs/andrew-sica/2021/10/29/leveraging-onnx-models-on-ibm-z-and-linuxone
Jump starting your experience with AI on IBM Z:
https://blog.share.org/Article/jump-starting-your-experience-with-ai-on-ibm-z
Back cover
SG24-8579-00
ISBN 0738460788