IBM DS8900F Architecture and Implementation
Peter Kimmel
Daniel Beukers
Jeff Cook
Bozhidar Feraliev
Jörg Klemm
Connie Riggins
Gauurav Sabharwal
IBM Redbooks
August 2022
SG24-8456-03
Note: Before using this information and the product it supports, read the information in “Notices” on
page xi.
This edition applies to DS8900F systems with IBM DS8000 Licensed Machine Code (LMC) 7.9.30 (bundle
version 89.30.xx.x), referred to as Release 9.3.
© Copyright International Business Machines Corporation 2020, 2022. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
6.1 DS8900F Management Console overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.1.1 Management Enclosure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
6.1.2 Management Console hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
6.1.3 Private and Management Ethernet networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
6.2 Management Console software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
6.2.1 DS Storage Management GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.2.2 Data Storage Command-Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.2.3 RESTful application programming interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.2.4 IBM Copy Services Manager interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
6.2.5 Updating the embedded IBM Copy Services Manager . . . . . . . . . . . . . . . . . . . . 176
6.2.6 Web User Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
6.2.7 IBM ESSNI server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.3 Management Console activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.3.1 Management Console planning tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
6.3.2 Planning for Licensed Internal Code upgrades . . . . . . . . . . . . . . . . . . . . . . . . . . 183
6.3.3 Time synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
6.3.4 Monitoring DS8900F with the Management Console . . . . . . . . . . . . . . . . . . . . . 184
6.3.5 Event notification through syslog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
6.3.6 Call Home and remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.4 Management Console network settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
6.4.1 Private networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
6.5 User management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
6.5.1 Password policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
6.5.2 Remote authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
6.5.3 Service Management Console User Management . . . . . . . . . . . . . . . . . . . . . . . 189
6.5.4 Service Management Console LDAP authentication . . . . . . . . . . . . . . . . . . . . . 195
6.6 Secondary Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
6.6.1 Management Console redundancy benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
10.1.10 Command structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
10.1.11 Using the DS CLI application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
10.1.12 Return codes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.1.13 User assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
10.2 I/O port configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
10.3 DS8900F storage configuration for Fixed-Block volumes . . . . . . . . . . . . . . . . . . . . . 358
10.3.1 Disk classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
10.3.2 Creating the arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
10.3.3 Creating the ranks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
10.3.4 Creating the extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
10.3.5 Creating the FB volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
10.3.6 Creating the volume groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 370
10.3.7 Creating host connections and clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 372
10.3.8 Mapping open system host disks to storage unit volumes . . . . . . . . . . . . . . . . 380
10.4 DS8900F storage configuration for the CKD volumes . . . . . . . . . . . . . . . . . . . . . . . 380
10.4.1 Creating the arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
10.4.2 Creating the ranks and extent pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
10.4.3 Logical control unit creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 382
10.4.4 Creating the CKD volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
10.4.5 Resource groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
10.4.6 IBM Easy Tier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
10.5 Metrics with DS CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
10.5.1 Offload performance data and other parameters . . . . . . . . . . . . . . . . . . . . . . . 397
10.6 Private network security commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 398
10.7 Copy Services commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
10.8 Earlier DS CLI commands and scripts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
10.9 For more information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
12.3.1 SNMP preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
12.3.2 SNMP configuration with the HMC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
12.3.3 SNMP configuration with the DS CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
12.4 Introducing remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
12.5 IBM policies for remote support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
12.6 Remote support advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
12.7 Remote support and Call Home . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
12.7.1 Call Home and heartbeat: Outbound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
12.7.2 Data offload: Outbound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
12.7.3 Outbound connection types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
12.8 Remote Support Access (inbound) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
12.8.1 Assist On-site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
12.8.2 DS8900F-embedded AOS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
12.8.3 IBM Remote Support Center for DS8900F . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
12.8.4 Support access management through the DS CLI and DS GUI . . . . . . . . . . . . 441
12.9 Audit logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
12.10 Using IBM Storage Insights. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
12.10.1 IBM Storage Insights. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
12.10.2 Getting started with IBM Storage Insights. . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
12.10.3 Case interaction with IBM Storage Insights . . . . . . . . . . . . . . . . . . . . . . . . . . 449
12.10.4 IBM Storage Insights: Alert Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
12.11 IBM Call Home Connect Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 457
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
AIX® IBM Research® PowerPC®
Db2® IBM Security® PowerVM®
DS8000® IBM Services® RACF®
Easy Tier® IBM Spectrum® Redbooks®
Enterprise Storage Server® IBM Z® Redbooks (logo) ®
FICON® IBM z13® WebSphere®
FlashCopy® IBM z14® z/Architecture®
GDPS® IBM z16™ z/OS®
Guardium® Parallel Sysplex® z/VM®
HyperSwap® POWER® z13®
IBM® Power Architecture® z15™
IBM Cloud® POWER8® z16™
IBM FlashSystem® POWER9™ zEnterprise®
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Red Hat and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the
United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware, VMware vSphere, and the VMware logo are registered trademarks or trademarks of VMware, Inc. or
its subsidiaries in the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication describes the concepts, architecture, and implementation
of the IBM DS8900F family. The book provides reference information to assist readers who
need to plan for, install, and configure the DS8900F systems. This edition applies to DS8900F
systems with IBM DS8000® Licensed Machine Code (LMC) 7.9.30 (bundle version
89.30.xx.x), referred to as Release 9.3.
The DS8900F systems are exclusively all-flash, and they are offered in three classes:
DS8980F: Analytic Class
The DS8980F Analytic Class offers the best performance for organizations that want to
expand their workload possibilities to artificial intelligence (AI), Business Intelligence (BI),
and machine learning (ML).
IBM DS8950F: Agility Class
The Agility Class consolidates all your mission-critical workloads for IBM Z®,
IBM LinuxONE, IBM Power, and distributed environments under a single all-flash storage
solution.
IBM DS8910F: Flexibility Class
The Flexibility Class reduces complexity while addressing various workloads at the lowest
DS8900F family entry cost.
The DS8900F architecture relies on powerful IBM POWER9™ processor-based servers that
manage the cache to streamline disk input/output (I/O), which maximizes performance and
throughput. These capabilities are further enhanced by High-Performance Flash Enclosures
(HPFE) Gen2.
Like its predecessors, the DS8900F supports advanced disaster recovery (DR) solutions,
business continuity solutions, and thin provisioning.
The IBM DS8910F Rack-Mounted model 993 is described in IBM DS8910F Model 993
Rack-Mounted Storage System Release 9.1, REDP-5566.
Authors
This book was produced by a team of specialists from around the world.
Gauurav Sabharwal joined IBM in 2009. He has 18 years of
experience in high-end file and block storage with different
vendor products. He works at IBM LBS in India. His areas of
expertise include performance analysis; establishing high
availability and disaster recovery (HADR) solutions; complex
data migration; and implementing storage systems. Gauurav
holds a degree in information technology. He also completed a Post-Graduate Program in Artificial Intelligence and Machine Learning at Texas McCombs Business School.
Find out more about the residency program, browse the residency index, and apply online at:
https://redbooks.ibm.com/residencies
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
redbooks.ibm.com/contact
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Part 1
The DS8900F models support the most demanding business applications with their
exceptional all-around performance and data throughput. Some models are shown in
Figure 1-1.
The DS8000 offers features that clients expect from a high-end storage system:
High performance
High capacity
HA
Security
Cost efficiency
Energy efficiency
Scalability
Business continuity and data protection functions
The DS8900F is an all-flash storage system that is equipped with encryption-capable flash
drives. High-density storage enclosures offer a considerable reduction in the footprint and
energy consumption. Figure 1-2 shows the front view of a fully configured DS8950F frame.
Reduced footprint: The DS8900F is housed in a 19-inch wide rack, and it is shallower than its predecessor, the DS8880.
I/O enclosures are attached to the POWER9 processor-based servers with Peripheral
Component Interconnect Express (PCIe) Generation 3 cables. The I/O enclosure has six
PCIe adapter slots, and two zHyperLink ports.
The rack models also have an integrated keyboard and display that can be accessed from the
front of the rack. A pair of small form-factor (SFF) Hardware Management Consoles (HMCs)
are installed in the base rack management enclosure. The height of the rack is 40U for all
units.
The DS8910F Rack-Mounted model comes without its own rack. It can attach to both mainframe and distributed systems. When placed in a customer-supplied rack, a keyboard
and display can be ordered. When integrated into the IBM 3907 z15™ T02 or LinuxONE LR1
business-class mainframe models, the DS8910F Rack-Mounted model shares the rack and
the management keyboard and display.
Note: The DS8910F and its specific features are described in IBM DS8910F Model 993 Rack-Mounted Storage System Release 9.1, REDP-5566.
Figure 1-3 on page 7 shows the various components within the base frame. The DS8950F
and DS8980F expansion frame enables clients to add more capacity to their storage systems
in the same footprint.
(Figure callouts: High-Performance Flash Enclosure pairs, Management Enclosure, POWER9 servers, and Ethernet switches (with expansion rack).)
Note: For more information about the IBM Z synergy features, see IBM DS8900F and
IBM Z Synergy DS8900F: Release 9.3 and z/OS 2.5, REDP-5186.
Note: Some technical aspects are specific to the DS8910F rack-mounted model. For a full
overview of the architectural aspects of the DS8910F rack-mounted model, see IBM
DS8910F Model 993 Rack-Mounted Storage System Release 9.1, REDP-5566.
A CPC is also referred to as a storage server. For more information, see Chapter 3, “IBM
DS8900F reliability, availability, and serviceability” on page 71.
For more information, see Chapter 2, “IBM DS8900F hardware components and architecture”
on page 25.
HPFE Gen2
The HPFE flash RAID adapters are installed in pairs and split across an I/O enclosure pair.
They occupy the third and sixth PCIe slots according to the adapter pair plugging order.
HPFE drive enclosures are also installed in pairs, and connected to the corresponding flash
RAID adapter pair over eight 6 Gbps SAS cables for high bandwidth and redundancy. Each
drive enclosure can contain up to twenty-four 2.5-inch SAS flash drives. Flash drives are
installed in groups of 16, and split evenly across the two drive enclosures in the pair.
Each flash adapter pair and HPFE pair deliver up to 900 K IOPS reads, 225 K IOPS writes,
and up to 14 GBps (read) and 9.5 GBps (write) bandwidth.
For more information, see Chapter 2, “IBM DS8900F hardware components and architecture”
on page 25.
Drive options
Flash drives provide up to 100 times the throughput and 10 times lower response time than
15 K revolutions per minute (RPM) hard disk drives (HDDs). They also use less power than
traditional HDDs. For more information, see Chapter 5, “IBM DS8900F physical planning and
installation” on page 141.
Flash drives are grouped into three tiers, based on performance and capacity. These flash
drives are supported across all DS8900F models.
2.5-inch flash Tier 0 high-performance flash drives:
– 800 GB
– 1.6 TB
– 3.2 TB
2.5-inch flash Tier 1 high-capacity flash drives:
– 3.84 TB
2.5-inch flash Tier 2 high-capacity flash drives:
– 1.92 TB
– 7.68 TB
– 15.36 TB
All flash drives in the DS8900F are encryption-capable. Enabling encryption is optional, and
requires at least two external key servers or the local key management feature.
Note: Easy Tier Server was removed from marketing support. It was replaced by the Flash
Cache option of IBM AIX® 7.2.
For more information about Easy Tier features, see the following resources:
IBM DS8000 Easy Tier (Updated for DS8000 R9.0), REDP-4667
DS8870 Easy Tier Application, REDP-5014
IBM DS8870 Easy Tier Heat Map Transfer, REDP-5015
Host adapters
The DS8900F offers 32 Gbps and 16 Gbps host adapters. Both types have four ports each,
and each port can be independently configured for either FCP or FICON:
The 32 Gbps adapter has four ports. Each port independently auto-negotiates to an 8, 16,
or 32 Gbps link speed.
The 16 Gbps adapter has four ports. Each port independently auto-negotiates to a 4, 8, or
16 Gbps link speed.
For more information, see Chapter 2, “IBM DS8900F hardware components and architecture”
on page 25.
Every DS8900F includes two HMCs for redundancy, which are installed in the management
enclosure in the base rack. DS8900F HMCs support IPv4 and IPv6 standards. For more
information, see Chapter 6, “IBM DS8900F Management Console planning and setup” on
page 167.
You can also encrypt data before it is transmitted to the cloud when using the TCT feature.
For more information, see the IBM DS8000 Encryption for Data at Rest, Transparent Cloud
Tiering, and Endpoint Security (DS8000 Release 9.2), REDP-4500.
The available drive options provide industry-class capacity and performance to address a
wide range of business requirements. The DS8000 storage arrays can be configured as
RAID 6, RAID 10, or RAID 5, depending on the drive type.
RAID 6 is now the default and preferred setting for the DS8900F. RAID 5 can be configured
for drives of less than 1 TB, but this configuration is not preferred and requires a risk
acceptance, and a field Request for Price Quotation (RPQ) for enterprise hard disk drives (HDDs). Flash Tier 0 drive sizes larger than 1 TB can be configured by using RAID 5,
but require an RPQ and an internal control switch to be enabled. RAID 10 continues to be an
option for all-flash drives.
For more information, see 2.2, “DS8900F configurations and models” on page 28.
This rich support of heterogeneous environments and attachments, with the flexibility to
easily partition the DS8000 storage capacity among the attached environments, can help
support storage consolidation requirements and dynamic environments.
Tip: Copy Services (CS) are currently supported for LUN sizes of up to 4 TB.
The maximum CKD volume size is 1,182,006 cylinders (1 TB), which can greatly reduce the
number of volumes that are managed. This large CKD volume type is called a 3390 Model A.
It is referred to as an Extended Address Volume (EAV).
OpenStack
The DS8000 supports the OpenStack cloud management software for business-critical
private, hybrid, and public cloud deployments. The DS8900F supports features in the
OpenStack environment, such as volume replication and volume retype. The Cinder driver for
DS8000 is now open source in the OpenStack community. The /etc/cinder.conf file can be
directly edited for the DS8000 back-end information.
For more information about the DS8000 and OpenStack, see Using IBM DS8000 in an
OpenStack Environment, REDP-5220.
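As a hedged illustration only (the back-end section name, driver and proxy paths, and option names shown here are assumptions to verify against the OpenStack Cinder documentation for the DS8000 driver), the following Python sketch writes a candidate DS8000 back-end stanza to a copy of cinder.conf for review:

import configparser

# Assumed option names for a DS8000 Cinder back end; verify against the
# OpenStack Cinder and IBM Storage Driver for OpenStack documentation.
backend = {
    "volume_backend_name": "ds8k_backend",
    "volume_driver": "cinder.volume.drivers.ibm.ibm_storage.ibm_storage.IBMStorageDriver",
    "proxy": "cinder.volume.drivers.ibm.ibm_storage.ds8k_proxy.DS8KProxy",
    "san_ip": "hmc.example.com",       # placeholder HMC address
    "san_login": "admin",              # placeholder credentials
    "san_password": "secret",
    "connection_type": "fibre_channel",
}

conf = configparser.ConfigParser()
conf.read("/etc/cinder.conf")
conf["ds8k_backend"] = backend
# configparser drops comments, so write to a copy for manual review; the back
# end must also be listed under enabled_backends in the [DEFAULT] section.
with open("/tmp/cinder.conf.ds8k", "w") as f:
    conf.write(f)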
DS8900F supports the Container Storage Interface (CSI) specification. IBM released an open
source CSI driver for IBM storage that allows dynamic provisioning of storage volumes for
containers on Kubernetes and IBM Red Hat OpenShift Container Platform (OCP).
The CSI driver for IBM block storage systems enables container orchestrators such as
Kubernetes to manage the lifecycle of persistent storage. The IBM block storage CSI operator is the official operator to deploy and manage the IBM block storage CSI driver.
For more information about the RESTful API, see Exploring the DS8870 RESTful API
Implementation, REDP-5187.
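As a minimal sketch of calling the RESTful API from a script (the HMC address and credentials are placeholders, and the port, URL paths, payload shape, and response structure are assumptions to verify against the DS8000 RESTful API documentation):

import requests

HMC = "https://hmc.example.com:8452"   # placeholder address; port 8452 is assumed
LOGIN = {"request": {"params": {"username": "admin", "password": "secret"}}}

# Request an authentication token from the HMC (path and payload shape assumed).
resp = requests.post(f"{HMC}/api/v1/tokens", json=LOGIN, verify=False)
resp.raise_for_status()
token = resp.json()["token"]["token"]   # response structure is assumed

# Use the token to query basic storage system information.
systems = requests.get(f"{HMC}/api/v1/systems",
                       headers={"X-Auth-Token": token},
                       verify=False)    # verify the HMC certificate in production
systems.raise_for_status()
print(systems.json())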
Thin-provisioning features
Volumes in the DS8900F can be provisioned as full or thin. When clients plan capacity, they
must consider the number of volumes in the extent pool (or overall storage system) and the
degree of over-allocation that is planned for.
These volumes enable over-provisioning capabilities that provide more efficient usage of the storage capacity and reduce storage management requirements. For more
information, see Chapter 4, “Virtualization concepts” on page 107 and IBM DS8880 Thin
Provisioning (Updated for Release 8.5), REDP-5343.
To meet the challenges of cybersecurity, the Safeguarded Copy function, based on the
FlashCopy technology, can create and retain hundreds of PTCs for protection against logical
data corruption. Release 9.2 added the capability to restore a recovered Safeguarded Copy
to a production copy of the data. For more information, see IBM DS8000 Safeguarded Copy
(Updated for DS8000 R9.2), REDP-5506.
For data protection and availability needs, the DS8900F provides MM, GM, GC, MGM, and
z/OS Global Mirror (zGM), which are Remote Mirror and Remote Copy functions. These
functions are also available and are fully interoperable with previous models of the DS8000
family. These functions provide storage mirroring and copying over large distances for DR or
availability purposes.
For more information about CS, see IBM DS8000 Copy Services: Updated for IBM DS8000
Release 9.1, SG24-8367.
!"!!#$#$
%$ "$$&!&#'##"$$
(
!)"$*$$+
'#$"%#$+$
"#$"!"!*$'$"
"#$"!"!*$*&&
Physical installation of the DS8000 is performed by IBM by using the installation procedure for
this system. The client is responsible for installation planning, the retrieval and installation of feature activation codes, and the creation and execution of the logical configuration.
The storage system HMC is the focal point for maintenance and service operations. Two
HMCs are inside the DS8900F management enclosure, and they continually monitor the state
of the system. HMCs notify IBM, and they can be configured to notify you when service is
required.
The remote connection between the HMC and IBM Support is performed by using the Assist
On-site (AOS) feature. AOS offers more options, such as Transport Layer Security (TLS), and
enhanced audit logging. For more information, see IBM Assist On-site for Storage Overview,
REDP-4889.
IBM Remote Support Center (RSC) is also an available option for providing IBM Support
remote support access to systems.
For Release 9.3 systems, IBM provides three options for microcode updates:
Customer Code Load
Remote Code Load (RCL)
Onsite SSR Code Load
For customers who choose the base warranty service or Expert Care Advanced, Customer
Code Load is the default method for performing concurrent microcode updates:
Microcode bundles are downloaded and activated by the customer by using the standard
DS Storage Manager GUI.
The download defaults to the current recommended bundle, or an alternative compatible
bundle may be chosen.
Health checks are run before the download, and again before activation to ensure that the
system is in good health.
If a problem is encountered anywhere in the process, a ticket is opened automatically with
IBM Support, and the ticket number is provided in the GUI for reference.
After the problem is corrected, the code load can be restarted, and automatically resumes
after the last successful step.
Customers who want to have an IBM Systems Service Representative (IBM SSR) perform
Onsite Code Load may purchase Feature Code #AHY2 with Expert Care Premium, or
Feature Code #AHY3 with Expert Care Advanced.
With all of these components, the DS8900F is positioned at the top of the high-performance
category.
The 16 Gbps host bus adapter (HBA) supports IBM Fibre Channel Endpoint Security
authentication. The 32 Gbps HBA supports both IBM Fibre Channel Endpoint Security
authentication and line-rate encryption.
SARC, AMP, and IWC play complementary roles. While SARC carefully divides the cache between the RANDOM and SEQ lists to maximize the overall hit ratio, AMP manages the contents of the SEQ list to maximize the throughput that is obtained for sequential workloads. IWC manages the write cache and decides the order and rate at which data is destaged to disk.
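The following minimal Python sketch is a conceptual illustration only, not IBM's actual SARC, AMP, or IWC code; the class name, the simple adaptation rule, and all parameters are invented for illustration. It shows the general idea of dividing one cache between a RANDOM and a SEQ list and letting the split adapt to the workload:

from collections import OrderedDict

class AdaptiveSplitCache:
    """Toy illustration of dividing one cache between a RANDOM and a SEQ list."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.random = OrderedDict()      # LRU list for randomly accessed tracks
        self.seq = OrderedDict()         # LRU list for sequentially accessed tracks
        self.seq_target = capacity // 2  # desired share of the cache for SEQ

    def _evict(self):
        # Evict from whichever list is over its target share of the cache.
        while len(self.random) + len(self.seq) > self.capacity:
            if self.seq and len(self.seq) > self.seq_target:
                self.seq.popitem(last=False)       # drop least recently used SEQ track
            elif self.random:
                self.random.popitem(last=False)    # drop least recently used RANDOM track
            else:
                self.seq.popitem(last=False)

    def access(self, track, sequential):
        """Record an access; return True on a cache hit, False on a miss."""
        for lst in (self.random, self.seq):
            if track in lst:
                lst.move_to_end(track)             # hit: refresh LRU position
                return True
        # Miss: nudge the partition toward the list that needed more space.
        if sequential:
            self.seq_target = min(self.capacity, self.seq_target + 1)
            self.seq[track] = True
        else:
            self.seq_target = max(0, self.seq_target - 1)
            self.random[track] = True
        self._evict()
        return False

In the real system, SARC adapts the split based on measured marginal hit rates, AMP additionally controls prefetching into the SEQ list, and IWC orders and paces destages from the write cache; none of that detail is modeled in this sketch.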
The DS8900F flash storage can be tiered, with three tiers of flash storage that are available.
Then, you can use Easy Tier to optimize the storage. The DS8900F offers automated
algorithms that optimize the tiering and place hot areas onto higher-tiered flash arrays.
To improve data transfer rate (IOPS) and response time, the DS8900F supports flash drives
and high-performance flash drives, which are based on NAND technology. With the flash
drives and the specific architecture that is used in the HPFEs, much higher IOPS densities
(IOPS per GB) are possible than with ordinary solid-state drives (SSDs).
Flash drives sharply improve I/O transaction-based performance over traditional HDDs in
standard drive enclosures.
High-performance flash drives use the flash RAID adapters in the I/O enclosures, and PCIe
connections to the processor complexes. The high-performance flash drive types are
high-IOPS class enterprise storage devices that are targeted at flash Tier 0, for I/O-intensive
workload applications that need high-level, fast-access storage. The high-capacity flash drive
types for flash Tiers 1 and 2 often have an acquisition price point that helps eliminate HDDs
when replacing a storage system.
Flash drives offer many potential benefits over HDDs, including better IOPS, lower power
consumption, less heat generation, and lower acoustical noise. For more information, see
Chapter 5, “IBM DS8900F physical planning and installation” on page 141.
For more information about performance on IBM Z, see IBM DS8900F and IBM Z Synergy
DS8900F: Release 9.3 and z/OS 2.5, REDP-5186.
Note: The IBM DS8910F model 993 Rack-Mounted system has some configuration
differences from the racked model 994. For more information, see IBM DS8910F Model
993 Rack-Mounted Storage System Release 9.1, REDP-5566.
Base frame
The DS8900F has four available base frame models in three DS8900F families (Analytic
class, Agility class, and Flexibility Class). The model numbers depend on the hardware
configuration for each: DS8980F, DS8950F, and DS8910F. In this chapter, the DS8900F
family names and model numbers are used interchangeably. Table 2-1 lists each of the frame
models.
Each base frame is equipped with dual Hardware Management Consoles (HMCs).
To increase the storage capacity and connectivity, an expansion frame can be added to any
DS8980F model 998, or a DS8950F model 996 with 40 cores, and at least 1 TB system
memory.
For more information about the base frame configuration, see 2.2.5, “DS8900F base frames”
on page 35.
For more information about the DS8910F model 993 Rack-Mounted system,
see IBM DS8910F Model 993 Rack-Mounted Storage System Release 9.1, REDP-5566.
Expansion frame
The DS8980F and DS8950F support one optional expansion frame, which provides space for
extra storage capacity and also supports up to two extra I/O enclosure pairs. To add an
expansion frame to the DS8950F, the storage system must first be configured with 1024 GB
of memory and 40 processor cores.
With these models, you can place the expansion frame up to 20 meters away from the base
frame. To use this feature, use the optical Peripheral Component Interconnect Express (PCIe)
I/O Bay interconnect. The Copper PCIe I/O Bay interconnect is used when the expansion
frame is physically next to the base frame. For more information about the expansion frame
connections, see 2.2.6, “DS8900F expansion frame” on page 38.
All DS8900F system memory and processor upgrades can be performed concurrently.
A monitor and keyboard with trackpad are provided for local management. The ME also
provides up to two Ethernet connections from each HMC for remote management.
The characteristics for CPCs for each model type are listed in Table 2-2.
Both CPCs in a DS8900F system share the system workload. The CPCs are redundant, and
either CPC can fail over to the other for scheduled maintenance, upgrade tasks, or if a failure
occurs. The CPCs are identified as CPC 0 and CPC 1. A logical partition (LPAR) in each CPC
runs the AIX V7.x operating system (OS) and storage-specific Licensed Internal Code (LIC).
This LPAR is called the storage node. The storage servers are identified as Node 0 and
Node 1 or server0 and server1.
The main variations between models are the combinations of CPCs, I/O enclosures, storage
enclosures, and flash drives. System memory, processors, storage capacity, and host
attachment upgrades from the smallest to the largest configuration can be performed
concurrently.
Beginning with Release 9.3 new builds, DS8900F storage systems use machine type 5341.
The former warranty and service options are now offered as part of Expert Care. Options
range from a 1-year base warranty to a 5-year Expert Care Advanced or Premium.
Figure 2-2 shows the maximum configuration of a DS8980F with a model E96 expansion
frame. The DS8950 model 996 and E96 are similar.
Note: The DS8900F hardware uses iPDUs, non-volatile dual inline memory modules
(NVDIMMs) and Backup Power Modules (BPMs) to replace the internal DC-UPSs in prior
generations.
Figure 2-2 on page 28 shows the maximum configuration of a DS8950F model 996 and
DS8950F model E96.
Table 2-4 lists the hardware along with the minimum and maximum configuration options for
the DS8950F model 996.
Table 2-5 lists the hardware and the minimum and maximum configuration options for the
DS8910F model 994.
Note: The DS8910F Flexibility Class Rack-Mounted system has hardware specifications
that differ slightly from the rack-based models. Specific information about the model 993
can be found in IBM DS8910F Model 993 Rack-Mounted Storage System Release 9.1,
REDP-5566.
DS8910F model 993 can be integrated into existing IBM Z models T02 or ZR1 (#0937),
IBM LinuxONE III model LT2 or LinuxONE II model LR1 (#0938), or any other standard
19-inch rack that conforms to EIA 310D specifications (#0939).
The model 993 uses the same hardware components that are found in the rack-based
DS8910 systems and offers all the same advanced features while reducing data center
footprint and power infrastructure requirements.
The DS8910F model 993 is equipped with two CPCs, each with dual quad-core processors,
and it can be scaled up to 96 Tier 0, Tier 1, or Tier 2 flash drives; up to 512 GB system
memory and 32 host adapter ports; and four zHyperLink adapters.
The console is shared with ZR1 or LR1 through the IBM Z KVM. Feature Codes 0610 and
0620 are required for integration of DS8910F model 993 into a ZR1 or LR1.
The other DS8910F Rack-Mounted model 993 server that is shown is the maximum
configuration (two HPFE pairs) with a pair of optional iPDUs that can be integrated into a
standard 19-inch rack. The DS8910F model 993 iPDUs support single or three-phase power.
Figure 2-5 DS8910F model 993 for installation in a ZR1 or LR1 (left) and standard rack (right)
Table 2-6 lists all the hardware components and maximum capacities that are supported for
the DS8910F model 993. When integrated into the entry models of IBM Z family of servers,
T02, LT2, ZR1, or LR1, the DS8910F model 993 uses two IBM Z iPDUs, A3 and A4, for
power. It shares the display and keyboard with ZR1 or LR1 through IBM Z KVM. It has a
dedicated display and keyboard when it is integrated into a T02 or LT2.
Note: The DS8900F models 998, 996, and 994 use a high-end 40U rack with a reduced
footprint.
Note: The Fibre Channel Arbitrated Loop (FC-AL) topology is no longer supported
on DS8900F host adapters.
For more information about I/O enclosures and I/O adapters, see 2.4, “I/O enclosures and
adapters” on page 48.
Note: Only the DS8980F and DS8950F models support an expansion frame.
All DS8980F systems support the installation of an expansion frame without any additional
features. DS8950F systems require at least 20 cores per CPC and 1 TB system memory
(system cache) to support the extra throughput that is provided by the installation of I/O and
storage enclosures in the expansion frame.
For more information, see 2.4, “I/O enclosures and adapters” on page 48.
To ease floor planning for future expansions, an available optical PCIe cable allows a distance
up to 20 m. The cable set contains optical cables and transceivers. One cable set is required
for each installed I/O enclosure pair in the expansion frame.
As shown in Figure 2-7 on page 41, this extension makes the positioning of an expansion
frame more flexible, especially for future capacity expansion. An extra rack side cover pair is
available if needed.
Figure 2-8 on page 43 and Figure 2-9 on page 44 show the CPC configured for the DS8980F
and DS8950F systems.
For more information about the server hardware that is used in the DS8910F and DS8950F,
see IBM Power Systems S922, S914, and S924 Technical Overview and Introduction,
REDP-5497.
In the DS8900F, processor core and system memory configurations dictate the hardware that
can be installed in the storage system. Processors and memory can be upgraded
concurrently as required to support storage system hardware upgrades. The supported
maximum system hardware components depend on the total processor and system memory
configuration.
Figure 2-11 shows the supported components for the DS8900F processor and memory
options. NVS values are typically 1/16th of installed system memory, except for the smallest
systems with 192 GB system memory, where only 8 GB, that is, 4 GB per CPC, is used as
NVS, and for the biggest systems of 3.4 or 4.3 TB memory, where NVS remains at 128 GB.
Figure 2-11 Supported components for the DS8900F processor and memory options (columns: model; processor cores per CPC; processor sockets per CPC; SMT configuration; maximum available threads; maximum zHyperLink ports; system memory (GB); NVS (GB); expansion frame; maximum host adapters; maximum I/O enclosure pairs; maximum zHyperLinks; maximum HPFE Gen2 pairs; maximum flash drives)
Each CPC contains half of the total system memory. All memory that is installed in each CPC
is accessible to all processors in that CPC. The absolute addresses that are assigned to the
memory are common across all processors in the CPC. The set of processors is referred to
as a symmetric multiprocessor (SMP) system.
The DS8900F configuration options are based on the total installed memory, which in turn
depends on the number of installed and active processor cores.
The DS8980F configuration comes standard with 22 cores per server, and 4.3 TB of total
system memory. No processor core or system memory upgrades are supported at the time of
writing.
Note: System memory and processor upgrades are tightly coupled. They cannot be
ordered or installed independently from each other.
Caching is a fundamental technique for reducing I/O latency. Like other modern caches, the
DS8900F system contains volatile memory (RDIMM) that is used as a read/write cache, and
NVDIMM that is used for a persistent memory write cache. (A portion of the NVDIMM
capacity is also used for read/write cache.) The NVDIMM technology eliminates the need for
the large backup battery sets that were used in previous generations of DS8000. If power is
lost, the system shuts down in 20 ms, but power is maintained to the NVDIMMs, and data in
the NVS partition is hardened to onboard NAND flash.
NVS scales according to the processor memory that is installed, which also helps to optimize
performance. NVS is typically 1/16th of installed CPC memory, with a minimum of 8 GB and a
maximum of 128 GB.
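As a rough worked example of this sizing rule (a sketch only; the authoritative per-configuration values are those listed in Figure 2-11, and the smallest 192 GB configuration is an explicit exception at 8 GB), a small Python helper might look like this:

def approx_nvs_gb(system_memory_gb: int) -> int:
    """Approximate NVS size in GB for a given total system memory.

    Rule of thumb from the text: about 1/16th of installed memory, clamped to
    a minimum of 8 GB and a maximum of 128 GB. The smallest configuration
    (192 GB) is an explicit exception and uses 8 GB (4 GB per CPC). The
    authoritative per-model values are those listed in Figure 2-11.
    """
    if system_memory_gb <= 192:
        return 8
    return min(max(system_memory_gb // 16, 8), 128)

# Illustrative values (approximate; see Figure 2-11 for the official figures):
for mem_gb in (192, 512, 1024, 2048, 3456):
    print(f"{mem_gb} GB system memory -> about {approx_nvs_gb(mem_gb)} GB NVS")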
The FSP controls power and cooling for the CPC. The FSP performs predictive failure
analysis (PFA) for installed processor hardware, and performs recovery actions for processor
or memory errors. The FSP monitors the operation of the firmware during the boot process,
and can monitor the OS for loss of control and take corrective actions.
A pair of optional adapters is available for TCT as a chargeable Feature Code. Each adapter
provides two 10 Gbps small form-factor pluggable plus (SFP+) optical ports for short
distances, and a pair of 1 Gbps (RJ45 copper) connectors. The standard 1 Gbps Ethernet
adapter is in the same slot (P1-C11) in DS8980F, DS8950F, and DS8910F systems. The
optional 10 Gbps Ethernet adapter is in the P1-C10 slot for DS8980F and DS8950F systems,
and is in the P1-C4 slot in DS8910F systems. For more information about TCT,
see IBM DS8000 Transparent Cloud Tiering (DS8000 Release 9.2), SG24-8381.
Figure 2-12 shows the location codes of the CPCs in DS8980F and DS8950F systems.
Figure 2-13 on page 47 shows the location codes of the CPC in a DS8910F system.
Figure 2-12 Location codes of the CPC in DS8980F and DS8950F systems in the rear
The I/O enclosures are PCIe Gen3-capable, and are attached to the CPCs with 8-lane PCIe
Gen3 cables. The I/O enclosures have six PCIe adapter slots, plus six CXP connectors.
DS8980 and DS8950F CPCs have up to six 1-port and one 2-port PCIe adapters that
provide connectivity to the I/O enclosures.
DS8910F CPCs have up to four 1-port PCIe adapters that provide connectivity.
Figure 2-16 on page 49 - Figure 2-18 on page 50 show the DS8900F CPC to I/O enclosure
connectivity.
The DS8950F configuration requires two 10-core processors per CPC and 1 TB system
memory to support an expansion frame.
Figure 2-16 shows the DS8980F and DS8950F CPC to I/O enclosure connectivity.
Figure 2-16 DS8980F and DS8950F I/O enclosure connections to the CPCs
Figure 2-17 DS8910F model 994 I/O enclosure connections to the CPC
Figure 2-18 shows the DS8910F model 993 CPC to I/O enclosure connectivity.
Figure 2-18 DS8910F model 993 I/O enclosure connections to the CPC
The DS8900F uses the PCIe paths through the I/O enclosures to provide high-speed
communication paths between the CPCs. Normally, the lowest available even-numbered I/O
enclosure is used for communication from server 0 to server 1, and the lowest available
odd-numbered I/O enclosure is used for communication from server 1 to server 0.
If a failure occurs in one or more I/O enclosures, any of the remaining enclosures can be used
to maintain communication between the servers.
The I/O bay can contain up to four host adapters that provide attachment to host systems and
up to two flash RAID DAs to provide attachment to the HPFE Gen2 enclosures. Each I/O bay
has six PCIe x8 CXP connectors on the I/O bay PCIe module. Two ports (T1 and T2) are for
the internal PCIe fabric connections to CPC 0 and CPC 1. Two ports (T3 and T4) are for
attachment of zHyperLink to IBM Z and two ports (T5 and T6) are unused.
Figure 2-20 DS8900F I/O bay adapter layout
Two different types of host adapters are available: 32 Gbps and 16 Gbps. Both have four
ports. The 32 Gbps adapters can auto-negotiate their data transfer rate down to 8 Gbps
full-duplex data transfer. The 16 Gbps adapters can auto-negotiate down to 4 Gbps
full-duplex data transfer.
Figure 2-21 on page 53 shows the 32 Gbps FCP or FICON host adapter. It provides faster
single stream and per-port throughput and reduces latency compared to the 16 Gbps
adapter. The 32 Gbps host adapter is equipped with a quad-core 2 GHz PowerPC processor
that delivers dramatic (2 - 3 times) full adapter I/O operations per second (IOPS)
improvements compared to the 16 Gbps adapter. The 32 Gbps adapter is required to enable
IBM Fibre Channel Endpoint Security encryption.
The 16 Gbps host adapter supports only IBM Fibre Channel Endpoint Security
authentication.
The 32 Gbps host adapter supports both IBM Fibre Channel Endpoint Security authentication
and line-rate encryption.
For more information, see IBM Fibre Channel Endpoint Security for IBM DS8900F and IBM Z,
SG24-8455.
Each host adapter port can be configured as either FICON or FCP. For both host adapters,
the adapter optics can be either LW or SW.
The DS8980F and DS8950F configurations support a maximum of 16 host adapters in the
base frame and 16 extra host adapters in the model E96 expansion frame. The DS8910F
model 994 configuration supports a maximum of 16 host adapters. The DS8910F model 993
configuration supports a maximum of eight host adapters.
Optimum availability: To obtain optimum availability and performance, one host adapter
must be installed in each available I/O enclosure before a second host adapter is installed
in the same enclosure.
Table 2-9 shows the preferred host adapter installation order for the DS8900F system. The
host adapter locations and installation order for the four I/O enclosures in the base frame are
the same for the I/O enclosures in the first expansion frame.
Host adapter slots: C1 C2 C3 C4 C5 C6
Bottom I/O bay 02 / 06: 3, 7, 1, 5
Bottom I/O bay 03 / 07: 2, 6, 4, 8
For four I/O enclosures (Model 998, Model 996, Model 994, Model E96a):
Top I/O bay 00 / 04: 7, 15, 3, 11
Bottom I/O bay 02 / 06: 5, 13, 1, 9
Bottom I/O bay 03 / 07: 2, 10, 6, 14
a. For the DS8950F model E96, the enclosure numbers are in emphasized text, and the plug order is the same as the other models.
Each of the ports on a DS8900F host adapter can be configured for FCP or FICON, but a
single port cannot be configured for both concurrently. The port topology can be changed by
using the DS GUI or DS CLI.
The DAs are installed in the I/O enclosures and are connected to the CPCs through the PCIe
network. The DAs are responsible for managing and monitoring the flash RAID arrays. The
DAs provide remarkable performance because of a high-function and high-performance
ASIC. To ensure maximum data integrity, the adapter supports metadata creation and
checking.
For more information about the flash RAID adapters, see IBM DS8000 High-Performance
Flash Enclosure Gen2 (DS8000 R9.0), REDP-5422.
HPFE Gen2 enclosures are always installed in pairs. Each enclosure pair supports 16, 32, or
48 flash drives. A single Gen2 enclosure is shown in Figure 2-22.
Each HPFE Gen2 pair is connected to a redundant pair of Flash-optimized RAID controllers.
The PCIe flash RAID adapters are installed in the DS8900F I/O enclosures.
The DS8980F and DS8950 configurations can support up to four HPFE Gen2 pairs in the
base frame, and up to four HPFE Gen2 pairs in the expansion frame for a total of eight HPFE
Gen2 pairs, with a maximum of 384 flash drives.
To learn more about the HPFE Gen2, see IBM DS8000 High-Performance Flash Enclosure
Gen2 (DS8000 R9.0), REDP-5422.
Storage-enclosure fillers
Storage-enclosure fillers occupy empty drive slots in the storage enclosures. The fillers
ensure consistent airflow through an enclosure. For HPFE Gen2, one filler feature provides a
set of 16 fillers.
Note: To learn more about the DS8900F drive features, see the IBM System Storage
DS8900F Introduction and Planning Guide, SC27-9560.
Note: For all drive types, RAID 6 is the default in DS GUI and DS CLI, but RAID 10 is
optional. For flash drives smaller than 1 TB, RAID 5 is also optional, but is not
recommended.
Table 2-13 Maximum usable and provisioned capacity based on system cache size
Cache less than or equal to 512 GB:
Maximum usable size with large extents: Fixed-Block (FB) 4096 TiB; Count Key Data (CKD) 3652 TiB
Maximum provisioned size with large extents: FB 4096 TiB; CKD 3652 TiB
Maximum usable size with small extents: FB 512 TiB; CKD 551 TiB
Maximum provisioned size with small extents: FB 1024 TiB; CKD 913 TiB
Cache greater than 512 GB:
Maximum usable size with large extents: FB 16384 TiB; CKD 14608 TiB
Maximum provisioned size with large extents: FB 8160 - 16384 TiB (a); CKD 7263 - 14608 TiB (a)
Maximum usable size with small extents: FB 2048 TiB; CKD 2205 TiB
Maximum provisioned size with small extents: FB 3968 - 4096 TiB (a); CKD 3538 - 3652 TiB (a)
a. The exact value within the range is determined by a complex calculation that is based on the number of volumes and volume sizes. You should conservatively plan for configurations targeting the low end of the range.
Table 2-14 shows the maximum number of flash drives and maximum raw storage capacity
for the different models.
RPCs also communicate with each of the CPC operating LPARs over RS485 serial
connections. Using this communication path, the RPCs act as a quorum in the CPC or LPAR
cluster communication to avoid cluster splits in a quorum or RPC race.
Important: Unlike earlier DS8000 models, DS8900F RPCs normally provide for only
communication and connectivity to components in the storage system. Power control
functions are managed by the HMCs.
The iPDUs are available in single or three-phase power configurations in all models. Each
iPDU has one AC power connector and a dedicated inline power cord. Output power is
provided by 12 C13 power outlets with circuit breaker protection.
iPDUs are installed in pairs. Each DS8900 rack has a minimum of one iPDU pair. For models
998, 996, and 994, a second pair may be installed in the base frame to provide power for
more I/O and storage enclosures.
The iPDUs are managed by using the black and gray internal private networks. Each of the
outlets can be individually monitored, and powered on or off. The iPDUs support Simple
Network Management Protocol (SNMP), telnet, and a web interface.
DS8900F HMCs are responsible for system power control and monitoring by communicating
to the network interfaces of the iPDUs and RPCs.
Figure 2-26 on page 63 shows an example of how the power is distributed when two iPDU pairs
are installed. Note the power connections of a second HPFE Gen2 storage enclosure or the
second I/O enclosure pair.
Adding a model E96 expansion frame to a DS8980F or DS8950F system also adds another
iPDU pair in that frame, which requires Ethernet connections to the internal management
networks. To provide these connections, two extra Ethernet switches are installed in the base
frame. This switch pair feature must be ordered with the expansion frame.
(Figure callouts: the two additional managed Ethernet switches uplink to the ME switches, black network to SW1 and gray network to SW2; the iPDU set of the expansion frame and the second (upper) iPDU set of the base frame are connected to the two additional managed Ethernet switches.)
Figure 2-27 DS8950F model 996 and model E96 Ethernet connections
The redundant power supplies in the CPCs, I/O enclosures, HPFE Gen2 enclosures, and the
ME are connected across both power domains. The left power supplies connect to the green
domain, while the right power supplies connect to the yellow domain.
For full redundancy, each power domain must be connected to separate power distribution
systems that are fed by independent building power sources or service entrances.
During normal operation, the NVDIMMs behave like any other DRAM, but when a power
outage or other system failure occurs, the NVS partition contents are hardened in NAND flash
storage. This NAND flash storage is located alongside the DRAM chips on the NVDIMM module. The
content is encrypted when written to the flash storage to prevent unauthorized access to the
contents. Storing the write cache data in flash chips replaces the need for a fire hose dump,
which was used on earlier DS8000 models to harden NVS data to disk. Figure 2-28 shows a
symbolic view of an NVDIMM module.
BPMs connect directly to the NVDIMM modules to provide power during the DRAM to flash
operation. They are nickel-based hybrid energy storage modules with high-power
discharge and fast charge times of 3 - 15 minutes. When system power is restored, the
NVDIMMs move the preserved data from flash back to DRAM to be destaged to the storage
system arrays during initial microcode load (IML).
The POWER9 processor-based systems support two NVDIMMs per CPC in designated
memory slots.
The size of a BPM is smaller than a standard 2.5-inch disk drive module (DDM) and fits into
one of the free CPC disk drive bays. A maximum of two BPMs are installed per CPC.
With the BPMs connected directly to the NVDIMMs, the DRAM to flash operation functions
independently without the need for any power that is provided by the CPC.
The NVDIMM capability is in addition to the data protection concept of storing the write cache
NVS on the alternative node. For more information, see 3.2, “CPC failover and failback” on
page 78.
NVDIMM configurations use either two 16 GB or two 32 GB modules. NVDIMMs are always
installed in pairs of the same size. With 16 GB NVDIMMs, one BPM is sufficient to provide
power to both modules in one CPC. With 32 GB NVDIMMs, two BPMs are provided in each
CPC, with one for each NVDIMM.
Note: The DS8900F is designed for efficient air flow and to be compliant with hot and cold
aisle data center configurations.
Figure 2-30 shows a diagram of the mini-PC HMCs and keyboard and monitor drawer
location in the DS8950F model 996 base frame.
Figure 2-30 Diagram of mini-PC HMC and keyboard and monitor drawer location in the DS8950F
The storage administrator runs all DS8900F logical configuration tasks by using the Storage
Management GUI or DS CLI. All client communications to the storage system are through the
HMCs.
Clients that use the DS8900F advanced functions, such as MM or FlashCopy, communicate
to the storage system with IBM Copy Services Manager.
The HMCs provide connectivity between the storage system and external Encryption Key
Manager (EKM) servers.
HMCs also provide remote Call Home and remote support connectivity.
For more information about the HMC, see Chapter 6, “IBM DS8900F Management Console
planning and setup” on page 167.
The switches receive power from the PSUs inside the ME and do not require separate power
outlets. The ports on these switches are shown in Figure 2-31.
Figure 2-31 Eight-port Ethernet switches (SW1 and SW2) in the Management Enclosure
Each HMC also uses two designated Ethernet interfaces for the internal black (eth0) and gray
(eth3) networks. Because the HMCs are installed in the ME, they are connected directly to
the switches without routing through the external breakout ports.
The black and gray networks provide fully redundant communication between the HMCs and
CPCs. These networks cannot be accessed externally, and no external connections are
allowed. External customer network connections for both HMCs are provided at the rear of
the base rack.
When an expansion frame is installed, the DS8900F has two 24-port switches (one each for
the gray and black private networks) at the bottom of the base frame. These switches provide
internal network connectivity to the iPDUs in the expansion frame.
Figure 2-32 Twenty-four-port Ethernet switches (SW3 and SW4) at the bottom of DS8950F model 996
for extra iPDUs
Important: The internal Ethernet switches that are shown in Figure 2-31 and Figure 2-32
are for the DS8900F private networks only. Do not connect an external network (or any
other equipment) to the black or gray network switches.
However, the CPCs are also redundant so that if either one fails, the system switches to the
remaining CPC and continues to run without any host I/O interruption. This section looks at
the RAS features of the CPCs, including the hardware, the operating system (OS), and the
interconnections.
The AIX OS uses PHYP services to manage the Translation Control Entry (TCE) tables. The
OS communicates the wanted I/O bus address to logical mapping, and the PHYP returns the
I/O bus address to physical mapping within the specific TCE table. The PHYP needs a
dedicated memory region for the TCE tables to convert the I/O address to the partition
memory address. The PHYP then can monitor direct memory access (DMA) transfers to the
Peripheral Component Interconnect Express (PCIe) adapters.
The remainder of this section describes the RAS features of the POWER9 processor. These
features and abilities apply to the DS8900F. You can read more about the POWER9 and
processor configuration from the DS8900F architecture point of view in 2.3.1, “IBM POWER9
processor-based CPCs” on page 42.
With the instruction retry function, when an error is encountered in the core in caches and
certain logic functions, the POWER9 processor first automatically retries the instruction. If the
source of the error was truly transient, the instruction succeeds and the system can continue
normal operation.
The L2 and L3 caches in the POWER9 processor are protected with double-bit detect
single-bit correct error correction code (ECC). Single-bit errors are corrected before they are
forwarded to the processor, and then they are written back to L2 or L3.
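As a rough conceptual illustration of this kind of protection (a minimal Python sketch, not the POWER9 ECC implementation, which uses wider code words and, for SEC-DED, an extra check bit to detect double-bit errors), a single-error-correcting Hamming code locates and repairs one flipped bit as follows:

def hamming74_encode(d):
    # Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword.
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4              # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4              # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4              # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(codeword):
    # Recompute the parity checks; the syndrome is the 1-based position
    # of a single flipped bit (0 means no error detected).
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1       # flip the faulty bit back
    return c, syndrome

codeword = hamming74_encode([1, 0, 1, 1])
corrupted = list(codeword)
corrupted[4] ^= 1                  # simulate a single-bit soft error
repaired, position = hamming74_correct(corrupted)
assert repaired == codeword and position == 5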
In addition, the caches maintain a cache line delete capability. A threshold of correctable
errors that is detected on a cache line can result in purging the data in the cache line and
removing the cache line from further operation without requiring a restart. An ECC
uncorrectable error that is detected in the cache can also trigger a purge and delete of the
cache line.
For most faults, a good FFDC design means that the root cause is detected automatically
without intervention by an IBM Systems Service Representative (IBM SSR). Pertinent error
data that relates to the fault is captured and saved for analysis. In hardware, FFDC data is
collected from the fault isolation registers and the associated logic. In firmware, this data
consists of return codes, function calls, and other items.
FFDC check stations are carefully positioned within the server logic and data paths to ensure
that potential errors can be identified quickly and accurately tracked to a field-replaceable unit
(FRU).
This proactive diagnostic strategy is an improvement over the classic, less accurate restart
and diagnose service approach.
Redundant components
High opportunity components (those components that most affect system availability) are
protected with redundancy and the ability to be repaired concurrently.
Self-healing
For a system to be self-healing, it must be able to recover from a failing component by
detecting and isolating the failed component. The system is then able to take the component
offline, fix, or isolate it, and then reintroduce the fixed or replaced component into service
without any application disruption. Self-healing technology includes the following examples:
Chipkill, which is an enhancement that enables a system to sustain the failure of an entire
DRAM chip. The system can continue indefinitely in this state with no performance
degradation until the failed dual inline memory module (DIMM) can be replaced.
Single-bit error correction by using ECC without reaching error thresholds for main, L2,
and L3 cache memory.
L2 and L3 cache line delete capability, which provides more self-healing.
The memory bus between processors and the memory uses CRC with retry. The design also
includes a spare data lane so that if a persistent single data error exists, the faulty bit can be
“self-healed.” The POWER9 busses between processors also have a spare data lane that can
be substituted for a failing one to “self-heal” the single bit errors.
A rank of four ISDIMMs contains enough DRAMs to provide 64 bits of data at a time with
enough check bits to correct the case of a single DRAM module after the bad DRAM is
detected, and then correct an extra faulty bit.
The ability to correct an entire DRAM is what IBM traditionally called Chipkill correction.
Correcting this kind of fault is essential in protecting against a memory outage and should be
considered as a minimum error correction for any modern server design.
The POWER9 processors that are used in DS8900F are designed internally for ISDIMMs
without an external buffer chip. The ECC checking is at the 64-bit level, so Chipkill protection
is provided with x4 DIMMs plus some additional sub Chipkill level error checking after a
Chipkill event.
The memory DIMMs also use hardware scrubbing and thresholding to determine when
memory modules within each bank of memory must be used to replace modules that
exceeded their threshold of error count. Hardware scrubbing is the process of reading the
contents of the memory during idle time and checking and correcting any single-bit errors that
accumulated by passing the data through the ECC logic. This function is a hardware function
on the memory controller chip, and does not influence normal system memory performance.
The ability to use hardware accelerated scrubbing to refresh memory that might have
experienced soft errors is a given. The memory bus interface is also important. The direct
bus-attach memory that is used in the scale-out servers supports RAS features in that design,
including register clock driver (RCD) parity error detection and retry.
Fault masking
If corrections and retries succeed and do not exceed threshold limits, the system remains
operational with full resources, and no external administrative intervention is required.
Figure 3-1 shows the redundant PCIe fabric design for XC communication in the DS8900F
and depicts the single-chip modules (SCMs) (SCM #0 and SCM#1) in each CPC. If the I/O
enclosure that is used as the XC communication path fails, the system automatically uses an
available alternative I/O enclosure for XC communication.
Figure 3-1 DS8900F XC communication through the PCIe fabric and I/O enclosures
Voltage monitoring provides a warning and an orderly system shutdown when the voltage is
out of the operational specification range.
More monitoring support can be found by running the DS CLI showsu command and viewing
the Added Energy Report (ER) Test Mode, ER Recorded, ER Power Usage, ER Inlet Temp,
ER I/O Usage, and ER Data Usage fields, as shown in Example 3-1.
For a DS8980F model 998 with 4.3 TB total system memory or a DS8950F system with a
maximum configuration of 3.4 TB of total system memory, NVS is 128 GB. For IBM DS8910F
model 993 and DS8910F model 994, all configurations use 1/16th of system memory except
for the smallest systems with 192 GB of total system memory, which uses the minimum of
8 GB of NVS. NVS contains write data until the data is destaged from cache to the drives.
NVS data is protected and kept by non-volatile dual inline memory module (NVDIMM)
technology, where the data is moved from DRAM to a flash memory on the NVDIMM modules
if the DS8900F experiences a complete loss of input AC power.
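As a simple illustration of the DS8910F sizing rule that is stated above (a Python sketch that encodes only the figures given in the text, not an official sizing formula):

def ds8910f_nvs_gb(total_system_memory_gb):
    # The smallest configuration (192 GB of system memory) uses the 8 GB minimum;
    # other configurations use 1/16th of total system memory.
    if total_system_memory_gb <= 192:
        return 8
    return total_system_memory_gb // 16

print(ds8910f_nvs_gb(192))   # 8
print(ds8910f_nvs_gb(512))   # 32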
When a write is sent to a volume and both the nodes are operational, the write data is placed
into the cache memory of the owning node and into the NVS of the other CPC. The NVS copy
of the write data is accessed only if a write failure occurs and the cache memory is empty or
possibly invalid. Otherwise, the NVS copy of the write data is discarded after the destaging
from cache to the drives is complete.
The location of write data when both CPCs are operational is shown in Figure 3-2 on
page 79, which shows how the cache memory of node 0 in CPC0 is used for all logical
volumes that are members of the even logical subsystems (LSSs). Likewise, the cache
memory of node 1 in CPC1 supports all logical volumes that are members of odd LSSs. For
every write that is placed into cache, a copy is placed into the NVS memory that is in the
alternative node. Therefore, the following normal flow of data for a write when both CPCs are
operational is used:
1. Data is written to cache memory in the owning node. At the same time, data is written to
the NVS memory of the alternative node.
2. The write operation is reported to the attached host as complete.
3. The write data is destaged from the cache memory to a drive array.
4. The write data is discarded from the NVS memory of the alternative node.
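A minimal Python sketch of this four-step flow (conceptual only; it is not DS8900F microcode, and the class and function names are invented for this illustration):

class Node:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # volatile read/write cache of this node
        self.nvs = {}     # NVDIMM-protected copy of the partner node's writes

def host_write(owning, alternate, volume, data):
    owning.cache[volume] = data    # 1. data into the owning node's cache and
    alternate.nvs[volume] = data   #    into the NVS of the alternative node
    return "complete"              # 2. write reported complete to the host

def destage(owning, alternate, volume, array):
    array[volume] = owning.cache[volume]   # 3. destage from cache to a drive array
    del alternate.nvs[volume]              # 4. discard the NVS copy

node0, node1, arrays = Node("node 0"), Node("node 1"), {}
host_write(node0, node1, "vol_even_lss", b"data")   # even LSS volume, owned by node 0
destage(node0, node1, "vol_even_lss", arrays)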
Figure 3-2 NVS write data when both CPCs are operational
Under normal operation, both DS8900F nodes are actively processing I/O requests. The
following sections describe the failover and failback procedures that occur between the CPCs
when an abnormal condition affects one of them.
3.2.2 Failover
In the example that is shown in Figure 3-3, CPC0 failed. CPC1 must take over all of the CPC0
functions. All storage arrays are accessible by both CPCs.
Figure 3-3 CPC0 failover to CPC1
1. Node 1 destages the contents of its NVS (the copy of the node 0 write data) to the drive
arrays.
2. The NVS and cache of node 1 are divided in two portions, one for the odd LSSs and one
for the even LSSs.
3. Node 1 begins processing the I/O for all the LSSs, taking over for node 0.
This entire process is known as a failover. After failover, the DS8900F operates as shown in
Figure 3-3 on page 79. Node 1 now owns all the LSSs, which means all reads and writes are
serviced by node 1. The NVS inside node 1 is now used for both odd and even LSSs. The
entire failover process is transparent to the attached hosts.
The DS8900F can continue to operate in this state indefinitely. No functions are lost, but the
redundancy is lost, and performance is decreased because of the reduced system cache.
Any critical failure in the working CPC renders the DS8900F unable to serve I/O for the
arrays, so the IBM Support team begins work immediately to determine the scope of the
failure and build an action plan to restore the failed CPC to an operational state.
3.2.3 Failback
The failback process begins automatically when the DS8900F determines that the failed CPC
resumed an operational state. If the failure was relatively minor and recoverable by the
DS8900F OS, the software starts the resume action. If a service action occurred and
hardware components were replaced, the IBM SSR or remote support engineer resumes the
failed CPC.
This example in which CPC0 failed assumes that CPC0 was repaired and resumed. The
failback begins with server 1 in CPC1 starting to use the NVS in node 0 in CPC0 again, and
the ownership of the even LSSs being transferred back to node 0. Normal I/O processing,
with both CPCs operational, then resumes. Just like the failover process, the failback process
is transparent to the attached hosts.
In general, recovery actions (failover or failback) on the DS8900F do not affect I/O operation
latency by more than 8 seconds.
If you require real-time response in this area, contact IBM to determine the latest information
about how to manage your storage to meet your requirements.
During normal operation, the DS8900F preserves write data by storing a duplicate copy in the
NVS of the alternative CPC. To ensure that write data is not lost during a power failure event,
the DS8900F stores the NVS contents on non-volatile DIMMs (NVDIMMs). Each CPC
contains two NVDIMMs with dedicated Backup Power Modules (BPMs). The NVDIMMs act
as regular DRAM during normal operation. During AC power loss, the BPMs provide power to
the NVDIMM modules until they have moved all modified data (NVS) to integrated flash
memory. The NVDIMM save process is autonomous, and requires nothing from the CPC.
Important: DS8900F can tolerate a power line disturbance (PLD) for up to 20 ms. A PLD
that exceeds 20 ms on both power domains initiates an emergency shutdown.
The following sections describe the steps that occur when AC input power is lost to both
power domains.
Power loss
When a wall power loss condition occurs, the following events occur:
1. All host adapter I/O is blocked.
2. Each NVDIMM begins copying its NVS data to the internal flash partition.
3. The system powers off without waiting for the NVDIMM copy operation.
4. The copy process continues and completes independently of the storage system's power.
Power restored
When power is restored, the DS8900F must be powered on manually unless the remote
power control mode is set to automatic.
Note: Be careful if you decide to set the remote power control mode to automatic. If the
remote power control mode is set to automatic, after input power is restored, the DS8900F
is powered on automatically.
For more information about how to set power control on the DS8900F system, see
IBM Documentation.
The DS8900F I/O enclosures use adapters with PCIe connections. The adapters in the I/O
enclosures are concurrently replaceable. Each slot can be independently powered off for
installation, replacement, or removal of an adapter.
In addition, each I/O enclosure has N+1 power and cooling redundancy in the form of two
PSUs with integrated fans, and two enclosure cooling fans. The PSUs and enclosure fans can
be replaced concurrently without disruption to the I/O enclosure.
Important: For host connectivity, hosts that access the DS8900F must have at least two
connections to I/O ports on separate host adapters in separate I/O enclosures.
Figure (not reproduced): a host that is attached through a single HBA to a host adapter. The host adapters in I/O enclosures 2 and 3 connect through PCI Express to CPC 0 and CPC 1.
A more robust design is shown in Figure 3-5, in which the host is attached to separate FC
host adapters in separate I/O enclosures. This configuration is also important because during
a LIC update, a host adapter port might need to be taken offline. This configuration allows
host I/O to survive a hardware failure on any component on either path.
Figure 3-5 (not reproduced): a host that is attached through two HBAs to host adapters in separate I/O enclosures (2 and 3), which connect through PCI Express to CPC 0 and CPC 1.
A logic or power failure in a switch or director can interrupt communication between hosts and
the DS8900F. Provide more than one switch or director to ensure continued availability.
Configure ports from two separate host adapters in two separate I/O enclosures to go through
each of two directors. The complete failure of either director leaves the paths that are
configured to the alternative director still available.
When data is read, the DIF is checked before the data leaves the DS8900F and again when
the data is received by the host system. Previously, it was possible to ensure the data integrity
within the storage system only with ECC. However, T10 DIF can now check end-to-end data
integrity through the SAN. Checking is done by hardware, so no performance impact occurs.
For more information about T10 DIF implementation in the DS8900F, see “T10 Data Integrity
Field support” on page 118.
To provide more proactive system diagnosis information about SAN fabric systems, the Read
Diagnostic Parameters (RDP) function, which complies with industry standards, is
implemented on the DS8900F. This function provides host software with the capability to
perform predictive failure analysis (PFA) on degraded SAN links before they fail.
When troubleshooting SAN errors, the IBM SSR can run a wrap test on a single host adapter
port without taking the entire adapter offline.
Multipathing software
Each attached host OS requires multipathing software to manage multiple paths to the same
device, and to provide redundant routes for host I/O requests. When a failure occurs on one
path to a logical device, the multipathing software on the attached host can identify the failed
path and route the I/O requests for the logical device to alternative paths. Furthermore, it can
likely detect when the path is restored. The multipathing software that is used varies by
attached host OS and environment, as described in the following sections.
Open systems
In most open systems environments, multipathing is available at the OS level. The Subsystem
Device Driver (SDD), which was provided and maintained by IBM for several OSs, is now an
obsolete approach for a multipathing solution.
For the AIX OS, the DS8000 is supported through the AIX multipath I/O (MPIO) framework,
which is included in the base AIX OS. Use the base AIX Multipath I/O Path Control Module
(AIXPCM) support instead of the old SDDPCM.
For multipathing under Microsoft Windows, the DS8000 is supported by the native Microsoft
MPIO stack by using Microsoft Device Specific Module (MSDSM). Existing environments that
rely on the old SDDDSM should be moved to the native OS driver.
Note: To move existing SDDPCM and SDDDSM implementations, see the following
resources:
How To Migrate SDDPCM to AIXPCM
Migrating from SDDDSM to Microsoft MSDSM - SVC/Storwize
For all newer versions of RHEL and SUSE Linux Enterprise Server, the native Linux
multipathing driver, Device-Mapper Multipath (DM Multipath), is used.
Also, on the VMware vSphere ESXi server, the VMware Native Multipathing Plug-in (NMP) is
the supported multipathing solution.
For more information about the multipathing software that might be required for various OSs,
see the IBM System Storage Interoperation Center (SSIC).
IBM Z
In the IBM Z environment, a best practice is to provide multiple paths from each host to a
storage system. Typically, four or eight paths are configured. The channels in each host that
can access each logical control unit (LCU) in the DS8900F are defined in the hardware
configuration definition (HCD) or input/output configuration data set (IOCDS) for that host.
Dynamic Path Selection (DPS) allows the channel subsystem to select any available
(non-busy) path to start an operation to the disk subsystem. Dynamic Path Reconnect (DPR)
allows the DS8900F to select any available path to a host to reconnect and resume a
disconnected operation, for example, to transfer data after disconnection because of a cache
miss.
These functions are part of the IBM z/Architecture®, and are managed by the channel
subsystem on the host and the DS8900F.
A physical FICON path is established when the DS8900F port sees light on the fiber, for
example, a cable is plugged in to a DS8900F host adapter, a processor or the DS8900F is
powered on, or a path is configured online by z/OS. Logical paths are established through the
port between the host, and part or all of the LCUs in the DS8900F are controlled by the HCD
definition for that host. This configuration happens for each physical path between an IBM Z
host and the DS8900F. Multiple system images can be in a CPU. Logical paths are
established for each system image. The DS8900F then knows the paths that can be used to
communicate between each LCU and each host.
CUIR is available for the DS8900F when it operates in the z/OS and IBM z/VM®
environments. CUIR provides automatic channel path vary on and vary off actions to
minimize manual operator intervention during selected DS8900F service actions.
CUIR also allows the DS8900F to request that all attached system images set all paths that
are required for a particular service action to the offline state. System images with the correct
level of software support respond to such requests by varying off the affected paths, and
either notifying the DS8900F system that the paths are offline, or that it cannot take the paths
offline. CUIR reduces manual operator intervention and the possibility of human error during
maintenance actions, and reduces the time that is required for the maintenance. This function
is useful in environments in which many z/OS or z/VM systems are attached to a DS8900F.
The metadata check is independent of the DS8900F T10 DIF support for FB volumes. For
more information about T10 DIF implementation in the DS8000, see “T10 Data Integrity Field
support” on page 118.
The HMC is the DS8900F management focal point. If no HMC is operational, it is impossible
to run maintenance, modifications to the logical configuration, or Copy Services (CS) tasks,
such as the establishment of FlashCopy backups, Metro Mirror (MM) or Global Mirror (GM),
by using the DS Command-line Interface (DS CLI), Storage Management GUI, or IBM Copy
Services Manager. The implementation of a secondary HMC provides a redundant
management focal point and is especially important if CS or Encryption Key Manager (EKM)
are used.
The DS8900F CPCs have an OS (AIX) and Licensed Machine Code (LMC) that can be
updated. As IBM continues to develop and improve the DS8900F, new releases of firmware
and LMC become available that offer improvements in function and reliability. For more
information about LIC updates, see Chapter 11, “Licensed Machine Code” on page 405.
Call Home
Call Home is the capability of the DS8900F to notify the client and IBM Support to report a
problem. Call Home is configured in the HMC at installation time. Call Home to IBM Support
is done over the customer network through a secure protocol. Customer notifications can also
be configured as email (SMTP) or Simple Network Management Protocol (SNMP) alerts. An
example of an email notification output is shown in Example 3-2.
For more information about planning the connections that are needed for HMC installations,
see Chapter 6, “IBM DS8900F Management Console planning and setup” on page 167.
For more information about setting up SNMP notifications, see Chapter 12, “Monitoring and
support” on page 423.
Remote support
Remote support provides the ability of IBM Support personnel to remotely access the
DS8900F. This capability can be configured at the HMC, and access is through Assist On-site
(AOS) or by IBM Remote Support Center (RSC).
For more information about remote support operations, see Chapter 12, “Monitoring and
support” on page 423.
For more information about AOS, see IBM Assist On-site for Storage Overview, REDP-4889.
Note: Due to the added resiliency of RAID 6, RAID 5 is not recommended and only
supported by RPQ.
IBM Storage Modeler is an easy-to-use web tool that is available only to IBM personnel and
Business Partners to help with capacity planning for physical and usable capacities that are
based on installation drive capacities and quantities in intended RAID configurations.
RAID 6 is the default when creating new arrays by using the DS Storage Manager GUI.
For the latest information about supported RAID configurations and to request an RPQ /
SCORE, contact your IBM SSR.
Each flash drive has two separate connections to the enclosure backplane. This configuration
allows a flash drive to be simultaneously attached to both SAS expander switches. If either
ESM is removed from the enclosure, the SAS expander switch in the remaining ESM retains
the ability to communicate with all the flash drives and both flash RAID controllers in the DA
pair. Similarly, each DA has a path to each switch, so it can also tolerate the loss of a single
path. If both paths from one DA fail, it cannot access the switches. However, the partner DA
retains connectivity to all drives in the enclosure pair.
For more information about the drive subsystem of the DS8900F, see 2.5, “Flash drive
enclosures” on page 56.
The arrays are balanced between the flash enclosures to provide redundancy and
performance. Both flash RAID controllers can access all arrays within the DA pair. Each
controller in a DA pair is installed in different I/O enclosures, and each has allegiance to a
different CPC.
Figure 3-7 on page 91 shows the connections for the DA and flash enclosure pair.
Figure 3-7 (not reproduced): the first DA (with affinity to CPC 0) and the second DA (with affinity to CPC 1) each connect to the flash enclosure pair through two dual paths.
If ECC detects correctable bad bits, the bits are corrected immediately. This ability reduces
the possibility of multiple bad bits accumulating in a block beyond the ability of ECC to correct
them. If a block contains data that is beyond ECC’s ability to correct, RAID is used to
regenerate the data and write a new copy onto a spare block or cell of the flash drive. This
scrubbing process applies to flash drives that are array members and spares.
Data scrubbing can proactively relocate data, which reduces the probability of data reread
impact. Data scrubbing does this relocation before errors add up to a level beyond error
correction capabilities.
Important: RAID 6 is now the default and preferred setting for the DS8900F. RAID 5 can
be configured with exceptions, but it is not recommended and requires an RPQ. On
high-capacity tiers (Tier 1 and Tier 2) RAID 5 is not allowed at all. RAID 10 continues to be
an option for all-flash drive types.
Spare drives
An HPFE Gen2 pair in a DS8900F can contain up to six array sites. Each array site contains
eight flash drives, and the HPFE Gen2 pair has two spare flash drives for each enclosure pair.
The first two array sites on a flash RAID controller (DA) pair have a spare that is assigned,
and the rest of the array sites have no spare that is assigned if all flash drives are the same
capacity. The number of required spare drives per flash enclosure pair applies to all available
RAID levels.
RAID 6 provides around a 1,000 times improvement over RAID 5 for impact risk. RAID 6
allows more fault tolerance by using a second independent distributed parity scheme (dual
parity). Data is striped on a block level across a set of drives, similar to RAID 5 configurations.
The second set of parity is calculated and written across all the drives, and allows
reconstruction of the data even when two drives fail. The striping is shown in Figure 3-8.
Figure 3-8 (RAID 6 stripe layout across seven drives):
Drives:   0    1    2    3    4    5     6
          0    1    2    3    4    P00   P01
          5    6    7    8    9    P10   P11
          10   11   12   13   14   P20   P21
          15   16   17   18   19   P30   P31
                                         P41
P00 = 0+1+2+3+4; P10 = 5+6+7+8+9; and so on.
P01 = 9+13+17+0; P11 = 14+18+1+5; and so on.
P41 = 4+8+12+16
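The following Python sketch illustrates the dual-parity idea behind Figure 3-8: one parity value per row, plus a second, independently computed parity over different block groupings. XOR stands in for the real parity arithmetic, and only the groupings spelled out in the figure are reproduced; this is a conceptual illustration, not the DS8900F RAID 6 implementation.

from functools import reduce

def parity(blocks):
    return reduce(lambda a, b: a ^ b, blocks)   # XOR as stand-in parity

# Illustrative one-byte "blocks" 0..19, in four stripe rows of five.
data = {i: (i * 37 + 5) & 0xFF for i in range(20)}
rows = [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9],
        [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]

# First parity set: one value per row (P00, P10, P20, P30).
row_parity = [parity([data[b] for b in row]) for row in rows]

# Second, independent parity set over the groupings from the figure
# (P21 and P31 follow the same pattern and are omitted here).
second_parity = {
    "P01": parity([data[b] for b in (9, 13, 17, 0)]),
    "P11": parity([data[b] for b in (14, 18, 1, 5)]),
    "P41": parity([data[b] for b in (4, 8, 12, 16)]),
}

# With two independent parity sets, the stripe survives two drive failures.
# Here a single lost block is rebuilt from its surviving row members plus P10.
lost = 7
rebuilt = parity([data[b] for b in rows[1] if b != lost] + [row_parity[1]])
assert rebuilt == data[lost]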
For random writes, the throughput of a RAID 6 array is only two-thirds of a RAID 5 due to the
extra parity handling. Workload planning is important before considering RAID 6 for
write-intensive applications, including CS targets. In the case of high random-write ratios,
RAID 10 can be the better choice.
When RAID 6 is sized correctly for the I/O demand, it is a considerable reliability
enhancement, as shown in Figure 3-8 on page 92.
During the rebuilding of the data on the new drive, the DA can still handle I/O requests from
the connected hosts to the affected array. Performance degradation might occur during the
reconstruction because DAs and path resources are used to do the rebuild. Because of the
dual-path architecture of the DS8900F, this effect is minimal. Additionally, any read requests
for data on the failing drive require data to be read from the other drives in the array, and then
the DA reconstructs the data.
Any subsequent failure during the reconstruction within the same array (second drive failure,
second coincident medium errors, or a drive failure and a medium error) can be recovered
without data loss.
Performance of the RAID 6 array returns to normal when the data reconstruction on the spare
drive is complete. The rebuild time varies, depending on the capacity of the failed drive and
the workload on the array and the DA. The completion time is comparable to a RAID 5 rebuild,
but slower than rebuilding a RAID 10 array in a single drive failure.
RAID 10 offers faster data reads and writes than RAID 6 or RAID 5 because it does not need
to manage parity. However, with half of the drives in the group used for data and the other half
mirroring that data, RAID 10 arrays have less usable capacity than RAID 6 or RAID 5 arrays.
RAID 10 is commonly used for workloads that require the highest performance from the drive
subsystem. With RAID 6, each front-end random write I/O might theoretically lead to six
back-end I/Os, including the parity updates (RAID penalty), but this number is four for
RAID 5 and only two for RAID 10 (not counting cache optimizations). A typical use case for
RAID 10 is for workloads with a high random-write ratio. Either member in the mirrored pair
can respond to the read requests.
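As a back-of-the-envelope illustration of these penalty figures (a sketch only, not an IBM sizing method; use the IBM sizing tools for real planning), the back-end I/O rate for a random workload can be estimated as follows:

# Theoretical back-end I/Os per random front-end write, as stated in the text.
RAID_WRITE_PENALTY = {"RAID 6": 6, "RAID 5": 4, "RAID 10": 2}

def backend_iops(front_end_iops, write_ratio, raid_level):
    reads = front_end_iops * (1 - write_ratio)             # one back-end read each
    writes = front_end_iops * write_ratio
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# Example: 10,000 front-end IOPS with a 70% random-write ratio.
for level in RAID_WRITE_PENALTY:
    print(level, backend_iops(10_000, 0.7, level))
# RAID 6: 45,000 back-end IOPS; RAID 5: 31,000; RAID 10: 17,000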
While this data copy is occurring, the DA can still service read/write requests to the array from
the hosts. Performance might degrade while the copy operation is in progress because DAs
and path resources are used to rebuild the RAID 1 pair. Because a good drive is available,
this effect is minimal. Read requests for data on the failed drive likely are not affected
because they are all directed to the good copy on the mirrored drive. Write operations are not
affected.
Performance of the RAID 10 array returns to normal when the data copy onto the spare drive
completes. The time that is taken for rebuild can vary, depending on the capacity of the failed
drive and the workload on the array and the DA.
Compared to RAID 5 or RAID 6, RAID 10 rebuild completion time is faster because rebuilding
a RAID 5 or RAID 6 array requires several reads on the remaining stripe units plus one parity
operation for each write. However, a RAID 10 configuration requires one read and one write
(essentially, a direct copy).
Important: RAID 5 can be configured for Tier 0 flash drives of less than 1 TB, but this
configuration is not recommended, and requires a risk acceptance and an RPQ for
high-performance flash drives. Tier 0 flash drive sizes larger than 1 TB (not Tier 1 and
Tier 2 high-capacity flash drives) can be configured by using RAID 5, but require an RPQ
and an internal control switch to be enabled.
An array site with a spare creates a RAID 5 array that is 6+P+S (where the P stands for parity
and S stands for spare). The other array sites on the DA pair are 7+P arrays.
The suspect drive and the new member-spare are set up in a temporary RAID 1 association,
allowing the troubled drive to be duplicated onto the spare rather than running a full RAID
reconstruction (rebuild) from data and parity. The new member-spare is then made a regular
member of the array and the suspect drive is rejected from the RAID array. The array never
goes through an n-1 stage in which it might suffer a complete failure if another drive in this
array encounters errors. The result saves substantial time and provides a new level of
availability that is not available in other RAID products.
Smart Rebuild is not applicable in all situations, so it is not always used. Smart Rebuild runs
only for healthy RAID arrays. If two drives with errors are in a RAID 6 configuration, or if the
drive mechanism failed to the point that it cannot accept any I/O, the standard RAID rebuild
procedure is used for the RAID array. If communications across a drive fabric are
compromised, such as an SAS path link error that causes the drive to be bypassed, standard
RAID rebuild procedures are used because the suspect drive is not available for a one-to-one
copy with a spare. If Smart Rebuild is not possible or cannot complete, a standard RAID
rebuild occurs.
Drive error patterns are continuously analyzed as part of the scheduled tasks that are run by
the DS8900F LIC. Drive firmware is optimized to report predictive errors to the DA. At any
time, when certain drive errors (following specific criteria) reach a specified threshold, the
RAS LIC component starts Smart Rebuild within the hour. This enhanced technique, when it
is combined with a more frequent schedule, leads to considerably faster identification of
drives showing signs of imminent failure.
Smart Rebuild is also used to proactively rebalance member and spare drive distribution
between the paths in a DA pair. Also, if a DA pair has a mix of different capacity flash drives, a
larger spare may, in some cases, be taken by a smaller drive array. Smart Rebuild corrects
this situation after the failing drives are replaced, and returns the larger drive to the spare pool.
DS8000 Release 9.1 code provided an enhancement of the rebuild process by avoiding the
rebuild of areas that are not mapped to logical volumes.
This process is performed by running a status command to the drives to determine whether
the parity stripe is unmapped. This process prevents unnecessary writes (P/E cycles) of
zeroed data to the target drive in a rebuild, allowing faster rebuild for partially allocated RAID
arrays.
IBM SSRs and remote support can manually initiate a Smart Rebuild if needed, such as when
two drives in an array are logging temporary media errors.
A minimum of one spare is created for each array site that is assigned to an array until the
following conditions are met:
A minimum of two spares per DA pair exist.
A minimum of two spares for the largest capacity array site on the DA pair exist.
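The following small Python check restates these two conditions (an illustrative sketch only, not the DS8900F sparing algorithm; the function and argument names are invented for this example):

def needs_another_spare(spares_by_capacity, largest_capacity_tb):
    # spares_by_capacity: drive capacity (TB) -> number of spares on the DA pair
    total_spares = sum(spares_by_capacity.values())
    largest_spares = spares_by_capacity.get(largest_capacity_tb, 0)
    return total_spares < 2 or largest_spares < 2

# A DA pair with two 3.84 TB spares but only one spare of the largest
# (7.68 TB) array-site capacity still needs another spare.
print(needs_another_spare({3.84: 2, 7.68: 1}, largest_capacity_tb=7.68))   # True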
Spare rebalancing
The DS8900F implements a spare rebalancing technique for spare drives. When a drive fails
and a hot spare is taken, it becomes a member of that array. When the failed drive is repaired,
DS8900F LIC might choose to allow the hot spare to remain where it was moved. However, it
can instead choose to move the spare to a more optimum position. This migration is
performed to better balance the spares across the two dual flash enclosure paths to provide
the optimum spare location based on drive capacity and spare availability.
In a flash drive intermix on a DA pair, it is possible to rebuild the contents of a smaller flash
drive onto a larger spare drive. When the failed origin flash drive is replaced with a new drive,
the DS8900F LIC moves the data back onto the recently replaced drive.
When this process completes, the smaller flash drive rejoins the array, and the larger drive
becomes a spare again.
Hot-pluggable drives
Replacing a failed flash drive does not affect the operation of the DS8900F system because
the drives are fully hot-pluggable. Each drive plugs into a SAS expander switch, so no path
break is associated with the removal or replacement of a drive. In addition, no potentially
disruptive loop initialization process occurs.
All power and cooling components that constitute the DS8900F power subsystem are fully
redundant. The key element that allows this high level of redundancy is a dual power domain
configuration that is formed of iPDU pairs. Dual PSUs in all major components provide a 2N
redundancy for the system.
Combined with the NVDIMMs and the BPMs, which preserve the NVS write cache, the design
protects the storage system in an input power failure.
The BPMs in each of the CPCs provide power to complete the movement of write data from
cache memory to non-volatile flash storage if an input power loss occurs in both power
domains (as described in 3.2.4, “NVS and power outages” on page 81).
The CPCs, I/O enclosures, and flash enclosures components in the frame all feature
duplicated PSUs.
In addition, the ME includes redundant PSUs that provide dual power to the ME components,
such as the primary and secondary HMCs, Rack Power Control (RPC) cards, and the internal
Ethernet switches.
iPDUs are installed in pairs, one in each input power domain. An iPDU module can be
replaced concurrently, as described in 2.6.3, “Power domains” on page 64.
The iPDUs are firmware upgradeable, and they are controlled and managed by the HMCs through
their Ethernet interfaces.
For more information, see 2.6.2, “Intelligent Power Distribution Units” on page 59.
iPDUs support high or low voltage three-phase, and low-voltage single-phase input power.
The correct power cables must be used. For more information about power cord Feature
Codes, see IBM DS8900F Introduction and Planning Guide, GC27-9560.
The BPMs provide the power for this emergency copy process of the NVDIMMs. They are
firmware upgradeable. The condition of the BPMs is continually monitored by the CPC FSP.
The BPMs have fast charge times that ensure that an empty BPM is charged and fully
operational during the IML phase of the system when the storage system powers on so that
no SPoF occurs. For more information, see 2.6.5, “Backup Power Modules and NVDIMM” on
page 65.
The DS8900F BPMs have a 5-year lifetime. If a BPM must be replaced, the containing CPC
must be set to service mode and shut down, which invokes a failover of all operations to the
other CPC. Because of the high resilience of the system, the remaining CPC keeps the whole
storage facility operable and in production servicing all I/Os. As a best practice, replacement
should be done in a scheduled service window to avoid reduced performance and
redundancy during peak workload hours. As the BPM is monitored, sufficient warning is given
to schedule the service action.
Each flash drive enclosure power supply plugs into two separate iPDUs, which must each be
supplied by redundant independent customer power feeds.
CPC power supply units and I/O enclosure power supply units
Each CPC and I/O enclosure has dual redundant PSUs that each receive power from a
designated iPDU pair. Each I/O enclosure and each CPC has its own cooling fans.
Figure 3-10 shows the power control settings window of the Storage Management GUI.
Figure 3-10 DS8900F modify power control settings from the Storage Management GUI
In addition, the following switches in the ME of a DS8900F are accessible when the ME cover
is open:
The local mode jumper connector on the local remote switch card is for service use only.
There is a plug to set the system to local (ME) power control mode.
The local power on / local force power off switch (white switch) is also on the local remote
switch card in the ME. This switch can manually power on or force power off the complete
system if the local remote switch card is in local power control mode. When the local /
remote switch card is in remote power control mode (nothing plugged in the local mode
jumper connector), the HMCs are in control of power-on / power-off (this condition is the
default for client usage).
Powering off the storage system with the white switch is a forceful shutdown. It includes
the procedure of moving NVS data to the flash portion of the NVDIMMs, which must be
destaged on the next system power-on and start.
Important: The local / remote power off switch (white switch) must be used only by service
personnel. The switch can be used only under certain circumstances and as part of an
action plan or problem determination that is performed by an IBM SSR.
Note: The Ethernet switches that are used internally in DS8900F are for private network
communication only. No external connection to the private networks is allowed. Client
connectivity to the DS8900F is allowed only through the provided external customer HMC
Ethernet connectors (eth2 and eth1) at the rear of the base frame.
Storage system frames with this optional seismic kit include hardware at the bottom of the
frame that secures it to the floor. Depending on the flooring in your environment (specifically,
non-raised floors), installation of the required floor mounting hardware might be disruptive.
This kit must be special-ordered for the DS8900F. The kit is not available for the
rack-mountable DS8910F model 993. For more information, contact your IBM SSR.
The storage system also overwrites the areas that are usually not accessible and used only
internally by the disk.
NVDIMM
The NVDIMMs are cleared by applying a single-pass overwrite in accordance with NIST
SP-800-88R1. This process is run in parallel on both CPCs.
Process overview
The SDO process is summarized in these steps:
1. After the logical configuration is removed, SDO is started from the primary HMC.
2. The primary HMC performs a dual cluster restart.
3. The crypto-erase and format of the flash drives is started.
4. The overwrite of the CPC hard disk drives (HDDs) is started in parallel (with each other
and with the above).
5. The overwrite of the secondary HMC is started in parallel.
6. Both CPCs are restarted and the NVDIMMs are cleared.
7. After the overwrite of the CPC and secondary HMC HDDs is complete, the primary HMC
HDD is overwritten.
8. The certificate is generated.
Certificate
The certificate provides written verification, by drive or flash drive serial number, of the full
result of the overwrite operations. You can retrieve the certificate by using DS CLI, or your
IBM SSR can offload the certificate to removable media and provide it to you. Example 3-3
shows a sample SDO certificate.
Secure Data Overwrite Service for the IBM System Storage DS8900F
Certificate of Completion
2. IBM performed such Secure Data Overwrite Service as set forth herein
In all cases, the successful completion of all erasure commands is a prerequisite for successful erasure.
Flash module (shown as 2.5" FLASH-FDE) were PURGED in accordance with NIST SP-800-88R1 for flash-based media, by
issuing the sanitize command, which performs a crypto erase followed by block overwrite.
NVDIMM's NAND Flash blocks were CLEARED in accordance with NIST SP-800-88R1 for flash-based media, by applying a
single overwrite pattern of 0x00. After the blocks are cleared, the data was read back to verify that the contents
are erased. The overwrite and verification was performed by using vendor provided tools/methods.
CPC drives were CLEARED in accordance with NIST SP-800-88R1 for magnetic disks, by applying a single pass overwrite
pattern of 0x00. Random samples, the first two sectors, and the last 10000 sectors were read back and verified to
match the data written.
HMC flash base media drives were NOT securely erased. This device does not contain customer data, but the partition
containing all trace data and diagnostic dumps was overwritten with single pass overwrite pattern of 0x00.
Scope
==================
This report covers the secure data overwrite service that is performed on the
DS8900F storage system with the serial number 75NH430
The Drive Types Table provides information about each drive type that is installed on the DS8000 system.
a) Drive Type: This identifies that the drive is solid-state class of full disk encryption drive that is, 2.5"
FLASH-FDE.
b) Drive block type: This identifies that the drive block consists of 528 bytes.
c) Drive Capacity: This identifies the specified drive type's capacity in GB.
This section covers the devices that are used to store customer data (and associated metadata) both of which are
subject to erasure.
Disk Type - All these devices are flash memory-based and are labeled as FLASH-FDE
Disk Serial# - Manufacturer assigned serial number visible on the device case
WWNN. - Device WWNN
Drive Location - Rack, Enclosure, and slot where the device is installed.
Overwrite Status - The success or failure of the overwrite operation
Sector Defect Count - Always zero for these devices.
This section covers the devices on the processors that are used to store the operating system,
configuration data and trace data on the Central Processor Complex (CPC) servers.
--------------------------------------------------------------------------------
| Processor | hdisk | CPC Drive | Overwrite | Completion |
| Complex # | Number | Serial Number | Status | Date |
--------------------------------------------------------------------------------
| CPC 0 | hdisk0 | WAE1045Q | Successful | 2021/03/09 19:11:46 |
| CPC 0 | hdisk1 | WAE10N39 | Successful | 2021/03/09 20:10:31 |
| CPC 1 | hdisk0 | 0TJ5SJLP | Successful | 2021/03/09 19:13:16 |
| CPC 1 | hdisk1 | WAE104DZ | Successful | 2021/03/09 20:12:32 |
--------------------------------------------------------------------------------
This section covers the devices on the processors that are used to store the operating system,
configuration data and trace data on the Hardware Management Console (HMC).
As noted above, these devices were NOT erased and only the partition containing logs
and dumps were deleted.
HMC Type - Indicates whether this is the first or optional second HMC
HMC Drive Serial Number - Manufacturer assigned serial number visible on the device case
Overwrite Status - The success or failure of the overwrite operation
Completion Date - Completion timestamp
HMC Drive Type - Always SSD for these systems
--------------------------------------------------------------------------------------------------------
| HMC Type | HMC Drive Serial Number | SDO Results | Completion Date | HMC Drive Type |
| | | | | Hard Disk Drive/ |
| | | | | SSD |
--------------------------------------------------------------------------------------------------------
| First Management | N/A | Successful | 03/10/21-06:33:51 | SSD |
| Console | | | | |
--------------------------------------------------------------------------------------------------------
|Secondary Management| N/A | Successful | 03/10/21-03:49:39 | SSD |
| Console | | | | |
--------------------------------------------------------------------------------------------------------
This section covers the devices that are used to store customer data when system goes through emergency power off and
the device is subject to erasure.
NVDIMM Location Code - Central Processor Complex (CPC) and slot where the device is
installed
Serial Number- Manufacturer assigned serial number visible on the device
NVDIMM Capacity- Device capacity in GB
Overwrite Status- The success or failure of the overwrite operation
Completion Date - Completion timestamp
4.3 Abstraction layers for drive virtualization
Virtualization in the DS8900F refers to the process of preparing physical drives for storing
data on logical volumes for use by attached hosts. Logical volumes are seen by the hosts as
though they were physical storage. In open systems, this process is known as creating logical
unit numbers (LUNs). In IBM Z, it refers to the creation of 3390 volumes.
The basis for virtualization begins with the physical drives, which are mounted in storage
enclosures and connected to the internal storage servers. DS8900F uses only the
High-Performance Flash Enclosure (HPFE) Gen2 storage enclosures. To learn more about
the drive options and their connectivity to the internal storage servers, see 2.5, “Flash drive
enclosures” on page 56.
4.3.2 Arrays
An array is created from one array site. When an array is created, its RAID level, array type,
and array configuration are defined. This process is also called defining an array. In all
IBM DS8000 series implementations, one array is always defined as using one array site.
Each HPFE Gen2 pair can contain up to six array sites. The first set of 16 flash drives creates
two 8-drive array sites. RAID 6 arrays are created by default on each array site. RAID 10 is
optional for all flash drive sizes. RAID 5 is optional for flash drives smaller than 1 TB, but is not
recommended, and requires risk acceptance.
A Request for Price Quotation (RPQ) is required to use RAID 5 on flash drives greater than
1 TB (RPQ is not available for drive sizes of 4 TB and greater).
Important: Using RAID 6 is recommended, and it is the default in the DS GUI. With large
drives in particular, the RAID rebuild times (after one drive failure) get ever longer.
Using RAID 6 reduces the danger of data loss due to a double-drive failure. For more
information, see 3.5.1, “RAID configurations” on page 89.
For more information about the sparing algorithm, see 3.5.11, “Spare creation” on page 96.
Figure 4-1 shows the creation of a RAID 6 array with one spare, which is also called a
5+P+Q+S array. It has a capacity of five drives for data, two drives for double distributed
parity, and a spare drive. According to the RAID 6 rules, parities are distributed across all
seven drives in this example.
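To make the drive accounting concrete, the short Python sketch below tabulates the data, parity, and spare drives per eight-drive array site for the RAID 5 and RAID 6 array types that are named in this chapter (illustrative only; 6+P+Q and 7+P are the no-spare counterparts, and the RAID 10 array types are omitted):

# (data, parity, spare) drives per eight-drive array site.
ARRAY_TYPES = {
    "RAID 6 5+P+Q+S": (5, 2, 1),
    "RAID 6 6+P+Q":   (6, 2, 0),
    "RAID 5 6+P+S":   (6, 1, 1),
    "RAID 5 7+P":     (7, 1, 0),
}

for name, (data, parity, spare) in ARRAY_TYPES.items():
    fraction = data / (data + parity + spare)
    print(f"{name}: {data} data drives ({fraction:.1%} of the array site)")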
Depending on the selected RAID level and sparing requirements, six types of arrays are
possible, as shown in Figure 4-2 on page 111.
Figure 4-2 RAID array types
Tip: Larger drives have a longer rebuild time. Only RAID 6 can recover from a double disk
error during a rebuild, by using the additional parity data. RAID 6 is the best choice for
systems that require high availability (HA), and is the default in DS8900F.
4.3.3 Ranks
After the arrays are created, the next task is to define a rank. A rank is a logical
representation of the physical array that is formatted for use as FB or CKD storage types. In
the DS8900F, ranks are defined in a one-to-one relationship to arrays. Before you define any
ranks, you must decide whether you plan to encrypt the data or not.
Encryption group
All drives that are offered in the DS8900F are Full Disk Encryption (FDE)-capable to secure
all logical volume data at rest. In the DS8900F, the Encryption Authorization license is included
in the Base Function (BF) license group.
If you plan to use encryption for data at rest, you must define an encryption group before any
ranks are created. The DS8900F supports only one encryption group. All ranks must be in
this encryption group. The encryption group is an attribute of a rank. Therefore, your choice is
to encrypt everything or nothing. If you want to enable encryption later (create an encryption
group), all logical configuration must be deleted and re-created, and volume data restored.
For more information, see IBM DS8000 Encryption for Data at Rest, Transparent Cloud
Tiering, and Endpoint Security (DS8000 Release 9.2), REDP-4500.
Important: In all DS8000 series implementations, a rank is defined as using only one
array. Therefore, rank and array can be treated as synonyms.
An FB rank features an extent size of either 1 GB (more precisely a gibibyte (GiB), which is a
binary gigabyte that is equal to 2^30 bytes), called large extents, or an extent size of 16
mebibytes (MiB), called small extents.
IBM Z users or administrators typically do not deal with gigabytes or gibibytes. Instead,
storage is defined in terms of the original 3390 volume sizes. A 3390 Model 3 is three times
the size of a Model 1. A Model 1 features 1113 cylinders, which are about 0.946 GB.
A 3390 Model 1 (1113 cylinders) is the large extent size for CKD ranks. The CKD small extent
size is 21 cylinders, which corresponds to the z/OS allocation unit for EAV volumes larger
than 65520 cylinders. z/OS changes the addressing modes and allocates storage in
21 cylinder units.
An extent can be assigned to only one volume. Although you can define a CKD volume with a
capacity that is an integral multiple of one cylinder or an FB LUN with a capacity that is an
integral multiple of 128 logical blocks (64 KB), if you define a volume this way, you might
waste the unused capacity in the last extent that is assigned to the volume.
For example, the DS8900F theoretically supports a minimum CKD volume size of one
cylinder, but the volume still claims one full extent of 1113 cylinders if large extents are used
or 21 cylinders for small extents. So, 1112 cylinders are wasted if large extents are used.
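A minimal Python sketch of this extent arithmetic (illustrative only; capacities are expressed in cylinders, as in the text):

import math

def ckd_extent_usage(volume_cylinders, extent_cylinders):
    # A volume is allocated in whole extents; the remainder of the last
    # extent is unusable by other volumes.
    extents = math.ceil(volume_cylinders / extent_cylinders)
    wasted = extents * extent_cylinders - volume_cylinders
    return extents, wasted

print(ckd_extent_usage(1, 1113))     # (1, 1112): one-cylinder volume, large extents
print(ckd_extent_usage(1, 21))       # (1, 20):   one-cylinder volume, small extents
print(ckd_extent_usage(3339, 1113))  # (3, 0):    3390 Model 3 fits large extents exactly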
Note: In the DS8900F firmware, all volumes have a common metadata structure. All
volumes have the metadata structure of ESE volumes, whether the volumes are
thin-provisioned or fully provisioned. ESE is described in 4.4.4, “Volume allocation and
metadata” on page 125.
4.4 Extent pools
An extent pool is a logical construct to aggregate the extents from a set of ranks, and it forms
a domain for extent allocation to a logical volume. Originally, extent pools were used to
separate drives with different revolutions per minute (RPM) and capacity in different pools that
have homogeneous characteristics. You still might want to use extent pools to separate Tier 0,
Tier 1, and Tier 2 flash drives, but be aware that Easy Tier does not manage data placement
across extent pools.
No rank or array affinity to an internal server (central processor complex (CPC)) is predefined.
The affinity of the rank (and its associated array) to a server is determined when it is assigned
to an extent pool. One or more ranks with the same extent type (FB or CKD) can be assigned
to an extent pool.
Important: Because a rank is formatted to have small or large extents, the first rank that is
assigned to an extent pool determines whether the extent pool is a pool of all small or all
large extents. You cannot have a pool with a mixture of small and large extents. You cannot
change the extent size of an extent pool.
If you want Easy Tier to automatically optimize rank utilization, configure more than one rank
in an extent pool. A rank can be assigned to only one extent pool. As many extent pools as
ranks can exist, but for most systems, a single pair of extent pools for each rank type (FB or
CKD) provides the best overall performance.
Heterogeneous extent pools, with a mixture of Tier 0, Tier 1, and Tier 2 flash drives can take
advantage of the capabilities of Easy Tier to optimize I/O throughput. Easy Tier moves data
across different storage tiering levels to optimize the placement of the data within the extent
pool.
With storage pool striping, you can create logical volumes that are striped across multiple
ranks to enhance performance. To benefit from storage pool striping, more than one rank in
an extent pool is required.
Storage pool striping can enhance performance significantly. However, in the unlikely event
that a whole RAID array fails, the loss of the associated rank affects the entire extent pool
because data is striped across all ranks in the pool. For data protection, consider mirroring
your data to another DS8000 family storage system.
As with ranks, extent pools are also assigned to encryption group 0 or 1, where group 0 is
non-encrypted, and group 1 is encrypted. The DS8900F supports only one encryption group,
and all extent pools must use the same encryption setting that is used for the ranks.
A minimum of two extent pools must be configured to balance the capacity and workload
between the two servers. One extent pool is assigned to internal server 0. The other extent
pool is assigned to internal server 1. In a system with both FB and CKD volumes, four extent
pools provide one FB pool for each server and one CKD pool for each server.
If you plan on using small extents for ESE volumes while retaining large extents for other
volumes, you must create more pools with small extents. Small and large extents cannot be in
the same pool.
The related figure shows extent pools with 16 MiB extents (for example, ranks R1 and R2) and
1 GiB extents (for example, rank R4) that are assigned to POWER9 server 0 and server 1.
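As an illustration only, the following minimal DS CLI sketch creates one FB extent pool per internal server to satisfy the minimum configuration that is described above. The pool names are hypothetical; -rankgrp selects the internal server affinity (rank group 0 or 1), and -stgtype selects FB or CKD:
dscli> mkextpool -rankgrp 0 -stgtype fb ITSO_FB_0
dscli> mkextpool -rankgrp 1 -stgtype fb ITSO_FB_1
The extent size (small or large) of each pool is then determined by the first rank that is assigned to it, as noted previously.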
Extent pools can be expanded by adding more ranks to the pool. All ranks that belong to
extent pools with the same internal server affinity are called a rank group. Ranks are
organized in two rank groups. Rank group 0 is controlled by server 0, and rank group 1 is
controlled by server 1.
With dynamic extent pool merge, one extent pool can be merged into another while the logical
volumes in both extent pools remain accessible to the host systems. Dynamic
extent pool merge can be used for the following reasons:
Consolidation of two smaller extent pools with the equivalent storage type (FB or CKD)
and extent size into a larger extent pool. Creating a larger extent pool allows logical
volumes to be distributed over a greater number of ranks, which improves overall
performance in the presence of skewed workloads. Newly created volumes in the merged
extent pool allocate capacity as specified by the selected extent allocation algorithm.
Logical volumes that existed in either the source or the target extent pool can be
redistributed over the set of ranks in the merged extent pool by using the Migrate Volume
function.
Consolidating extent pools with different storage tiers to create a merged extent pool with
a mix of storage drive technologies. This type of an extent pool is called a multitiered pool,
and it is a prerequisite for using the Easy Tier automatic mode feature.
The related figure shows separate Tier 0, Tier 1, and Tier 2 pools that are merged into a single
multitiered pool.
Important: Volume migration (or dynamic volume relocation (DVR)) within the same extent pool is not supported in
multitiered pools. Easy Tier automatic mode rebalances the volumes’ extents within the
multitiered extent pool automatically based on I/O activity. However, you can also use the
Easy Tier application to manually place entire volumes in designated tiers. For more
information, see DS8870 Easy Tier Application, REDP-5014.
Dynamic extent pool merge is allowed only among extent pools with the same internal server
affinity or rank group. Additionally, the dynamic extent pool merge is not allowed in the
following circumstances:
If source and target pools have different storage types (FB and CKD).
If source and target pools have different extent sizes.
If you selected an extent pool that contains volumes that are being moved.
If the combined extent pools include 2 PB or more of ESE effective (virtual) capacity.
For more information about Easy Tier, see IBM DS8000 Easy Tier (Updated for DS8000
R9.0), REDP-4667.
Fixed-Block LUNs
A logical volume that is composed of FB extents is called a LUN. An FB LUN is composed of
one or more 1 GiB (2³⁰ bytes) large extents or one or more 16 MiB small extents from one FB
extent pool. A LUN cannot span multiple extent pools, but a LUN can have extents from
multiple ranks within the same extent pool. You can construct LUNs up to 16 TiB (16 × 2⁴⁰
bytes, or 2⁴⁴ bytes) when using large extents.
Important: DS8000 Copy Services (CS) does not support FB logical volumes larger than 4 TiB. Do not
create a LUN that is larger than 4 TiB if you want to use CS for the LUN unless the LUN is
integrated as a managed disk (MDisk) in an IBM SAN Volume Controller (SVC), and the
LUN is using IBM Spectrum® Virtualize CS.
LUNs can be provisioned (allocated) in binary GiB (2³⁰ bytes), decimal GB (10⁹ bytes), or
512 or 520-byte blocks. However, the usable (physical) capacity that is provisioned (allocated)
is a multiple of 1 GiB. For small extents, it is a multiple of 16 MiB. Therefore, it is a good idea
to use LUN sizes that are a multiple of a gibibyte or a multiple of 16 MiB. If you define a LUN
with a size that is not a multiple of 1 GiB (for example, 25.5 GiB), the LUN size is 25.5 GiB.
However, 26 GiB are physically provisioned (allocated), of which 0.5 GiB of the physical
storage is unusable. When you want to specify a LUN size that is not a multiple of 1 GiB,
specify the number of blocks. A 16 MiB extent has 32768 blocks.
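For example, a 1 GiB extent contains 2³⁰ / 512 = 2,097,152 blocks. To provision exactly 25.5 GiB, specify the LUN size in blocks:
25.5 GiB × 2,097,152 blocks per GiB = 53,477,376 blocks
53,477,376 blocks / 32,768 blocks per small extent = 1,632 small extents
In a small-extent pool, this size is an exact multiple of the extent size, so no capacity is wasted.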
The related figure shows a 3 GB LUN and a 2.9 GB LUN that are allocated from 1 GB extents
on ranks a and b; the 2.9 GB LUN leaves 100 MB of its last extent unused.
With small extents, wasted storage is not an issue.
An FB LUN must be managed by an LSS. One LSS can manage up to 256 LUNs. The LSSs
are created and managed by the DS8900F, as required. A total of 255 LSSs can be created in
the DS8900F.
IBM i LUNs may use the unprotected attribute, in which case the DS8900F reports that the
LUN is not RAID-protected. Selecting either the protected or unprotected attribute does not
affect the RAID protection that is used by the DS8900F on the open volume.
IBM i LUNs display a 520-byte block to the host. The operating system (OS) uses eight of
these bytes, so the usable space is still 512 bytes like other Small Computer System Interface
(SCSI) LUNs. The capacities that are quoted for the IBM i LUNs are in terms of the 512-byte
block capacity, and they are expressed in GB (10⁹ bytes). Convert these capacities to GiB (2³⁰ bytes)
when you consider the effective usage of extents that are 1 GiB (2³⁰ bytes).
Important: The DS8900F supports IBM i variable volume (LUN) sizes in addition to fixed
volume sizes.
IBM i volume enhancement adds flexibility for volume sizes and can optimize the DS8900F
capacity usage for IBM i environments.
The DS8900F supports IBM i variable volume data types A50, which is an unprotected
variable size volume, and A99, which is a protected variable size volume. For more
information, see Table 4-1. DS8000 Release 9 introduced dynamic expansion of IBM i
variable volumes.
Example 4-1 demonstrates the creation of both a protected and an unprotected IBM i variable
size volume by using the DS CLI.
Example 4-1 Creating the IBM i variable size for unprotected and protected volumes
dscli> mkfbvol -os400 050 -extpool P4 -name itso_iVarUnProt1 -cap 10 5413
CMUC00025I mkfbvol: FB volume 5413 successfully created.
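The protected counterpart is created in the same way. The following line is a sketch only; it assumes that the protected variable-size data type (A99) is selected with -os400 099, and it reuses the pool, name, capacity, and volume ID conventions of the previous command:
dscli> mkfbvol -os400 099 -extpool P4 -name itso_iVarProt1 -cap 10 5414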
Note: IBM i fixed volume sizes continue to be supported in current DS8000 code levels.
Consider the best option for your environment between fixed and variable-size volumes.
A T10 DIF-capable LUN uses 520-byte sectors instead of the common 512-byte sector size.
Eight bytes are added to the standard 512-byte data field. The 8-byte DIF consists of 2 bytes
of CRC data, a 4-byte Reference Tag (to protect against misdirected writes), and a 2-byte
Application Tag for applications that might use it.
On a write, the DIF is generated by the HBA based on the block data and the logical block address (LBA). The
DIF field is added to the end of the data block, and the data is sent through the fabric to the
storage target. The storage system validates the CRC and Reference Tag and, if correct,
stores the data block and DIF on the physical media. If the CRC does not match the data, the
data was corrupted during the write. The write operation is returned to the host with a write
error code. The host records the error and retransmits the data to the target. In this way, data
corruption is detected immediately on a write, and the corrupted data is never committed to
the physical media.
On a read, the DIF is returned with the data block to the host, which validates the CRC and
Reference Tags. This validation adds a small amount of latency for each I/O, which might affect
overall response time on smaller block transactions (less than 4 KB I/Os).
The DS8900F supports the T10 DIF standard for FB volumes that are accessed by the Fibre
Channel Protocol (FCP) channels that are used by Linux on IBM Z, or AIX. You can define
LUNs with an option to instruct the DS8900F to use the CRC-16 T10 DIF algorithm to store
the data.
You can also create T10 DIF-capable LUNs for OSs that do not yet support this feature
(except for IBM i). Active protection is available for Linux on IBM Z, and AIX on IBM Power
servers. For other distributed OSs, check their documentation.
When you create an FB LUN by running the mkfbvol DS CLI command, add the option
-t10dif. If you query a LUN with the showfbvol command, the data type is FB 512T instead
of the standard FB 512 type.
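For example, the following sketch creates a T10 DIF-capable LUN and then queries it. The pool ID, capacity, volume name, and volume ID are hypothetical; -t10dif is the option that is described above, and the capacity is assumed to be in GiB:
dscli> mkfbvol -extpool P1 -cap 100 -name itso_t10dif_1 -t10dif 2200
dscli> showfbvol 2200
The showfbvol output then reports the data type as FB 512T.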
Important: Because the DS8900F internally always uses 520-byte sectors (to support
IBM i volumes), no extra capacity is considered when standard or T10 DIF-capable
volumes are used.
Target LUN: When FlashCopy for a T10 DIF LUN is used, the target LUN must also be a
T10 DIF-type LUN. This restriction does not apply to mirroring.
Count Key Data volumes
An IBM Z CKD volume is composed of one or more extents from one CKD extent pool. CKD
extents are of the size of 3390 Model 1, which features 1113 cylinders for large extents or
21 cylinders for small extents. When you define an IBM Z CKD volume, specify the size of the
volume as a multiple of 3390 Model 1 volumes or the number of cylinders that you want for
the volume.
Before a CKD volume can be created, a logical control unit (LCU) must be defined that provides up to 256
possible addresses that can be used for CKD volumes. Up to 255 LCUs can be defined. For
more information about LCUs, see 4.4.5, “Logical subsystems” on page 130.
On a DS8900F, you can define CKD volumes with up to 1,182,006 cylinders, or about 1 TB.
This volume capacity is called an EAV, and it is supported by the 3390 Model A.
A CKD volume cannot span multiple extent pools, but a volume can have extents from
different ranks in the same extent pool. You also can stripe a volume across the ranks. For
more information, see “Storage pool striping: Extent rotation” on page 121.
Figure 4-6 shows an example of how a logical volume is provisioned (allocated) with a CKD
volume.
In the example in Figure 4-6, the last extent of the volume uses 1000 of its 1113 cylinders on
rank y, which leaves 113 cylinders unused.
Classically, to start an I/O to a base volume, z/OS can select any alias address only from the
same LCU as the base address to perform the I/O. With SuperPAV, the OS can use alias
addresses from other LCUs to perform an I/O for a base address.
The restriction is that the LCU of the alias address must belong to the same DS8000 internal server.
In other words, if the base address is in an even-numbered LCU, the alias address that z/OS
selects must also be in an even-numbered LCU, and likewise for odd-numbered LCUs. In addition, the LCU of the base volume and the
LCU of the alias volume must be in the same path group. z/OS prefers alias addresses from
the same LCU as the base address, but if no alias address is free, z/OS looks for free alias
addresses in LCUs of the same Alias Management Group.
An Alias Management Group is all the LCUs that have affinity to the same DS8000 internal
server and have the same paths to the DS8900F. SMF can provide reports at the Alias
Management Group level.
Initially, each alias address must be assigned to a base address. Therefore, it is not possible
to define an LCU with only alias addresses.
As with PAV and HyperPAV, SuperPAV must be enabled. SuperPAV is enabled by the
HYPERPAV=XPAV statement in the IECIOSxx parmlib member or by the SETIOS HYPERPAV=XPAV
command. The D M=DEV(address) and the D M=CU(address) display commands show whether
XPAV is enabled or not. With the D M=CU(address) command, you can check whether aliases
from other LCUs are being used (Example 4-2).
Example 4-2 Partial output of the D M=CU(address) command
PATH VALIDATED Y Y Y Y
MANAGED N N N N
ZHPF - CHPID Y Y Y Y
ZHPF - CU INTERFACE Y Y Y Y
MAXIMUM MANAGED CHPID(S) ALLOWED = 0
DESTINATION CU LOGICAL ADDRESS = 56
CU ND = 002107.981.IBM.75.0000000FXF41.0330
CU NED = 002107.981.IBM.75.0000000FXF41.5600
TOKEN NED = 002107.900.IBM.75.0000000FXF41.5600
FUNCTIONS ENABLED = ZHPF, XPAV
XPAV CU PEERS = 4802, 4A02, 4C02, 4E02
DEFINED DEVICES
04E00-04E07
DEFINED PAV ALIASES
14E40-14E47
With cross-LCU HyperPAV, which is called SuperPAV, the number of alias addresses can
further be reduced while the pool of available alias addresses to handle I/O bursts to volumes
is increased.
This construction method of using fixed extents to form a logical volume in the DS8900F
allows flexibility in the management of the logical volumes. You can delete LUNs or CKD
volumes, resize LUNs or volumes, and reuse the extents of those LUNs to create other LUNs
or volumes, including ones of different sizes. One logical volume can be removed without
affecting the other logical volumes that are defined on the same extent pool.
The extents are cleaned after you delete a LUN or CKD volume. The reformatting of the
extents is a background process, and it can take time until these extents are available for
reallocation.
Two extent allocation methods (EAMs) are available for the DS8000: Storage pool striping
(rotate extents) and rotate volumes.
Note: Although the preferred EAM was storage pool striping, it is now a better choice to let
Easy Tier manage the storage pool extents. This chapter describes rotate extents for the
sake of completeness, but it is now mostly irrelevant.
The DS8900F maintains a sequence of ranks. The first rank in the list is randomly picked at
each power-on of the storage system. The DS8900F tracks the rank in which the last
allocation started. The allocation of the first extent for the next volume starts from the next
rank in that sequence.
If more than one volume is created in one operation, the allocation for each volume starts in
another rank. When several volumes are provisioned (allocated), the allocations rotate through the ranks, as
shown in Figure 4-9.
You might want to consider this allocation method if you prefer to manage performance
manually. The workload of one volume goes to one rank, which makes the identification of
performance bottlenecks easier. However, by putting all of a volume’s data onto one rank, you
might introduce a bottleneck, depending on your actual workload.
Important: Rotate extents and rotate volume EAMs provide distribution of volumes over
ranks. Rotate extents performs this distribution at a granular (1 GiB or 16 MiB extent) level, and
it is the better method to minimize hot spots and improve overall performance.
However, as previously stated, Easy Tier is the preferred choice for managing the storage
pool extents.
In a mixed-tier extent pool that contains different tiers of ranks, the storage pool striping EAM
is used independently of the requested EAM, and the EAM is set to managed.
The Easy Tier default allocation order is High Performance. With the High Performance
setting, the system populates drive classes in this order: Flash Tier 0, Flash Tier 1, and then
Flash Tier 2.
There is a GUI and a CLI option for the whole Storage Facility to change the allocation
preference. The two options are High Performance and High Utilization. With the High
Utilization setting, the machine populates drive classes in this order: Flash Tier 1, Flash Tier
2, and then Flash Tier 0. The chsi command can be used to switch the ETTierOrder
parameter between High Performance and High Utilization.
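For example, the following sketch switches the whole Storage Facility to the High Utilization allocation order. The storage image ID is a placeholder, and the exact value spellings of the ETTierOrder parameter (highperf and highutil) are assumptions:
dscli> chsi -ettierorder highutil IBM.2107-75XXXXX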
When you create striped volumes and non-striped volumes in an extent pool, a rank might be
filled before the others. A full rank is skipped when you create striped volumes.
By using striped volumes, you distribute the I/O load of a LUN or CKD volume to more than
one set of eight drives, which can enhance performance for a logical volume. In particular,
OSs that do not include a volume manager with striping capability benefit most from this
allocation method.
Small extents can increase the parallelism of sequential writes. With large extents, the system stays
within one rank until 1 GiB is written; with small extents, it jumps to the next rank after 16 MiB.
This configuration uses more disk drives when performing sequential writes.
Important: If you must add capacity to an extent pool because it is nearly full, it is better to
add several ranks concurrently, not just one rank. This method allows new volumes to be
striped across the newly added ranks.
With the Easy Tier manual mode facility, if the extent pool is a single-tier pool, the user can
request an extent pool merge followed by a volume relocation with striping to run the same
function. For a multitiered managed extent pool, extents are automatically relocated over
time, according to performance needs. For more information, see IBM DS8000 Easy Tier
(Updated for DS8000 R9.0), REDP-4667.
Rotate volume EAM: The rotate volume EAM is not allowed if one extent pool is
composed of flash drives and configured for effective (virtual) capacity.
A logical volume includes the attribute of being striped across the ranks or not. If the volume
was created as striped across the ranks of the extent pool, the extents that are used to
increase the size of the volume are striped. If a volume was created without striping, the
system tries to allocate the additional extents within the same rank that the volume was
created from originally.
Because most OSs have no means of moving data from the end of the physical drive off to
unused space at the beginning of the drive, and because of the risk of data corruption, IBM
does not support shrinking a volume. The DS CLI and DS GUI interfaces cannot reduce the
size of a volume.
DVR allows data that is stored on a logical volume to be migrated from its allocated storage to
newly allocated storage while the logical volume remains accessible to attached hosts.
The user can request DVR by using the Migrate Volume function that is available through the
DS GUI or DS CLI. DVR allows the user to specify a target extent pool and an EAM. The
target extent pool can be a separate extent pool than the extent pool where the volume is. It
can also be the same extent pool, but only if it is a single-tier pool. However, the target extent
pool must be managed by the same DS8900F internal server.
Important: DVR in the same extent pool is not allowed in a managed pool. In managed
extent pools, Easy Tier automatic mode automatically relocates extents within the ranks to
allow performance rebalancing.
You can move volumes only among pools of the same extent size.
Each logical volume has a configuration state. To begin a volume migration, the logical
volume initially must be in the normal configuration state.
More functions are associated with volume migration that allow the user to pause, resume, or
cancel a volume migration. Any or all logical volumes can be requested to be migrated at any
time if available capacity is sufficient to support the reallocation of the migrating logical
volumes in their specified target extent pool. For more information, see IBM DS8000 Easy
Tier (Updated for DS8000 R9.0), REDP-4667.
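As a sketch only, a volume migration can be requested and canceled with the managefbvol command (manageckdvol for CKD volumes). The target pool, EAM, and volume range that are shown here are hypothetical, and the exact action names are assumptions based on typical DS CLI usage:
dscli> managefbvol -action migstart -extpool P2 -eam rotateexts 2000-200F
dscli> managefbvol -action migcancel 2000-200F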
4.4.4 Volume allocation and metadata
The DS8900F internal data layout is identical to the DS8880 internal data layout.
For ESE volumes, several logical extents, which are designated as auxiliary rank extents,
are allocated to contain volume metadata.
Figure 4-11 shows the extent layout in the DS8870 and earlier for ESE volumes. In
addition to the reserved area, auxiliary 1 GB extents are also allocated to store metadata
for the ESE volumes.
In the DS8900F and DS8880, all volumes use an improved metadata extent structure, similar
to what was previously used for ESE volumes. This unified extent structure greatly simplifies
the internal management of logical volumes and their metadata. No area is fixed and
reserved for volume metadata, and this capacity is added to the space that is available for
use.
For storage pools with large extents, metadata is also allocated as large extents (1 GiB for FB
pools or 1113 cylinders for CKD pools). Large extents that are allocated for metadata are
subdivided into 16 MiB subextents, which are also referred to as metadata extents, for FB
volumes, or 21 cylinders for CKD. For extent pools with small extents, metadata extents are
also small extents. Sixty-four metadata subextents are in each large metadata extent for FB,
and 53 metadata subextents are in each large metadata extent for CKD.
For each FB volume that is provisioned (allocated), an initial 16 MiB metadata subextent or
metadata small extent is allocated, and an extra 16 MiB metadata subextent or metadata
small extent is allocated for every 10 GiB of provisioned (allocated) capacity or portion of
provisioned capacity.
For each CKD volume that is provisioned (allocated), an initial 21 cylinders metadata
subextent or metadata small extent is allocated, and an extra 21 cylinders metadata
subextent or metadata small extent is allocated for every 11130 cylinders (or ten 3390 Model
1) of allocated capacity or portion of allocated capacity.
For example, a 3390-3 (that is, 3339 cylinders or about 3 GB) or 3390-9 (that is, 10,017
cylinders or 10 GB) volume takes two metadata extents (one metadata extent for the volume
and another metadata extent for any portion of the first 10 GB). A 128 GB FB volume takes 14
metadata extents (one metadata extent for the volume and another 13 metadata extents to
account for the 128 GB).
Figure 4-12 on page 127 shows an illustration of 3 GB and 12 GB FB volumes for a storage
pool with large extents. In an extent pool with small extents, there is no concept of subextents.
You have user extents, unused extents, and metadata extents.
Figure 4-12 Metadata allocation
Metadata extents with free space can be used for metadata by any volume in the extent pool.
When you use metadata extents and user extents within an extent pool, some planning and
calculations are required, especially in a mainframe environment where thousands of
volumes are defined and the whole capacity is provisioned (allocated) during the initial
configuration. You must calculate the capacity that is used up by the metadata to get the
capacity that can be used for user data. This calculation is important only when fully
provisioned volumes are used. Thin-provisioned volumes use only metadata space when they
are created; data space is used only when data is written.
For extent pools with small extents, the number of available user data extents can be
estimated by subtracting the metadata small extents, which follow the allocation rules that are
described above: one initial metadata small extent for each volume, plus one more for every
10 GiB (FB) or 11,130 cylinders (CKD) of provisioned capacity, or portion thereof.
For extent pools with regular 1 GiB extents where the details of the volume configuration are
not known, you can estimate the number of metadata extents based only on the number of
volumes and their total extents. The calculations are performed as shown:
FB pool overhead = (number of volumes × 2 + total volume extents/10)/64 and rounded up
to the nearest integer
CKD pool overhead = (number of volumes × 2 + total volume extents/10)/53 and rounded
up to the nearest integer
The formulas overestimate the space that is used by the metadata by a small amount
because they assume wasted space on every volume. However, the precise size of each
volume does not need to be known.
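For example, for a hypothetical FB pool with large extents that contains 1000 volumes provisioning a total of 100,000 extents:
FB pool overhead = (1000 × 2 + 100000/10)/64 = 12000/64 = 187.5, which is rounded up to 188 extents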
Space for a thin-provisioned volume is allocated when a write occurs. More precisely, it is
allocated when a destage from the cache occurs and insufficient free space is left on the
currently allocated extent.
Therefore, thin provisioning allows a volume to exist that is larger than the usable (physical)
capacity in the extent pool to which it belongs. This approach allows the “host” to work with
the volume at its defined capacity, even though insufficient usable (physical) space might exist
to fill the volume with data.
The assumption is that either the volume is never filled, or as the DS8900F runs low on raw
capacity, more is added. This approach also assumes that the DS8900F is not at its
maximum raw capacity.
Note: If thin provisioning is used, the metadata is allocated for the entire volume (effective
provisioned capacity) when the volume is created, not when extents are used.
Extent space efficient capacity controls for thin provisioning
Using thin provisioning can affect the amount of storage capacity that you choose to order.
Use ESE capacity controls to allocate storage correctly.
With the mixture of thin-provisioned (ESE) and fully provisioned (non-ESE) volumes in an
extent pool, a method is needed to dedicate part of the extent-pool storage capacity for ESE
user data usage, and to limit the ESE user data usage within the extent pool.
Also, you must be able to detect when the available storage space within the extent pool for
ESE volumes is running out of space.
ESE capacity controls provide extent pool attributes to limit the maximum extent pool storage
that is available for ESE user data usage. These controls also ensure that a proportion of the
extent pool storage is available for ESE user data usage.
The following controls are available to limit the usage of extents in an extent pool:
Reserve capacity in an extent pool by enabling the extent pool limit function by running the
chextpool -extentlimit enable -limit extent_limit_percentage pool_ID command.
You can reserve space for the sole use of ESE volumes by creating a repository by
running the mksestg -repcap capacity pool_id command.
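For example, the following sketch applies both controls to a pool. The pool ID, the 80% extent limit, and the 500 GiB repository capacity are hypothetical values, and the capacity unit of -repcap is an assumption:
dscli> chextpool -extentlimit enable -limit 80 P1
dscli> mksestg -repcap 500 P1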
The related figure shows an extent pool with its provisioned (virtual) capacity, an ESE
repository (repcap), an extent limit (%limit), and the capacity that is already allocated to ESE
and normal volumes.
Capacity controls exist for an extent pool and also for a repository, if it is defined. There are
system-defined warning thresholds at 15% and 0% free capacity left, and you can set your
own user-defined threshold for the whole extent pool or for the ESE repository. Thresholds for
an extent pool are set by running the DS CLI chextpool -threshold (or mkextpool)
command. Thresholds for a repository are set by running the chsestg -repcapthreshold (or
mksestg) command.
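For example, the following sketch sets a user-defined threshold on the pool and on its repository. The pool ID is hypothetical, and the threshold values are assumed to be percentages of remaining capacity:
dscli> chextpool -threshold 20 P1
dscli> chsestg -repcapthreshold 20 P1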
A Simple Network Management Protocol (SNMP) trap is associated with the extent pool /
repository capacity controls, and it notifies you when the extent usage in the pool exceeds a
user-defined threshold and when the extent pool is out of extents for user data.
When the size of the extent pool remains fixed or when it increases, the available usable
(physical) capacity remains greater than or equal to the provisioned (allocated) capacity.
However, a reduction in the size of the extent pool can cause the available usable (physical)
capacity to become less than the provisioned (allocated) capacity in certain cases.
For example, if the user requests that one of the ranks in an extent pool is depopulated, the
data on that rank is moved to the remaining ranks in the pool. This process causes the rank to
become not provisioned (allocated) and removed from the pool. The user is advised to
inspect the limits and threshold on the extent pool after any changes to the size of the extent
pool to ensure that the specified values are still consistent with the user’s intentions.
Overprovisioning control
It is possible to set the maximum allowed overprovisioning ratios for an extent pool.
A new attribute (-opratiolimit) is available when creating or modifying extent pools to define
overprovisioning ratio limits. Example 4-3 provides an example of creating and modifying an extent pool
with a defined overprovisioning ratio limit that cannot be exceeded.
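A minimal sketch of such commands follows. Only -opratiolimit is taken from the text; the pool name, pool ID, and ratio value are hypothetical, and the remaining parameters follow standard mkextpool and chextpool usage:
dscli> mkextpool -rankgrp 0 -stgtype fb -opratiolimit 2 ITSO_ESE_pool
dscli> chextpool -opratiolimit 2 P1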
Setting an overprovisioning ratio limit results in the following changes to system behavior to
prevent an extent pool from exceeding the overprovisioning ratio:
Prevent volume creation, expansion, or migration.
Prevent rank depopulation.
Prevent pool merge.
Prevent turning on Easy Tier space reservation.
For more information, see IBM DS8880 Thin Provisioning (Updated for Release 8.5),
REDP-5343.
All even-numbered LSSs (X’00’, X’02’, X’04’, up to X’FE’) are handled by internal server 0,
and all odd-numbered LSSs (X’01’, X’03’, X’05’, up to X’FD’) are handled by internal server 1.
LSS X’FF’ is reserved. This configuration allows both servers to handle host commands to the
volumes in the DS8000 if the configuration takes advantage of this capability. If either server
is not available, the remaining operational server handles all LSSs. LSSs are also placed in
address groups of 16 LSSs, except for the last group that has 15 LSSs. The first address
group is 00 - 0F, and so on, until the last group, which is F0 - FE.
Because the LSSs manage volumes, an individual LSS must manage the same type of
volumes. An address group must also manage the same type of volumes. The first volume
(either FB or CKD) that is assigned to an LSS in any address group sets that group to
manage those types of volumes. For more information, see “Address groups” on page 132.
Volumes are created in extent pools that are associated with either internal server 0 or 1.
Extent pools are also formatted to support either FB or CKD volumes. Therefore, volumes in
any internal server 0 extent pools can be managed by any even-numbered LSS if the LSS
and extent pool match the volume type. Volumes in any internal server 1 extent pools can be
managed by any odd-numbered LSS if the LSS and extent pool match the volume type.
Volumes also have an identifier 00 - FF. The first volume that is assigned to an LSS has an
identifier of 00. The second volume is 01, and so on, up to FF if 256 volumes are assigned to
the LSS.
For FB volumes, the LSSs that are used to manage them are not significant if you spread the
volumes between odd and even LSSs. When the volume is assigned to a host (in the
DS8900F configuration), a LUN ID is assigned to it that includes the LSS and volume ID. This
LUN ID is sent to the host when it first communicates with the DS8900F, so the host can include the LUN ID
in the “frame” that is sent to the DS8900F when it wants to run an I/O operation on the
volume. This method is how the DS8900F knows on which volume to run the operation.
Conversely, for CKD volumes, the LCU is significant. The LCU must be defined in a
configuration that is called the input/output configuration data set (IOCDS) on the host. The
LCU definition includes a control unit address (CUADD). This CUADD must match the LCU ID
in the DS8900F. A device definition for each volume, which includes a unit address (UA), is
also part of the IOCDS. This UA must match the volume ID of the device. The
host must include the CUADD and UA in the “frame” that is sent to the DS8900F when it
wants to run an I/O operation on the volume. This method is how the DS8900F knows on
which volume to run the operation.
For both FB and CKD volumes, when the “frame” that is sent from the host arrives at a host
adapter port in the DS8900F, the adapter checks the LSS or LCU identifier to know which
internal server to pass the request to inside the DS8900F. For more information about host
access to volumes, see 4.4.6, “Volume access” on page 133.
FB LSSs are created automatically when the first FB logical volume on the LSS is created. FB
LSSs are deleted automatically when the last FB logical volume on the LSS is deleted. CKD
LCUs require user parameters to be specified and must be created before the first CKD
logical volume can be created on the LCU. They must be deleted manually after the last CKD
logical volume on the LCU is deleted.
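For example, the following sketch defines two LCUs before the first CKD volumes are created. The quantity, starting LCU ID, and subsystem ID (SSID) values are hypothetical:
dscli> mklcu -qty 2 -id 10 -ss 2010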
Certain management actions in Metro Mirror (MM), Global Mirror (GM), or Global Copy (GC)
operate at the LSS level. For example, the freezing of pairs to preserve data consistency
across all pairs in case a problem occurs with one of the pairs is performed at the LSS level.
The related figure shows volumes from several array sites that are grouped into LSS X'17'
(DB2) and LSS X'18' (DB2-test).
Address groups
Address groups are created automatically when the first LSS that is associated with the
address group is created. The groups are deleted automatically when the last LSS in the
address group is deleted.
All devices in an LSS must be either all CKD or all FB. This restriction goes even further: LSSs are
grouped into address groups of 16 LSSs. LSSs are numbered X’ab’, where a is the address
group and b denotes an LSS within the address group. For example, X’10’ - X’1F’ are LSSs in
address group 1.
All LSSs within one address group must be of the same type (CKD or FB). The first LSS that
is defined in an address group sets the type of that address group.
Figure 4-15 shows the concept of LSSs and address groups.
In the example in Figure 4-15, volumes such as X'1D00' and X'1E01' are allocated from
extent pools FB-1 (rank y) and FB-2 (rank c) on server 0 and server 1, and they are managed
by LSSs in address group 1, such as LSS X'1E' and LSS X'1F'.
The LUN identification X’gabb’ is composed of the address group X’g’, the LSS number within
the address group X’a’, and the ID of the LUN within the LSS X’bb’. For example, FB LUN
X’2101’ denotes the second (X’01’) LUN in LSS X’21’ of address group 2.
An extent pool can have volumes that are managed by multiple address groups. The example
in Figure 4-15 shows only one address group that is used with each extent pool.
Host attachment
HBAs are identified to the DS8900F in a host attachment construct that specifies the
worldwide port names (WWPNs) of a host’s HBAs. A set of host ports can be associated
through a port group attribute that allows a set of HBAs to be managed collectively. This port
group is referred to as a host attachment within the configuration.
Each host attachment can be associated with a volume group to define the LUNs that host is
allowed to access. Multiple host attachments can share the volume group. The host
attachment can also specify a port mask that controls the DS8900F I/O ports that the host
HBA is allowed to log in to. Whichever ports the HBA logs in to, it sees the same volume
group that is defined on the host attachment that is associated with this HBA.
Volume group
A volume group is a named construct that defines a set of logical volumes. A volume group is
required only for FB volumes. When a volume group is used with CKD hosts, a default volume
group contains all CKD volumes. Any CKD host that logs in to a Fibre Channel connection
(IBM FICON) I/O port has access to the volumes in this volume group. CKD logical volumes
are automatically added to this volume group when they are created and are automatically
removed from this volume group when they are deleted.
When a host attachment object is used with open systems hosts, a host attachment object
that identifies the HBA is linked to a specific volume group. You must define the volume group
by indicating the FB volumes that are to be placed in the volume group. Logical volumes can
be added to or removed from any volume group dynamically.
Important: Volume group management is available only with the DS CLI. In the DS GUI,
users define hosts and assign volumes to hosts. A volume group is defined in the
background. No volume group object can be defined in the DS GUI.
Two types of volume groups are used with open systems hosts. The type determines how the
logical volume number is converted to a host addressable LUN_ID in the Fibre Channel (FC)
SCSI interface. A SCSI map volume group type is used with FC SCSI host types that poll for
LUNs by walking the address range on the SCSI interface. This type of volume group can
map up to 256 FB logical volume numbers to LUN IDs that have zeros in the last 6 bytes and
the first 2 bytes in the range X’0000’ - X’00FF’.
A SCSI mask volume group type is used with FC SCSI host types that use the Report LUNs
command to determine the LUN IDs that are accessible. This type of volume group can allow
any FB logical volume numbers to be accessed by the host where the mask is a bitmap that
specifies the LUNs that are accessible. For this volume group type, the logical volume number
X’abcd’ is mapped to LUN_ID X’40ab40cd00000000’. The volume group type also controls
whether 512-byte block LUNs or 520-byte block LUNs can be configured in the volume group.
When a host attachment is associated with a volume group, the host attachment contains
attributes that define the logical block size and the Address Discovery Method (LUN Polling or
Report LUNs) that is used by the host HBA. These attributes must be consistent with the
volume group type of the volume group that is assigned to the host attachment. This
consistency ensures that HBAs that share a volume group have a consistent interpretation of
the volume group definition and have access to a consistent set of logical volume types.
The DS Storage Manager GUI typically sets these values for the HBA based on your
specification of a host type. You must consider what volume group type to create when a
volume group is set up for a particular HBA.
FB logical volumes can be defined in one or more volume groups. This definition allows a
LUN to be shared by host HBAs that are configured to separate volume groups. An FB logical
volume is automatically removed from all volume groups when it is deleted.
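As an illustration only, the following DS CLI sketch creates a SCSI mask volume group and associates a host attachment with it. The volume range, group name, WWPN, host type, and host connection name are hypothetical, and the example assumes that the system assigned ID V11 to the new volume group:
dscli> mkvolgrp -type scsimask -volume 2100-210F ITSO_VG1
dscli> mkhostconnect -wwname 10000000C9A1B2C3 -hosttype pSeries -volgrp V11 ITSO_host1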
Figure 4-16 shows the relationships between host attachments and volume groups. Host
AIXprod1 has two HBAs, which are grouped in one host attachment, and both HBAs are
granted access to volume group DB2-1. Most of the volumes in volume group DB2-1 are also
in volume group DB2-2, which is accessed by the system AIXprod2.
However, one volume in each group is not shared in Figure 4-16. The system in the lower-left
part of the figure has four HBAs, and they are divided into two distinct host attachments. One
HBA can access volumes that are shared with AIXprod1 and AIXprod2. The other HBAs have
access to a volume group that is called docs.
In Figure 4-16, the host attachment Prog (WWPN-7 and WWPN-8) is granted access to volume
group docs.
When working with Open Systems clusters, use the Create Cluster
function of the DS Storage Manager GUI to easily define volumes. In general, the GUI hides
the complexity of certain DS8000 internal definition levels like volume groups, array sites, and
ranks. It helps save time by directly processing these definitions internally in the background
without presenting them to the administrator.
The related figure summarizes the virtualization hierarchy: array sites with data, parity, and
spare drives form ranks that provide 1 GB FB extents; the extents are aggregated into extent
pools on server 0; and the volumes are managed by LSSs (for example, LSS X'27') in address
groups X'2x' (FB) and X'3x' (CKD), each with 4096 addresses.
This virtualization concept provides a high degree of flexibility. Logical volumes can be
dynamically created, deleted, and resized. They can also be grouped logically to simplify
storage management. Large LUNs and CKD volumes reduce the total number of volumes,
which contributes to the reduction of management effort.
Tip: The DS GUI helps save administration steps by handling some of these virtualization
levels in the background and automatically processing them for the administrator.
4.5 Terminology for IBM Storage products
This section lists the capacity terms for all IBM Storage products. These terms replace all
terms that were previously used to describe capacity in IBM Storage products.
Raw capacity
The reported capacity of the drives in the system before formatting or RAID is applied.
Usable capacity
The amount of capacity that is available for storing data on a system, pool, or array after
formatting and RAID techniques are applied.
Used (usable) capacity
The amount of usable capacity that is taken up by data in a system, pool, or array after
data reduction techniques are applied.
Available (usable) capacity
The amount of usable capacity that is not yet used in a system, pool, or array.
Provisioned capacity
The amount of provisioned capacity that can be created in a system or pool without
running out of usable capacity for the current data-reduction savings that you want to
achieve. This capacity equals the usable capacity that is divided by the data reduction
savings percentage. In some storage systems, restrictions in the system determine the
maximum provisioned capacity that is allowed in a pool or system. In those cases, the
provisioned capacity cannot exceed this limit.
The current implementation of the DS8900F does not feature any data reduction
techniques.
Provisioned capacity was previously called virtual capacity in this chapter.
Written capacity
The amount of usable capacity that would have been used to store written data in a pool or
system if data reduction was not applied.
Overhead capacity
The amount of usable capacity that is occupied by metadata in a pool or system and other
data that is used for system operation.
Thin-provisioning savings
The total amount of usable capacity that is saved in a pool, system, or volume by using
usable capacity when needed as a result of write operations. The capacity that is saved is
the difference between the provisioned capacity and the written capacity.
Overprovisioning
The result of creating more provisioned capacity in a storage system or pool than there is
usable capacity. This result occurs when thin provisioning or compression ensures that the
used capacity of the provisioned volumes is less than their provisioned capacity.
Overprovisioned ratio
The ratio of provisioned capacity to usable capacity in a pool or system.
For more information about the configuration and installation process, see the IBM DS8900F
Introduction and Planning Guide, SC27-9560.
Important: The IBM DS8980F and DS8950F systems support an expansion frame that
can be installed adjacent to the base frame or up to 20 meters away from it. (Feature Code 1341
is needed.)
Consider location suitability, floor loading, access constraints, elevators, and doorways.
Analyze power requirements, such as redundancy and using an uninterruptible power
supply (UPS).
Examine environmental requirements, such as adequate cooling capacity.
Full Disk Encryption (FDE) drives are a standard feature for the DS8900F. If encryption
activation is required, consider the location and connection needs for the external key
servers, such as IBM Security Key Lifecycle Manager or Gemalto SafeNet KeySecure
servers.
Consider the integration of Lightweight Directory Access Protocol (LDAP) to allow a single
user ID and password management. LDAP can be configured from the Storage
Management GUI, as described in 6.5.2, “Remote authentication” on page 188.
Plan for Call Home over a Secure Sockets Layer (SSL) connection to provide a continued
secure connection to the IBM Support center.
Consider connecting to IBM Storage Insights that can help you predict and prevent
storage problems before they impact your business.
Plan for logical configuration, Copy Services (CS), and staff education. For more
information, see Chapter 8, “Configuration flow” on page 225.
Configuration and integration of external key servers and IBM DS8000 Encryption for
enhanced data security
Supported key servers for data at rest and Transparent Cloud Tiering (TCT) Encryption
include IBM Security Key Lifecycle Manager, Gemalto Safenet KeySecure, and Thales
Vormetric DSM. IBM Security Guardium Key Lifecycle Manager is the only supported key
server for encryption of data in flight (IBM Fibre Channel Endpoint Security).
IBM provides services to set up and integrate IBM Security Guardium Key Lifecycle
Manager components.
Alternatively, clients can install the Gemalto SafeNet key servers or Thales Vormetric
DSM. For IBM Fibre Channel Endpoint Security, IBM Security Guardium Key Lifecycle
Manager with Key Management Interoperability Protocol (KMIP) in Multi-Master mode is
required.
Logical configuration planning and application
Logical configuration refers to the creation of redundant array of independent disks
(RAID) arrays and pools, and to the assignment of the configured capacity to servers.
Application of the initial logical configuration and all subsequent modifications to the
logical configuration also are client responsibilities. The logical configuration can be
created, applied, and modified by using the DS GUI, DS Command-line Interface (DS
CLI), or DS Open application programming interface (DS Open API).
IBM Services® also can apply or modify your logical configuration, which is a fee-based
service.
5.1.2 Participants
A project manager must coordinate the many tasks that are necessary for a successful
installation. Installation requires close cooperation with the user community, IT support staff,
and technical resources that are responsible for floor space, power, and cooling.
A storage administrator must also coordinate requirements from the user applications and
systems to build a storage plan for the installation. This plan is needed to configure the
storage after the initial hardware installation is complete.
The following people must be briefed and engaged in the planning process for the physical
installation:
Systems and storage administrators
Installation planning engineer
Building engineer for floor loading, air conditioning, and electrical considerations
Security engineers for AOS, LDAP, key servers, and encryption
Administrator and operator for monitoring and handling considerations
IBM Systems Service Representative (IBM SSR) or IBM Business Partner
Table 5-1 lists the final packaged dimensions and maximum packaged weight of the DS8900F
storage unit ship group. The maximum packaged weight is the maximum weight of the frame
plus the packaging weight.
Table 5-1 Packaged dimensions and maximum packaged weight
IBM DS8910F model 993: Height 1.49 m (58.7 in.), Width 1.05 m (41.3 in.), Depth 1.30 m (51.2 in.); maximum packaged weight 295 kg (650 lb)
DS8910F model 994: Height 2.22 m (87.7 in.), Width 0.95 m (37.4 in.), Depth 1.50 m (59.1 in.); maximum packaged weight 762 kg (1680 lb)
DS8950F model 996 and DS8980F model 998: Height 2.22 m (87.7 in.), Width 1.0 m (39.4 in.), Depth 1.50 m (59.1 in.); maximum packaged weight 793 kg (1748 lb)
Expansion Frame model E96: Height 2.22 m (87.7 in.), Width 1.0 m (39.4 in.), Depth 1.50 m (59.1 in.); maximum packaged weight 603 kg (1330 lb)
By using the shipping weight reduction option, you can receive delivery of a DS8900F model
in multiple shipments that do not exceed 909 kg (2,000 lb) each.
The DS8910F model 993 can be integrated into an existing IBM z15 model T02, IBM
LinuxONE Rockhopper III Model LT2, IBM z14 Model ZR1, IBM LinuxONE Rockhopper II
Model LR1, or other standard 19-inch wide frame with 16U contiguous space. For more
information, see IBM DS8910F Model 993 Rack-Mounted Storage System Release 9.1,
REDP-5566.
For more information about the Shipping Weight Reduction option, see Chapter 7, “IBM
DS8900F features and licensed functions” on page 199.
5.2.2 Floor type and loading
The DS8900F can be installed on a raised or nonraised floor. The total weight and space
requirements of the storage unit depend on the configuration features that you ordered. You
might consider calculating the weight of the unit and the expansion frame (if ordered) in their
maximum capacity to allow for the addition of new features.
For the maximum weight of the various DS8900F models, see Table 5-1 on page 144.
Important: You must check with the building engineer or other appropriate personnel to
ensure that the floor loading is correctly considered.
Figure 5-1 shows the location of the cable cutouts for DS8900F. You can use the following
measurements when you cut the floor tile:
Width: 41.91 cm (16.5 in.)
Depth: 8.89 cm (3.5 in.)
End of frame to edge of cable cutout: 10.0 cm (3.9 in.)
For more information about floor loading and weight distribution, see IBM DS8900F
Introduction and Planning Guide, SC27-9560.
5.2.4 Room space and service clearance
The total amount of space that is needed by the storage units can be calculated by using the
dimensions that are shown in Table 5-2.
Table 5-2 DS8900F dimensions (with casters and covers, all racked models)
The storage unit location area also covers the service clearance that is needed by the
IBM SSR when the front and rear of the storage unit are accessed. You can use the following
minimum service clearances. Verify your configuration and the maximum configuration for
your needs, keeping in mind that the DS8900F has a maximum of one expansion frame (for a
total of two frames).
Power connectors
Each DS8900F base and expansion frame features redundant intelligent Power Distribution
Unit (iPDU) rack systems. The base frame can have 2 - 4 power cords, and the expansion
frame has two power cords.
Attach the power cords of each frame to separate AC power distribution systems. For more
information about power connectors and power cords, see IBM DS8900F Introduction and
Planning Guide, SC27-9560.
Input voltage
When you plan the power requirements of the storage system, consider the input voltage
requirements. Table 5-3 and Table 5-4 show the DS8900F input voltages and frequencies.
Power consumption
Table 5-5 lists the power consumption specifications of the DS8900F. The power estimates in
this table are conservative and assume a high transaction rate workload.
Table 5-5 Power consumption and environmental information (fully equipped frames)
The values represent data that was obtained from the following configured systems:
A standard DS8910F model 993 system that contains six sets of fully configured
high-performance storage enclosures and eight Fibre Channel (FC) adapters.
Standard base frames that contain 12 sets of fully configured high-performance storage
enclosures and 16 FC adapters.
Expansion models that contain 12 sets of fully configured high-performance storage
enclosures and 16 FC adapters.
DS8900F cooling
Air circulation for the DS8900F is provided by the various fans that are installed throughout
the frame. All of the fans in the DS8900F system direct air flow from the front of the frame to
the rear of the frame. No air exhausts out of the top of the frame.
The operating temperature for the DS8900F is 16 - 32 °C (60 - 90 °F) at relative humidity
limits of 20% - 80%, with an optimum of 45%.
Important: The following factors must be considered when the DS8900F system is
installed:
Ensure that the air circulation for the DS8900F base frame and expansion frames is
maintained free from obstruction to keep the unit operating in the specified temperature
range.
For safety reasons, do not store anything on top of the DS8900F system.
Table 5-6 on page 151 shows the minimum and maximum numbers of host adapters that are
supported by the DS8900F.
Table 5-6 Minimum and maximum host adapters, by storage system type and configuration (minimum number of host adapter features for the base frame; maximum number of host adapter features for the storage system)
The FC and FICON shortwave (SW) host adapter, when it is used with 50 μm multi-mode
fiber cable, supports point-to-point distances. For more information about cable limits, see
Table 5-7.
Table 5-7 Multi-mode fiber cable point-to-point distance limits
OM3 (50 μm) 150 m (492 ft.) 100 m (328 ft.) 70 m (230 ft.)
OM4 (50 μm) 190 m (623 ft.) 125 m (410 ft.) 100 m (328 ft.)
The FC and FICON longwave (LW) host adapter, when it is used with 9 μm single-mode fiber
cable, extends the point-to-point distance to 10 km (6.2 miles).
Different fiber optic cables with various lengths can be ordered for each FC adapter port.
Table 5-8 lists the fiber optic cable features for the FCP/FICON adapters.
For more information about IBM supported attachments, see IBM DS8900F Introduction and
Planning Guide, SC27-9560.
For more information about host types, models, adapters, and operating systems (OSs) that
are supported by the DS8900F, see the IBM System Storage Interoperation Center (SSIC) for
DS8000.
zHyperLink does not replace zHPF. It works in cooperation with it to reduce the workload on
zHPF. zHyperLink provides a new PCIe connection. The physical number of current zHPF
connections is not reduced by zHyperLink.
On the DS8900F, the number of zHyperLink ports that can be installed varies, depending on
the number of cores per CPC that are available and the number of I/O bay enclosures.
The number of zHyperLinks that can be installed based on the number of cores available is
listed in Table 5-9 on page 153.
Table 5-9 zHyperLink availability by DS8900F model (system or model, cores per CPC (DS8900F server), zHyperLink support, and maximum zHyperLink connections in increments of 2)
With 20 cores per CPC, zHyperLink is supported with a maximum of 8 zHyperLink connections.
Each zHyperLink connection requires a zHyperLink I/O adapter to connect the zHyperLink
cable to the storage system, as shown in Table 5-10 and Table 5-11.
Note: The IBM z16™, z15, z14, and z13 servers support 32,000 devices for each FICON
host channel. The IBM zEnterprise® EC12 and IBM zEnterprise BC12 servers support
24,000 devices for each FICON host channel. Earlier IBM Z servers support 16,384
devices for each FICON host channel. To fully access 65,280 devices, it is necessary to
connect multiple FICON host channels to the storage system. You can access the devices
through an FC switch or FICON director to a single storage system.
Note: The IBM z16 supports FICON Express32S channels (32 GFC), which provide twice
the read/write bandwidth of the 16 GFC channels on previous models, and can therefore
take full advantage of the 32 GFC host adapters on the DS8900F.
Better performance for copy services can be obtained by using dedicated host ports for
remote copy links, and other path optimization. For more information, see IBM DS8900F
Performance Best Practices and Monitoring, SG24-8501.
Note: DS8000 has a set of internal parameters that are known as pokeables, which
sometimes are referred to as product switches. These internal parameters are set to
provide the best behavior in most typical environments. In special cases, like
intercontinental distances or when bandwidth is low, some internal tuning might be
required to adjust those internal controls to keep Global Mirror (GM) as efficient as it is
in more common environments. Pokeable values can be displayed by a GUI or by Copy
Services Manager, but they can be changed only by IBM Support. For more
information, see DS8000 Global Mirror Best Practices, REDP-5246.
The z and x:xx values are unique combinations for each system and each SFI that are based
on a machine’s serial number. Use the DS CLI command lssi to determine the SFI WWNN,
as shown in Example 5-1.
Do not use the lssu command because it determines the storage unit WWNN, which is not
used. Attached hosts see only the SFI, as shown in Example 5-2.
However, the DS8900F WWPN is a child of the SFI WWNN, where the WWPN inserts the z
and x:xx values from SFI WWNN. It also includes the YY:Y from the logical port naming,
which is derived from where the host adapter is physically installed. Use the DS CLI
command lsioport to determine the SFI WWPN, as shown in Example 5-3.
Figure 5-6 System properties: WWNN
You can also determine the host adapter port WWPN by completing the following steps:
1. Connect to the HMC IP address by using a web browser:
https://<HMC IP address>
2. Select Actions.
3. Select Modify Fibre Channel Port Protocols.
The default view may show protocols and the state only. The view can be customized to
display the port WWPN and the frame.
You can also select Show System Health Overview and then Fibre Channel Ports, as
shown in Figure 5-8.
5.3 Network connectivity planning
To implement the DS8900F, you must consider the physical network connectivity of the HMC
within your LAN.
Consider the following network and communications requirements when you plan the location
and interoperability of your storage systems:
HMC network access (one IP per HMC).
Remote support connection.
SAN connectivity.
An IBM Security Guardium Key Lifecycle Manager connection if encryption, end-point
security, or TCT is activated, or an LDAP connection if LDAP is implemented.
For more information about physical network connectivity, see IBM DS8900F Introduction and
Planning Guide, SC27-9560.
A dual Ethernet connection is available for client access. The two HMCs provide redundant
management access, which enables continuous availability for encryption key servers and
other advanced functions.
The HMC can be connected to the client network for the following tasks:
Remote management of your system by using the DS CLI
Remote management by using the DS Storage Management GUI by opening a browser to
the network address of the HMC:
https://<HMC IP address>
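For the DS CLI path listed above, a minimal sketch of a remote connection follows; the IP
addresses and credentials are placeholders, and the -hmc1 and -hmc2 options point the client at
the two consoles:
dscli -hmc1 10.10.10.1 -hmc2 10.10.10.2 -user admin -passwd <password>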
To access the HMCs (HMC1/HMC2) over the network, provide the following information:
HMC: For each HMC, determine one TCP/IP address, hostname, and domain name.
DNS settings: If a DNS is implemented, ensure that it is reachable to avoid contention or
network timeout.
Gateway routing information: Supply the necessary routing information.
Note: A second Ethernet adapter can also be added to each HMC. This capability is
available only by Request for Price Quotation (RPQ).
For more information about HMC planning, see Chapter 6, “IBM DS8900F Management
Console planning and setup” on page 167.
Important: The DS8900F uses 172.16.y.z and 172.17.y.z private network addresses. If the
client network uses the same addresses, the IBM SSR can reconfigure the private
networks to use another address range option.
IBM Spectrum Control simplifies storage management by providing the following benefits:
Centralizing the management of heterogeneous storage network resources with IBM
storage management software
Providing greater synergy between storage management software and IBM storage
devices
Reducing the number of servers that are required to manage your software infrastructure
Migrating from basic device management to storage management applications that
provide higher-level functions
IBM Storage Insights is offered free of charge to customers who own IBM block storage
systems. It is an IBM Cloud storage service that monitors IBM block storage. It provides
single-pane views of IBM block storage systems, such as the Operations dashboard and the
Notifications dashboard.
With the information that is provided, such as diagnostic event information, key capacity
data, and performance metrics, together with the streamlined support experience, you can quickly
assess the health of your storage environment and get help with resolving issues.
On the Advisor window, IBM Storage Insights provides recommendations about the remedial
steps that can be taken to manage risks and resolve issues that might impact your storage
services. For a brief illustration of IBM Storage Insights features, see 12.10, “Using IBM
Storage Insights” on page 445.
A DS CLI script file is a text file that contains one or more DS CLI commands and can be
run as a single invocation. The DS CLI can also be used to manage other functions of a
storage system, including managing security settings, querying point-in-time performance
information or the status of physical resources, and exporting audit logs.
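As a minimal sketch (the file name and the commands in it are illustrative, and the password file
is assumed to have been created beforehand with the managepwfile command), a script file with a
few query commands can be run in a single DS CLI invocation by using the -script option:
# ds8k_queries.cli (hypothetical script file): list the SFI, the arrays, and the FB volumes
lssi
lsarray -l
lsfbvol -l
dscli -hmc1 10.10.10.1 -user admin -pwfile ds8k.pwfile -script ds8k_queries.cli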
The DS CLI client can be installed on a workstation, and can support multiple OSs. The DS
CLI client can access the DS8900F over the client’s network. For more information about
hardware and software requirements for the DS CLI, see IBM DS8000 Series Command-Line
Interface User’s Guide, SC27-9562.
5.3.4 Remote support connection
Remote support is available through the embedded AOS application or through the IBM
Remote Support Center (RSC).
Embedded AOS
The preferred remote support connectivity method for IBM is through Transport Layer
Security (TLS) for the Management Console (MC) to IBM communication. DS8900F uses an
embedded AOS server solution. Embedded AOS is a secure and fast broadband form of
remote access.
For more information, see Chapter 6, “IBM DS8900F Management Console planning and
setup” on page 167 and Chapter 12, “Monitoring and support” on page 423.
A SAN allows your host bus adapter (HBA) host ports to have physical access to multiple host
adapter ports on the storage system. Zoning can be implemented to limit the access (and
provide access security) of host ports to the storage system.
Shared access to a storage system host adapter port is possible from hosts that support a
combination of HBA types and OSs.
Important: A SAN administrator must verify periodically that the SAN is working correctly
before any new devices are installed. SAN bandwidth must also be evaluated to ensure
that it can handle the new workload.
With a DS8900F system, you can choose among IBM Security Guardium Key Lifecycle
Manager, Gemalto SafeNet KeySecure, and Thales Vormetric Data Security Manager for data
at rest and TCT encryption. IBM Fibre Channel Endpoint Security encryption requires IBM
Security Guardium Key Lifecycle Manager. You cannot mix IBM Security Guardium Key
Lifecycle Manager and SafeNet or Vormetric key servers. For more information, see IBM
DS8000 Encryption for Data at Rest, Transparent Cloud Tiering, and Endpoint Security
(DS8000 Release 9.2), REDP-4500.
Important: Clients must acquire an IBM Security Guardium Key Lifecycle Manager license
to use the Guardium Key Lifecycle Manager software.
Note: The licensing for IBM Security Guardium Key Lifecycle Manager includes both an
installation license for the Guardium Key Lifecycle Manager management software and
licensing for the encrypting drives.
The DS8000 series supports IBM Security Guardium Key Lifecycle Manager V2.5 or later.
These versions use a connection between the HMC and the key server that complies
with the National Institute of Standards and Technology (NIST) SP 800-131A standard. For
TCT encryption, IBM Security Guardium Key Lifecycle Manager V3.0.0.2 or later is required.
For IBM Fibre Channel Endpoint Security encryption, IBM Security Guardium Key Lifecycle
Manager V4.0 or later is required.
You are advised to upgrade to the latest version of the IBM Security Guardium Key Lifecycle
Manager.
Two network ports must be opened on a firewall to allow the DS8900F connection and to
obtain an administration management interface to the IBM Security Guardium Key Lifecycle
Manager server. These ports are defined by the IBM Security Guardium Key Lifecycle
Manager administrator.
For more information, see the following IBM publications for IBM Security Guardium Key
Lifecycle Manager:
IBM Security Guardium Key Lifecycle Manager Quick Start Guide, GI13-4178
IBM Security Key Lifecycle Manager Installation and Configuration Guide, SC27-5335, or
the relevant sections in IBM Security Guardium Key Lifecycle Manager 4.1.0.
5.3.7 Lightweight Directory Access Protocol server
The DS8000 system provides, by default, local basic user authentication. The user IDs, roles,
and their passwords are maintained locally within the DS8000 system. Each individual
DS8000 system has its own list of user IDs and passwords that must be maintained
separately.
Since Release 9.1, LDAP authentication can be configured through the Storage Management
GUI, as described in 6.5.2, “Remote authentication” on page 188.
The native LDAP implementation does not require the IBM Copy Services Manager proxy. For
more information, see LDAP Authentication for IBM DS8000 Systems: Updated for DS8000
Release 9.1, REDP-5460.
Plan the distance between the primary and auxiliary storage systems carefully so that fiber
optic cables of the required length can be acquired. If necessary, the CS
solution can include hardware, such as channel extenders or dense wavelength division
multiplexing (DWDM).
For more information, see IBM DS8000 Copy Services: Updated for IBM DS8000 Release
9.1, SG24-8367.
For more information about the DS8000 sparing concepts, see 3.5.11, “Spare creation” on
page 96.
For the effective capacity of one rank in the various possible configurations, see IBM
DS8900F Introduction and Planning Guide, SC27-9560.
Important: When you review the effective capacity, consider the following points:
Effective capacities are in decimal gigabytes (GB). 1 GB is 1,000,000,000 bytes.
Although drive sets contain 16 drives, arrays use only eight drives. The effective
capacity assumes that you have two arrays for each disk drive set.
The IBM Storage Modeller tool can help you determine the raw and net storage capacities
and the number of required extents for each available type of RAID. IBM Storage
Modeller is available only to IBM employees and IBM Business Partners.
DS8900F offers the following flash drive sets with HPFE Gen2:
2.5-inch high-performance flash (Tier 0) drives are 800 GB, 1.6 TB, and 3.2 TB capacity
drives.
2.5-inch high-capacity flash (Tier 1) drives are 3.84 TB.
2.5-inch high-capacity flash (Tier 2) drives are 1.92 TB, 7.68 TB, and 15.36 TB capacity
drives.
Flash drives in HPFE Gen2 are ordered in sets of 16 within an enclosure pair. There are three
sets of 16 drives in an HPFE Gen2 enclosure pair.
For the latest information about supported RAID configurations and requesting an RPQ or
SCORE, contact your IBM SSR.
The MC does not process any of the data from hosts. It is not even in the path that the data
takes from a host to the storage. The MC is a configuration and management station for the
whole DS8900F system.
The DS8900F includes a Management Enclosure (ME). This enclosure contains two MCs,
which is standard for redundancy reasons, but the ME contains other essential management
components too, which are explained in 6.1.1, “Management Enclosure” on page 168.
The MC, which is the focal point for DS8900F management, includes the following functions:
DS8900F power control
Storage provisioning
Storage system health monitoring
Storage system performance monitoring
Copy Services (CS) monitoring
Embedded IBM Copy Services Manager
Interface for onsite service personnel
Collection of diagnostic and Call Home data
Problem management and alerting
Enables remote support access
Storage management through the DS GUI
Connection to IBM Security Guardium Key Lifecycle Manager or other supported external
key manager for encryption management functions, if required
Connection to an external IBM Copy Services Manager or IBM Spectrum Control
Interface for Licensed Internal Code (LIC) and other firmware updates
Figure 6-1 shows a ME.
Figure 6-1 Front view of the ME, showing the two HMCs and the ME location in the base rack
Note: The location of the ME can be slightly different from what is shown in Figure 6-1
because there are many rack configurations, such as IBM DS8980F model 998,
IBM DS8950F model 996, IBM DS8910F model 994, and IBM DS8910F Rack-Mounted
model 993 that can fit into an existing 19-inch form-factor rack. On racked DS8900F
systems, the ME is always in the base frame (Rack 1).
The ME is designed to create a compact container for all essential system management
components that otherwise would be mounted around the rack as in former IBM DS8000
models.
The ME provides internal communications to all of the modules of the DS8900F system. It
also provides external connectivity by using two Ethernet cables from each HMC for remote
management, and provides keyboard, mouse, and video connectivity from each HMC for
local management. Cables are routed from the MCs to the rear of the ME through a cable
management arm (CMA).
Because of their small width, the primary and secondary MCs are mounted next to each other in
the front of the ME.
A 1U keyboard and display tray is available. For racked DS8980F, DS8950F,
and DS8910F model 994 systems, you must order one (Feature Code 1765). For the
Flexibility Class Rack-Mounted model 993, it is optional. For more information, see IBM
DS8910F Model 993 Rack-Mounted Storage System Release 9.1, REDP-5566.
The MC connects to the customer network and provides access to functions that can be used
to manage the DS8900F. Management functions include logical configuration, problem
notification, Call Home for service, remote service, and CS management.
These management functions can be performed from the DS GUI, Data Storage
Command-Line Interface (DS CLI), or other storage management software that supports the
DS8900F.
For example, clients who use an external IBM Copy Services Manager for advanced
functions, such as Metro Mirror (MM) or FlashCopy, communicate with the storage system
by connecting the IBM Copy Services Manager server to the HMCs as the management entry
point.
The MC provides connectivity between the DS8000 and Encryption Key Manager (EKM)
servers (Security Guardium Key Lifecycle Manager), and also provides the functions for
remote call home and remote support connectivity.
The MCs are equipped with Ethernet connections for the client’s network. For more
information, see 6.1.3, “Private and Management Ethernet networks” on page 171.
Use the DS CLI command lshmc to show the HMC types, whether both HMCs are online, and
their amount of disk capacity and memory, as shown in Example 6-1.
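As a minimal sketch (no output is reproduced because the columns vary by code level, and the -l
long-listing flag is assumed to follow the usual DS CLI ls-command convention):
dscli> lshmc -l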
Inside the ME, the switch ports of the internal black and gray network switches are routed
from inside the ME to an external breakout at the rear of the ME by using short patch cables
to make the ports accessible from outside.
Each central processor complex (CPC) flexible service processor (FSP) and each CPC
logical partition (LPAR) network are connected to both switches. Each of these components
(FSP and LPAR) uses their own designated interface for the black network and another
interface for the gray network. These components are connected to the external breakout
ports of the ME.
Additionally, an MC contains a third Ethernet interface (eth2) for the customer network
connection to allow management functions to be started over the network. This customer
network connection is routed from the MC directly to the rear of ME to its own breakout ports.
For particular circumstances where the customer needs a second Ethernet interface for
management reasons, you can place a Request for Price Quotation (RPQ) to have a USB
Ethernet adapter (eth1) added to the MC. This adapter can be used to connect the HMC to
two separate customer networks, usually for separating internet traffic (call home and remote
access) from storage management tasks (DS GUI, DS CLI, and IBM Copy Services
Manager).
Figure 6-3 Management Enclosure internal black (SW1) and gray (SW2) network switches, the optional MC1 Ethernet-to-USB adapter (eth1), and the customer network breakout connectors at the rear of the ME
Important: The internal Ethernet switches that are shown in Figure 6-3 and Figure 6-4 on
page 173 (the black and gray private networks) are for DS8900F internal communication
only. Do not connect these ports directly to your network. There is no connection between
the customer network interfaces and the black and gray network to keep them isolated.
Figure 6-4 ME external connections
An HMC communicates with these iPDUs by using the Ethernet network, and it manages and
monitors the system power state, iPDU configuration, system AC power on and off, iPDU
firmware updates, iPDU health checks and errors, and power usage reporting.
The iPDUs’ network interfaces are also connected to the external ports of the ME to reach the
black and gray network switches. The iPDUs are distributed over the black and gray networks:
iPDUs that belong to one power domain (usually on the left side of the rear of the rack)
connect to a gray network switch, and iPDUs that belong to the other power domain
(usually on the right side of the rear of the rack) connect to a black network switch.
For the DS8980F and DS8950F systems, you can add an expansion frame
model E96. Two cascaded switches are added to the base frame to connect the additional
iPDUs of the expansion rack to the ME switches.
One 1U 24-port Ethernet switch is added for the black network and one 24-port is added for
the gray network. Each of them has an uplink to the related ME switch port of their designated
black or gray network. The 2U space that is required is already reserved at the bottom of the
base rack.
The following switch port assignments apply (partial list):
SW2-T1 (gray): unused, or iPDU-E21 (upper left from rear), or uplink to the 24-port cascaded Ethernet switch for the gray network
SW1-T7 (black) / SW2-T7 (gray): MC1 eth0 (black) / MC1 eth3 (gray)
SW1-T8 (black) / SW2-T8 (gray): MC2 eth0 (black) / MC2 eth3 (gray)
The Management Console also provides the interfaces for IBM Spectrum Control,
IBM Storage Insights, and the DS CLI to connect to the DS8900F remotely.
Note: The DS Open API with IBM System Storage Common Information Model (CIM)
agent is no longer supported. The removal of the CIM Agent simplifies network security
because fewer open ports are required.
For more information, see 9.2.1, “Accessing the DS GUI” on page 235.
Note: The DS Storage Management GUI also provides a built-in DS CLI. Look for the
console icon on the lower left of the browser window after logging in.
For more information about DS CLI usage and configuration, see Chapter 10, “IBM DS8900F
Storage Management Command-line Interface” on page 339. For a complete list of DS CLI
commands, see IBM DS8000 Series: Command-Line Interface User’s Guide, SC27-9562.
This feature removes the requirement for an external server to host IBM Copy Services
Manager, which provides savings on infrastructure costs and operating system (OS)
licensing. Administration costs are also reduced because the embedded IBM Copy Services
Manager instance is upgraded through the DS8900F code maintenance schedule, which is
performed by IBM support personnel.
Important: Updating the HMC embedded IBM Copy Services Manager must be done
exclusively through the IBM DS CLI tool that is installed on the workstation, laptop, or
server.
Update IBM Copy Services Manager on the HMC by completing the following steps:
1. Verify the current level of the DS CLI.
2. Verify the current level of IBM Copy Services Manager on the HMC.
3. Download selected releases of DS CLI, if necessary, and IBM Copy Services Manager
from IBM Fix Central.
4. Update DS CLI, if needed.
5. Update IBM Copy Services Manager on the HMC.
The DS8000 Code Recommendation page provides a link to the DS8900F code bundle
information page, as shown in Figure 6-6 and Figure 6-7.
Verifying the current level of IBM Copy Services Manager on the HMC
To verify the current IBM Copy Services Manager release that is installed on a DS8000 HMC,
run the lssoftware DS CLI command:
Example 6-2 on page 177 shows an example where the IBM Copy Services Manager release
on both HMCs is 6.2.9.1.
Example 6-2 Current IBM Copy Services Manager release
dscli> lssoftware -l -type csm -hmc all
Type Version Status HMC
========================================
CSM V6.2.9.1-a20200804-1704 Running 2
CSM V6.2.9.1-a20200804-1704 Running 1
dscli>
Complete the following steps. Assume that IBM Copy Services Manager 6.3.0 is the release
that will be installed.
1. On the IBM Fix Central page, select IBM Copy Services Manager as the product, 6.3.0.0
as the installed version, and Linux as the platform. Figure 6-8 shows a summary of
selected options.
Figure 6-8 Selected IBM Copy Services Manager Version for HMC
Note: The HMC OS is Linux. Ensure that the correct platform is selected.
2. Be sure to download the correct Linux-x86_64 release. Figure 6-9 shows the correct
package type selected. Check the Release Notes, and if there is a newer fix pack file, you
can use it instead.
Updating IBM Copy Services Manager on the HMC by using the DS CLI
Update the IBM Copy Services Manager on each HMC. In a dual HMC environment, update
one IBM Copy Services Manager instance at a time.
Note: If your IBM Copy Services Manager installation has active CS sessions, you must
follow best practices while applying maintenance to an active management server.
Note: The Active and Standby servers must be updated concurrently. Failure to do so
results in the inability to connect to the other server.
The DS CLI command that is used for the IBM Copy Services Manager update is
installsoftware. You can find more information about the command in IBM Documentation.
Table 6-2 describes the parameters that are necessary for the installsoftware command.
Note: Ensure that no spaces are included in the path that you specify for the location of the
software package and certificate file.
Note: In addition to the standard 1751 port, DS CLI also uses port 1755 (TCP protocol) to
transfer the IBM Copy Services Manager installation file to the HMC. That port must be
open on any physical or software firewall standing between the workstation where DS CLI
is installed and the DS8000 HMCs.
To effectively run the command, you must use a DS8000 user ID that is part of the
Administrator role (for example, the default admin user ID).
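As a hedged sketch (this is not a reproduction of Example 6-3: the package and certificate paths
are placeholders, and the parameter names, including the flag that selects the target HMC, are
assumptions to be verified against Table 6-2 and IBM Documentation), an update of the instance
on HMC1 might look like the following command:
dscli> installsoftware -type csm -loc c:\downloads\csm-setup-6.3.0.0-linux-x86_64.bin -certloc c:\downloads\csm-certificate.crt -server 1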
Example 6-3 shows how the IBM Copy Services Manager on HMC1 was updated by using
DS CLI.
dscli> lssoftware
Type Version Status
====================================
CSM V6.3.0.0-a20210622-1237 Running
CSM V6.2.9.1-a20200804-1704 Running
The next step is to update IBM Copy Services Manager on HMC2, as shown in Example 6-4.
dscli> lssoftware
Type Version Status
====================================
CSM V6.3.0.0-a20210622-1237 Running
CSM V6.3.0.0-a20210622-1237 Running
2. Click Log on and launch the Hardware Management Console web application to open
the login window, as shown in Figure 6-12 on page 181, and log in. The default user ID is
customer and the default password is cust0mer.
Important: Make sure to change the default password. The user credentials for
accessing the Service Management Console (HMC) are managed separately from the
ones that are used with DS CLI and the Storage Management GUI. For more
information about HMC user management, see 6.5.3, “Service Management Console
User Management” on page 189.
Figure 6-12 Service Management Console application
3. If you are successfully logged in, you see the MC window, in which you can select
Status Overview to see the status of the DS8900F. Other areas of interest are shown in
Figure 6-13.
Because the MC web UI is mainly a services interface, it is not covered here. For more
information, see the Help menu.
6.3.2 Planning for Licensed Internal Code upgrades
The following tasks must be considered regarding the LIC upgrades on the DS8900F:
LIC changes
IBM periodically releases changes to the DS8900F series Licensed Machine Code (LMC).
Customers can check the IBM Support site for the latest Flashes, Alerts and Bulletins, and
keep up to date by subscribing to IBM Support Notifications.
LIC installation options
There are three installation types available, depending on the support contract:
– On-site Code Load
An IBM Systems Service Representative (IBM SSR) goes onsite to install the changes.
– Remote Code Load (RCL)
IBM Remote Support personnel install the LIC remotely.
– Customer Code Load
As of Release 9.3, customers can perform the installation.
DS CLI Compatibility
Check whether the new LIC requires new levels of DS CLI. Plan on upgrading them on the
relevant workstations, if necessary.
Code prerequisites
When you are planning for initial installation or for LIC updates, ensure that all
prerequisites for the environment are identified correctly, including host OS versions and
fixes, host bus adapter (HBA) firmware and driver levels, and interconnect and fabric types.
DS8900F interoperability information is available at the IBM System Storage
Interoperation Center (SSIC).
To prepare for downloading the drivers, see the “Interoperability Search Details” report in
SSIC, which provides an end-to-end support matrix from the host to the DS8900F, and
covers all versions of OS, multipathing software, and firmware. This check is necessary to
ensure that the DS8900F storage subsystem is in a supported environment.
Important: The SSIC includes information about the latest supported code levels. This
availability does not necessarily mean that former levels of HBA firmware or drivers are
no longer supported. Some host type interoperability, such as NetApp ONTAP, might
need to be confirmed in the vendor’s support matrix. If you are in doubt about any
supported levels, contact your IBM SSR.
Maintenance windows
The LIC update of the DS8900F is a nondisruptive action. Scheduling a maintenance
window with added time for contingency is still a best practice. Also, plan for sufficient time
to confirm that all environment prerequisites are met before the upgrade begins.
For more information about LIC upgrades, see Chapter 11, “Licensed Machine Code” on
page 405.
Important: For correct error analysis, the date and time information must be synchronized
on all components in the DS8900F environment. These components include the DS8900F
MC, the attached hosts, IBM Spectrum Control, and DS CLI workstations.
Additionally, when the DS8900F is attached to an IBM Z system server, a service information
message (SIM) notification occurs automatically. A SIM message is displayed on the OS
console if a serviceable event occurs. These messages are not sent from the MC, but from
the DS8900F through the channel connections that run between the server and the DS8900F.
Up to eight external syslog servers can be configured, with varying ports if required. Events
that are forwarded include user login and logout, all commands that are issued by using the
GUI or DS CLI while the user is logged in, and remote access events. Events are sent from
Facility 19, and are logged as level 6.
6.3.6 Call Home and remote support
The MC uses outbound (Call Home) and inbound (remote service) support.
Call Home is the capability of the MC to contact the IBM Support Center to report a
serviceable event. Remote support is the capability of an IBM SSR to connect to the MC to
perform service tasks remotely. If the client’s environment is set up to allow the IBM Support
Center to connect to the MC, an IBM SSR can connect to the MC to perform detailed problem
analysis, view error logs and problem logs, and start trace or memory dump retrievals.
Remote support can be configured by using the embedded Assist On-site (AOS) or Remote
Support Console. The setup of the remote support environment is performed by the IBM SSR
during the initial installation. For more information, see Chapter 12, “Monitoring and support”
on page 423.
This activity includes the configuration of the private (internal) and management (customer)
network with IPv6 or IPv4, hostname, DNS, NTP, routing, and remote support settings.
Chapter 8, “Configuration flow” on page 225 explains the configuration flow in more detail.
Those settings can be changed afterward by using the Service Management Console WUI or
DS GUI.
Note: Only the customer management network interfaces eth2 and eth1 are shown and
can be configured in the Network Settings dialog because the internal private black and
gray networks with interfaces eth0 and eth3 are used for the running system. The eth0 and
eth3 interfaces can be changed only by opening an IBM support request.
If the default address range cannot be used because it conflicts with another network, you
can instead specify one of three optional address ranges. Table 6-3 shows the possible
options that can be chosen during installation.
When you change the internal private network, you do not need to configure each individual
network interface. Instead, each change that you make changes both the black and gray
networks at once.
To make the change, select HMC Management → Query/Change IP Range, as shown in
Figure 6-16.
Note: Changing the internal private network range on the storage system facility can be
done in concurrent mode, but requires special care. For that reason, an IBM service
request must be opened before making such a change.
To manage the DS GUI and DS CLI credentials, you can use the DS CLI or the DS GUI. An
administrator user ID is preconfigured during the installation of the DS8900F and this user ID
uses the following defaults:
User ID: admin
Password: admin
The password of the admin user ID must be changed before it can be used. The GUI forces
you to change the password when you first log in. By using the DS CLI, you log in but you
cannot run any other commands until you change the password. For example, to change the
admin user’s password to passw0rd, run the following DS CLI command:
chuser -pw passw0rd admin
After you issue that command, you can run other commands.
Recommendation: Do not set the chpass values to 0 because a value of 0 indicates that
passwords never expire and unlimited login attempts are allowed.
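As a hedged sketch (the -expire and -fail parameters are assumed here to set the password
expiration in days and the number of allowed failed login attempts; verify the exact options in
the Command-Line Interface User's Guide), non-zero values might be set as follows:
dscli> chpass -expire 90 -fail 5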
Important: Upgrading an existing storage system to the latest code release does not
change the previously acquired default user security rules. Existing default values are retained
to prevent disruption. The user can opt to use the new defaults by running the chpass -reset
command. The command resets all default values to the new defaults immediately.
The password for each user account is forced to adhere to the following rules:
Passwords must contain at least one character from at least two of the following groups:
alphabetic, numeric, and punctuation.
The range for minimum password length is 6 - 64 characters. The default minimum
password length is 8 characters.
Passwords cannot contain the user’s ID.
Passwords are case-sensitive.
The length of the password is determined by the administrator.
Initial passwords on new user accounts are expired.
Passwords that are reset by an administrator are expired.
Users must change expired passwords at the next logon.
Starting with Release 9.1, the remote authentication setup can be found in the Storage
Management GUI. Go to the Access menu and select Remote Authentication. From there, click
Configure Remote Authentication. The configuration is guided by the Remote Authentication
wizard.
DS8900F now has native support for remote authentication through LDAP, although using
IBM Copy Services Manager servers as a proxy to the remote authentication servers is still
supported.
Figure 6-17 shows the window that opens directly after the Welcome window. After you
complete all the wizard steps, the DS8000 is enabled and configured for remote
authentication.
The following prerequisites are required to complete the Remote Authentication wizard:
Access to create users and groups on your remote authentication server.
A primary LDAP repository URI is required.
A secondary LDAP repository URI is optional.
A User search base (only for Direct LDAP).
A truststore file with a password is required (only for IBM Copy Services Manager).
An IBM WebSphere® username with a password is required (only for IBM Copy Services
Manager).
For more information about LDAP-based authentication and configuration, see LDAP
Authentication for IBM DS8000 Systems: Updated for DS8000 Release 9.1, REDP-5460.
In the HMC Management section of the WUI, two options are available:
Managed User Profiles and Access
Configure LDAP
Important: Do not delete the last user ID in a role. For more information about removing
user IDs for a role, see Table 6-5 on page 191.
There are three predefined user roles that are related to the Customer, Service, and
Engineering user IDs, as shown in Table 6-4.
Of these roles, esshmcpe requires the IBM proprietary challenge/response key for remote access.
The roles, access, and properties for each user ID are described in Table 6-5 on page 191.
Table 6-5 User roles (columns: Role, esshmccustomer, esshmcserv, and esshmcpe; the table body is not reproduced here)
User IDs PE, CE, and customer are specifically for DS8900F use. Ignore the other profiles.
Note: Do not change the user ID PE because it uses the remote challenge/response login
process, which is logged and audited.
The user ID root cannot log in to the WUI. The user ID hscroot cannot access HMC
functions externally. Do not use them.
Do not create user IDs with a Task Role beginning with “hmc”.
Adding a user ID
To add a user ID, complete the following steps:
1. Click User to open the User Profiles option menu, as shown in Figure 6-21.
2. Click Add. The Add User window opens, as shown in Figure 6-22.
Only those roles that are outlined by the boxes are valid Task Roles.
3. Complete the following fields:
a. Under Description, define a user or use HMC User as an example.
b. Passwords must adhere to the DS8900F password policies. For more information, see
the password policy rules on page 187.
c. Choose the type of Authentication that you want.
d. Select AllSystemResources, under Managed Resource Roles.
e. Select the Task Role type.
4. Click User Properties to optionally add Timeout and Inactivity values. Ensure that Allow
access via the web is selected if web access is needed.
The User Profiles are updated and list the new user ID. As an example, user ID IBM_RSC was
created and is shown in Figure 6-24 and Figure 6-25 on page 195.
Figure 6-25 IBM_RSC user ID properties
The DS8900F can run all storage duties while the MC is down or offline, but configuration,
error reporting, and maintenance capabilities become severely restricted. Any organization
with high availability (HA) requirements should strongly consider deploying a redundant MC
configuration.
Important: The primary and secondary MCs are not available to be used as
general-purpose computing resources.
6.6.1 Management Console redundancy benefits
MC redundancy provides the following advantages:
Enhanced maintenance capability
Because the MC is the only interface that is available for service personnel, an alternative
MC provides maintenance operational capabilities if the internal MC fails.
When a configuration or CS command is run, the DS CLI or DS GUI sends the command to
the first MC. If the first MC is unavailable, it automatically sends the command to the second
MC instead. Typically, you do not need to reissue the command.
Any changes that are made by using one MC are instantly reflected in the other MC. No host
data is cached within the MC, so no cache coherency issues occur.
The IBM Copy Services Manager provides an advanced GUI to easily and efficiently manage
CS. IBM Copy Services Manager is available on the DS8000 HMC, which eliminates the need
to maintain a separate server for CS functions. For that reason, and in addition to the other
license bundles that are shown in Table 7-1, the IBM Copy Services Manager for HMC license
can be configured along with these bundles, and it is enabled by using a data storage feature
activation (DSFA) key. IBM Copy Services Manager enablement files are activated on the
HMC when the key is applied.
The grouping of licensed functions facilitates ordering, which differs from earlier DS8000
models for which licensed functions were more granular and ordered specifically.
zsS:
– Fibre Channel connection (IBM FICON) attachment
– Parallel access volume (PAV)
– HyperPAV
– SuperPAV
– High-Performance FICON for IBM Z (zHPF)
– IBM z/OS Distributed Data Backup (zDDB)
– Transparent Cloud Tiering (TCT)
– zHyperLink
IBM Copy Services Manager on the HMC license
IBM Copy Services Manager facilitates the use and management of CS functions, such as
the remote mirror and copy functions (MM and GM) and the point-in-time copy (PTC)
function (FlashCopy). IBM Copy Services Manager is available on the HMC, which
eliminates the need to maintain a separate server for CS functions.
Licensed functions enable the operating system and functions of the storage system. Some
features, such as the operating system, are always enabled, and other functions are optional.
Licensed functions are purchased as 5341 machine function authorizations for billing
purposes.
Each licensed function indicator feature that is ordered enables that function at the system
level, and it requires an equivalent 9031 function authorization. The licensed function
indicators are also used for maintenance billing purposes.
Starting with DS8900F R9.3, maintenance and support fall under Expert Care, which defines
the support duration (1, 2, 3, 4, or 5 years) and the service level (Advanced or Premium).
When purchasing IBM DS8900F (machine type 5341), the inclusion of Expert Care is
mandatory. For more information, see 7.4, “Expert Care” on page 220.
All DS8900F models are sold with a 1-year warranty. This warranty is extended by Expert
Care from 2 to 5 years. The machine type 5341 no longer indicates the warranty period.
Each licensed function authorization is associated with a fixed 1-year function authorization,
9031-FF8.
The licensed function indicator feature numbers enable the technical activation of the
function, subject to a feature activation code that is made available by IBM, which must be
applied by the client.
Licensed functions are activated and enforced with a defined license scope. License scope
refers to the type of storage and the type of servers that the function can be used with. For
example, the zsS licenses are available only with the CKD (z/FICON) scope.
The BFs are mandatory. The BFs must always be configured for both mainframe and open
systems, which have a scope of ALL. Also, to configure CKD volumes, Feature Code 8300
is required.
With CS, if these services are used only for either mainframe or open systems, the
restriction to either FB or CKD is possible. However, most clients want to configure CS for
the scope ALL.
The following features are available after the license bundle is activated:
MM is a synchronous way to perform remote replication. GM enables asynchronous
replication, which is useful for longer distances and lower bandwidth.
MGM enables cascaded 3-site replication, which combines synchronous mirroring to an
intermediate site with asynchronous mirroring from that intermediate site to a third site at a
long distance.
Combinations with other CS features are possible and sometimes needed. Usually, the
3-site MGM installation also requires an MM sublicense on site A with the MGM license
(and even a GM sublicense, if after a site B breakdown you want to resynchronize site A
and site C). At site B, on top of the MGM, you also need the MM and GM licenses. At site
C, you then need sublicenses for MGM, GM, and FlashCopy.
Multiple-Target PPRC (MT-PPRC) enhances disaster recovery (DR) solutions by allowing
data at a single primary site to be mirrored to two remote sites simultaneously. The
function builds and extends MM and GM capabilities and is supported on DS8900F,
DS8880, on later DS8870 firmware, and on IBM Copy Services Manager or
IBM Z software, such as IBM Geographically Dispersed Parallel Sysplex (IBM GDPS) /
MTMM.
Various interfaces and operating systems (OSs) support the function. For the DS8900F
family, this feature is integrated with the CS license bundle.
Two possibilities exist for FlashCopy PTC: Use it with thick (standard) volumes or
thin-provisioned extent space efficient (ESE) volumes.
The ESE thin volumes can also be used in remote mirroring relationships. ESE volumes
offer the same good performance as standard (thick) volumes, and can be managed by
IBM Easy Tier.
Safeguarded Copy enables you to create snapshots for Logical Corruption Protection
(LCP). It provides many recovery points from which to restore data in case of logical
corruption or destruction of data. If Safeguarded Copy is used, you should additionally
mark the Feature Code 0785 indicator option when ordering.
The z/OS Global Mirror (zGM) license, which is also known as Extended Remote Copy
(XRC), enables z/OS clients to copy data by using System Data Mover (SDM). This copy
is asynchronous.
As previously noted, IBM Copy Services Manager on HMC offers full IBM Copy Services
Manager functions and must be enabled by using a DSFA activation key. IBM Copy
Services Manager enablement files are activated on the HMC when the key is applied.
The license for IBM Copy Services Manager on the HMC server must be purchased as a
separate software license.
For IBM Z clients, PAVs allow multiple concurrent I/O streams to the same CKD volume.
HyperPAV reassigns the alias addresses dynamically to the base addresses of the
volumes based on the needs of a dynamically changing workload. Both features result in
such large performance gains that for many years they were configured as an effective
standard for mainframe clients, similar to FICON, which is required for z/OS.
SuperPAV is an extension to HyperPAV support and allows aliases to be borrowed from
eligible peer logical control units (LCUs).
zHPF is a feature that uses a protocol extension for FICON and allows data for multiple
commands to be grouped in a single data transfer. This grouping increases the channel
throughput for many workload profiles. It works on all newer IBM zEnterprise Systems and
it is preferred for these systems because of the performance gains that it offers.
zDDB is a feature for clients with a mixture of mainframe and distributed workloads to use
their powerful IBM Z host facilities to back up and restore open systems data. For more
information, see IBM System Storage DS8000: z/OS Distributed Data Backup,
REDP-4701.
Easy Tier is available in the following modes:
– Automatic mode works on the subvolume level (extent level) and allows auto-tiering in
hybrid extent pools. The most-accessed volume parts go to the upper tiers. In
single-tier pools, it allows auto-rebalancing if it is turned on.
– Manual Dynamic Volume Relocation (DVR) mode works on the level of full volumes
and allows volumes to be relocated or restriped to other places in the DS8000 online. It
also allows ranks to be moved out of pools. For more information, see IBM DS8000
Easy Tier (Updated for DS8000 R9.0), REDP-4667.
– Easy Tier Heat Map Transfer (HMT) automatically replicates a heat map to remote
systems to ensure that they are also optimized for performance and cost after a
planned or unplanned outage. For more information, see IBM DS8870 Easy Tier Heat
Map Transfer, REDP-5015.
The Encryption Authorization feature provides data encryption by using IBM Full Disk
Encryption (FDE) and key managers, such as IBM Security Guardium Key Lifecycle
Manager. The key manager must be licensed separately.
For more information about these features, see IBM DS8900F Introduction and Planning
Guide, SC27-9560.
Important: With the CS license bundle, order subcapacity licensing, which is less than the
total physical raw capacity, only when a steady remote connection for the DS8000 is
available.
By using a remote connection for Call Home, the CS license can be based on the usable
capacity of the volumes that will potentially be in CS relationships. This amount typically is
less than the total raw capacity.
Note: The CS license goes by the capacity of all volumes that are involved in at least one
CS relationship. The CS license is based on the provisioned capacity of volumes and not
on raw capacity. If overprovisioning is used on the DS8000 with a significant number of CS
functions, the CS license needs to be equal only to the total provisioned capacity. This
situation is true even if the logical volume capacity of volumes in CS is greater.
For example, with overprovisioning, if the total rank raw capacity of a DS8900F is 100 TB but
200 TB of thin-provisioned volumes are in MM, only a 100 TB CS license is needed.
For FlashCopy volumes, you must count the source plus target volumes as provisioned
capacity. Several examples are shown in “Pricing examples for Copy Services” on page 205.
Pricing examples for Copy Services
The following examples are provided to illustrate your CS licensing requirements in an FB
environment:
Scenario 1: For FlashCopy for a 10 TB source, the purchase of a 20 TB capacity CS
license is required.
Scenario 2: To use MM on a 10 TB source and then FlashCopy on a 10 TB target, the
purchase of a 10 TB CS license on the source and a 20 TB CS license on the target
DS8000 is required.
Scenario 3: To use GM on a 10 TB source and then FlashCopy on a 10 TB target DS8000,
the purchase of a 10 TB CS license on the source and a 20 TB CS license on the target
DS8000 is required.
Scenario 4: To use MGM on a 10 TB source and then FlashCopy on a 10 TB target, the
purchase of a 10 TB CS license on the source and secondary, and the purchase of a
20 TB CS license on the target is required.
However, consider that with MGM, certain scenarios can require more FlashCopy targets
on the local machines, and so larger CS terabyte scopes are necessary.
Scenario 5: A client wants to perform GM for a 10 TB source and use FlashCopy on the
target for practicing DR, but they do not want to affect the normal GM. This situation
requires a GM secondary, GM Journal, and a FlashCopy volume on the secondary
system. The source DS8900F requires a 10 TB CS license, and the target DS8880
requires a 30 TB CS license.
Scenario 6: To perform 4-site replications, the purchase of the correct capacity license
requirement for each storage system is required.
Note: If zDDB is used on a system with no CKD ranks, a 10 TB zsS license must be
ordered to enable the FICON attachment functions.
Drive features
The BF is based on the raw (decimal terabyte) capacity of the drives. The pricing is based on
the drive performance, capacity, and other characteristics that provide more flexible and
optimal price and performance configurations.
To calculate the raw (gross) physical capacity, multiply the number of drives in each drive set
by their individual capacity. For example, a drive set of sixteen 3.84 TB drives
has a 61.44 TB raw capacity.
Table 7-3 shows the Feature Codes for high-performance flash drive sets.
Important: Check with an IBM Systems Service Representative (IBM SSR) or go to the
IBM website for an up-to-date list of available drive types.
Related information: New storage system expansions for DS8900F are delivered only
with FDE drives.
The Database Protection (for open and FB) feature and the Thin Provisioning feature come
with the BF license bundle.
Ordering granularity
You order the license bundles by the terabyte, but not by a single terabyte. The granularity is
slightly larger. For example, below 100 TB total raw capacity, the granularity increment for an
upgrade is 10 TB. With larger total capacities, the granularity is larger. For more information,
see Table 7-5.
Tip: For more information about the features and considerations when you order DS8900F
licensed functions, go to IBM Offering Information and search for the IBM DS8900F
Models 993, 994, 996, 998, and E96 announcement letter by using the DS8900F keyword
as a search term.
7.2 Activating licensed functions
You can activate the license keys of the DS8000 after the IBM SSR completes the storage
complex installation. If you plan to use the Storage Management GUI to configure your new
storage, after the initial login as admin, the setup wizard guides you to download your keys
from the DSFA website and activate them. However, if you plan to use the Data Storage
Command-line Interface (DS CLI) to configure your new storage, you must first obtain the
necessary keys from the DSFA website.
Before you connect to the DSFA website to obtain your feature activation codes, ensure that
you have the following items:
The IBM License Function Authorization documents. If you are activating codes for a new
storage unit, these documents are included in the shipment of the storage unit. If you are
activating codes for an existing storage unit, IBM sends the documents to you in an
envelope.
A USB memory device that can be used for downloading your activation codes if you
cannot access the DS Storage Manager from the system that you are using to access the
DSFA website. Instead of downloading the activation codes in softcopy format, you can
print the activation codes and manually enter them by using the DS Storage Manager GUI
or the DS CLI. However, this process is slow and error-prone because the activation keys
are 32-character strings.
You can obtain the required information by using the DS Storage Management GUI or the
DS CLI. If you use the Storage Management GUI, you can obtain and apply your activation
keys at the same time. These options are described next.
Note: Before you begin this task, resolve any current DS8000 problems that might
exist. You can contact IBM Support to help you resolve these problems.
4. To begin the guided procedure to acquire and activate your feature activation keys, select
System Setup → Licensed Functions, and then complete the Activate Licensed
Functions routine, as shown in Figure 7-2.
Note: You can download the keys and save the XML file to the folder that is shown here,
or you can copy the license keys from the IBM DSFA website.
5. After you enter all your license keys, click Activate to start the activation process, as
shown in Figure 7-3.
6. Click Summary in the System Setup wizard to view the list of licensed functions or feature
keys that are installed on your DS8000, as shown in Figure 7-4.
8. To obtain the machine signature and machine type and model (MTM) after the installation,
go to the Dashboard and click Actions → Properties, as shown in Figure 7-6.
Figure 7-6 Properties window showing the machine signature and MTM
Important: The initial enablement of any optional DS8000 licensed function is a
concurrent activity (assuming that the correct level of Licensed Internal Code (LIC) is
installed on the system for the function).
The following activation activities are disruptive and require an initial machine load or
restart of the affected image:
Removal of a DS8000 licensed function to deactivate the function.
A lateral change or reduction in the license scope. A lateral change is defined as
changing the license scope from FB to CKD or from CKD to FB. A reduction is
defined as changing the license scope from all physical capacity (ALL) to only FB or
only CKD capacity.
Note: Before you begin this task, you must resolve any current DS8000 problems that
exist. You can contact IBM Support for help with resolving these problems.
9. Click Activate to enter and activate your licensed keys, as shown in Figure 7-2 on
page 208.
10.Wait for the activation process to complete and select Licensed Functions to show the
list of activated features.
dscli> lssi
Date/Time: 01 April 2022 15:16:29 CEST IBM DSCLI Version: 7.9.21.80 DS: -
Name ID Storage Unit Model WWNN State ESSNet
==================================================================================
ds8k-r9-01 IBM.2107-75HAL91 IBM.2107-75HAL90 994 5005076312345678 Online Enabled
dscli> showsi
Date/Time: 01 April 2022 15:16:35 CEST IBM DSCLI Version: 7.9.21.80 DS: -
Name ds8k-r9-01
desc Sand Shark
ID IBM.2107-75HAL91
Storage Unit IBM.2107-75HAL90
Model 994
WWNN 5005076312345678
Signature abcd-ef12-3456-7890
State Online
ESSNet Enabled
Volume Group V0
os400Serial 050
NVS Memory 8.0 GB
Cache Memory 143.1 GB
Processor Memory 183.9 GB
MTS IBM.5331-75HAL90
numegsupported 16
ETAutoMode all
ETMonitor all
IOPMmode Disabled
ETCCMode -
ETHMTMode Enabled
ETSRMode Enabled
ETTierOrder High performance
ETAutoModeAccel Disabled
Figure 7-7 Obtaining DS8000 information by using the DS CLI
Note: The showsi command can take the storage facility image (SFI) serial number as a
possible argument. The SFI serial number is identical to the storage unit serial number,
except that the SFI serial number ends in 1 instead of 0 (zero).
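As a minimal sketch (the serial number is a placeholder that matches the examples in this
chapter), the SFI can be named explicitly on the command line:
dscli> showsi IBM.2107-75HAL91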
Gather the following information about your storage unit:
The Machine Type - Serial Number (MTS), which is a string that contains the machine type
and the serial number. The machine type is now mostly 5341 (in the example above, it is 5331),
and the last seven characters of the string are the machine’s serial number (XYABCDE), which
always ends with 0 (zero).
The model, which, for example, is 996 for a DS8950F.
The machine signature, which is found in the Machine signature field and uses the
following format:
ABCD-EF12-3456-7890
Table 7-6 documents this information, which is entered at the DSFA website to retrieve the
activation codes.
The fields to record in Table 7-6 are the machine type and the machine signature.
Note: A DS8880 is shown in the following figures. However, the steps are identical for all
models of the DS8000 family.
2. Click DS8000 series. The Select DS8000 series machine window opens, as shown in
Figure 7-9. Select the appropriate 5341, 533x, 283x, 242x, or 958x machine type.
3. Enter the machine information that was collected in Table 7-6 on page 213 and click
Submit. The View machine summary window opens, as shown in Figure 7-10.
The View machine summary window shows the total purchased licenses and the number
of licenses that are currently assigned. When you assign licenses for the first time, the
Assigned field shows 0.0 TB.
7. The Retrieve activation codes window opens, which shows the license activation codes for
the storage image, as shown in Figure 7-12. Print the activation codes or click Download
now to save the activation codes in an XML file that you can import into the DS8000.
To apply activation codes by using the DS CLI, complete the following steps:
1. Run the showsi command to display the DS8000 machine signature, as shown in
Figure 7-14.
dscli> showsi
Date/Time: 06 April 2022 15:01:47 CEST IBM DSCLI Version: 7.9.30.154 DS: -
Name ds8k-r9-01
desc Sand Shark
ID IBM.2107-75HAL91
Storage Unit IBM.2107-75HAL90
Model 998
WWNN 5005076312345678
Signature abcd-ef12-3456-7890
State Online
ESSNet Enabled
Volume Group V0
os400Serial 6DF
NVS Memory 127.5 GB
Cache Memory 4168.4 GB
Processor Memory 4343.4 GB
MTS IBM.5341-75HAL90
numegsupported 16
ETAutoMode tiered
ETMonitor automode
IOPMmode Disabled
ETCCMode -
ETHMTMode Enabled
ETSRMode Enabled
ETTierOrder High performance
ETAutoModeAccel Disabled
2. Obtain your license activation codes from the IBM DSFA website, as described in 7.2.2,
“Obtaining the activation codes” on page 213.
3. Enter the applykey command at the DS CLI command prompt, as follows. The -file parameter
specifies the key file, and the second parameter specifies the storage image.
dscli> applykey -file c:\53xx_75XXXX0.xml IBM.2107-75XXXX1
Or you can apply individual keys by running the following command:
dscli> applykey -key f190-1234-1234-1234-1234-5678-1234-5678 IBM.2107-75XXXX1
CMUC00199I applykey: License Machine Code successfully applied to storage image
IBM.2107-75XXXX1.
4. Verify that the keys were activated for your storage unit by running the lskey command, as
shown in Figure 7-15.
dscli> lskey
Activation Key Authorization Level (TB) Scope
==========================================================================
Base function 130.8 All
Copy services 130.8 All
Encryption Authorization on All
Global Mirror (GM) 130.8 All
High Performance FICON for System z (zHPF) on CKD
IBM HyperPAV on CKD
IBM System Storage DS8000 Thin Provisioning on All
IBM System Storage Easy Tier on All
IBM z/OS Distributed Data Backup on FB
Metro/Global Mirror (MGM) 130.8 All
Metro Mirror (MM) 130.8 All
Operating environment (OEL) 130.8 All
Parallel access volumes (PAV) 60.5 CKD
Point-in-time copy (PTC) 130.8 All
RMZ Resync 130.8 CKD
Remote Mirror for z/OS (RMZ) 130.8 CKD
z-synergy services 60.5 CKD
Figure 7-15 Using the lskey command to list the installed licenses
For more information about the DS CLI, see IBM DS8000 Series Command-Line Interface
User’s Guide, SC27-9562.
The BF license bundle must be installed before ranks can be formatted for FB (open
systems). The zsS license bundle must be installed before ranks can be formatted for CKD
(mainframe).
Tip: The BF license must be ordered for the full physical capacity anyway, and the CS
license can be ordered for only those volumes that are in CS relationships. For both the BF
and CS bundles, configure the scope as “ALL” from the beginning.
The Technical Account Manager (TAM) is a new role that combines the previous roles of Technical
Sales Manager and Technical Advisor. The TAM acts as the single point of contact for the
client. They set up a welcome call, schedule monthly activity reports, advise on code
currency, help schedule code upgrades, facilitate the initial installation, help with Call Home
and remote support setup, and perform other activities.
Enhanced Support Time targets an incident response time of 30 minutes or less for Severity 1
and 2 incidents in the United States and selected countries.
With Predictive Support, IBM proactively notifies customers of possible problems to prevent
issues from escalating or causing an impact. Predictive Support leverages statistics and
performance metrics from IBM Storage Insights. For more information about IBM Storage
Insights, see 12.10, “Using IBM Storage Insights” on page 445.
Table 7-8 shows the Feature Codes for each of the available options.
The feature codes for Expert Care might differ from region to region. For a full listing, see the
relevant IBM Hardware Announcement for your region.
Specific options are also available regarding contact and resolution times, including 1-hour
committed contact, 4-hour committed onsite, or 4-, 6-, 8-, 12-, 24-, 48-, or 72-hour committed
fix time, each with a corresponding feature code. For more information, contact your IBM
Sales Representative.
Note: Planning information for all DS8900F models, including the rack-mounted model
993, is covered in the same guide.
The completed customization worksheets specify the initial setup for the following items:
Company information: Provide important company and contact information. This
information is required to ensure that IBM support personnel can reach the appropriate
contact person or persons in your organization, or send a technician to service your
system in the event of a critical event as quickly as possible.
Hardware Management Console (HMC) network: Provide the IP address and local area
network (LAN) settings. This information is required to establish connectivity to the
Management Console (MC).
Remote support, including Call Home: Provide information to configure Remote Support
and Call Home. This information helps to ensure timely support for critical serviceable
events on the system.
Notification: Provide information to receive Simple Network Management Protocol (SNMP)
traps and email notifications. This information is required if you want to be notified about
serviceable events.
Power control: Provide your preferences for the power mode on the system.
Control switch: Provide information to set up the control switches on the system. This
information is helpful if you want to customize settings that affect host connectivity for
IBM i and IBM Z hosts.
Assign two or more storage administrators and two or more security administrators to
manage your storage system. To preserve the dual control that is recommended for recovery
key management, do not assign both storage administrator and security administrator roles to
the same user. Assign one or more users to each of the following roles:
The Administrator (admin) has access to several HMC or MC service functions and all
storage image resources, except for specific encryption functions. This user authorizes the
actions of the Security Administrator during the encryption deadlock prevention and
resolution process.
The Security Administrator (secadmin) has access to all encryption functions. A user with
an Administrator role is required to confirm the actions that are taken by a user of this role
during the encryption deadlock prevention and resolution process.
The Physical operator (op_storage) has access to physical configuration service methods
and resources, such as managing the storage complex, storage image, rank, array, and
extent pool objects.
The Logical operator (op_volume) has access to all service methods and resources that
relate to logical volumes, hosts, host ports, logical subsystems (LSSs), and volume
groups, excluding security methods.
The Monitor role has access to all read-only, nonsecurity MC service methods, such as
the list and show commands.
The IBM Service role (ibm_service) has access to all MC service methods and resources,
such as running code loads and retrieving problem logs. This group also has the privileges
of the Monitor group, excluding security methods.
The IBM Engineering role (ibm_engineering) has all access that the ibm_service group
has plus more permissions to manage Fibre Channel (FC) Port settings, manage
data-at-rest encryption, and modify Easy Tier settings.
The Copy Services (CS) operator (op_copy_services) has access to all CS methods and
resources, and the privileges of the Monitor group, excluding security methods.
The Logical and Copy operator (op_volume and op_copy_services) has the combined
access of the Logical operator and the Copy operator.
Important: Resource groups offer an enhanced security capability that supports the
hosting of multiple customers with CS requirements. It also supports a single client with
requirements to isolate the data of multiple operating system (OS) environments. For
more information, see IBM DS8000 Copy Services: Updated for IBM DS8000 Release
9.1, SG24-8367.
For more information about the capabilities of certain user roles, see User roles or use the
DS GUI Help function.
The DS8900F provides a storage administrator with the ability to create custom user roles
with a fully customized set of permissions by using the DS GUI or DS CLI. This set of
permissions helps to ensure that the authorization level of each user on the system exactly
matches their job role in the company so that the security of the system is more robust
against internal attacks or mistakes.
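For illustration only, the same role assignments can be made from the DS CLI by using the
mkuser and chuser commands. The user names and the temporary password in this example are
hypothetical; each new user is prompted to change the password at first login:
dscli> mkuser -group op_storage -pw tempw0rd stgadmin2
dscli> mkuser -group secadmin -pw tempw0rd secadmin2
dscli> chuser -group monitor stgadmin2
The lsuser command lists the defined user accounts and their assigned roles.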
You can also consider using a Lightweight Directory Access Protocol (LDAP) server for
authenticating IBM DS8000 users. You can now take advantage of the IBM Copy Services
Manager and its LDAP client that comes preinstalled on the DS8900F HMC. For more
information about remote authentication and LDAP for the DS8900F, see LDAP
Authentication for IBM DS8000 Systems: Updated for DS8000 Release 9.1, REDP-5460.
For more information, including considerations and best practices for DS8900F encryption,
see 5.3.6, “Key manager servers for encryption” on page 161 and IBM DS8000 Encryption for
Data at Rest, Transparent Cloud Tiering, and Endpoint Security (DS8000 Release 9.2),
REDP-4500.
For more information about encryption license considerations, see “Encryption activation
review planning” on page 162.
Two components are required to provide full network protection:
The first component is Internet Protocol Security (IPsec), and for Gen 2 security, IPsec v3
is required. IPsec protects network communication at the internet layer or the packets that
are sent over the network. This configuration ensures that a valid workstation or server
communicates with the HMC and that the communication between them cannot be
intercepted.
The second component is Transport Layer Security (TLS) 1.2, which provides protection at
the application layer to ensure that valid software (external to the HMC or client) is
communicating with the software (server) in the HMC.
Note: The details for implementing and managing security requirements are provided in
IBM DS8870 and NIST SP 800-131a Compliance, REDP-5069.
If you perform logical configuration by using the DS CLI, the following steps provide a
high-level overview of the configuration flow. For more detailed information about using and
performing logical configuration with the DS CLI, see Chapter 10, “IBM DS8900F Storage
Management Command-line Interface” on page 339.
2. Create arrays: Configure the installed flash drives as redundant array of independent disks
(RAID) 6, which is the default and preferred RAID configuration for the DS8900F.
3. Create ranks: Assign each array as a Fixed-Block (FB) rank or a Count Key Data (CKD)
rank.
4. Create extent pools: Define extent pools, associate each one with Server 0 or Server 1,
and assign at least one rank to each extent pool. To take advantage of storage pool
striping, you must assign multiple ranks to an extent pool.
Important: If you plan to use IBM Easy Tier (in particular, in automatic mode), select
the All pools option to receive all of the benefits of Easy Tier data management.
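As a minimal illustration of steps 2 - 4 with the DS CLI (the array site, array, rank, and
pool identifiers are examples only; use the corresponding ls commands to display the
identifiers on your system):
dscli> lsarraysite
dscli> mkarray -raidtype 6 -arsite S1
dscli> mkrank -array A0 -stgtype fb
dscli> mkextpool -rankgrp 0 -stgtype fb FB_pool_0
dscli> chrank -extpool P0 R0
The same flow applies to CKD configuration by specifying -stgtype ckd when the rank and the
extent pool are created.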
Figure 8-1 LUN configuration for shared access
Note: Avoid intermixing host I/O with CS I/O on the same ports for performance
reasons.
The DS GUI was designed and developed with three major objectives:
Speed: A graphical interface that is fast and responsive.
Simplicity: A simplified and intuitive design that can drastically reduce the time that is
required to perform functions with the system, which reduces the total cost of ownership
(TCO).
Commonality: Use of common graphics, widgets, terminology, and metaphors that
facilitate the management of multiple IBM storage products and software products. The
DS GUI that was introduced with Release 9.0 enables a consistent graphical experience
and easier switching between other products like IBM FlashSystem®, IBM Spectrum
Virtualize, or IBM Spectrum Control and IBM Storage Insights.
Based on these objectives, following the initial setup of the storage system, a system
administrator can use the DS GUI to complete the logical configuration and then prepare the
system for I/O. After the initial setup is complete, the system administrator can perform
routine management and maintenance tasks with minimal effort, including the monitoring of
performance, capacity, and other internal functions.
Logical storage configuration is streamlined in the DS GUI for ease of use. The conceptual
approach of array site, array, and ranks is streamlined into a single resource, which is
referred to as an array (or managed array). The storage system automatically manages flash
adapter pairs and balances arrays and spares across the two processor nodes without user
intervention.
Creating usable storage volumes for your hosts is equally simplified in the DS GUI. The
system can automatically balance volume capacity over a pool pair. If custom options are
required, you can override the defaults in the DS GUI and tailor the configuration to your
workload needs.
Configuring connections to hosts is also easy. Host ports are updated automatically and host
mapping is allowed at volume creation.
The overall storage system status can be viewed at any time from the dashboard window. The
dashboard presents a view of the overall system performance when a system administrator
first logs in for a picture of the system status. This window also contains a “Hardware View”
and a “System Health View” that display the status and attributes of all hardware elements
on the system.
Additionally, functions such as user access management, licensed function activation, setup of
encryption, IBM Fibre Channel Endpoint Security, remote authentication, and modification of
the power mode or Fibre Channel (FC) port protocols are available to the system administrator.
All functions that are performed in the DS GUI can be scripted by using the DS Command-Line
Interface (DS CLI), which is described in Chapter 10, “IBM DS8900F Storage Management
Command-line Interface” on page 339.
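For example, an interactive DS CLI session can be started against the primary HMC, or a single
command can be run in script mode. The HMC IP address and the password file name shown here
are placeholders; the encrypted password file is created with the managepwfile command:
dscli -hmc1 10.10.10.1 -user admin -pwfile security.dat
dscli -hmc1 10.10.10.1 -user admin -pwfile security.dat lssi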
9.2 DS GUI: Getting started
This section describes how to accomplish the following tasks:
Accessing the DS GUI
System Setup wizard
Configuring Fibre Channel port protocols and topologies
Managing and monitoring the storage system
Storage Monitoring and Servicing from the Unified Service GUI
On a new storage system, the user must log on as the administrator. The password expires
immediately, and the user is forced to change the password.
Figure 9-2 IBM Copy Services Manager window started from the DS GUI
The wizard guides the admin user through the following tasks:
1. Set the system name.
2. Activate the licensed functions.
3. Provide a summary of actions.
1. The System Setup window opens with the Welcome pane, as shown in Figure 9-3.
2. Click Next. The Licensed Functions window opens. Click Activate Licensed Functions.
3. The Activate Licensed Functions window opens. Keys for licensed functions that are
purchased for this storage system must be retrieved by using the Machine Type, Serial
Number, and Machine Signature. The keys can be stored in a flat file or an XML file.
Licensed function keys are downloaded from the data storage feature activation (DSFA)
website.
4. When the license keys are entered, click Activate to enable the functions, as illustrated in
Figure 9-6.
5. Click Next to open the Summary window, which is shown in Figure 9-7. If everything looks
correct, click Finish to exit the System Setup wizard. After the wizard is closed, the
System window opens.
Note: The Summary shows licenses for basic functions. The list might include some
advanced functions such as Copy Services (CS), Z Synergy Services (zsS), and IBM
Copy Services Manager on HMC if the corresponding licenses were activated.
Figure 9-8 Quick way to modify the fiber adapter port protocols
You can also configure the port topology from the System view or from the System Health
overview, as shown in Figure 9-10.
Figure 9-10 Modifying the FC port protocol from the System Health overview
You can also perform this configuration during logical configuration. For more information, see
9.6.4, “Creating FB host attachments” on page 282, and 9.13, “Monitoring system health” on
page 309.
9.3 Managing and monitoring the storage system
When the initial setup of the system is complete, the Dashboard opens. This view is what
opens after a user logs in. It displays the main hardware components of the DS8900F system
and shows the status of the system hardware, a view of the overall system performance for a
quick picture of the system performance, and a summary of the system capacity. You can
return to this window at any time by clicking the Dashboard icon in the upper left of the GUI.
Note: Different users might have a limited view of the Dashboard when logging in,
depending on their role. Most of the material that is documented here describes what the
Administrator role sees.
From the DS GUI, the administrator can manage the system by performing actions for various
activities, such as:
Logical configuration
Controlling how the system is powered on or off
Modifying the FC port protocols and customer network settings
Modifying Easy Tier settings
Viewing system properties
Displaying performance monitoring graphs
Accessing the embedded DS CLI
Viewing the status and properties of some CS functions, such as FlashCopy and mirroring
Figure 9-11 presents a high-level overview of the System window and all the objects that can
be accessed from this window.
Note: The menu items and actions that are shown in the DS GUI depend on the role of the
user that is logged in, and they can vary for each user. For more information about user
roles, click Help at the upper right of the DS GUI window and search for user role.
System properties:
System name
Current state
Product type
Machine Signature (string of characters that identifies a DS8000 storage system), which is
used to retrieve license keys from the DSFA website
Hardware component summary (such as processor type, total subsystem memory, raw
data storage capacity, or number of FC ports)
– Export reports.
Since Release 9.2, every user role, including Monitor, can download reports such as
the System Summary comma-separated values (CSV) report, Performance Summary
CSV report, Easy Tier Summary report, and FC Connectivity report. Previously,
exporting reports was limited to the Storage Administrator role.
To select multiple options that will be saved in a compressed CSV file, click the
Download icon. The options include the System Summary, Performance Summary,
Easy Tier Summary, and the FC Connectivity report.
Note: The System Capacity section in the System Summary CSV is composed of
consolidated data. The System Capacity, Used IBM Z (Count Key Data (CKD))
Capacity, and Used Open Systems (Fixed-Block (FB)) Capacity sections in the
System Summary CSV are now combined into one section that is called System
Capacity. All sections are now shown with the column headers listed even if there
are no items in the list. For example, the FlashCopy section is shown even if no
FlashCopy relationships are present on the system.
Pools menu:
– Arrays by Pool: Access this view to see all the pools on the system along with the
arrays that they contain. Use this view to access actions that can be performed on
pools and arrays. This view shows any unconfigured arrays.
– Volumes by Pool: Access this view to see all the pools on the system along with the
volumes that they contain. Use this view to access actions that can be performed on
pools and volumes.
Volumes menu:
– Volumes: Access this view to see all the volumes on the system. Use this view to
access all actions that can be performed on volumes, such as create, modify, or delete
volumes.
– Volumes by Pool: This view is the same one that is described in the Pools menu.
– Volumes by Host: Access this view to see volumes that are based on the host or host
cluster to which they are assigned and all volumes that are not mapped to a host. Use
this view to access all actions that can be performed on volumes and hosts or host
clusters.
– Volumes by LSS: Access this view to see volumes that are based on the logical
subsystem (LSS) to which they belong. Use this view to access all actions that can be
performed on volumes and LSSs.
Hosts menu:
– Hosts: Access this view to see all the hosts and host clusters on the system. Use this
view to access all actions that can be performed on hosts and host clusters, such as
create, modify, or delete hosts or host clusters, and the state of host port logins.
– Volumes by Host: The same view that is described in the Volumes menu.
Copy Services menu:
– FlashCopy: The FlashCopy window provides details about FlashCopy relationships.
– Mirroring: The Mirroring window provides details and status information about Remote
Mirror and Remote Copy volume relationships.
– Mirroring Paths: The Mirroring Paths window displays a list of existing Remote Mirror
and Remote Copy path definitions.
Access menu:
– Users:
Only users with the administrator role can access this menu. This menu opens the
Users window. A system administrator can use this menu to perform the following
actions:
• Create user accounts.
• Set a user account role.
• Set temporary passwords (to be reset at first use by the new account).
• Modify an existing user account role.
• Reset an existing user account password.
• Disconnect a user account.
• Determine a user account connection (DS CLI or GUI).
• Remove user accounts.
• Advanced.
On the Advanced tab of the System settings window, you can allow service access,
enable ESSNet CS, set the IBM i serial number prefix, enable control-unit initiated
reconfiguration (CUIR) for IBM Z, and select the power control mode for the storage
system. In addition, you can manage service settings and work with other settings
for your system.
– Notifications:
• Call Home.
You can enable Call Home on your Management Console (MC) to send an
electronic Call Home record to IBM Support when there is a problem within the
storage complex.
• Syslog.
You can define, modify, or remove syslog servers. You can also enable extra
security with Transport Layer Security (TLS) for the syslog.
– Support:
• IBM Remote Support Center (RSC).
You can configure RSC to allow IBM Support to remotely access this system to
quickly resolve any issues that you might be having. You can choose whether the RSC
connection stays open always, closes 2 hours after IBM Support logs off, or remains
closed. For added security, you can require an access code for remote support.
• Assist On-site.
You can configure the Assist On-site (AOS) feature, which allows IBM Support to
remotely access the MC and storage system. Choose an option to stop, start, or
restart the AOS service.
• Troubleshooting.
You can restart the local or remote HMC to correct communication issues between
the storage system and HMC.
You can refresh the GUI cache if the data in the Storage Management GUI is not in
sync with data in the DS CLI or IBM Spectrum Control.
You can restart the web servers and communication paths that are used by
IBM Enterprise Storage Server Network Interface (IBM ESSNI).
– GUI Preferences:
• Login Message.
You can enter a message that is displayed when users log in to either the DS GUI or
an interactive DS CLI session.
• General.
You can set the default logout timeout and choose to show suggested tasks.
9.3.1 Storage Monitoring and Servicing from the Unified Service GUI
The Unified Service GUI, shown in Figure 9-13 on page 249, provides access to all service
functions and tools. Before Release 9.2, all these features were available for use only through
the Service Web User Interface (WUI) on the HMC. All these functions help IBM Support
representatives perform specific tasks like data collection, event management, miscellaneous
equipment specification (MES), model conversions, and microcode and hardware upgrades.
Important: The Unified Service GUI should be accessed only by IBM Support representatives,
or under their supervision.
The Service Dashboard was added to the DS8900F GUI with Release 9.2. It is accessible only to
users with an IBM service role privilege on the DS8900F GUI.
All functions in the Service Dashboard were brought over from the WUI, and they are
categorized and behave the same way as in the WUI, which simplifies management. These
functions are still available through WUI and HMC access, but they are now also available in
the DS8900F GUI interface.
Figure 9-13 Unified Service GUI
1. Data collection:
a. Perform Data collection on Demand: With this option, you can generate and offload the
existing complete PE package with different formats.
i. General PE Package: Collects data for any service actions, such as installation or
removal, MES, repair, and code load. This package contains the data of reliability,
availability, and serviceability (RAS) and functional code components (not including
state saves).
ii. Client User Interface (ESSNI, DS GUI, or API) Package: Collects data for problems
that originate from using customer user interface applications, such as the DS GUI. This
package contains the data of various software components running on HMC, such
as DS GUI, ESSNI, and RESTful APIs. DS CLI traces are not on the HMC.
iii. All data packages for removable media offload only option: Collects both the PE
packages and offloads the data to removable media.
b. Offload Data by Area: With this option, you can download an area package of any of
the following line items. This option is intended only as a substitute for the “PE
Package/ Full Data collection On-Demand” function. Many of the listed items contain
duplicate data. If you require more than one or two of these items, then it is a best
practice to cancel this function and use the “PE Package /Full Data collection
on-demand” function.
i. ESSNI.
ii. DEVICE Object Mismatch.
iii. Panics.
iv. Failed Service Action (Repair).
v. CDA.
h. Update HMC code: Updates the HMC code from downloaded code in the HMC.
i. Update Storage Facility Code: A service user can select the available facility code and
either Distribute only, Activate only, or both.
j. Select and Install corrective Services: With this option, you can install code add-ons for
certain components.
k. ICS Utilities.
l. Prepare HMC Upgrade: Selects the recovery image for HMC for an upgrade activity.
After a restart, the HMC begins the upgrade.
m. Rebuild Peer HMC: Rebuilds the Peer HMC.
n. Advanced Utilities:
i. Advanced configuration, Install Corrective Service, Display Library Contents, Clear
Library Contents, Delete Release Bundle and Package, and Delete a Recovery
Image.
ii. Display/Update Bootlist image & Reverse eServer Firmware is available in the WUI
update section under the HMC in Backlevel utilities, and in the DS8900F GUI under
Advanced Utilities.
iii. Display/Reset CDA SFI attributes & Reset Serviceable Event Tracking is available in
the WUI update section under HMC in Miscellaneous utilities, and in the DS8900F
GUI under Advanced Utilities.
o. CCL Utilities: With concurrent code load, you can perform CCL I/O Enclosure, IBM
Power firmware, and storage enclosure updates.
p. Non-concurrent code load (NCCL) Utilities: You can perform NCCL for the following
components:
i. NCCL SFI code Activation Single LPAR: No IML, NCCL SFI code Activation Single
LPAR – Resume, and NCCL SFI code Activation Single LPAR – Start CPSS.
ii. NCCL eServer firmware update, NCCL eServer firmware Single Node Update,
NCCL I/O Enclosure firmware update, and NCCL Power firmware update.
iii. NCCL SFI Code Activation: IML, and NCCL SFI Code Activation – NO IML.
3. Drive Utilities: You can perform Display Drive Code Levels, Display Drive Update Status,
Run Drive Pre-verify, CCL Update Drive Code Level, NCCL Update Drive Code Level, and
Terminate Drive Update on Drives.
4. Install Hardware: With this option, you can use the hardware component installation
assistance wizard:
a. Storage Facility Field Install.
b. Generate Install Report: This section is available under the Storage Facility Management
section in the WUI interface.
c. View/Certify Drive: View and certify installed drives.
d. You can install the option Open wizard to help with the installation or MES upgrade of
the following items:
i. Install I/O Enclosure or Components.
ii. Install Rack Power Components.
iii. Install Storage Enclosure or Drives.
iv. Install Expansion Rack.
e. Storage Facility Conversion: With this option, you can do a model conversion.
10.Node0 & Node1 Management: Manage and change the Node, controller, or servers:
a. Set No-rsStart.
b. Reset No-rsStart.
c. Launch Advanced System Management (ASM).
d. AIX Command Processing.
e. Change/Show LPAR State.
f. CEC Power Control.
g. Display CEC Drive Status.
h. Rebuild CEC Hard Drive.
i. Advance Utilities:
i. Open Terminal Window.
ii. Close Terminal Window.
iii. Backup Partition Profile.
iv. Restore Partition Profile.
v. Rebuild Managed System Information.
vi. Service Processor Status.
vii. Reset or Remove Connections.
viii.Reference code History.
ix. View License.
x. Identify LED.
xi. Test LED.
11.Storage System: Manage and change the storage facility:
a. View Storage Facility State.
b. Reset Service Intent.
c. PCIe Graphic Analysis.
d. Change/Show SFI State.
e. View/Change Processor and Memory Allocation (Variable Image).
f. Secure Data Overwrite.
g. Discontinue Storage Facility.
h. Advanced Utilities:
i. View/Reset Attention Indicators.
ii. View Device RM Harvest Phase.
iii. View Hardware Topology.
iv. View Storage Facility Power Status.
v. View Storage Facility Resources States.
12.Service Information Center: Open the service information center.
13.Service GUI (WUI): Open the WUI/HMC console.
In IBM Documentation, you can discover introductory information about the DS8900F
architecture, features, and advanced functions. You can also learn about the available
management interfaces and tools, and troubleshooting and support.
You can obtain more information about using the DS GUI for common tasks:
Logically configuring the storage system for open systems and IBM Z attachment
Managing user access
Attaching host systems
IBM Documentation also provides links to external resources for more information about
IBM storage systems and other related online documentation.
Ethernet Network
The network settings for both HMCs are configured by IBM Support personnel during system
installation. To modify the HMC network information postinstallation, click Settings →
Network → Ethernet Network, as shown in Figure 9-15.
Figure 9-16 shows the FC ports window with all available options that are listed.
Note: Exporting the FC port information does not produce the comprehensive report that is
available in the FC connectivity report.
Use these settings to configure the security settings for your DS8900F system.
Data-at-rest encryption
To enable data-at-rest encryption, select the Settings icon from the DS GUI navigation menu
on the left. Click Security to open the Security window, and click the Data at Rest
Encryption tab, as shown in Figure 9-17 on page 257.
Figure 9-17 Enabling data-at-rest encryption
You can define a custom certificate for communication between the encryption key servers
(typically IBM Security Guardium Key Lifecycle Manager) and the storage system.
Important: If you plan to activate data-at-rest encryption for the storage system, ensure
that the encryption license is activated and the encryption group is configured before you
begin any logical configuration on the system. After the pools are created, you cannot
disable or enable encryption.
If the DS8900F was ordered with the Local Key Management feature, then the DS8900F
manages the key group. Local Key Management can be set up only by using the DS CLI.
For more information, see IBM DS8000 Encryption for Data at Rest, Transparent Cloud
Tiering, and Endpoint Security (DS8000 Release 9.2), REDP-4500.
To enable IBM Fibre Channel Endpoint Security, select the Settings icon from the DS GUI
navigation menu on the left. Click Security to open the Security window, and click the Fibre
Channel Endpoint Security tab, as shown in Figure 9-18.
Communications Certificate
The Communications Certificate tab of the Security Settings window can be used to assign
or create an encryption certificate for each HMC with HTTPS connections to the storage
system. You can also create certificate signing requests (CSRs), import existing certificates,
create self-signed certificates, and view the certificate information for each HMC, as shown in
Figure 9-20.
The Create Certificate Signing Request button is used to generate a CSR that is sent to a
certificate authority (CA) for verification. As shown in Figure 9-21 on page 259, the necessary
information to include in the CSR are the HMC fully qualified domain name (FQDN),
organization details, the length of time that the certificate must be valid, and an email
address.
Figure 9-21 Certificate signing request
After the CSR file is created, you can download that file for processing with your trusted CA.
The extra two options that are available here for secure communications to the DS8900F
system are to import an already provided CA certificate from your security group within your
organization, or to create a self-signed certificate.
Licensed Functions
You can display all the installed licensed functions and activate new function keys from this
menu, as shown in Figure 9-22.
The following settings are available:
Easy Tier mode: The available options are Enable, Tiered pools only, Monitor only, or
Disable. When this setting is set to Enable, Easy Tier monitors I/O activity
for capacity in all pools, and manages capacity placement within them.
Easy Tier Heat Map Transfer (HMT): Use this setting to maintain application-level
performance at the secondary site of a DS8000 by transferring the Easy Tier information
to the secondary site.
Easy Tier Allocation order: Specify the allocation order that is used by Easy Tier to select
the drive classes when allocating capacity in a pool.
Easy Tier Automatic mode acceleration: Use this setting to temporarily accelerate data
migration by Easy Tier.
For more information about Easy Tier settings, see IBM DS8000 Easy Tier (Updated for
DS8000 R9.0), REDP-4667.
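On recent code levels, equivalent controls are also exposed through the DS CLI chsi command.
The following line is an illustrative sketch only; it assumes that chsi supports the
-etautomode and -etmonitor parameters (which correspond to the ETAutoMode and ETMonitor
attributes that are displayed by the showsi command). Verify the parameters for your code
level with help chsi before use:
dscli> chsi -etautomode tiered -etmonitor automode IBM.2107-75XXXX1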
zHyperLink
zHyperLink is a short-distance link technology that complements Fibre Channel connection
(IBM FICON) technology to accelerate I/O requests that are typically used for transaction
processing. It consists of point-to-point connections for random reads and writes, and
provides up to 10 times lower latency than High-Performance FICON for IBM Z (zHPF). You
can set it to Enabled, I/O Read Enabled, I/O Write Enabled, or Disabled, as shown in
Figure 9-24.
Note: To take advantage of zHyperLink in DS8000, ensure that CUIR support (under
IBM Z) is enabled.
For more information, see Getting Started with IBM zHyperLink for z/OS, REDP-5493.
Advanced settings
From the Advanced tab of the System Settings window, you can configure system-wide
options, such as power management, CS, IBM Z features, and the DS Open application
programming interface (API).
Power control mode
You can determine how to control the power supply to the storage system. From the
System window, click the Settings icon. Click System to open the System window. Click
the Advanced tab to open the window to manage Power control mode (as shown in
Figure 9-26 on page 262). The following options are available:
– Automatic: Control the power supply to the storage system through the external wall
switch.
– Manual: Control the power supply to the storage system by using the Power Off action
on the System window.
Function settings
The Resource Group Control option is available in the Function Settings section. It allows
a storage administrator to specify which users can perform certain logical configuration
actions, such as create or delete volumes in a pool.
Service Access
The following options are available in the Service Access section:
– DS Service GUI Access.
Allows authorized IBM SSRs to access the DS Service GUI.
– SSH Service Access.
Allows authorized IBM SSRs to access the Secure Shell (SSH) CLI on the HMC.
IBM i
The following option is available in the IBM i section:
IBM i serial number suffix: Enter the IBM i serial number suffix to avoid duplicate logical
unit number (LUN) IDs for an IBM i (AS/400) host. Restart the storage system to assign
the new serial number.
IBM Z
The following option is available in the IBM Z section:
CUIR Support: Enables control unit initiated reconfiguration. This option allows
automation of channel path quiesce and resume actions during certain service actions. It
eliminates the requirement for manual actions from the host.
Other settings
The following options are available in the Other Settings section:
– ESSNet CS.
Enables the ESSNet user interface to manage CS on the storage system.
– ESSNet volume group.
Selects the ESSNet user interface to manage the volume group with CS.
– Host precheck.
Enables FB and CKD volume delete protection.
– Device Threshold.
Sets the threshold level for IBM Z at which the system presents a service information
message (SIM) to the operator console for device-related errors. Device threshold
levels are the same type and severity as control unit threshold settings:
• 0: Service, Moderate, Serious, and Acute (all)
• 1: Moderate, Serious, and Acute
Call Home
The DS8900F uses the Call Home feature to report serviceable events to IBM. To ensure
timely action from IBM Support personnel for these events, it is important to enable and
properly configure Call Home on the system.
When enabling Call Home for the first time, you must accept the Agreement for Service
Program when presented. Enter your Company Information, Administrator Information,
and System Information details. Finally, after completing the setup, you can test the Call
Home feature by clicking Test, as shown in Figure 9-27 on page 264.
Syslog
The Syslog window displays the syslog servers that are configured to receive logs from the
DS8900F system. A user with an administrator role can define, modify, or remove up to eight
syslog target servers. Each syslog server must use the same TLS certificate. Events such as
user login and logout, commands that are issued by an authorized user by using the DS GUI
or DS CLI, and remote access events are forwarded to syslog servers. Additionally, events in
the RAS audit log and Product Field Engineer (PFE) actions are also forwarded to the syslog
servers. Messages from the DS8900F are sent by using facility code 10 and severity level 6.
Note: A DS8900F server must use TLS for its communications with the syslog server.
To configure TLS, the customer must generate their own trusted certificate for the
DS8900F syslog process with the CA, and then import the trusted CA file, the CA-signed
certificate file for the machine (in this case, the HMC and its syslog process), and the
key file, as shown in Figure 9-29.
For more information about the setup of the SYSLOG server with TLS, see Encrypting
Syslog Traffic with TLS (Secure Sockets Layer) (SSL).
The process involves external entities such as your trusted CA and potentially the use
of the openssl command to retrieve the syslog server generated key if it is not already
provided by the CA.
The files that are entered into the fields that are shown in Figure 9-29 are:
CA Certificate (ca.pem)
HMC Signed Certificate (cert.pem)
HMC Key (key.pem)
4. In the Enable TLS window, browse for the following certificate files on your local machine:
– The CA certificate file (Example: ca.pem).
– The syslog communications certificate file, which is signed by the CA. (Example:
hmc.pem).
– The extracted Private Key file, which is the private key for the storage system.
(Example: key.pem).
5. Click Enable to complete the TLS configuration.
6. To add a syslog server, click Add Syslog Server, as shown in Figure 9-30, and provide
the following parameters:
– IP Address: The IP address of the external syslog server.
– Port: The TCP port for the external syslog server (the default is 514).
7. After you review the details, click Add to create the syslog server entry.
8. After the required syslog servers are created, you can Modify, Test, Activate, Deactivate,
and Remove a selected syslog server, as shown in Figure 9-31.
Note: To enable TLS, all existing syslog servers must be deleted first. Then, you can
enable TLS and create the syslog servers.
You can configure the RSC access to stay open continuously, close 2 hours after RSC logs
off, or keep it closed. You can require IBM service to use an access code for remote support
connections with the HMC on your storage system. Click Generate to generate an access
code or enter your own access code. The access code is case-sensitive and must be fewer
than 16 characters.
Assist On-site
If AOS is used for an IBM Support connection to the HMC, you can Start, Stop, or Restart
the AOS service from the GUI, as shown in Figure 9-33.
To configure AOS, click Show Full Configuration and enter the required settings, as shown
in Figure 9-33.
Troubleshooting
Use the Troubleshooting tab to perform actions that resolve common issues with your
storage system:
Restart HMCs
If there are connectivity issues with the storage management software (DS GUI, DS CLI,
IBM Copy Services Manager, or IBM Spectrum Control), click Restart HMC. You can also
use this feature to restart an HMC after you modify the settings of the HMC.
Refresh GUI Cache
If there are inconsistencies between what is displayed in the DS GUI and the DS CLI or
IBM Spectrum Control, click Refresh GUI Cache.
Reset Communications Path
To restart the web servers and communication paths that are used by IBM ESSNI, click
Reset Communications Path.
Figure 9-34 shows the Troubleshooting tab.
GUI Preferences
Use the GUI Preferences tab that is shown in Figure 9-35 to set the following options for the
DS GUI:
Login Message
With an administrator role, you can enter a message that is displayed when users log in to
either the DS GUI or the DS CLI.
General GUI settings
On the General tab of the GUI Preferences window, you can set the default logout time for
the DS GUI.
When the storage pools are created, arrays are first assigned to the pools, and then volumes
are created in the pools. FB volumes are connected through host ports to an open system
host. CKD volumes require LSSs to be created so that they can be accessed by an IBM Z
host.
Pools must be created in pairs to balance the storage workload. Each pool in the pool pair is
controlled by a processor node (either Node 0 or Node 1). Balancing the workload helps to
prevent one node from performing most of the work and results in more efficient I/O
processing, which can improve overall system performance. Both pools in the pair must be
formatted for the same storage type, either FB or CKD storage. Multiple pools can be created
to isolate workloads.
When you create a pool pair, all available arrays can be assigned to the pools, or the choice
can be made to manually assign them later. If the arrays are assigned automatically, the
system balances them across both pools so that the workload is distributed evenly across
both nodes. Automatic assignment also ensures that spares and device adapter (DA) pairs
are distributed equally between the pools.
If the storage connects to an IBM Z host, you must create the LSSs before you create the
CKD volumes.
It is possible to create a set of volumes that share characteristics, such as capacity and
storage type, in a pool pair. The system automatically balances the capacity in the volume
sets across both pools. If the pools are managed by Easy Tier, the capacity in the volumes is
automatically distributed among the arrays. If the pools are not managed by Easy Tier, it is
possible to choose to use the rotate capacity allocation method, which stripes capacity
across the arrays.
When you plan your configuration with the DS8900F, all volumes, including standard
provisioned volumes, use metadata capacity when they are created, which causes the usable
capacity to be reduced. The 1 gibibyte (GiB) extents that are allocated for metadata are
subdivided into 16 mebibyte (MiB) subextents. The metadata capacity of each volume that is
created affects the configuration planning.
If the volumes must connect to an IBM Z host, the next steps of the configuration process are
completed on the host. For more information about logically configuring storage for IBM Z,
see 9.7, “Logical configuration for Count Key Data volumes” on page 292.
If the volumes connect to an open system host, map the volumes to the host, and then add
host ports to the host and map them to FC ports on the storage system.
FB volumes can accept I/O only from the host ports of hosts that are mapped to the volumes.
Host ports are zoned to communicate only with certain FC ports on the storage system.
Zoning is configured either within the storage system by using FC port masking, or on the
SAN. Zoning ensures that the workload is spread correctly over FC ports and that certain
workloads are isolated from one another.
Host configuration is simplified by the DS8900F microcode. Host ports are now automatically
updated and host mappings can be performed during the volume creation step of the logical
configuration. In addition, host port topology can be safely changed by using the DS GUI and
DS CLI. New host commands are available for DS CLI to make, change, delete, list, and show
a host connection. For more information, see Chapter 10, “IBM DS8900F Storage
Management Command-line Interface” on page 339.
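As an illustration of these newer host commands only (the host name, host type, and WWPN are
placeholders; verify the exact parameters with help mkhost and help mkhostport on your code
level), mkhost defines a host object, mkhostport adds a host port to it, and lshost lists the
result:
dscli> mkhost -type VMware esx_host_01
dscli> mkhostport -host esx_host_01 10000000C9A1B2C3
dscli> lshost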
Note: Deleting a pool with volumes is available in the GUI. A warning is displayed, and the
user must enter a code that is presented by the DS8900F to confirm the delete. A “force
deletion” option is also available. For more information, see Figure 9-88 on page 306.
Note: If the requirement is to create a single pool, see “Creating a single pool” on
page 277.
2. Click the Create Pool Pair tab, as shown in Figure 9-37. The Create Pool Pair window
opens.
Note: You can automatically assign arrays when creating a pool pair. The arrays are
created with the default redundant array of independent disks (RAID) type, RAID 6. To
configure other supported RAID types, select the Custom option under the Create Pool
Pair dialog, or assign arrays manually to an existing storage pool from the Unassigned
Arrays. RAID 5 needs a Request for Price Quotation (RPQ), but it is not recommended.
For more information, see “Creating Fixed-Block pools: Custom” on page 274.
3. Specify the pool pair parameters, as shown in Figure 9-38 on page 273:
– Storage type: Ensure that Open Systems (FB) is selected.
– Name prefix: Add the pool pair name prefix. A suffix ID sequence number is added
during the creation process.
Figure 9-38 Creating an FB pool pair and assigning arrays
4. Select from the listed drive types and select the number of arrays for each drive type that
you want to assign to the pool pair.
Important: The number of specified arrays must be even. Trying to specify an odd
number results in a message that states “Arrays must be spread evenly across the
pool pair”. The GUI increases the number of arrays by one to achieve an even
number.
5. When pool pair parameters are correctly specified, click Create to proceed. Figure 9-39
shows a pool pair that is created and assigned arrays.
Available options for extent size are 1 GiB (large), or 16 mebibytes (MiB) (small). Small
extent size is the preferred option because it provides better capacity utilization. For large
systems that use Easy Tier, it might be preferable to use large extents. For an in-depth
description about large and small extents, see Chapter 4, “Virtualization concepts” on
page 107.
Note: RAID 5 is supported only for drives less than 1 TB and requires an RPQ. If
selected, you must acknowledge your understanding of the risks that are associated
with RAID 5 before continuing. For more information about the supported drive types
and available RAID levels, see Chapter 2, “IBM DS8900F hardware components and
architecture” on page 25.
7. When the pool pair parameters are correctly specified, click Create to proceed, as shown
in Figure 9-40 on page 275.
Figure 9-40 Create FB pools (Custom)
2. Select the target pool from the drop-down list, and the RAID level that you want.
3. Select the Redistribute checkbox to redistribute all existing volumes across the pool,
including the new array.
4. Click Assign.
Note: In a pool that is managed by Easy Tier, redistributing volumes across the pool is
automatic. This redistribution is called Dynamic Pool Reconfiguration. For more
information, see IBM DS8000 Easy Tier (Updated for DS8000 R9.0), REDP-4667.
Creating a single pool
Occasionally, you are required to create a single pool, as opposed to creating a pool pair for
balancing a workload. To create a single storage pool, complete these steps:
1. Create a pool pair, as shown in Figure 9-42. However, do not assign any arrays to the new
pool pair.
Figure 9-45 Views for Volumes
2. Selecting one of the first two options opens a view listing all the volumes or pools on the
system. Figure 9-46 shows the Volumes by Pool view.
3. From this view, click Create Volumes. The Create Volumes dialog opens, as shown in
Figure 9-47.
– (Optional) Host: Optionally, map the volumes to a target host or host cluster.
– Provisioning: Select the type of storage allocation:
• Standard: Fully provisioned volume
• Thin provisioning: Thin provisioning defines logical volume sizes that are larger than
the usable capacity installed on the system. The volume allocates capacity on an
as-needed basis as a result of host-write actions. The thin provisioning feature
enables the creation of extent space-efficient (ESE) logical volumes.
The administrator or user, while creating the new volumes, can assign the address range
to the volume in the Advanced section, as shown in Figure 9-49. It is possible to specify
the volumes by using the T10 Data Integrity Field (DiF)/Protection Information. After you
specify the volume set that you want to create, click Save. Then, either define another
set by selecting ⊕ New Volume Set, or, after all the volume sets are specified, click
Create to create them all at once.
Tips:
By providing a target host or host cluster, you can create volumes and map them to
the host in one step.
Selecting the suitable range of addresses for the new volume set is important from
the copy service planning point of view and the CPC preferred path affinity. After you
create a volume, you cannot change its address.
When FlashCopy is used on FB volumes, the source and the target volumes must
have the same protection type, that is, they both must use T10-DIF or standard.
Note: Release 9 and later supports Dynamic Volume Expansion (DVE) of IBM i 050
and 099 volume types in increments of 1 - 2000 GB. The minimum software level of the
IBM i hosts must be IBM i 7.3 TR6 or IBM i 7.4 and later.
Optionally, you can map the volumes to a defined IBM i host in this step too.
Further volume sets can be prepared and saved before you create all of the defined sets at
once.
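A hedged DS CLI equivalent follows (the pool IDs, capacities, name prefix, and volume IDs are
examples only). The first command creates fully provisioned volumes, and the second uses
-sam ese to create thin-provisioned (extent space-efficient) volumes; the #h wildcard in the
name is replaced by the hexadecimal volume ID:
dscli> mkfbvol -extpool P0 -cap 100 -name open_#h 1000-1003
dscli> mkfbvol -extpool P1 -cap 100 -sam ese -name thin_#h 1100-1103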
Setting the Fibre Channel port topology
For an open system host to access FB volumes that are configured on the DS8900F, the host
must be connected to the DS8900F through Fibre Channel. The Fibre Channel Protocol (FCP)
must be configured on the FC port so that the host can communicate with the volume on the
DS8900F.
DS8900F has two kinds of host adapters: 4-port 16-gigabit Fibre Channel (GFC) and 4-port
32 GFC (referred to in the DS GUI as 16 Gbps or 32 Gbps). Each port can be independently
configured to one of the following FC topologies:
FCP: Also known as FC-switched fabric (which is also called switched point-to-point) for
open system host attachment, and for Metro Mirror (MM), Global Copy (GC), Global Mirror
(GM), and Metro/Global Mirror (MGM) connectivity
FICON: To connect to IBM Z hosts, and for zGM connectivity
Note: With DS8900F, Fibre Channel Arbitrated Loop (FC-AL) is no longer supported.
To set the FC port topology for open system hosts, complete the following steps:
1. From the DS GUI left navigation menu, click Settings → Network and select Fibre
Channel Ports to open the Fibre Channel Ports window (Figure 9-51).
2. Select the port to modify. Multiple ports can be selected by using the Shift or Ctrl key.
4. Choose from the available protocols to modify the selected host adapter port or ports. For
open system host attachment, select SCSI FCP (Small Computer System Interface
(SCSI) FCP).
5. Click Modify to perform the action.
For reference, a host port is the FC port of the host bus adapter (HBA) FC adapter that is
installed on the host system. It connects to the FC port of the host adapter that is installed on
the DS8900F.
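The DS CLI equivalent is the setioport command. In the following sketch, the port ID is a
placeholder; list the installed ports and their current topology with lsioport first:
dscli> lsioport
dscli> setioport -topology scsi-fcp I0010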
To configure an open system host, complete the following steps:
1. Create a host: Configure a host object to access the storage system.
2. Assign a host port: Assign a host port to a host object by identifying one of the WWPNs of
the HBA that is installed on the host system.
3. Modify the FC port mask: Modify the FC port mask (on the DS8900F) to allow or disallow
host communication to and from one or more ports on the system.
Creating clusters
To configure a cluster object, complete these steps:
1. Click the Hosts icon from the DS GUI navigation pane on the left.
2. Select Hosts from the menu, as shown in Figure 9-53.
3. The Hosts window opens, as shown in Figure 9-54. Click Create Cluster.
4. The Create Cluster window opens, as shown in Figure 9-54. Specify the name of the
cluster, and click Create.
Creating hosts
To configure a host object, complete these steps:
1. Click the Hosts icon from the DS GUI navigation pane on the left.
2. Click Hosts, as shown in Figure 9-53.
Note: This window always appears when host port definitions are made by using the
DS CLI (mkhostport) and are not yet fully reflected in the GUI. So, the GUI offers to
move the CLI definitions fully into the GUI.
If canceled or closed, the Suggested Tasks window can be reopened by clicking the
attention message that is shown in Figure 9-56.
4. Click Automatically Create Hosts. A list of detected hosts and unassigned host ports is
displayed, as shown in Figure 9-57. Verify the list and click Create to complete the
assignment task automatically.
5. Select Manually Create Hosts to manually assign host ports to the existing hosts or click
Create Host to create hosts to assign the ports.
6. In the Hosts window, click Create Host, as shown in Figure 9-58.
7. The Add Hosts window opens (Figure 9-59). Specify the following items:
– Name: The user-defined name for the host to add.
– Type: The operating system (OS) of the host to add.
– Host port (WWPN): Optionally, provide the WWPN of the host port. If the host port has
logged in to the system, it can be selected from the Host Port (WWPN) list, as shown in
Figure 9-58.
Figure 9-59 Add Host window that shows options for the host type
3. The Assign Host window opens. From the drop-down list, select the cluster to which to add
the host.
4. Click Assign to complete the action.
5. Repeat the previous actions to add all hosts that are required in the cluster. After you
complete all the hosts, assigned hosts are listed under the cluster in the Hosts window, as
shown in Figure 9-61.
Assigning host ports
After the host is added, host ports must be assigned to the defined host. To do so, complete
these steps:
1. From the Hosts window, select the host to which to assign the host port. Either right-click,
or from the Actions tab, select Assign Host Port, as shown in Figure 9-62.
Note: When there are multiple FC connections to the DS8900F from a host, you should
use native multipathing software that is provided by the host OS to manage these
paths.
Figure 9-63 Host properties showing the Fibre Channel port mask
If the system administrator wants to restrict the FC ports that can communicate with the host,
FC port masking must be defined. Modify the FC port mask to allow or disallow host
communication to and from one or more ports on the system.
The properties of the selected host now reflect the number of FC ports that have access, as
shown in Figure 9-65.
Note: When mapping volumes to a cluster, volumes that are mapped to the cluster are
public volumes that are seen by all hosts in the cluster. Volumes that are mapped to a
single host in a cluster are private volumes.
It is the responsibility of the system administrator to ensure that the correct clustering
software is implemented to ensure data integrity when a volume is mapped to more
than one host.
Figure 9-67 shows a mixture of public and private volumes that are mapped to a cluster.
9.7.2 Creating CKD storage pools
For the best performance and a balanced workload, two pools must be created. The DS GUI
helps the system administrator to create a balanced configuration by creating pools as a pair.
The pools are configured so that one pool of the pair is managed by node 0, and the other
pool of the pair is managed by node 1.
Figure 9-68 Creating the CKD pool pair and assigning arrays to the pool pair
5. If multiple drive classes are installed on the storage system, decide how many arrays of
each drive class are required in each pool.
6. Ensure that storage type CKD is selected.
7. Assign a name to the pool pair. This name is used as the prefix for the pool pair ID.
8. Click Create.
9. After the pool pair creation is complete, the arrays are assigned to the pool pair, as shown
in Figure 9-69. The DS GUI configures the selected arrays for CKD storage and distributes
them evenly between the two pools.
Note: You can automatically assign arrays when creating a pool pair. The arrays are
created with the default RAID type RAID 6. To configure other supported RAID types,
you must use the advanced configuration for pool creation, or assign the arrays
manually to an existing storage pool from the unassigned arrays. For more information,
see “Creating CKD Pools: Advanced configuration” on page 294.
Figure 9-69 CKD pool pair that is created and arrays that are assigned to the pool pair
Figure 9-70 shows the custom configuration options for this procedure.
Note: RAID 6 is the recommended and default RAID type for all drives.
RAID 5 is allowed for drives less than 1 TB with an accepted RPQ. When configuring RAID 5
for the supported drives, you must accept the disclaimer acknowledging that you understand
the risks that are associated with RAID 5, as shown in Figure 9-71. A timestamped record
with user information is created for audit purposes.
For more information about the supported drive types and available RAID levels for DS8900F
models, see Chapter 2, “IBM DS8900F hardware components and architecture” on page 25.
Figure 9-72 shows the Arrays by Pool window, which shows how to assign the arrays.
Note: You can create LSS ranges, exact volume address ranges, and aliases in one step.
For an example, see 9.7.4, “Creating CKD volumes” on page 299.
The DS8000 LSS emulates a CKD storage control unit image (LCU). A CKD LSS must be
created before CKD volumes can be associated to the LSS.
3. Click the Create CKD LSSs tab from the Volumes by LSS window, as shown in
Figure 9-73.
4. The Create CKD LSSs window opens, as shown in Figure 9-74. Enter the required
information. After you enter the values for the LSS range, subsystem identifier (SSID)
prefix, and LSS type, click Create. The Need Help icon shows information about how the
unique SSID for each LSS is determined based on the SSID prefix that is provided.
Note: The CKD LSSs cannot be created in an address group that already contains FB
LSSs. The address groups are identified by the first digit in the two-digit LSS ID.
5. The unique SSID for each LSS is automatically determined by combining the SSID prefix
with the ID of the LSS. The SSID can be modified if needed, as shown in Figure 9-76.
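For example, an SSID prefix of BC combined with LSS ID 20 would typically produce
SSID BC20.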
Important: This situation is important in an IBM Z environment where the SSIDs were
previously defined in input/output definition files (IODFs) and might differ from the
SSIDs that are automatically generated by the Storage Management GUI. Be careful
when changing SSIDs because they must be unique in an IBM Z environment, and they
are used in Copy Services definitions. A change must not be made unless you first
remove all related Copy Services relationships (including PPRC paths).
Note: Occasionally, the DS8900F GUI view does not immediately update after
modifications are made. After you modify the SSID, if the view is not updated, refresh
the GUI cache to reflect the change by clicking Settings → Support →
Troubleshooting → Refresh GUI cache. For more information, see “Troubleshooting”
on page 268.
Note: The storage administrator can create configurations that specify new LSS
ranges, exact volume address ranges, and aliases in one step.
4. Determine the LSS range for the volumes that you want to create.
5. Determine the name prefix and the quantity of volumes to create for each LSS.
Enter a prefix name and capacity for each group. The capacity can be specified in three
ways:
– Device: Select one of these choices from the list: 3380-2, 3380-3, 3390-1, 3390-3,
3390-9, 3390-27, 3390-54, or 3390-A (extended address volume (EAV)). These device
types have a fixed capacity that is based on the number of cylinders of each model. A
3390 disk volume contains 56,664 bytes for each track, 15 tracks for each cylinder, and
849,960 bytes for each cylinder. The most common 3390 model capacities are shown:
• 3390-1 = 1113 cylinders
• 3390-3 = 3339 cylinders
• 3390-9 = 10017 cylinders
• 3390-27 = 30051 cylinders
• 3390-54 = 60102 cylinders
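As a worked example of how these capacities translate, a 3390-3 volume with 3339
cylinders provides 3339 x 849,960 = 2,838,016,440 bytes, or approximately 2.8 GB.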
In the first version of PAV, the disk controller assigns a PAV to a UCB (static PAV). With the
second version of PAV processing, the Workload Manager (WLM) reassigns PAVs to new
UCBs from time to time (dynamic PAV).
The restriction for configuring PAVs is that the total number of base and alias addresses for
each LSS cannot exceed 256 (00 - FF). These addresses must be defined in the IODF so that
they match the correct type, base, or alias.
Typically, when you configure PAVs in the IODF, the base addresses start at 00 and increment
toward FF. Alias addresses typically are configured to start at FF and decrement (decrease)
toward 00. A system administrator might configure only 16 or 32 aliases for each LSS.
However, no restrictions exist other than the total of 256 addresses that are available to the
LSS (bases and aliases).
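For example, an LSS that uses 64 base addresses (00 - 3F) can define up to 192 aliases,
which occupy addresses FF down to 40.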
The DS GUI configures aliases in this manner, starting at FF and descending. The storage
administrator can configure many aliases against the LSS, in which case those aliases are
assigned to the lowest address in the LSS, or can define any number of aliases to a specific
base address. For more information about PAVs,
see IBM DS8900F and IBM Z Synergy DS8900F: Release 9.3 and z/OS 2.5, REDP-5186.
Figure 9-78 Creating aliases for the LSS
3. Select Manage Aliases to open the Aliases for LSS xx (where xx = 00 - FE) dialog box.
Click Create Aliases to open the dialog box that is shown in Figure 9-79. Enter the
number of aliases to create. The example in Figure 9-79 shows 32 aliases being created
for LSS 80.
4. Click Create. The aliases are created for LSS 80, as shown in Figure 9-80.
Figure 9-81 Thirty-two aliases against the lowest base volume address
6. To display the aliases, select the base volume with those aliases that are assigned to it
and then click Action → Manage Aliases.
A list with the addresses of all aliases that are assigned to the base volume is displayed,
as shown in Figure 9-82.
Figure 9-82 List of aliases with their alias IDs starting at FF and in descending order
Note: The alias IDs start at FF and they are in descending order, as shown in
Figure 9-83 on page 303.
7. Aliases can also be created for a single base volume by selecting the base volume,
right-clicking, and selecting Action → Manage Aliases. Then, select Create Aliases.
Enter the number of aliases that you want for the base volume, as shown in Figure 9-83.
The five aliases for a single base volume address (ITSO_CKD_8006) are created with a
starting address of DF and an ending address of DB, in descending order, as shown in
Figure 9-84. (Aliases E0 - FF were created previously.)
Figure 9-84 List of five aliases that are created for a single base address
To set the FC port protocols of the FC ports that the host uses to communicate with the
DS8900F, complete these steps:
1. Select Settings → System → Fibre Channel Ports, and then select Actions → Modify
Fibre Channel port protocols.
2. Select one or multiple ports to modify and select Actions → Modify.
3. The Modify Protocol for the selected Ports window opens. Select the FICON protocol.
4. Click Modify to set the topology for the selected FC ports, as shown in Figure 9-85.
The following example shows the steps that are required to expand an IBM i volume of type
099 (see Figure 9-86 on page 305):
1. Go to any Volumes view, such as Volumes → Volumes by Pool.
2. Select the volume and select Actions → Expand. You can also open the Actions menu
by selecting the volume and right-clicking it.
3. The Expand Volume dialog opens. Enter the new capacity for the volume and click
Expand.
4. A warning appears that informs you that certain OSs do not support this action, and it asks
for confirmation to continue the action. Verify that the OS of the host to which the volume is
mapped supports the operation, and click Yes.
A task window opens and is updated with progress on the task until it is completed.
Figure 9-86 Expanding a volume
The storage administrator can also expand the Safeguarded capacity, as shown in
Figure 9-87. For more information, see IBM DS8000 Safeguarded Copy (Updated for DS8000
R9.2), REDP-5506.
You can instruct the GUI to force the deletion of volumes that are in use by selecting the
optional checkbox in the Delete Volumes dialog box, as shown by #1 in Figure 9-89 on
page 307. This setting does not apply to volumes that are in a Safeguarded Copy
relationship.
The following example shows the steps that are needed to delete an FB volume that is in use:
1. Go to the hosts-centric Volumes view by selecting Volumes → Volumes by Host.
2. Select the volume to be deleted, and then select Actions → Delete, as shown by #2 in
Figure 9-89 on page 307.
Figure 9-91 Assigning a volume to a drive class
For more information about the settings that are available to configure Easy Tier, see “Easy
Tier settings” on page 260.
For more information about Easy Tier, see IBM DS8000 Easy Tier (Updated for DS8000
R9.0), REDP-4667.
Alerting events
Error and warning alerts are displayed as badges on the Alerts (Bell) icon in the banner of
the DS GUI (shown in Figure 9-93). Click the Alerts icon to see the alerts. Click the
specific alert to view the corresponding event in the Events window.
This section provides more information about these hardware components, including how to
see more information about them from the system window.
Note: For the DS8910F Rack-Mounted model 993, the status of the various hardware
components is available from the System Health Overview. You can click each component
to see more details about it. There is no hardware view for this model.
Processor nodes
Two processor nodes exist that are named ID 0 and ID 1. Each node consists of a CPC and
the Licensed Internal Code (LIC) that runs on it. You also can display the system health
overview by clicking the System Health View icon, as shown in Figure 9-94.
Here are the node attributes that are shown in Figure 9-95:
ID: The node identifier, which is node 0 or node 1.
State: The current state of the node is shown:
– Online: The node is operating.
– Initializing: The node is starting or not yet operational.
– Service required: The node is online, but it requires service. A call home was initiated
to IBM Hardware Support.
– Service in progress: The node is being serviced.
– Drive service required: One or more drives that are online require service. A call home
was initiated.
– Offline: The node is offline and non-operational. A call home was initiated.
Release: The version of the Licensed Machine Code (LMC) or hardware bundle that is on
the node.
Processor: The type and configuration of the processor that is on the node.
Memory: The amount of raw system memory that is installed in the node.
Location Code: Logical location of the processor node.
Here are the attributes that are displayed for the HMC component in Figure 9-96 on
page 312:
Name: The name of the HMC as defined by the user.
State: The status of the HMC is shown:
– Online: The HMC is operating normally.
– Code updating: The HMC software is being updated.
– Service required: The HMC is online, but it requires service. A call home was initiated
to IBM Hardware Support.
– Offline with redundancy: The HMC redundancy is compromised. A call home was
initiated to IBM Hardware Support.
– Offline: The HMC is offline and non-operational.
Release: The version of the LMC that is installed on the HMC.
Host address: The IP address for the host system of the HMC.
Role: The primary or secondary HMC.
Location Code: The logical location code for the HMC. If the HMC is external, the location
is identified as off-system.
Storage enclosures
A storage enclosure is a specialized chassis that houses and powers the flash drives in the
DS8900F storage system. The storage enclosure also provides the mechanism to allow the
drives to communicate with one or more host systems. All enclosures in the DS8900F are
High-Performance Flash Enclosures (HPFEs). These enclosures contain flash drives, which
are Peripheral Component Interconnect Express (PCIe)-connected to the I/O enclosures.
To view detailed information about the enclosures that are installed in the system, select
Storage Enclosures, as shown in Figure 9-94 on page 311. The attributes of the storage
enclosure are shown in Figure 9-97.
A drive is a data storage device. From the GUI perspective, a drive can be either a Flash Tier
0, Flash Tier 1, or Flash Tier 2 drive. To see the data storage devices and their attributes from
the Hardware view, click a storage enclosure when the magnifying glass pointer appears
(Figure 9-92 on page 310). This action shows the storage enclosure and installed storage
devices in more detail (Figure 9-98). You can also select Drives from the System Health
Overview to display information for all installed drives.
I/O enclosures
The I/O enclosure contains the I/O adapters. To see the I/O adapters and their attributes from
the Hardware view, click an I/O enclosure (Figure 9-92 on page 310) when the magnifying
glass pointer appears.
This action displays the I/O enclosure adapter view (rear of the enclosure).
To see the attributes for all installed DAs or host adapters in the System Health overview,
select Device Adapters or Host Adapters from System Health Overview (Figure 9-94 on
page 311) to open the dialog that is shown in Figure 9-99.
The attributes for the I/O enclosure are described in the following list:
ID: The enclosure ID.
State: The current state of the I/O enclosure is one of the following states:
– Online: Operational and normal.
– Offline: Service is required. A service request to IBM was generated.
– Service: The enclosure is being serviced.
Location Code: Logical location code of the I/O enclosure.
DA: Number of DAs that are installed.
Host adapter: Number of host adapters that are installed.
The FC ports are ports on a host adapter that connect the DS8900F to hosts, switches, or
another storage system either directly or through a switch.
9.13.2 Viewing components health and state from the system views
The Hardware view and System Health overview are useful tools in the DS GUI to visualize
the state and health of hardware components in the system. Two sample scenarios are
illustrated here:
Failed drives: Figure 9-101 shows two failed drives in one of the storage enclosures of the
system. Clicking this enclosure from the Hardware view provides a detailed view of the
enclosure with all the drives, including the failed drives. Hovering your cursor over the
failed drives provides more information about them.
The same information can be obtained by clicking the Show System Health Overview
icon, as shown in Figure 9-101.
The System Health Overview opens and shows the failed state of the drives, as shown in
Figure 9-102.
The same information can be obtained from the System Health Overview, as shown in
Figure 9-104.
9.13.3 Monitoring system events
The Events window displays all events that occurred within the storage system, whether they
are initiated by a user or by the system.
The Events table updates continuously so that you can monitor events in real time and track
events historically.
To access the Events window, click Monitoring → Events. The Events window can also be
displayed by clicking the Event Status icon, as shown in Figure 9-105.
The events can be exported as a CSV file by selecting Export Table on the Events window.
The Export Table action creates a CSV file of the events that are displayed in the Events table
with detailed descriptions.
primary and
secondary,Notcapable,Disabled,0x0002,002107,996,0000000DMC01,Unknown
I0002,0x000001,0x5005076306009339,0x5005076306FFD339,Auth Capable
Only,Disabled,1,0,0,0,0x5005076306005339,0x5005076306FFD339,0x0001,002107,996,I
BM,0000000DMC01,0x5005076306005339,0x5005076306FFD339,0x000002,Yes,Mirroring
primary and
secondary,Notcapable,Disabled,0x0001,002107,996,0000000DMC01,Unknown
I0003,0x000001,0x500507630600D339,0x5005076306FFD339,Auth Capable
Only,Disabled,1,0,0,0,0x10000000C9CED91B,0x20000000C9CED91B,0x0000,.....
Figure 9-107 illustrates the formatting that occurs when you import the data in XLS format.
Data is presented as one or more lines per port, depending on the number of logins. This data
illustrates four ports on one adapter, which are split into three captures for presentation.
The first capture shows the local port columns: Local Port ID, FC_ID, Local Port WWPN,
Local Port WWNN, Security Capability, Security Config, Logins, Security Capable Logins,
Authentication Only Logins, and Encrypted Logins.

The second capture shows the attached port columns:
Attached Port WWPN  Attached Port WWNN  Interface ID  Type     Model    Manufacturer  Attached Port SN
0x202E8894715EC810  0x10008894715EC810  0x002E        8960     F64      IBM           0000010550HA
0x5005076306009339  0x5005076306FFD339  0x0002        2107     996      IBM           0000000DMC01
0x5005076306005339  0x5005076306FFD339  0x0001        2107     996      IBM           0000000DMC01
0x10000000C9CED91B  0x20000000C9CED91B  Unknown       Unknown  Unknown  Unknown       Unknown

The third capture shows the remote port columns:
Remote Port WWPN    Remote Port WWNN    FC_ID     PRLI Complete  Login Type                       Security State  Security Config  Interface ID  Type     Model    Remote Port SN  System Name
0x5005076306041339  0x5005076306FFD339  0x010600  Yes            Mirroring secondary              Not capable     Disabled         0x0040        2107     996      0000000DMC01    Unknown
0x5005076306009339  0x5005076306FFD339  0x000001  Yes            Mirroring primary and secondary  Not capable     Disabled         0x0002        2107     996      0000000DMC01    Unknown
0x5005076306005339  0x5005076306FFD339  0x000002  Yes            Mirroring primary and secondary  Not capable     Disabled         0x0001        2107     996      0000000DMC01    Unknown
0x10000000C9CED91B  0x20000000C9CED91B  0x000002  Yes            FCP host                         Not capable     Disabled         Unknown       Unknown  Unknown  Unknown         Unknown
The audit log is an unalterable record of all actions and commands that were initiated by
users on the system through the DS GUI, DS CLI, DS Network Interface (DSNI), or
IBM Spectrum Control. The audit log does not include commands that were received from
host systems or actions that were completed automatically by the storage system. The audit
log is downloaded as a compressed text file.
You can create your own performance graphs for the storage system, pools, volumes, and FC
ports. You can use predefined graphs and compare performance statistics for multiple pools,
up to six volumes at a time, or FC ports.
Figure 9-110 shows a comprehensive view of the available performance functions.
To learn how to obtain statistics from the DS CLI, see 10.5, “Metrics with DS CLI” on
page 391.
Important: All the listed performance statistics are averaged over 1 minute. The
performance graphs cover data that is collected for the last 7 days. For long-term
performance statistics, use IBM Spectrum Control.
HOST performance metrics:
– IOPS: Number of requests that are processed on the host, in KIOPS for the default I/O
operations (read, write, and total).
– Latency: Response time in ms for the default I/O operations (read, write, and average)
that is processed on the host.
– Bandwidth: Number of MBps for the selected bandwidth type (read, write, and total)
that is processed on the host.
FC port performance metrics:
– IOPS: Number of processed requests in KIOPS for the selected I/O operations (read,
write, and total) on the FC port.
– Latency: Response time in ms for the selected I/O operations on the FC port.
– Transfer Size: Number of KB per operation for the selected I/O operations on the FC
port.
– Bandwidth: Number of MBps for the selected bandwidth type on the FC port.
You can use these performance metrics to define your own graphs. To add the custom graph
to the Favorites menu, click the star icon, as shown in Figure 9-109 on page 322. You can
also export the sample data that is used to create the performance graphs into a CSV file by
clicking the Save icon, as shown in Figure 9-111.
For detailed performance analysis, you can define more detailed statistics and graphs, which
can help identify and isolate problems. You can perform the following actions:
Define your own performance graphs on demand.
Add defined graphs to the Favorites menu.
Pin defined graphs to the toolbar.
Set defined graphs as a default in the Performance window.
Rename or delete your graphs. You cannot delete predefined graphs.
Change the time range of displayed graphs.
Figure 9-114 demonstrates how to complete the following steps (each step number is
referenced in the figure):
1. Select Array from the resources to monitor.
2. Select the arrays to monitor.
3. Select the metrics that you want (I/O, Bandwidth, or Utilization).
To create a graph of a pool’s performance, see Figure 9-113 on page 326, which shows how
to create a chart, and then complete the following steps:
1. Select Pool from the resources to monitor.
2. Select the pool name to monitor.
3. Select the metrics that you want.
Figure 9-116 shows the metric options that are available for the selected pool.
You can monitor Easy Tier directly in the DS GUI by using the workload categorization report
and migration report. Figure 9-118 shows the Easy Tier pool level workload settings for
creating the Total Data Movement report that is shown in Figure 9-119.
Figure 9-118 Example of Easy Tier settings for total Data Movement
Figure 9-119 Easy Tier pool level workload reports (Total Data Movement)
An example report of Easy Tier Data Activity for a pool is shown in Figure 9-121.
Creating system performance graphs
Figure 9-113 on page 326 shows the steps to start a chart. To create the graph of the
system’s performance as shown in Figure 9-122, complete the following steps:
1. Select System from the resources to monitor.
2. Select the system to monitor.
3. Select the metrics that you want.
Creating volume performance graphs
Start a chart as shown in Figure 9-113 on page 326. To create the Easy Tier performance
graphs for the volume, as shown in Figure 9-126, complete the following steps:
1. Select Volume from the resources to monitor.
2. Select the volumes to monitor (you can select up to six volumes at a time for a graph).
3. Select the metrics that you want for I/O: Bandwidth or Easy Tier.
Note: The Performance action on the Volume, Host, and LSS resources is also available
in all pages where they are shown. For any Volume, Host, or LSS, select Performance
from the Action menu and then click the metric that you want to monitor. The performance
window for the selected resource and metric opens. Figure 9-128 shows the performance
actions and metrics that are available for a volume on the Volume by LSS page.
Figure 9-129 Adding a graph to the Favorites menu
The following list shows all of the statistics that are available for port error checking:
Total errors. The total number of errors that were detected on the FC port.
Error frame: An FC frame was received that was not consistent with the FCP.
Link failure: FC connectivity with the port was broken. This type of error can occur when
the system that is connected to the port is restarted, replaced, or serviced, and the FC
cable that is connected to the port is temporarily disconnected. It can also indicate a faulty
connector or cable. Link failures result in degraded performance of the FC port until the
failure is fixed.
Loss of sync: A synchronization loss error was detected on the FC link. This type of error
can occur when the system that is connected to the port is restarted, replaced, or
serviced, and the FC cable that is connected to the port is temporarily disconnected. It
also can indicate a faulty connector or cable. If a synchronization loss error persists, it can
result in a link failure error.
Loss of signal: A loss of signal was detected on the FC link. This type of error can occur
when the system that is connected to the port is replaced or serviced, and the FC cable
that is connected to the port is temporarily disconnected. It also can indicate a faulty
connector or cable. If a loss of signal error persists, it can result in a link failure error.
Cyclic redundancy check (CRC) error: An FC frame was received with CRC errors. This
type of error is often fixed when the frame is retransmitted. This type of error is often
recoverable and it does not degrade system performance unless the error persists and the
data cannot be relayed after retransmission.
Primitive sequence protocol error: A primitive sequence protocol error was detected. A
primitive sequence is an ordered set that is transmitted and repeated continuously to
indicate specific conditions within the port. The set also might indicate conditions that are
encountered by the receiver logic of the port. This type of error occurs when an
unexpected primitive sequence is received.
Transmission word count: A bit error was detected. A transmission word is the smallest
transmission unit that is defined in FC. This unit consists of four transmission characters,
4 x 10, or 40 bits. This type of error can include code violations, invalid special code
alignment, and disparity errors.
Link reset transmitted: The state of the FC port changed from active (AC) to link recovery
(LR1).
Link reset received: The state of the FC port changed from active (AC) to link recovery
(LR2) state.
Out of order data: A missing frame was detected. The frame was either missing from a
data sequence or it was received beyond the FC port’s sequence reassembly threshold.
Out of order acknowledgment (ACK): An out of order ACK frame was detected. ACK
frames signify that the transmission was received. The frame was either missing from a
data sequence or it was received beyond the FC port’s sequence reassembly threshold.
Duplicate frame: A frame that was detected as previously processed was received.
Invalid relative offset: A frame with an invalid relative offset parameter in the frame header
was received.
Sequence timeout: The FC port detected a timeout condition when a sequence initiator
was received.
Uncorrectable bad blocks: A data block with errors was unable to be fixed by Forward
Error Correction (FEC).
Correctable bad blocks: A data block with errors was fixed by FEC.
Transport mode write retries: A transport mode write operation retry was requested. The
buffer was not large enough to receive unsolicited data.
Note: This chapter illustrates only a few essential commands. For a list of all commands
and their parameters, see IBM DS8000 Series Command-Line Interface User's Guide,
SC27-9562.
The following list highlights a few of the functions that you can perform with the DS CLI:
Manage user IDs and passwords that can be used with DS GUI, DS CLI, and Hardware
Management Console (HMC).
Install activation keys for licensed features.
Manage storage complexes and units.
Configure and manage storage facility images (SFIs).
Create and delete redundant array of independent disks (RAID) arrays, ranks, and extent
pools.
Create and delete logical volumes.
Manage the host access to volumes.
Check the current Copy Services (CS) configuration that is used by the storage unit.
Create, modify, or delete CS configuration settings.
Integrate Lightweight Directory Access Protocol (LDAP) policy usage and configuration.
Implement encryption functions.
Single installation: In almost all cases, you can use a single installation of the current
version of the DS CLI for all of your system needs. However, it is not possible to test every
version of DS CLI with every Licensed Machine Code (LMC) level. Therefore, an
occasional problem might occur despite every effort to maintain that level of compatibility.
If you suspect a version incompatibility problem, install the DS CLI version that
corresponds to the LMC level that is installed on your system. You can have more than one
version of DS CLI installed on your system, each in its own directory.
Important: For more information about supported OSs, specific preinstallation concerns,
and installation file locations, see IBM Documentation.
Before you can install the DS CLI, ensure that Java 8 or later is installed. A suitable
level of Java might already be installed on many hosts. The installation program checks for
this requirement during the installation process and does not install the DS CLI if a suitable
version of Java is not already installed.
The installation can be performed through a shell, such as the bash or Korn shell or the
Windows command prompt, or through a GUI. If the installation is performed by using a
shell, it can run silently by using a profile file. The installation process also installs software
that allows the DS CLI to be uninstalled when it is no longer required.
This chapter focuses on the DS CLI that is natively run from another system interacting with
the DS8900F. Note that the DS CLI is also available through the DS GUI by using the
DS CLI option in the lower left of the dashboard.
After you ensure that Java 8 or later is installed, complete one of the following actions to
correct the Java virtual machine Not Found error:
Run the DS CLI installer again from the console, and provide the path to the Java virtual
machine (JVM) by using the LAX_VM option. The following examples represent paths to
the correct version of Java:
– For a Windows system, specify the following path:
dsclisetup.exe LAX_VM "C:\Program Files\java-whatever\jre\bin\java.exe"
Note: Because there is a space in the Program Files directory name, you must enclose
the path in quotation marks.
Note: If you use the LAX_VM argument, the installer attempts to use whatever JVM
that you specify, even if it is an unsupported version. If an unsupported version is
specified, the installation might complete successfully, but the DS CLI might not run
and return an Unsupported Class Version Error message. You must ensure that
you specify a supported version.
For instances where Java is already set up, the installation starts with running the
dsclisetup.exe program that is found in your installation media, as shown in Figure 10-1.
The DS CLI runs under UNIX System Services for z/OS, and has a separate FMID HIWN62M.
You can also install the DS CLI separately from IBM Copy Services Manager.
For more information, see IBM DS CLI on z/OS Program Directory, GI13-3563. You can use
the order number (GI13-3563) to search for it at IBM Publications Center.
After the installation is done, the first thing to do is to access your UNIX System Services for
z/OS. This process can vary from installation to installation. Ask your z/OS system
programmer how to access it.
Tip: Set your Time Sharing Option (TSO) REGION SIZE to 512 MB to allow the DS CLI to
run.
On our test system, we logged on to TSO, selected option 6 (ISPF Command Shell), and
issued the OMVS command to start the z/OS UNIX shell, as shown in Figure 10-2.
===> omvs
Figure 10-2 OMVS command to start the z/OS UNIX Shell
The default installation path for the z/OS DS CLI is /opt/IBM/CSMDSCLI. To run the DS CLI,
change your working directory to the installation path by issuing the following command, as
shown in Figure 10-3:
cd /opt/IBM/CSMDSCLI
IBM
Licensed Material - Property of IBM
...
GSA ADP Schedule Contract with IBM Corp.
-----------------------------------------------------------------------
Business Notice:
IBM's internal systems must only be used for conducting IBM's
business or for purposes authorized by IBM management.
-----------------------------------------------------------------------
===> cd /opt/IBM/CSMDSCLI
INPUT
ESC=¢ 1=Help 2=SubCmd 3=HlpRetrn 4=Top 5=Bottom 6=TSO 7=BackScr
8=Scroll 9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve
Figure 10-3 The cd /opt/IBM/CSMDSCLI command
IBM
Licensed Material - Property of IBM
...
GSA ADP Schedule Contract with IBM Corp.
$ cd /opt/IBM/CSMDSCLI
$
===> ./dscli
INPUT
ESC=¢ 1=Help 2=SubCmd 3=HlpRetrn 4=Top 5=Bottom 6=TSO 7=BackScr
8=Scroll 9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve
Figure 10-4 The ./dscli command
If you change your mind and decide to quit here, instead of typing ./dscli, press F2 to
activate the SubCmd, as shown in Figure 10-4 (2=SubCmd). The OMVS Subcommand line is
displayed, and you can issue a quit command.
As shown in Figure 10-5, the message CEE5210S The signal SIGHUP was received followed
by *** appears. Press Enter to quit OMVS.
IBM
Licensed Material - Property of IBM
...
IBM is a registered trademark of the IBM Corp.
...
-----------------------------------------------------------------------
$ cd /opt/IBM/CSMDSCLI
$
OMVS Subcommand ==> quit
SUBCOMMAND
ESC= 1=Help 2=SubCmd 3=Return 4=Top 5=Bottom 6=TSO 7=BackScr
8=Scroll 9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve
-------------------------------------------------------
CEE5210S The signal SIGHUP was received.
***
Figure 10-5 Sequence to leave the DS CLI
By using the DS CLI on z/OS, you can issue single commands, use a script mode, or go into
batch mode by using z/OS Job Control Language (JCL). Figure 10-6 shows how to access
the command interface.
Business Notice:
IBM's internal systems must only be used for conducting IBM's
business or for purposes authorized by IBM management.
-----------------------------------------------------------------------
$ cd /opt/IBM/CSMDSCLI
$ ./dscli
Enter the primary management console IP address: <enter-your-machine-ip-address>
Enter the secondary management console IP address:
Enter your username: <enter-your-user-name-as-defined-on-the-machine>
Enter your password: <enter-your-user-password-to-access-the-machine>
dscli> ver -l
...
dscli>
===>
INPUT
ESC=¢ 1=Help 2=SubCmd 3=HlpRetrn 4=Top 5=Bottom 6=TSO 7=BackScr 8=Scroll
9=NextSess 10=Refresh 11=FwdRetr 12=Retrieve
The commands that you run in the DS CLI on z/OS have the same syntax as on other
platforms. Some examples of those commands are shown in Figure 10-7.
dscli> lssi
Name ID Storage Unit Model WWNN State ESSNet
========================================================================================
IBM.2107-75ACA91 IBM.2107-75ACA91 IBM.2107-75ACA90 980 5005076303FFD13E Online Enabled
dscli> lsckdvol -lcu EF
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
===========================================================================================
ITSO_EF00 EF00 Online Normal Normal 3390-A CKD Base - P1 262668
ITSO_EF01 EF01 Online Normal Normal 3390-9 CKD Base - P1 10017
dscli> mkckdvol -dev IBM.2107-75ACA91 -cap 3339 -datatype 3390 -eam rotateexts -name ITSO_#h -extpool P1
EF02-EF02
CMUC00021I mkckdvol: CKD Volume EF02 successfully created.
dscli> lsckdvol -lcu EF
Name ID accstate datastate configstate deviceMTM voltype orgbvols extpool cap (cyl)
===========================================================================================
ITSO_EF00 EF00 Online Normal Normal 3390-A CKD Base - P1 262668
ITSO_EF01 EF01 Online Normal Normal 3390-9 CKD Base - P1 10017
ITSO_EF02 EF02 Online Normal Normal 3390-3 CKD Base - P1 3339
dscli> rmckdvol EF02
CMUC00023W rmckdvol: The alias volumes associated with a CKD base volume are automatically deleted
before deletion of the CKD base volume. Are you sure you want to delete CKD volume EF02? [y/n]: y
CMUC00024I rmckdvol: CKD volume EF02 successfully deleted.
dscli>
===>
Example 10-3 shows a JCL to run several commands in a row, each in single-shot mode.
//STDPARM DD *
SH
echo "Command 1:";
/opt/IBM/CSMDSCLI/dscli -cfg /opt/IBM/CSMDSCLI/profile/aca91.profile
ver -l;
echo "Command 2:";
/opt/IBM/CSMDSCLI/dscli -cfg /opt/IBM/CSMDSCLI/profile/aca91.profile
lsrank;
echo "Command 3:";
/opt/IBM/CSMDSCLI/dscli -cfg /opt/IBM/CSMDSCLI/profile/aca91.profile
lsarray;
/*
Example 10-4 shows an example of the ver command, where the customer uses an earlier
DS CLI version.
The installsoftware command is used to install a new version of IBM Copy Services
Manager software on the HMC, as shown in Example 10-6.
The default user ID is admin and the password is admin. The system forces you to change the
password at the first login. If you forget the admin password, a reset can be performed that
resets the admin password to the default value.
The following commands are used to manage user IDs by using the DS CLI:
mkuser
A user account that can be used with the DS CLI and the DS GUI is created by using this
command. Example 10-7 shows the creation of a user that is called JohnDoe, which is in
the op_storage group. The temporary password of the user is passw0rd. The user must
use the chpass command when they log in for the first time.
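For illustration, a command of this general form creates the user that is described above
(verify the exact parameter names with help mkuser):

dscli> mkuser -group op_storage -pw passw0rd JohnDoe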
rmuser
An existing user ID is removed by using this command. Example 10-8 shows the removal
of a user called JaneSmith.
chuser
Use this command to change the password or group (or both) of an existing user ID. It also
can be used to unlock a user ID that was locked by exceeding the allowable login retry
count. The administrator can also use this command to lock a user ID. In Example 10-9,
we unlock the user, change the password, and change the group membership for a user
that is called JohnDoe. The user must use the chpass command the next time that they log
in.
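For illustration, a single chuser command of this general form can combine those changes
(the target group here is an assumption; verify the parameter names with help chuser):

dscli> chuser -unlock -pw time2change -group op_volume JohnDoe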
lsuser
By using this command, a list of all user IDs can be generated. Example 10-10 shows a
list of three users, including the administrator account.
dscli> lsuser
Name Group State
===============================================
JohnDoe op_storage active
secadmin admin active
admin admin active
showuser
The account details of a user ID can be displayed by using this command. Example 10-11
lists the details of the user JohnDoe.
chpass
By using this command, you can change two password policies: Password expiration (in
days) and the number of failed logins that are allowed. Example 10-13 shows changing
the expiration to 365 days and five failed login attempts.
showpass
The properties for passwords (Password Expiration days and Failed Logins Allowed) are
listed by using this command. Example 10-14 shows that passwords are set to expire in
90 days and that four login attempts are allowed before a user ID is locked.
If you create one or more profiles to contain your preferred settings, you do not need to
specify this information every time that you use the DS CLI. When you start the DS CLI, you
can specify a profile name by using the dscli command. You can override the values of the
profile by specifying a different parameter value for the dscli command.
When you install the command-line interface software, a default profile is installed in the
profile directory with the software. The file name is dscli.profile. For example, the default
location is C:\Program Files (x86)\IBM\dscli on Windows systems and
/opt/ibm/dscli/profile/dscli.profile on AIX (UNIX) and Linux systems.
Default profile file: The default profile file that was created when you installed the DS CLI
might be replaced every time that you install a new version of the DS CLI. It is a best
practice to open the default profile and then save it under a new file name. You can then
create multiple profiles and reference the relevant profile file by using the -cfg parameter.
The following example uses a different profile when it starts the DS CLI:
dscli -cfg newprofile.profile (or whatever name you gave to the new profile)
These profile files can be specified by using the DS CLI command parameter -cfg
<profile_name>. If the profile name is not specified, the default profile of the user is used. If a
profile of a user does not exist, the system default profile is used.
Two default profiles: If two profiles named dscli.profile exist, one in the system default
directory and one in your personal directory, your personal profile is loaded.
Default newline delimiter: The default newline delimiter is a UNIX delimiter, which can
render text in the notepad as one long line. Use a text editor that correctly interprets
UNIX line endings.
devid: IBM.2107-75HAL91
hmc1: 10.0.0.1
username: admin
pwfile: c:\mydir\75HAL91\pwfile.txt
Adding the serial number by using the devid parameter and adding the HMC IP address
by using the hmc1 parameter are suggested. These additions help you to avoid mistakes
when you use more profiles, and you do not need to specify this parameter for certain
dscli commands that require it. Additionally, if you specify the dscli profile for CS usage,
the remotedevid parameter is suggested for the same reasons. To determine the ID of a
storage system, use the lssi CLI command.
Add the username and an encrypted password file by using the managepwfile command. A
password file that is generated by using the managepwfile command is placed in
user_home_directory/dscli/profile/security/security.dat. Specify the location of the
password file with the pwfile parameter.
Important: Be careful if you add multiple devid and HMC entries. Uncomment (remove
the number sign (#)) one entry at a time. If multiple hmc1 or devid entries exist, the DS
CLI uses the entry that is closest to the bottom of the profile.
10.1.9 Configuring the DS CLI to use the second HMC
The second HMC can be specified on the CLI or in the profile file that is used by the DS CLI.
To specify the second HMC in a command, use the -hmc2 parameter, as shown in
Example 10-17.
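For illustration, such a command might take the following form (the IP addresses match the
sample profile entries below, and the password is a placeholder):

dscli -hmc1 10.0.0.1 -hmc2 10.0.0.5 -user admin -passwd <password> lssi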
Alternatively, you can modify the following lines in the dscli.profile (or any profile) file:
# Management Console/Node IP Addresses
# hmc1 and hmc2 are equivalent to -hmc1 and -hmc2 command options.
hmc1:10.0.0.1
hmc2:10.0.0.5
After these changes are made and the profile is saved, the DS CLI automatically
communicates through HMC2 if HMC1 becomes unreachable. By using this change, you can
perform configuration and run CS commands with full redundancy.
Two HMCs: If you specify only one HMC in a DS CLI command (or profile), any changes
that you make to users are still replicated onto the other HMC.
You must supply the login information and the command that you want to process at the same
time. To use the single-shot mode, complete the following steps:
1. At the OS shell prompt, enter one of the following commands:
– dscli -hmc1 <hostname or ip address> -user <adm user> -passwd <pwd> <command>
– dscli -cfg <dscli profile> -pwfile <security file> <command>
Important: Avoid embedding the username and password into the profile. Instead, use
the -pwfile command.
Important: When you are typing the command, you can use the hostname or the IP
address of the HMC. When a command is run in single-shot mode, the user must be
authenticated. The authentication process can take a considerable amount of time.
The interactive command mode provides a history function that simplifies repeating or
checking earlier command usage.
Interactive mode: In interactive mode for long outputs, the message Press Enter To
Continue appears. The number of rows can be specified in the profile file. Optionally, you
can turn off the paging feature in the profile file by using the paging:off parameter.
Example 10-19 shows using interactive command mode by using the profile
DS8900F.profile.
dscli> lsarraysite -l
arsite DA Pair dkcap (10^9B) diskrpm State Array diskclass encrypt
========================================================================
S1 8 1600.0 65000 Assigned A2 FlashTier0 supported
S2 8 1600.0 65000 Assigned A3 FlashTier0 supported
S3 10 3840.0 65000 Assigned A1 FlashTier1 supported
S4 10 3840.0 65000 Assigned A0 FlashTier1 supported
dscli> lssi
Name ID Storage Unit Model WWNN State ESSNet
========================================================================================
IBM.2107-75HAL91 IBM.2107-75HAL91 IBM.2107-75HAL90 996 5005076309FFD462 Online Enabled
dscli>
Example 10-20 shows the contents of a DS CLI script file. The file contains only DS CLI
commands, although comments can be placed in the file by using a number sign (#). Empty
lines are also allowed. One advantage of using this method is that scripts that are written in
this format can be used by the DS CLI on any OS on which you can install the DS CLI. Only
one authentication process is needed to run all of the script commands.
For script command mode, you can turn off the banner and header for easier output parsing.
Also, you can specify an output format that might be easier to parse by your script.
Example 10-21 shows starting the DS CLI by using the -script parameter and specifying a
profile and the name of the script that contains the commands from Example 10-20.
Important: The DS CLI script can contain only DS CLI commands. Using shell commands
results in a process failure.
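A minimal sketch of such a script and its invocation, assuming illustrative file and profile
names:

# sample.cli: DS CLI commands only. Comments start with a number sign (#).
lssi
lsextpool
lsfbvol

To run the script with a profile, start the DS CLI as follows:

dscli -cfg ds8900f.profile -script sample.cli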
The return codes that are used by the DS CLI are listed in Command-Line Interface User’s
Guide, SC27-9562.
Click the Command-line interface tab to access user assistance. You can also get user
assistance when using the DS CLI program by running the help command. The following
examples of usage are included:
help Lists all the available DS CLI commands.
help -s Lists all the DS CLI commands with brief descriptions of each
command.
help -l Lists all the DS CLI commands with their syntax information.
To obtain information about a specific DS CLI command, enter the command name as a
parameter of the help command. The following examples of usage are included:
help <command name> Provides a detailed description of the specified command.
help -s <command name> Provides a brief description of the specified command.
help -l <command name> Provides syntax information about the specified command.
Man pages
A man page is available for every DS CLI command. Man pages are most commonly seen in
UNIX OSs, and provide information about command capabilities. This information can be
displayed by issuing the relevant command followed by the -h, -help, or -? flags.
10.2 I/O port configuration
Set the I/O ports to the topology that you want. Example 10-22 lists the I/O ports by using the
lsioport command. I0030 - I0033 are on one adapter, and I0100 - I0103 are on another
adapter.
The following possible topologies for each I/O port are available:
Small Computer System Interface - Fibre Channel Protocol (SCSI-FCP): Fibre Channel
(FC)-switched fabric, which is also called switched point-to-point. This port type is also
used for mirroring.
Fibre Channel connection (IBM FICON): This port type is for IBM Z system hosts only.
The Security field indicates the status of the IBM Fibre Channel Endpoint Security feature.
For more information, see IBM Fibre Channel Endpoint Security for IBM DS8900F and IBM Z,
SG24-8455.
If added to the setioport command, the -force parameter allows a topology change to an
online I/O port even if a topology is set. Example 10-23 shows setting I/O ports without and
with the -force option to the FICON topology, and then checking the results.
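As a sketch of the commands that such a change involves (the -topology parameter name is
an assumption; confirm the syntax with help setioport):

dscli> setioport -topology ficon I0030
dscli> setioport -force -topology ficon I0031
dscli> lsioport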
To monitor the status for each I/O port, see 10.5, “Metrics with DS CLI” on page 391.
Important: For more information about the current drive choices and RAID capacities, see
IBM Documentation.
Important: For a DS8900F, one rank is assigned to one array. An array is made of only
one array site. An array site contains eight drives. There is a one-to-one relationship
among array sites, arrays, and ranks.
Example 10-24 Listing array sites
dscli> lsarraysite -l
arsite DA Pair dkcap (10^9B) diskrpm State Array diskclass encrypt
==========================================================================
S1 8 1600.0 65000 Unassigned - FlashTier0 supported
S2 8 1600.0 65000 Unassigned - FlashTier0 supported
S3 10 3840.0 65000 Assigned A1 FlashTier1 supported
S4 10 3840.0 65000 Assigned A0 FlashTier1 supported
In Example 10-24, you can see two unassigned array sites. Therefore, you can create two
arrays. The -l option reports the diskclass information.
You can issue the mkarray command to create arrays, as shown in Example 10-25. The
example uses one array site to create a single RAID 6 array. If you want to create a RAID 10
array, change the -raidtype parameter to 10.
You can now see the arrays that were created by using the lsarray command, as shown in
Example 10-26.
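For illustration, creating both arrays from the unassigned array sites in Example 10-24 and
then listing them might look like this sketch:

dscli> mkarray -raidtype 6 -arsite S1
dscli> mkarray -raidtype 6 -arsite S2
dscli> lsarray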
Example 10-27 Listing the high-performance flash disk class flash Tier 0 by using the lsarraysite command
dscli> lsarraysite -l -diskclass flashtier0
arsite DA Pair dkcap (10^9B) diskrpm State Array diskclass encrypt
========================================================================
S1 8 1600.0 65000 Assigned A2 FlashTier0 supported
S2 8 1600.0 65000 Assigned A3 FlashTier0 supported
Example 10-28 Listing the disk class by using the lsarray -l command
dscli> lsarray -l
Array State Data RAIDtype arsite Rank DA Pair DDMcap (10^9B) diskclass encrypt
===========================================================================================
A0 Assigned Normal 6 (5+P+Q+S) S4 R0 10 3840.0 FlashTier1 supported
A1 Assigned Normal 6 (5+P+Q+S) S3 R1 10 3840.0 FlashTier1 supported
A2 Unassigned Normal 6 (5+P+Q+S) S1 - 8 1600.0 FlashTier0 supported
A3 Unassigned Normal 6 (5+P+Q+S) S2 - 8 1600.0 FlashTier0 supported
10.3.3 Creating the ranks
After you create all of the required arrays, create the ranks by using the mkrank command.
The format of the command is mkrank -array Ax -stgtype xxx, where xxx is Fixed-Block
(FB) or Count Key Data (CKD), depending on whether you are configuring for open systems
hosts or IBM Z system hosts.
After all the ranks are created, the lsrank command is run. This command displays this
information:
All the ranks that were created.
The server to which the rank is attached (attached to none, in the example up to now).
The RAID type.
The format of the rank (fb or ckd).
Example 10-30 shows the mkrank command and the result of a successful lsrank command.
Example 10-30 Creating and listing ranks by using the mkrank and lsrank commands
When defining a rank, you can also specify the extent size. You can have ranks and extent
pools with large 1 gibibyte (GiB) FB extents or small 16 mebibyte (MiB) FB extents. The
extent unit is specified by the -extsize parameter of the mkrank command. The first rank that
is added to an extent pool determines the extent size of the extent pool.
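A minimal sketch of the commands that Example 10-30 covers, using the mkrank format that
is described above (the optional -extsize parameter is omitted):

dscli> mkrank -array A2 -stgtype fb
dscli> mkrank -array A3 -stgtype fb
dscli> lsrank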
For easier management, create empty extent pools that relate to the type of storage or the
planned usage for that pool. For example, create an extent pool pair for FB open systems
environment and create an extent pool pair for the CKD environment.
When an extent pool is created, the system automatically assigns it an extent pool ID, which
is a decimal number that starts from 0, preceded by the letter P. The ID that was assigned to
an extent pool is shown in the CMUC00000I message, which is displayed in response to a
successful mkextpool command.
Extent pools that are associated with rank group 0 receive an even ID number. Extent pools
that are associated with rank group 1 receive an odd ID number. The extent pool ID is used
when you refer to the extent pool in subsequent DS CLI commands. Therefore, it is best
practice to note the ID.
Example 10-31 shows one example of extent pools that you can define on your system. This
setup requires a system with at least four ranks.
The mkextpool command forces you to name the extent pools. To do so, complete these
steps:
1. Create empty extent pools by using the mkextpool command, as shown in Example 10-32.
2. List the extent pools to obtain their IDs.
3. Attach a rank to an empty extent pool by using the chrank command.
4. List the extent pools again by using lsextpool and note the change in the capacity of the
extent pool.
Example 10-32 Creating an extent pool by using mkextpool, lsextpool, and chrank
dscli> mkextpool -rankgrp 0 -stgtype fb FB_0
CMUC00000I mkextpool: Extent pool P2 successfully created.
dscli> mkextpool -rankgrp 1 -stgtype fb FB_1
CMUC00000I mkextpool: Extent Pool P3 successfully created.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
=========================================================================================
FB_0 P2 fb 0 full 0 100 0 0 0
FB_1 P3 fb 1 full 0 100 0 0 0
dscli> chrank -extpool P2 R2
CMUC00008I chrank: Rank R2 successfully modified.
dscli> chrank -extpool P3 R3
CMUC00008I chrank: Rank R3 successfully modified.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
=========================================================================================
FB_0 P2 fb 0 below 7220 0 462099 64 0
FB_1 P3 fb 1 below 8665 0 554583 64 0
After a rank is assigned to an extent pool, you can see this change when you display the
ranks.
In Example 10-33, you can see that rank R0 is assigned to extpool P0.
Example 10-33 Displaying the ranks after a rank is assigned to an extent pool
dscli> lsrank
ID Group State datastate Array RAIDtype extpoolID stgtype
===========================================================
R0 0 Normal Normal A0 6 P0 fb
R1 1 Normal Normal A1 6 P1 fb
R2 0 Normal Normal A2 6 P2 fb
R3 1 Normal Normal A3 5 P3 fb
R8 0 Normal Normal A8 6 P4 ckd
R11 1 Normal Normal A11 6 P5 ckd
Although an FB-type volume can be created as a standard (thick) or a thin (extent space
efficient (ESE)) volume, this section describes the creation of the standard type only.
Example 10-35 shows the creation of eight volumes, each with a capacity of 10 GiB. The first
four volumes are assigned to rank group 0, and are assigned to LSS 20 with volume numbers
00 - 03. The second four volumes are assigned to rank group 1, and are assigned to LSS 21
with volume numbers of 00 - 03.
Looking closely at the mkfbvol command that is used in Example 10-35, you see that
volumes 2000 - 2003 are in extpool P2. That extent pool is attached to rank group 0, which
means server 0. Rank group 0 can contain only even-numbered LSSs, which means that
volumes in that extent pool must belong to an even-numbered LSS. The first two digits of the
volume serial number are the LSS number. So, in this case, volumes 2000 - 2003 are in LSS
20.
For volumes 2100 - 2103 in extpool P3 in Example 10-35, the first two digits of the volume
serial number are 21 (an odd number), which signifies that they belong to rank group 1. The
-cap parameter determines the size. However, because the -type parameter was not used,
the default type is ds (GiB), which is a binary size of 2^30 bytes.
Therefore, these volumes are 10 GiB binary, which equates to 10,737,418,240 bytes. If you
used the -type ess parameter, the volumes are decimally sized, and they are a minimum of
10,000,000,000 bytes in size.
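As a hedged sketch only, mkfbvol invocations along the lines of what Example 10-35 describes, using the pool IDs, capacity, and volume numbers discussed above, might look like this (output messages omitted; the actual example may differ in detail):
dscli> mkfbvol -extpool P2 -cap 10 -name fb_0_#h 2000-2003
dscli> mkfbvol -extpool P3 -cap 10 -name fb_1_#h 2100-2103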
Example 10-35 named the volumes by using the naming scheme fb_0_#h, where #h means
that you are using the hexadecimal volume number as part of the volume name. This naming
convention is shown in Example 10-36, where you list the volumes that you created by using
the lsfbvol command. You then list the extent pools to see how much space is left after the
volume is created.
Example 10-36 Checking the machine after the volumes are created by using lsfbvol and lsextpool
dscli> lsfbvol
Name ID accstate datastate configstate deviceMTM datatype extpool cap (2^30B) cap (10^9B) cap (blocks)
================================================================================================================
fb_0_2000 2000 Online Normal Normal 2107-900 FB 512 P2 10.0 - 20971520
fb_0_2001 2001 Online Normal Normal 2107-900 FB 512 P2 10.0 - 20971520
fb_0_2002 2002 Online Normal Normal 2107-900 FB 512 P2 10.0 - 20971520
fb_0_2003 2003 Online Normal Normal 2107-900 FB 512 P2 10.0 - 20971520
fb_1_2100 2100 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
fb_1_2101 2101 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
fb_1_2102 2102 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
fb_1_2103 2103 Online Normal Normal 2107-900 FB 512 P3 10.0 - 20971520
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
=========================================================================================
FB_0 P2 fb 0 below 7180 0 459531 64 4
FB_1 P3 fb 1 below 8625 0 552015 64 4
Important considerations:
For a DS8000, the LSSs can be ID 00 - FE. The LSSs are in address groups. Address
group 0 is LSSs 00 - 0F, address group 1 is LSSs 10 - 1F, and so on, except group F,
which is F0 - FE. When you create an FB volume in an address group, that entire
address group can be used only for FB volumes. Be aware of this fact when you plan
your volume layout in a mixed FB and CKD DS8000. The LSS is automatically created
when the first volume is assigned to it.
The -perfgrp <perf_group_ID> flag option is still available on the create volume
commands for compatibility with earlier versions, but the Performance I/O Priority
Manager feature was discontinued as of Release 9 of the DS8000 products.
Resource group: You can configure a volume to belong to a certain resource group by
using the -resgrp <RG_ID> flag in the mkfbvol command. For more information, see IBM
System Storage DS8000 Copy Services Scope Management and Resource Groups,
REDP-4758.
You configure T10 DIF by adding the -t10dif parameter to the mkfbvol command. It is
possible to create T10 DIF volumes and use them as standard volumes, and then enable
them later without configuration changes.
You can also specify that you want the extents of the volume that you create to be evenly
distributed across all ranks within the extent pool. This allocation method is called rotate
extents. The storage pool striping spreads the I/O of a logical unit number (LUN) to multiple
ranks, which improves performance and greatly reduces hot spots.
Default allocation policy: For DS8900F, the default allocation policy is rotate extents.
The showfbvol command with the -rank option (Example 10-38) shows that the volume that
you created is distributed across two ranks. It also shows how many extents on each rank
were allocated for this volume. Compared to the examples above, the extent pool P2 now
consists of two ranks, R2 and R3.
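A minimal sketch of the query that produces rank-level output like the fragment that follows (the volume ID is hypothetical):
dscli> showfbvol -rank 2004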
safeguarded no
SGC Recovered no
==============Rank extents==============
rank extents capacity (MiB/cyl) metadata
========================================
R2 480 7680 yes
R3 480 7680 no
The largest LUN size is 16 TiB. CS is not supported for LUN sizes larger than 4 TiB.
New capacity: The new capacity must be larger than the previous capacity. You cannot
shrink the volume.
Because the original volume included the rotateexts attribute, the additional extents are also
striped, as shown in Example 10-40. Compare both examples to see the difference.
Important: Before you can expand a volume, you must delete all CS relationships for that
volume.
The extent size is defined by the extent pools where the volume is created.
More DS CLI commands are available to control and protect the space in an extent pool for
thin-provisioned volumes. One of these commands is the mksestg command, which reserves
space for thin-provisioned volumes. For more information about thin-provisioning, see
IBM DS8880 Thin Provisioning (Updated for Release 8.5), REDP-5343.
Deleting volumes
FB volumes can be deleted by using the rmfbvol command. The command includes options
to prevent the accidental deletion of volumes that are in use. An FB volume is considered to
be in use if it is participating in a CS relationship or if the volume received any I/O operation in
the previous 5 minutes.
Volume deletion is controlled by the -safe and -force parameters (they cannot be specified
at the same time) in the following manner:
If none of the parameters are specified, the system performs checks to see whether the
specified volumes are in use. Volumes that are not in use are deleted and the volumes that
are in use are not deleted.
If the -safe parameter is specified and if any of the specified volumes are assigned to a
user-defined volume group, the command fails without deleting any volumes.
The -force parameter deletes the specified volumes without checking whether they are in
use.
Example 10-42 shows the creation of volumes 2200 and 2201, and then the assignment of
volume 2200 to a volume group. You try to delete both volumes with the -safe option, but the
attempt fails without deleting either of the volumes. You can delete volume 2201 by using the
-safe option because the volume is not assigned to a volume group. Volume 2200 is not in
use, so you can delete it by not specifying either parameter.
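As a hedged sketch only, the sequence that Example 10-42 describes might look like this (output omitted; the chvolgrp -action add syntax for assigning volume 2200 to volume group V1 is an assumption):
dscli> mkfbvol -extpool P2 -cap 10 -name fb_0_#h 2200-2201
dscli> chvolgrp -action add -volume 2200 V1
dscli> rmfbvol -safe 2200-2201
dscli> rmfbvol -safe 2201
dscli> rmfbvol 2200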
The command includes options to prevent the accidental reinitialization of volumes that are in
use. An FB volume is considered to be in use if it is participating in a CS relationship or if the
volume received any I/O operation in the previous 5 minutes. All data is lost when this
command is used.
Note: There are two ways to create volume groups and map the volumes to the hosts.
Volume groups can be created manually in single steps or automatically. The automatic
method is done by using the mkhost and chhost commands, and it is the recommended
method for mapping volumes to host systems.
The following sections describe both methods. However, the manual method exists only for
compatibility with earlier versions and must be applied with care to make sure that all the
steps are completed so that the results and views in the DS GUI are fully consistent.
Example 10-44 Listing the host types by running the lshosttype command
dscli> lshosttype -type scsimask
HostType Profile AddrDiscovery LBS
===========================================================================
Hp HP - HP/UX reportLUN 512
SVC SAN Volume Controller reportLUN 512
SanFsAIX IBM pSeries - AIX/SanFS reportLUN 512
pSeries IBM pSeries - AIX reportLUN 512
pSeriesPowerswap IBM pSeries - AIX with Powerswap support reportLUN 512
zLinux IBM zSeries - zLinux reportLUN 512
dscli> lshosttype -type scsimap256
HostType Profile AddrDiscovery LBS
=================================================
AppleOSX Apple - OSX LUNPolling 512
Fujitsu Fujitsu - Solaris LUNPolling 512
HpTru64 HP - Tru64 LUNPolling 512
HpVms HP - Open VMS LUNPolling 512
Linux Linux Server LUNPolling 512
Novell Novell LUNPolling 512
SGI SGI - IRIX LUNPolling 512
SanFsLinux - Linux/SanFS LUNPolling 512
Sun SUN - Solaris LUNPolling 512
VMWare VMWare LUNPolling 512
Windows Windows Server LUNPolling 512
iLinux IBM iSeries - iLinux LUNPolling 512
nSeries IBM N series Gateway LUNPolling 512
pLinux IBM pSeries - pLinux LUNPolling 512
Example 10-45 Creating a volume group by using mkvolgrp and displaying it by using lsvolgrp
dscli> mkvolgrp -type scsimask -volume 2000-2002,2100-2102 AIX_VG_01
CMUC00030I mkvolgrp: Volume group V1 successfully created.
dscli> lsvolgrp -l -type scsimask
Name ID Type
============================
v0 V0 SCSI Mask
AIX_VG_01 V1 SCSI Mask
pE950_042 V2 SCSI Mask
pE950_048 V3 SCSI Mask
pseries_cluster V7 SCSI Mask
dscli> showvolgrp V1
Name AIX_VG_01
ID V1
Type SCSI Mask
Vols 2000 2001 2002 2100 2101 2102
You might also want to add or remove volumes to this volume group later. To add or remove
volumes, use the chvolgrp command with the -action parameter.
Important: Not all OSs can manage a volume removal. To determine the safest way to
remove a volume from a host, see your OS documentation.
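As a hedged sketch only (the volume ID is hypothetical, and the -action add and -action remove values are assumptions based on the command description), adding and removing a volume might look like this:
dscli> chvolgrp -action add -volume 2003 V1
dscli> chvolgrp -action remove -volume 2003 V1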
You can use a set of cluster commands (mkcluster, lscluster, showcluster, and rmcluster)
to create clusters to group hosts that have the same set of volume mappings, map or unmap
volumes directly to these clusters, or both. These commands were added to provide
consistency between the GUI and DS CLI. For more information, see “Creating open system
clusters and hosts” on page 284.
Clusters are grouped hosts that must share volume access with each other. A cluster usually
contains several hosts. Single hosts can exist without a cluster. Clusters are created by
running the mkcluster command. This command does not need many parameters and is
there only to organize hosts.
The mkhost command now has two generic host types that are available: Linux Server and
Windows Server. These types were created to simplify and remove confusion when
configuring these host types. You must define the host type first by running the mkhost
command, as shown in Example 10-47.
Example 10-47 Creating generic host types Linux Server and Windows Server
dscli> mkcluster cluster_1
CMUC00538I mkcluster: The cluster cluster_1 is successfully created.
Usage: mkhost [ { -help|-h|-? } ] [-v on|off] [-bnr on|off] [-dev storage_image_ID] -type
AIX|AIX with PowerSwap|HP OpenVMS|HP-UX|IBM i AS/400|iLinux|Linux RHEL|Linux SUSE|Linux
Server|N series Gateway|Novell|pLinux|SAN Volume Controller|Solaris|VMware|Windows
2003|Windows 2008|Windows 2012|Windows Server|zLinux [-hostport wwpn1[,wwpn2,...]]
[-cluster cluster_name] Host_Name | -
dscli> mkhost -type "Linux Server" -cluster cluster_1 Host_1
CMUC00530I mkhost: The host Host_1 is successfully created.
dscli> mkhost -type "Linux Server" -cluster cluster_1 Host_2
CMUC00530I mkhost: The host Host_2 is successfully created.
dscli> lshost
Name Type State Cluster
===========================================
Host_1 Linux Server Offline cluster_1
Host_2 Linux Server Offline cluster_1
pE950_042 AIX Online -
pE950_048 AIX Online -
x3550_2_03 Windows Server Online -
x3650_2_54 Linux Server Online -
More commands are also available: chhost, lshost, showhost, and rmhost. These
commands were added to provide consistency between the DS GUI and DS CLI.
Example 10-48 provides examples of the commands.
dscli> lshost
Name Type State Cluster
===========================================
Host_1 Linux Server Offline cluster_2
Host_2 Linux Server Offline cluster_1
pE950_042 AIX Online -
pE950_048 AIX Offline -
x3550_2_03 Windows Server Online -
x3650_2_54 Linux Server Online -
To link the logical hostname with a real physical connection, host ports must be assigned to
the host by running the mkhostport command. This task also can be done by running the
mkhost command with the -hostport wwpn1[,wwpn2,...] option during host creation to save
the additional configuration step.
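As a hedged sketch only, creating a host together with its ports in one step, following the mkhost usage that is shown in Example 10-47 (the host name and WWPNs are illustrative), might look like this:
dscli> mkhost -type "Linux Server" -cluster cluster_1 -hostport 10000090FA13B914,10000090FA13B915 Host_3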
To see a list of unassigned worldwide port names (WWPNs), which are already logged in to
the storage system and represent the physical HBA ports of the hosts, run the lshostport
command. Specifying -unknown shows all the free ports, and -login shows all the logged-in
ports. It can take a while for the storage system to show newly logged-in ports. WWPNs can
also be added manually, and they do not need to be logged in to create the host
configuration and volume mappings.
The last step is to assign volumes to the host or cluster to allow host access to the volumes.
The volumes that are assigned to the host are seen only by the host, and the volumes that
are assigned to the cluster can be seen by all hosts inside the cluster. The volume groups for
the cluster and the host are generated automatically by running the chhost/chcluster
-action map command. The automatically created volume groups have the same name as
the host or cluster itself but are different objects.
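As a hedged sketch only (the -volume parameter and the volume IDs are assumptions for illustration), mapping volumes to a cluster and to a single host might look like this:
dscli> chcluster -action map -volume 2000-2002 cluster_1
dscli> chhost -action map -volume 2100 Host_1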
Example 10-51 shows some change and removal commands. When a host is unassigned
from a cluster, both the host and the cluster keep the cluster volume mappings.
The automatically created volume groups are removed only if the host is removed or if you
run the rmvolgrp command.
Example 10-52 on page 377 shows the creation of a host connection that represents two
HBA ports in this AIX host. Use the -hosttype parameter to include the host type that you
used in Example 10-44 on page 370. Allocate it to volume group V1. If the storage area
network (SAN) zoning is correct, the host can see the LUNs in volume group V1.
Example 10-52 Creating host connections by using mkhostconnect and lshostconnect
dscli> lshostport -unknown
WWNN WWPN ESSIOport
===========================================
20000120FA13B914 10000090FA13B914 I0032
20000120FA13B915 10000090FA13B915 I0103
dscli> lsvolgrp -type scsimask
Name ID Type
============================
v0 V0 SCSI Mask
AIX_VG_01 V1 SCSI Mask
pE950_042 V2 SCSI Mask
pE950_048 V3 SCSI Mask
pseries_cluster V7 SCSI Mask
dscli> showvolgrp V1
Name AIX_VG_01
ID V1
Type SCSI Mask
Vols 2000 2001 2002 2100 2101 2102
dscli> mkhostconnect -wwname 10000090FA13B914 -hosttype pSeries -volgrp V1 AIX_Server_01
CMUC00012I mkhostconnect: Host connection 000A successfully created.
dscli> mkhostconnect -wwname 10000090FA13B915 -hosttype pSeries -volgrp V1 AIX_Server_01
CMUC00012I mkhostconnect: Host connection 000B successfully created.
dscli> lshostconnect
Name ID WWPN HostType Profile portgrp volgrpID ESSIOport
=========================================================================================
AIX_Server_01 000A 10000090FA13B914 pSeries IBM pSeries - AIX 0 V1 all
AIX_Server_01 000B 10000090FA13B915 pSeries IBM pSeries - AIX 0 V1 all
You can also use -profile instead of -hosttype. However, this method is not a best practice.
Using the -hosttype parameter reflects both parameters (-profile and -hosttype). In
contrast, using -profile leaves the -hosttype column unpopulated.
The option in the mkhostconnect command to restrict access only to certain I/O ports is also
available by using the -ioport parameter. Restricting access in this way is unnecessary. If
you want to restrict access for certain hosts to certain I/O ports on the DS8900F, perform
zoning on your SAN switch.
The mkhostconnect command normally is sufficient to allow the specified host ports to
access the volumes. The command works, but the result is not fully reflected in the
modernized GUI interface. The modernized GUI interface introduced host and cluster
grouping for easier management of groups of hosts with many host ports. If no host or
cluster is assigned to the created connection, the GUI still shows the ports as unassigned
host ports with mapped volumes.
Figure 10-8 Volumes that are mapped to a host port without a host
The lshostconnect -l command in Example 10-53 shows that the relationship between the
volume group and a host connection was not fully established. The assigned host is missing
in the last column, and portgrp 0 is used, which is not recommended because it is the default
port group for new host ports. No host is created yet for the AIX connection in this example.
The first column does not show the hostname: it is a symbolic name for the connection for
easier recognition. The ID field makes the connection unique.
Note: As a best practice, do not use the host port group ID of 0. A port group ID ties
together a group of SCSI host port objects that access a common volume group. If
the port group value is set to zero, the host port is not associated with any port group. Zero
is used by default for ports that are not grouped yet.
When hosts are created, you can specify the -portgrp parameter. By using a unique port
group number for each attached server, you can detect servers with multiple HBAs.
If you want to use a single command to change the assigned volume group of several host
connections at the same time, assign the host connections to a unique port group. Then, run
the managehostconnect command. This command changes the assigned volume group for all
host connections that are assigned to a particular port group.
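As a hedged sketch only (the port group number and volume group ID are illustrative, and the positional form of the command is an assumption), such a change might look like this:
dscli> managehostconnect -volgrp V2 2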
Changing host connections
If you want to change a host connection, run the chhostconnect command. This command
can be used to change nearly all parameters of the host connection, except for the WWPN.
Example 10-54 shows the steps to finish the configuration of the mapping. A host must be
created, and the new host must then be assigned to the existing connections 000A and 000B
and their volume group V1 relationship.
Note: Using the mkvolgrp and mkhostconnect commands for storage partitioning to map
volumes to hosts is not the preferred method. It is available for compatibility with earlier
configurations and existing volume groups. It is better to use the mkhost command from the
beginning to assign hosts, host ports, volume groups, and volume mappings together. It
combines the needed functions in one command, makes sure that no step is forgotten, and
reduces the number of steps that are needed.
You log on to this host and start the DS CLI. It does not matter which HMC you connect to
when you use the DS CLI. Then, run the lshostvol command.
Important: The lshostvol command communicates only with the OS of the host on which
the DS CLI is installed. You cannot run this command on one host to see the attached
disks of another host.
Note: The Subsystem Device Driver Path Control Module (SDDPCM) (a multipath solution
on AIX) and Subsystem Device Driver Device Specific Module (SDDDSM) (a multipath
solution on Windows) are no longer developed for DS8000. Instead, use an OS-native
solution such as AIX Multipath I/O Path Control Module (AIXPCM) (the AIX default
multipath solution) or Microsoft Device Specific Module (MSDSM) (the Windows default
multipath solution). These solutions are fully supported on open systems.
You do not need to create volume groups or host connects for CKD volumes. If I/O ports in
FICON mode exist, access to CKD volumes by FICON hosts is granted automatically, and
follows the specifications in the input/output definition file (IODF).
10.4.1 Creating the arrays
Array creation for CKD volumes is the same as for FB volumes. For more information, see
10.3.2, “Creating the arrays” on page 358.
Example 10-56 Rank and extent pool creation for CKD volumes
dscli> mkrank -array A2 -stgtype ckd
CMUC00007I mkrank: Rank R2 successfully created.
dscli> lsrank
ID Group State datastate Array RAIDtype extpoolID stgtype
==============================================================
R0 0 Normal Normal A0 6 P0 fb
R1 1 Normal Normal A1 6 P1 fb
R2 - Unassigned Normal A2 6 - ckd
dscli> mkextpool -rankgrp 0 -stgtype ckd CKD_HPF_0
CMUC00000I mkextpool: Extent Pool P2 successfully created.
dscli> chrank -extpool P2 R2
CMUC00008I chrank: Rank R2 successfully modified.
dscli> lsextpool
Name ID stgtype rankgrp status availstor (2^30B) %allocated available reserved numvols
===========================================================================================
ITSO_FB P0 fb 0 below 15232 12 15232 1 17
ITSO_FB P1 fb 1 below 15833 8 15833 1 7
CKD_HPF_0 P2 ckd 0 below 7135 0 8098 1 0
When defining a rank, you can also specify the extent size. You can have ranks and extent
pools with large 1113 cylinder CKD extents, or small 21 cylinder CKD extents. The extent unit
is specified with the -extsize parameter of the mkrank command. The first rank that is added
to an extent pool determines the extent size of the extent pool. Example 10-57 shows ranks
with different extent sizes.
Example 10-58 Trying to create CKD volumes without creating an LCU first
dscli> mkckdvol -extpool p2 -cap 262668 -name CKD_EAV1_#h C200
CMUN02282E mkckdvol: C200: Unable to create CKD logical volume: CKD volumes require a CKD
logical subsystem.
To create the LCUs, run the mklcu command. The command uses the following format:
mklcu -qty XX -id XX -ss XXXX
Note: For the z/OS hardware definition, the subsystem identifier (SSID) that is set with the
-ss parameter must be unique for all connected storage systems. The z/OS hardware
admin usually provides the SSID to use.
To display the LCUs that you created, run the lslcu command.
Example 10-59 shows the creation of two LCUs by running the mklcu command and then
listing the created LCUs by running the lslcu command. By default, the LCUs that were
created are the 3990-6 type.
Because two LCUs were created by using the parameter -qty 2, the first LCU, which is ID BC
(an even number), is in address group 0, which equates to rank group 0. The second LCU,
which is ID BD (an odd number), is in address group 1, which equates to rank group 1. By
placing the LCUs into both address groups, performance is maximized by spreading the
workload across both servers in the DS8900F.
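As a hedged sketch only, the commands that Example 10-59 describes might look like this (the SSID value is hypothetical and is normally provided by the z/OS hardware administrator):
dscli> mklcu -qty 2 -id BC -ss 8C00
dscli> lslcu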
Important: For the DS8900F, the CKD LCUs can be ID 00 - FE. The LCUs fit into one of
16 address groups. Address group 0 is LCUs 00 - 0F, address group 1 is LCUs 10 - 1F, and
so on, except group F is F0 - FE. If you create a CKD LCU in an address group, that
address group cannot be used for FB volumes. Likewise, if, for example, FB volumes were
in LSS 40 - 4F (address group 4), that address group cannot be used for CKD. Be aware of
this limitation when you plan the volume layout in a mixed FB and CKD DS8900F. Each
LCU can manage a maximum of 256 volumes, including alias volumes for the parallel
access volume (PAV) feature.
10.4.4 Creating the CKD volumes
Now that an LCU is created, the CKD volumes can be created by using the mkckdvol
command. The mkckdvol command uses the following format:
mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -eam rotatevols -name
CKD_EAV1_#h BC06
The greatest difference with CKD volumes is that the capacity is expressed in cylinders or as
mod1 (Model 1) extents (1113 cylinders). To avoid wasting space, use volume capacities that
are a multiple of 1113 cylinders.
The support for extended address volumes (EAVs) was enhanced. The DS8900F supports
EAV volumes up to 1,182,006 cylinders. The EAV device type is called 3390 Model A.
Important: For 3390-A volumes, the size can be specified as 1 - 65,520 in increments of 1,
and from 65,667, which is the next multiple of 1113, to 1,182,006 in increments of 1113.
The last parameter in the command is the volume_ID. This value determines the LCU that the
volume belongs to and the unit address (UA) for the volume. Both of these values must be
matched to a control unit and device definition in the input/output configuration data set
(IOCDS) that an IBM Z system server uses to access the volume.
The volume_ID has a format of LLVV. LL (00 - FE) identifies the LCU to which the volume
belongs, and VV (00 - FF) is the offset of the volume within that LCU. Each VV value can be
used by only one volume in an LCU.
Example 10-60 shows the creation of 3390-A volumes with a capacity of 262,668 cylinders
that are assigned to LCU BC with an offset of 00 - 05.
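As a hedged sketch only, the invocation that Example 10-60 describes, following the mkckdvol format that is shown above, might look like this (output omitted):
dscli> mkckdvol -extpool P2 -cap 262668 -datatype 3390-A -eam rotatevols -name CKD_EAV1_#h BC00-BC05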
You can create only CKD volumes in LCUs that you already created. Volumes in
even-numbered LCUs must be created from an extent pool that belongs to rank group 0.
Volumes in odd-numbered LCUs must be created from an extent pool in rank group 1. A
single mkckdvol command can define volumes for only one LCU.
Important: You can configure a volume to belong to a certain resource group by using the
-resgrp <RG_ID> flag in the mkckdvol command. For more information, see IBM System
Storage DS8000 Copy Services Scope Management and Resource Groups, REDP-4758.
More DS CLI commands are available to control and protect the space in an extent pool for
thin-provisioned volumes. One of these commands, the mksestg command, reserves space
for thin-provisioned volumes. For more information about thin-provisioning, see IBM DS8880
Thin Provisioning (Updated for Release 8.5), REDP-5343.
You can also specify that you want the extents of the volume to be evenly distributed across
all ranks within the extent pool. This allocation method is called rotate extents.
Rotate extents: For the DS8900F, the default allocation policy is rotate extents.
The EAM is specified with the -eam rotateexts or the -eam rotatevols option of the
mkckdvol command (Example 10-62).
The showckdvol command with the -rank option (Example 10-63) shows that the volume that
was created is distributed across two ranks. It also displays how many extents on each rank
were allocated for this volume. In this example, the pool P3 uses small 21-cylinder CKD
extents.
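A minimal sketch of the query that produces rank-level output like the fragment that follows (the volume ID is hypothetical):
dscli> showckdvol -rank BD07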
datastate Normal
configstate Normal
deviceMTM 3390-9
volser -
datatype 3390
voltype CKD Base
orgbvols -
addrgrp B
extpool P3
exts 477
cap (cyl) 10017
cap (10^9B) 8.5
cap (2^30B) 7.9
ranks 2
sam Standard
repcapalloc -
eam rotateexts
reqcap (cyl) 10017
cap (Mod1) 9.0
realextents 477
virtualextents 2
realcap (cyl) 10017
migrating 0
migratingcap (cyl) 0
perfgrp PG0
migratingfrom -
resgrp RG0
tierassignstatus Unknown
tierassignerror -
tierassignorder Unknown
tierassigntarget Unknown
%tierassigned 0
etmonpauseremain -
etmonitorreset unknown
safeguardedcap (cyl) -
safeguardedloc -
usedsafeguardedcap (cyl) -
safeguarded no
SGC Recovered no
==============Rank extents==============
rank extents capacity (MiB/cyl) metadata
========================================
R2 239 5019 no
R3 238 4998 yes
Important: Before you can expand a volume, you first must delete all CS relationships for
that volume. Also, you cannot specify both -cap and -datatype in the same chckdvol
command.
It is possible to expand a 3390 Model 9 volume to a 3390 Model A. Expand the volume by
specifying new capacity for an existing Model 9 volume. When you increase the size of a
3390-9 volume beyond 65,520 cylinders, its device type automatically changes to 3390-A.
Important: A 3390 Model A can be used only in z/OS V1.10 and later (depending on the
size of the volume), as shown in Example 10-66.
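As a hedged sketch only, an expansion like the one in Example 10-66 could be started with a chckdvol command of this shape (the target capacity is illustrative; the value that is used in Example 10-66 may differ):
dscli> chckdvol -cap 262668 BD01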
CMUC00332W chckdvol: Some host operating systems do not support changing the volume size.
Data can be at risk if the host does not support this action. Are you sure that you want to
resize the volume? [Y/N]: y
CMUC00022I chckdvol: CKD Volume BD01 successfully modified.
You cannot reduce the size of a volume. If you try to reduce the size, an error message is
displayed.
CKD volumes can be deleted by using the rmckdvol command. FB volumes can be deleted
by using the rmfbvol command.
The command includes a capability to prevent the accidental deletion of volumes that are in
use. A CKD volume is considered in use if it participates in a CS relationship, or if the IBM Z
path mask indicates that the volume is in a grouped state or online to any host system.
If the -force parameter is not specified with the command, volumes that are in use are not
deleted. If multiple volumes are specified and only some of them are in use, the volumes that
are not in use are deleted.
If the -force parameter is specified on the command, the volumes are deleted without
checking to see whether they are in use.
Example 10-67 shows an attempt to delete two volumes, BD02 and BD03. Volume BD02 is
online on a host. Volume BD03 is not online on any host and not in a CS relationship. The
rmckdvol BD02-BD03 command deletes only volume BD03, which is offline. To delete volume
BD02, use the -force parameter.
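A minimal sketch of the commands that Example 10-67 describes (output omitted):
dscli> rmckdvol BD02-BD03
dscli> rmckdvol -force BD02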
The command includes options to prevent the accidental reinitialization of volumes that are in
use. A CKD volume is considered to be in use if it is participating in a CS relationship or if the
IBM Z system path mask indicates that the volume is in a grouped state or online to any host
system. All data is lost when this command is used.
For more information about resource groups, see IBM System Storage DS8000 Copy
Services Scope Management and Resource Groups, REDP-4758.
Easy Tier Heat Map Transfer (HMT) allows the transfer of Easy Tier heat maps from primary
to auxiliary storage sites.
For more information about Easy Tier, see the following publications:
IBM DS8000 Easy Tier (Updated for DS8000 R9.0), REDP-4667
IBM DS8870 Easy Tier Heat Map Transfer, REDP-5015
10.5 Metrics with DS CLI
This section describes several command examples from the DS CLI that analyze the
performance metrics from different levels in a storage unit. The DS GUI also provides new
capabilities for performance monitoring, as described in 9.14, “Performance monitoring” on
page 322.
Important: The help command shows specific information about each of the metrics.
Performance metrics: All performance metrics are an accumulation starting from the
most recent counter-wrap or counter-reset. The performance counters are reset on the
following occurrences:
When the storage unit is turned on.
When a server fails and the failover and fallback sequence is run.
Example 10-69 shows an example of the showfbvol command. This command displays the
detailed properties for an individual volume and includes a -metrics parameter that returns
the performance counter-values for a specific volume ID.
Example 10-70 shows an example of the showckdvol command. This command displays the
detailed properties for an individual volume and includes a -metrics parameter that returns
the performance counter-values for a specific volume ID.
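A minimal sketch of the metrics queries that produce output like the fragment that follows (the volume IDs are hypothetical):
dscli> showfbvol -metrics 2000
dscli> showckdvol -metrics BD00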
phbytewrite 0
recmoreads 0
sfiletrkreads 0
contamwrts 0
PPRCtrks 0
NVSspallo 0
timephread 90
timephwrite 0
byteread 0
bytewrit 0
timeread 0
timewrite 0
zHPFRead 0
zHPFWrite 0
zHPFPrefetchReq 0
zHPFPrefetchHit 0
GMCollisionsSidefileCount 0
GMCollisionsSendSyncCount 0
Example 10-71 shows an example of the output of the showrank command. This command
generates two types of reports. One report displays the detailed properties of a specified
rank, and the other report displays the performance metrics of a specified rank by using the
-metrics parameter.
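A minimal sketch of the two report types (the rank ID is illustrative):
dscli> showrank R2
dscli> showrank -metrics R2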
Example 10-72 shows an example of the showioport command. This command shows the
properties of a specified I/O port and the performance metrics by using the -metrics
parameter. Monitoring the I/O ports is one of the most important tasks of the system
administrator. The I/O port is where the HBAs, SAN, and DS8900F exchange information. If
one of these components has problems because of hardware or configuration issues, all of
the other components are affected.
The output of the showioport command includes several metric counters. For example, the
%UtilizeCPU metric for the CPU utilization of the HBA and the CurrentSpeed that the port
uses might be useful information.
Example 10-72 on page 393 shows the many important metrics that are returned by the
command. It provides the performance counters of the port and the FC link error counters.
The FC link error counters are used to determine the health of the overall communication.
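A minimal sketch of the metrics query for an I/O port (the port ID matches the one that is used in Example 10-73):
dscli> showioport -metrics I0032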
Example 10-73 Full output for the showioport -rdp Ixxxx command for a specific I/O port
dscli> showioport -rdp I0032
ID I0032
WWPN 5005076309039462
Attached WWPN 200D00051EF0EC72
Physical Type FC-FS-3
Link Failure Error 0
Loss of sync Error 0
Loss of Signal Error 0
Primitive Sequence Error 0
Invalid Transmission Word Error 0
CRC Error 0
FEC Status Inactive
Uncorrected Blocks -
Corrected Blocks -
Port Speed Capabilities 8GFC 16GFC 32GFC
Port Operating Speed 8GFC
Advertised B-B Credit 90
Attached Port B-B Credit 8
Nominal RTT Link Latency Unknown
Connector Type SFP+
Tx Type Short Wave Laser
Transceiver Temperature 39.9 C [Operating Range -128 - +128 C]
Tx Bias Current 6.5 mAmps [Operating Range 0 - 131 mAmps]
Transceiver Supply Voltage 3364.4 mV [Operating Range 0 - 3600 mVolts]
Rx Power 448.8 uW(-3.5 dBm) [Operating Range 0 - 6550 uW]
Tx Power 681.3 uW(-1.7 dBm) [Operating Range 0 - 6550 uW]
Last SFP Read time 10/12/2019 09:48:04 CEST
======SFP Parameters Alarm Levels=======
Element High Warning Low Warning High Alarm Low Alarm
========================================================================
Transceiver Temperature 0 0 0 0
Tx Bias Current 0 0 0 0
Transceiver Supply Voltage 0 0 0 0
Tx Power 0 0 0 0
Rx Power 0 0 0 0
============================Attached Port=============================
ID N/A
WWPN 200D00051EF0EC72
Attached WWPN 5005076309039462
Physical Type FC-FS-3
Link Failure Error 0
Loss of sync Error 3
Loss of Signal Error 2
Primitive Sequence Error 0
Invalid Transmission Word Error 0
CRC Error 0
FEC Status Inactive
Uncorrected Blocks -
Corrected Blocks -
Port Speed Capabilities 1GFC 2GFC 4GFC 8GFC
Port Operating Speed 8GFC
Advertised B-B Credit 0
Attached Port B-B Credit 0
Nominal RTT Link Latency Unknown
Connector Type SFP+
Tx Type Short Wave Laser
Transceiver Temperature 39.0 C [Operating Range -128 - +128 C]
Tx Bias Current 9.0 mAmps [Operating Range 0 - 131 mAmps]
Transceiver Supply Voltage 3281.1 mV [Operating Range 0 - 3600 mVolts]
Rx Power 690.2 uW(-1.6 dBm) [Operating Range 0 - 6550 uW]
Tx Power 479.8 uW(-3.2 dBm) [Operating Range 0 - 6550 uW]
Last SFP Read time 10/12/2019 08:40:30 CEST
======SFP Parameters Alarm Levels=======
Element High Warning Low Warning High Alarm Low Alarm
========================================================================
Transceiver Temperature 0 0 0 0
Tx Bias Current 0 0 0 0
Transceiver Supply Voltage 0 0 0 0
Tx Power 0 0 0 0
Rx Power 0 0 0 0
dscli>
The result of the command in Example 10-74 is a .csv file with detailed information. For more
information, see Figure 10-9.
showaccess
This command displays the current setting for each access that is managed with the
manageaccess command. It also displays the remote service access settings that are
provided with the lsaccess command.
The following security commands are available to manage remote service access settings:
chaccess
Use the chaccess command to change the following settings of an HMC:
– Enable and disable the command-line shell access to the HMC.
– Enable and disable the WUI access on the HMC.
– Enable and disable Assist On-site (AOS) or IBM Remote Support Center (RSC) access
to the HMC.
Important:
This command affects service access only and does not change access to the
system by using the DS CLI or DS Storage Manager.
Only users with administrator authority can access this command.
lsaccess
The lsaccess command displays the access settings of an HMC. If you add the -l
parameter, the command also displays the AOS or RSC status. If AOS or RSC is active,
an AOS or RSC connection shows as enabled. An AOS or RSC connection is used only
for remote support purposes. For more information, see Example 10-75.
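A minimal sketch of querying the current access settings (output omitted; any required parameters of showaccess are omitted here):
dscli> lsaccess -l
dscli> showaccess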
The following commands enable the TLS protocol for secure syslog traffic. TLS must be
enabled before any syslog servers are configured. If you specify TLS, all syslog server
configurations use the same protocol and certificates.
mksyslogserver
Example 10-76 shows the new DS CLI command mksyslogserver, which configures a
syslog server as TLS-enabled. The certificate authority (CA) certificate, HMC certificate,
and HMC private key locations are required when the first syslog server is configured.
Important: For more information about security issues and overall security management
to implement NIST 800-131a compliance, see IBM DS8870 and NIST SP 800-131a
Compliance, REDP-5069.
The following DS CLI commands specify a custom certificate for communication between the
encryption key servers (typically IBM Security Key Lifecycle Manager) and the storage system:
managekeygrp -action importcert
Specifies the customer-generated certificate to import in Public-Key Cryptography
Standard (PKCS) #12 format, as shown in Example 10-78.
For more information, see IBM DS8000 Series Command-Line Interface User's Guide,
SC27-9562.
These commands are not described in this chapter. For more information about these
commands, see IBM DS8000 Copy Services: Updated for IBM DS8000 Release 9.1,
SG24-8367.
10.8 Earlier DS CLI commands and scripts
As versions of the DS8900F code evolve, new commands are introduced that are supported
by the DS CLI. Additionally, in some cases, older commands are adjusted or changed to
support new functions. To ensure continuity for any scripts and programs that customers
might have created by using these older commands, the DS CLI still supports many of these
commands even though they might no longer be documented in the DS CLI Command
Reference or available with the DS CLI help command, as shown in
Example 10-80.
Even though the command may no longer be referenced in the help pages of the DS CLI, the
command is still supported, as shown in Example 10-81.
Some new commands and their older equivalents are shown in Table 10-1.
In addition, the Licensed Internal Code (LIC) and internal operating system (OS) that run on
the Hardware Management Consoles (HMCs) and each central processor complex (CPC)
can be updated. As IBM continues to develop the DS8900F, new features are released
through new LIC levels.
When IBM releases new LIC for the DS8900F, it is released in the form of a bundle. The term
bundle is used because a new code release can include updates for various DS8900F
components. These updates are tested together, and then the various code packages are
bundled together into one unified release. Components within the bundle each include their
own revision levels.
For more information about a DS8900F cross-reference table of code bundles, see DS8900F
Code Bundle Information.
The cross-reference table shows the levels of code for released bundles. The cross-reference
information is updated as new code bundles are released.
In addition to keeping your LIC up to date, make sure to maintain a current version of the Data
Storage Command-line Interface (DS CLI).
The DS8900F uses the following naming convention for bundles: PR.MM.FFF.EEEE, where the
components are:
P: Product (8 = DS8000)
R: Release Major (X, 9 = DS8900F)
MM: Release Minor (xx)
FFF: Fix Level (xxx)
EEEE: Interim Fix Level (0 is the base, and 1..n is an interim fix build that is later than the base level)
The 9.30 in Example 11-1 stands for Release 9.3 without a Service Pack.
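For example, a hypothetical bundle number 89.30.120.0 decodes as product 8 (DS8000), major release 9 (DS8900F), minor release 30, fix level 120, and a base build (0) with no later interim fix.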
If DS CLI is used, you can obtain the CLI and LMC code level information by using the ver
command, as shown in Example 11-2. The ver command uses the following optional
parameters and displays the versions of the CLI, Storage Manager, and LMC:
-s (optional) The -s parameter displays the version of the CLI program. You cannot
use the -s and -l parameters together.
-l (optional) The -l parameter displays the versions of the CLI, Storage Manager,
and LMC. You cannot use the -l and -s parameters together.
-cli (optional) Displays the version of the CLI program. Version numbers are in the
format version.release.modification.fixlevel.
-stgmgr (optional) Displays the version of the Storage Manager.
This ID is not for the GUI (Storage Manager GUI). This ID relates to
HMC code bundle information.
-lmc (optional) Displays the version of the LMC.
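A minimal sketch of the query (output omitted):
dscli> ver -l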
The Bundle version (Release) also can be retrieved from the DS Storage Manager by clicking
Actions → Properties from the Dashboard window, as shown in Figure 11-1.
Important: The LMC is usually provided by and installed by IBM Remote Support
Personnel, or by an IBM Systems Service Representative (IBM SSR). With the release of
R9.3, the customer may manage the entire process from the DS8000 Storage Manager
GUI. When this process is handled by the IBM SSR or IBM Remote Support Personnel,
they review the “Prerequisites” section or “Attention Must Read” section in the LIC update
instructions, and inform the customer during the planning phase if any prerequisites must
be considered.
The bundle package contains the following new levels of updated code:
HMC code levels:
– HMC OS and managed system base
– DS Storage Manager
– IBM Copy Services Manager
Managed system code levels
Interim fix code levels
Storage facility image (SFI) code levels
Host adapter code levels
DA code level
I/O enclosure code level
Power code levels
Rack power control card (RPCC) code level
Flash Drive Enclosure Service Module (ESM) interface card code levels
Flash Drive enclosure power supply unit (PSU) code levels
Flash drive module firmware code level
The code is either updated remotely or locally at the HMC by an IBM SSR. Upgrading the
code remotely can be done by IBM through Remote Code Load (RCL) or by the client through
the DS8000 Storage Manager GUI. RCL is the default method. If the client wants the code
updated by the local IBM SSR onsite, then the Feature Code for remote code exception must
be ordered with the system.
Note: When the customer does not opt for Expert Care Premium, Customer Code Load is
the default on the DS8900F system.
Other than the actions of acquiring the microcode, the process of distribution and activation is
the same.
The Code Distribution and Activation (CDA) software preinstall is the method that is used to
run the concurrent code load (CCL) distribution. By using the CDA software preinstall, the
IBM SSR performs every non-impacting CCL step for loading code by inserting the physical
media into the primary HMC or by running a network acquisition of the code level that is
needed. The IBM SSR can also download the bundle to their Notebook and then load it on
the HMC by using a service tool, or download the bundle from IBM Fix Central on the HMC for
RCL.
After the CDA software preinstallation starts, the following steps occur automatically:
1. The release bundle is downloaded from either the DVD or network to the HMC hard disk
drive (HDD).
2. The HMC receives any code update-specific fixes.
3. Code updates are distributed to the logical partition (LPAR) and staged on an alternative
base operating system (BOS) repository.
4. Scheduled precheck scans are performed until the distributed code is activated by the
user. After 30 days without activation, the code expires and is automatically removed from
the alternative BOS.
Anytime after the software preinstallation completes, when the user logs in to the primary
HMC, the user is guided automatically to correct any serviceable events that might be open,
update the HMC, and activate the previously distributed code on the storage facility. The
overall process is also known as CCL.
Although the microcode installation process might seem complex, it does not require
significant user intervention. IBM Remote Support Personnel normally start the CDA process
and then monitor its progress by using the HMC. The customer’s experience with upgrading
the code of the DS8000 is the same.
Important: The default setting for this feature is off, but it can be enabled in the Storage
Manager GUI. For more information, contact your IBM SSR.
To enable this feature, log in to the Storage Manager GUI, select Settings → System →
Advanced, and select Automatic code management, as shown in Figure 11-2.
To address this situation, a new HMC Code Image Server function was introduced in DS8000
Release 9.0. With the HMC Code Image Server function, a single HMC in the customer data
center can acquire code from IBM Fix Central. One HMC sends those images to other HMCs
by using the client Ethernet network. The advantage of this approach is that there is no need
to download the image from IBM Fix Central multiple times, and the code bundles can be
copied locally by using that download.
The HMC Code Image Server function works with bundle images and other updates, such as
ICS images. HMC Recovery Images are also served if they are available on the source HMC.
Figure 11-3 shows the two new menu options in the Updates menu of the service console
Web User Interface (WUI).
At the site where the code bundle was acquired and downloaded, the HMC Code Image
Server function must be enabled. The target site then uses the Remote File Download
function to copy the available code bundles to a local repository. All images on the source
HMC are copied to the target HMC.
This process copies only the update image files to the local /extra/BundleImage/ directory
on the target HMC. Then, the normal acquisition step still must be performed, and the local
directory on the target HMC must be selected, as shown on Figure 11-4.
Note: Bundles that were acquired from physical DVD media cannot be served by the HMC
Code Image Server function because they are imported directly into the HMC software
library and are not available as a single bundle image afterward.
Figure 11-4 on page 411 also shows that it is possible to acquire a bundle from the storage
system LPAR. Every IBM Fix Central acquired image is copied to the HMC and the LPARs of
the storage system and then imported into the HMC library. Because there is a copy on the
LPAR, the partner HMC can now use the LPAR as a source for the acquisition step. This
action can be done on both HMCs on the same storage system because only these HMCs
have access to the internal network to copy the files from the LPARs. Copying from the LPARs
does not require using the HMC Code Image Server function.
RCL is a trusted process where an IBM Remote Support engineer securely connects to a
DS8000 system, enables the remote acquisition, and performs the distribution and activation
of LIC bundles and ICS images.
The RCL process is concurrent, that is, it can be run without interruptions to business
operations. This process consists of the following steps, as illustrated in Figure 11-5.
1. IBM Remote Support Personnel work with IBM Technical Advisors to plan the microcode
update and ensure that the client’s environment is considered during the planning phase.
2. When an RCL is agreed on and scheduled, IBM Remote Support Personnel in the
IBM Support Center initiate a session with the target HMC.
3. During the agreed on maintenance window, IBM Remote Support Personnel direct the
HMC to acquire the code images from the IBM Fix Central repository and prepare for code
activation.
4. During the customer maintenance window, IBM Remote Support Personnel initiate the
activation request and update the HMCs and DS8000 to the new target microcode level.
Code bundles are pulled to the HMC. They are not pushed.
Note: Customer Code Load runs the same background processes as the RCL.
There is a 30-day countdown between these two parts of the upgrade in which the client can
decide when to proceed with the second part. During the 30 days, the system stores the
downloaded code bundle, and it can be activated at any point.
Figure 11-6 shows the Update System menu when there are no code bundles downloaded to
the DS8000 storage system. The Health Check and Activate option is enabled after the
download of the microcode completes.
Important: The storage system must be in a healthy hardware status to avoid issues
during the code upgrade process. For that reason, at any point in time, the client can select
the option Health Check to confirm whether the system is ready for the upgrade.
Figure 11-7 Microcode level query
2. After the code level is selected, the process downloads the new code bundle to the
DS8000 Hardware Management Console, and then distributes the separate firmware
packages to each internal component in the DS8000 system.
3. The user can monitor the process until completion. After the download completes, click
Close Status, as shown in Figure 11-8.
Note: After completing this step, the Health Check option still is available, but it is not
mandatory to select it before proceeding with the code activation. A health check still is
performed before and after the activation when the user selects Health Check and
Activate.
5. To activate the code, select Health Check and Activate. A new attention message
appears in the Storage Manager GUI and notifies you about the actions that are about to
be performed, as shown in Figure 11-10 on page 417. To start, click Yes.
Figure 11-10 Code Activation Message
6. The code activation progress can be tracked in the Storage Manager GUI until the end.
After it completes, it displays a message confirming that the activation is complete (see
Figure 11-11).
If any unexpected situation occurs during the process, such as a hardware failure or a code
upgrade failure, the customer is notified in the window that is shown in Figure 11-11. A case
number is included in that notification, which can be used as a reference when the client
engages IBM Remote Support.
Best practice: Many clients with multiple DS8000 systems follow the update schedule that
is detailed in this chapter. In this schedule, the HMC is updated a day or two before the rest
of the bundle is applied. If a large gap exists between the present and destination level of
bundles, certain DS CLI commands (especially DS CLI commands that relate to IBM Copy
Services (CS)) might not be able to be run until the SFI is updated to the same level of the
HMC. Your IBM SSR or IBM Technical Advisor or Technical Account Manager can help you
in this situation.
Before you update the CPC OS and LIC, a pre-verification test is run to ensure that no
conditions exist that prohibit a successful code load. The HMC code update installs the latest
version of the pre-verification test. Then, the newest test can be run.
If problems are detected, one or two days are available before the scheduled code installation
window date to correct them. This procedure is shown in the following example:
Thursday:
a. Acquire the new code bundle and send it to the HMCs.
b. Update the HMCs to the new code bundle.
c. Run the updated pre-verification test.
d. Resolve any issues that were identified by the pre-verification test.
Saturday:
Update (Activate) the SFI code.
The average code load time varies depending on the hardware that is installed, but 2.5 - 4
hours is normal. Always speak with your IBM SSR about proposed code load schedules.
Additionally, check multipathing drivers and storage area network (SAN) switch firmware
levels for their current levels at regular intervals.
This fast update means that single path hosts, hosts that boot from SAN, and hosts that do
not have multipathing software do not need to be shut down during the update. They can
keep operating during the host adapter update because the update is so fast. Also, no
Subsystem Device Driver (SDD) path management is necessary.
The interactive host adapter update mode also can be enabled if you want to control the host paths manually. If
so, before the host adapters are updated, a notification is sent and a confirmation is needed.
You can then take the corresponding host paths offline and switch to other available paths.
This function is usually enabled by default. For more information about how to enable this
function, contact your IBM SSR.
CUIR allows the DS8900F to request that all attached system images set all paths that are
required for a particular service action to the offline state. System images with the correct
level of software support respond to these requests by varying off the affected paths. The
image then notifies the DS8900F subsystem that the paths are offline or that it cannot take
the paths offline. CUIR reduces manual operator intervention and the possibility of human
error during maintenance actions.
CUIR also reduces the time that is required for the maintenance window. This feature is
useful in environments in which many systems are attached to a DS8900F.
Starting with Release 9.3, loading the microcode can now be performed entirely by the client.
The microcode is downloaded from IBM Fix Central, and the client can perform all the steps
by using the DS8000 Storage Manager GUI. For more information, see 11.2.2, “Customer
Code Load” on page 413.
FPCCL is automatically set as the preferred code load function, assuming that the
requirements of the bundle to be activated satisfy the requirements for FPCCL.
The FPCCL requirements were expanded to include the following features. The delta
between the “coming from” level and the “go to” level can consist of these elements:
SFI code:
– LPAR code
– DA
High-Performance Flash Enclosure (HPFE):
– Small Computer System Interface (SCSI) Enclosure Services (SES) processor
firmware
– PSU firmware
Host adapter firmware
AIX interim fix
IBM Power firmware for iPDUs and RPCCs
Important: The code load function reverts to traditional CCL if any additional components,
other than the components that are listed previously, are included in the update.
FPCCL includes an autonomic recovery function, which means that FPCCL is far more
tolerant to temporary non-critical errors that might surface during the activation. During the
FPCCL, if an error is posted, the LIC automatically analyzes the error and evaluates whether
CCL can continue. If it cannot, the LIC suspends the CCL and calls for service. The DS8900F
system can continue with the code update with tolerable errors. The DS8900F FPCCL update
is more robust with a much shorter duration. After the code update completes, your IBM SSR
works to resolve any of the problems that were generated during the code update at a
convenient time, allowing DS8900F clients to schedule the code update in a controlled
manner.
During an update, the system runs with reduced redundancy because certain components
are undergoing a firmware update. With FPCCL, firmware activation time is drastically
reduced, so the exposure to these reduced-redundancy periods is shorter. In addition,
firmware distribution time is minimized because fewer components are involved in the code
update.
The CCL duration of the DS8000 family continues to improve with the introduction of new
technology. With the latest DS8900F firmware, the LIC preinstall (code distribution and HMC
update) can be arranged before your code update service window so that the window itself
covers only the code activation. The activation times of various components are also greatly
reduced.
11.7 Postinstallation activities
After a new code bundle is installed, you might need to complete the following tasks:
1. Upgrade the DS CLI of external workstations. For most new release code bundles, a
corresponding new release of the DS CLI is available. The LMC version and the DS CLI
version are usually identical. Ensure that you upgrade to the new version of the DS CLI to
take advantage of any improvements.
A current version of the DS CLI can be downloaded from IBM Fix Central.
2. Verify the connectivity from each DS CLI workstation to the DS8900F.
3. Verify the DS Storage Manager connectivity by using a supported browser.
4. Verify the DS Storage Manager connectivity from IBM Spectrum Control to the DS8900F.
5. Verify the DS Storage Manager connectivity from IBM Copy Services Manager to the
DS8900F.
6. Verify the connectivity from the DS8900F to all IBM Security Guardium Key Lifecycle
Manager instances, or other servers in use.
11.8 Summary
IBM might release changes to the DS8900F LMC. These changes might include code fixes
and feature updates that relate to the DS8900F.
These updates and the information about them are documented on the DS8900F Code
cross-reference website. You can find this information for a specific bundle under the Bundle
Release Note information section of the IBM DS8000 Code Recommendations page.
The chapter also describes the outbound (Call Home and support data offload) and inbound
(code download and remote support) communications for the IBM DS8000 family.
SNMP alert traps provide information about problems that the storage unit detects. You or the
service provider must correct the problems that the traps detect.
The DS8900F does not include an installed SNMP agent that can respond to SNMP polling.
The default Community Name parameter is set to public.
The management server that is configured to receive the SNMP traps receives all of the
generic trap 6 and specific trap 3 messages, which are sent in parallel with the call home to
IBM.
To configure SNMP for the DS8900F, first get the destination address for the SNMP trap and
information about the port on which the trap daemon listens.
Standard port: The standard port for SNMP traps is port 162.
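If a full SNMP management product is not yet in place at the configured destination, a quick way to confirm that trap datagrams from the HMC arrive is to listen on UDP port 162 with a small script. The following Python sketch is illustrative only and is not part of the DS8000 tooling or the DS CLI; it does not decode the trap contents, and binding to a port below 1024 usually requires administrator or root privileges.

# Minimal sketch: confirm that SNMP trap datagrams reach this host on UDP 162.
# This is not an SNMP manager and does not parse the trap OIDs.
import socket

TRAP_PORT = 162  # standard SNMP trap port

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.bind(("0.0.0.0", TRAP_PORT))
    print(f"Listening for SNMP traps on UDP port {TRAP_PORT} ...")
    while True:
        datagram, (source_ip, source_port) = sock.recvfrom(65535)
        print(f"Received {len(datagram)} bytes from {source_ip}")

After the test trap is sent from the HMC (see the configuration steps later in this chapter), a received datagram confirms that the network path and port are open.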
The Management Information Base (MIB) file is in the snmp subdirectory on the Data Storage
Command-Line Interface (DS CLI) installation CD or the DS CLI installation CD image. The image is available at IBM Fix Central.
Each trap message contains an object identifier (OID) and a value, as shown in Table 12-1 on
page 424, to notify you about the cause of the trap message. You can also use type 6, the
enterpriseSpecific trap type, when you must send messages that do not fit the predefined trap
types. For example, the DS8900F uses this type for notifications that are described in this
chapter.
A serviceable event is posted as a generic trap 6, specific trap 3 message. The specific trap 3
is the only event that is sent for serviceable events and hardware service-related actions
(data offload and remote secure connection). For reporting CS events, generic trap 6 and
specific traps 100, 101, 102, 200, 202, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220,
225, or 226 are sent.
Note: Consistency group traps (200 and 201) must be prioritized above all other traps.
They must be surfaced in less than 2 seconds from the real-time incident.
The SNMP trap is sent in parallel with a call home for service to IBM and email notification (if
configured).
For open events in the event log, a trap is sent every 8 hours until the event is closed.
This chapter describes only the messages and the circumstances when traps are sent by the
DS8900F. For more information about these functions and terms, see IBM DS8000 Copy
Services: Updated for IBM DS8000 Release 9.1, SG24-8367.
If one or several links (but not all links) are interrupted, a trap 100 (Example 12-2) is posted.
Trap 100 indicates that the redundancy is degraded. The reference code (RC) column in the
trap represents the return code for the interruption of the link.
Example 12-2 Trap 100: Remote Mirror and Remote Copy links degraded
PPRC Links Degraded
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-981 75-ZA571 12
SEC: IBM 2107-981 75-CYK71 24
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 15
2: FIBRE 0213 XXXXXX 0140 XXXXXX OK
If all of the links are interrupted, a trap 101 (Example 12-3) is posted. This event indicates that
no communication between the primary and the secondary system is possible.
Example 12-3 Trap 101: Remote Mirror and Remote Copy links are inoperable
PPRC Links Down
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-981 75-ZA571 10
SEC: IBM 2107-981 75-CYK71 20
Path: Type PP PLink SP SLink RC
1: FIBRE 0143 XXXXXX 0010 XXXXXX 17
2: FIBRE 0213 XXXXXX 0140 XXXXXX 17
When the DS8900F can communicate again over one or more of the previously interrupted
links, trap 102 (Example 12-4) is sent.
Example 12-4 Trap 102: Remote Mirror and Remote Copy links are operational
PPRC Links Up
UNIT: Mnf Type-Mod SerialNm LS
PRI: IBM 2107-981 75-ZA571 21
SEC: IBM 2107-981 75-CYK71 11
Path: Type PP PLink SP SLink RC
1: FIBRE 0010 XXXXXX 0143 XXXXXX OK
2: FIBRE 0140 XXXXXX 0213 XXXXXX OK
Example 12-5 Trap 200: LSS pair consistency group Remote Mirror and Remote Copy pair error
LSS-Pair Consistency Group PPRC-Pair Error
UNIT: Mnf Type-Mod SerialNm LS LD SR
PRI: IBM 2107-981 75-ZA571 84 08
SEC: IBM 2107-981 75-CYM31 54 84
Trap 202, as shown in Example 12-6, is sent if a Remote Copy pair goes into a suspend state.
The trap contains the serial number (SerialNm) of the primary and secondary machine, the
LSS (LS), and the logical device (LD). To avoid SNMP trap flooding, the number of SNMP traps
for the LSS is throttled. The complete suspended pair information is represented in the
summary.
The last row of the trap represents the suspend state of all pairs in the reporting LSS. The
suspended pair information is a hexadecimal string of 64 characters. When this string is
converted to binary, each bit represents a single device: if the bit is 1, the device is
suspended; otherwise, the device is still in full duplex mode.
Example 12-6 Trap 202: Primary Remote Mirror and Remote Copy devices on LSS suspended due to
an error
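As an illustration of the conversion that is described above, the following Python sketch (not part of the DS8000 tooling; the sample string and the assumption that the first hexadecimal digit maps to the lowest device numbers are for illustration only) expands the 64-character hexadecimal string into its 256 bits and lists the positions of the devices that are reported as suspended.

# Minimal sketch: decode the 64-character suspended-pair hex string of a
# trap 202 message. A bit value of 1 means the device at that position is
# suspended; 0 means the device is still in full duplex mode.
def suspended_devices(hex_string: str) -> list[int]:
    bits = bin(int(hex_string, 16))[2:].zfill(len(hex_string) * 4)
    return [device for device, bit in enumerate(bits) if bit == "1"]

# Hypothetical string: only the first hex digit is nonzero (binary 1001),
# so devices 0 and 3 of the LSS are reported as suspended.
print(suspended_devices("9" + "0" * 63))  # prints [0, 3]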
Example 12-7 Trap 210: Global Mirror initial consistency group successfully formed
Asynchronous PPRC Initial Consistency Group Successfully Formed
UNIT: Mnf Type-Mod SerialNm
IBM 2107-981 75-ZA571
Session ID: 4002
Trap 211, as shown in Example 12-8, is sent if the GM setup is in a severe error state in which
no attempts are made to form a consistency group.
Trap 212, as shown in Example 12-9, is sent when a consistency group cannot be created in
a GM relationship for one of the following reasons:
Volumes were taken out of a copy session.
The Remote Copy link bandwidth might not be sufficient.
The Fibre Channel (FC) link between the primary and secondary system is not available.
Example 12-9 Trap 212: Global Mirror consistency group failure - Retry is attempted
Asynchronous PPRC Consistency Group Failure - Retry will be attempted
UNIT: Mnf Type-Mod SerialNm
IBM 2107-981 75-ZA571
Session ID: 4002
Example 12-10 Trap 213: Global Mirror consistency group successful recovery
Asynchronous PPRC Consistency Group Successful Recovery
UNIT: Mnf Type-Mod SerialNm
IBM 2107-981 75-ZA571
Session ID: 4002
Trap 214, as shown in Example 12-11, is sent if a GM session is ended by using the DS CLI
rmgmir command or the corresponding GUI function.
As shown in Example 12-12, trap 215 is sent if, in the GM environment, the master detects a
failure to complete the FlashCopy commit. The trap is sent after many commit retries fail.
Example 12-12 Trap 215: Global Mirror FlashCopy at remote site unsuccessful
Asynchronous PPRC FlashCopy at Remote Site Unsuccessful
UNIT: Mnf Type-Mod SerialNm
IBM 2107-981 75-CZM21
Session ID: 4002
Trap 216, as shown in Example 12-13, is sent if a GM master cannot end the GC relationship
at one of its subordinates. This error might occur if the master is ended by using the rmgmir
command but the master cannot end the copy relationship on the subordinate.
You might need to run an rmgmir command against the subordinate to prevent any
interference with other GM sessions.
Trap 218, as shown in Example 12-15, is sent if a GM session exceeds the allowed threshold
for failed consistency group formation attempts.
Example 12-15 Trap 218: Global Mirror number of consistency group failures exceeds threshold
Global Mirror number of consistency group failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-981 75-ZA571
Session ID: 4002
Example 12-16 Trap 219: Global Mirror first successful consistency group after prior failures
Global Mirror first successful consistency group after prior failures
UNIT: Mnf Type-Mod SerialNm
IBM 2107-981 75-ZA571
Session ID: 4002
Example 12-17 Trap 220: Global Mirror number of FlashCopy commit failures exceeds threshold
Global Mirror number of FlashCopy commit failures exceed threshold
UNIT: Mnf Type-Mod SerialNm
IBM 2107-981 75-ZA571
Session ID: 4002
Trap 225, as shown in Example 12-18, is sent when a GM operation has paused on the
consistency group boundary.
Example 12-18 Trap 225: Global Mirror paused on the consistency group boundary
Global Mirror operation has paused on the consistency group boundary
UNIT: Mnf Type-Mod SerialNm
IBM 2107-981 75-CYM31
Session ID: 4002
Trap 226, as shown in Example 12-19, is sent when a GM operation fails to unsuspend one or
more GC members.
SR code Description
03 The host system sent a command to the primary volume of a Remote Mirror and
Remote Copy volume pair to suspend copy operations. The host system might
specify an immediate suspension or a suspension after the copy completes and the
volume pair reaches a full duplex state.
04 The host system sent a command to suspend the copy operations on the secondary
volume. During the suspension, the primary volume of the volume pair can still
accept updates, but updates are not copied to the secondary volume. The
out-of-sync tracks that are created between the volume pair are recorded in the
change recording feature of the primary volume.
05 Copy operations between the Remote Mirror and Remote Copy volume pair were
suspended by a primary storage unit secondary device status command. This
system resource code can be returned only by the secondary volume.
06 Copy operations between the Remote Mirror and Remote Copy volume pair were
suspended because of internal conditions in the storage unit. This system resource
code can be returned by the control unit of the primary volume or the secondary
volume.
07 Copy operations between the Remote Mirror and Remote Copy volume pair were
suspended when the auxiliary storage unit notified the primary storage unit of a
state change transition to the simplex state. The specified volume pair between the
storage units is no longer in a copy relationship.
09 The Remote Mirror and Remote Copy volume pair was suspended when the
primary or auxiliary storage unit was restarted or when the power was restored. The
paths to the auxiliary storage unit might not be unavailable if the primary storage
unit was turned off. If the auxiliary storage unit was turned off, the paths between
the storage units are restored automatically, if possible. After the paths are restored,
run the mkpprc command to resynchronize the specified volume pairs. Depending
on the state of the volume pairs, you might need to run the rmpprc command to
delete the volume pairs and run an mkpprc command to reestablish the volume
pairs.
0A The Remote Mirror and Remote Copy pair was suspended because the host issued
a command to freeze the Remote Mirror and Remote Copy group. This system
resource code can be returned only if a primary volume was queried.
Example 12-20 Trap 221: Space-efficient repository or overprovisioned volume reached a warning
Space Efficient Repository or Over-provisioned Volume has reached a warning watermark
Unit: Mnf Type-Mod SerialNm
IBM 2107-981 75-ZA571
Volume Type: repository
Reason Code: 1
Extent Pool ID: f2
Percentage Full: 100%
Example 12-21 Trap 223: Extent pool capacity reached a warning threshold
Extent Pool Capacity Threshold Reached
UNIT: Mnf Type-Mod SerialNm
IBM 2107-981 75-ZA571
Extent Pool ID: P1
Limit: 95%
Threshold: 95%
Status: 0
The network management server that is configured on the HMC receives all of the generic
trap 6, specific trap 3 messages, which are sent in parallel with any events that call home to
IBM.
The SNMP alerts can contain a combination of a generic and a specific alert trap. The Traps
list outlines the explanations for each of the possible combinations of generic and specific
alert traps. The format of the SNMP traps, the list, and the errors that are reported by SNMP
are described in the “Generic and specific alert traps” topic of the Troubleshooting section of
the DS8900F IBM Documentation.
SNMP alert traps provide information about problems that the storage unit detects. You or the
IBM SSR must perform corrective action for the related problems.
Note: To configure the operation-related traps, use the DS CLI, as shown in 12.3.3, “SNMP
configuration with the DS CLI” on page 436.
Complete the following steps to configure SNMP at the HMC.
1. Log in to the Service Management section on the HMC, as shown in Figure 12-1.
4. To verify the successful setup of your environment, create a Test Event on your DS8900F
MC by selecting the IP address and Test SNMP Trap, as shown in Figure 12-4.
You must check the SNMP server for the successful reception of the test trap.
6. The test generates the Service Reference Code BEB20010, and the SNMP server
receives the SNMP trap notification, as shown in Figure 12-6.
Errors that occur after the SNMP traps are configured on the HMC are sent to the SNMP
server, as shown in Example 12-22.
PMH=29xxx,075,724
Reporting HMC Hostname=ds8k-r9-xxxx.mainz.de.ibm.com."
dscli> showsp
Name IbmStoragePlex
desc -
acct -
SNMP Enabled
SNMPadd 10.10.10.1,10.10.10.2
emailnotify Disabled
emailaddr -
emailrelay Disabled
emailrelayaddr -
emailrelayhost -
numkssupported 4
The Management Information Base (MIB) file that is delivered with the latest DS8900F DS CLI
CD is compatible with all previous levels of DS8900F Licensed Internal Code (LIC) and with
previous generations of the DS8000 product family. Therefore, ensure that you load the latest
available MIB file.
The benefit of remote support is that IBM Support can respond quickly to events that are
reported by you or by the system.
The following features can be enabled in the DS8900F for remote support:
Call Home support (outbound remote support):
– Reporting problems to IBM
– Sending heartbeat information
– Offloading data
Remote service (inbound remote support): IBM Support accesses the DS8900F HMC
through a network-based connection.
During the installation and planning phase, complete the remote support worksheets and
supply them to the IBM SSR at the time of the installation.
The worksheets have information about your remote support preferences and the network
communication requirements that must be fulfilled by your local network.
Although the MC is based on a Linux operating system (OS), IBM disabled or removed all
unnecessary services, processes, and IDs, including standard internet services such as
Telnet (the Telnet server is disabled on the HMC), File Transfer Protocol (FTP), the Berkeley
r commands, and Remote Procedure Call (RPC) programs.
Call Home
Call Home is the capability of the MC to report serviceable events to IBM. The MC also
transmits machine-reported product data (MRPD) information to IBM through Call Home. The
MRPD information includes installed hardware, configurations, and features. Call Home is
configured by the IBM SSR during the installation of the DS8900F by using the customer
worksheets. A test call home is placed after the installation to register the machine and verify
the Call Home function.
The heartbeat can be scheduled every 1 - 7 days based on the client’s preference. When a
scheduled heartbeat fails to transmit, a service call with an action plan to verify the Call
Home function is sent to an IBM SSR. The DS8900F uses an internet connection that is
secured with Transport Layer Security (TLS), which is also known as Secure Sockets Layer
(SSL), for Call Home functions.
The entire bundle is collected together in a PEPackage. A DS8900F PEPackage can be large,
often exceeding 100 MB. In certain cases, more than one PEPackage might be needed to
diagnose a problem correctly. In certain cases, the IBM Support Center might need an extra
memory dump that is internally created by the DS8900F or manually created through the
intervention of an operator.
OnDemand Data Dump: The OnDemand Data Dump (ODD) provides a mechanism that
allows the collection of debug data for error scenarios. With ODD, IBM can collect data with
no impact to the host I/O after an initial error occurs. ODD can be generated by using the
DS CLI command diagsi -action odd and then offloaded.
The MC is a focal point for gathering and storing all of the data packages. Therefore, the MC
must be accessible if a service action requires the information. The data packages must be
offloaded from the MC and sent to IBM for analysis. The offload is performed over the
internet through a TLS connection.
When the internet is selected as the outbound connectivity method, the MC uses a TLS
connection over the internet to connect to IBM. For more information about IBM TLS remote
support, planning, and worksheets, see IBM DS8900F Introduction and Planning Guide,
SC27-9560.
Using the CLI to export data
The offloadfile command provides clients with the ability to export different sets of data
files. Data sets include the audit log, the IBM Certified Secure Data Overwrite (SDO)
certificate, the IBM Easy Tier summary data, the configuration settings, packages to be used
by product support teams, the performance summary, the system summary file, and Easy
Tier files. Example 12-24 shows exporting the configuration files.
Note: The offloadfile command cannot be run from the embedded DS CLI window.
Having inbound access enabled can greatly reduce the problem resolution time because
there is no need to wait for the IBM SSR to arrive onsite to gather problem data and upload it
to IBM. With the DS8900F, the following inbound connectivity options are available to the client:
External Assist On-site (AOS) Gateway
Embedded remote access feature
The remote support access connection cannot be used to send support data to IBM.
The support data offload always uses the Call Home feature.
IBM Support encourages you to use AOS as your remote access method.
The remote access connection is secured with TLS 1.2. In addition, a mechanism is
implemented so that the HMC communicates only as an outbound connection, but you must
specifically allow IBM to connect to the HMC. You can compare this function to a modem that
picks up incoming calls. The DS8900F documentation refers to this situation as an
unattended service.
For more information, see 12.8.4, “Support access management through the DS CLI and DS
GUI” on page 441.
When you prefer to have a centralized access point for IBM Support, then an AOS Gateway
might be the correct solution. With the AOS Gateway, you install the AOS software externally
to a DS8900F HMC. You must install the AOS software on a system that you provide and
maintain. IBM Support provides only the AOS software package. Through port-forwarding on
an AOS Gateway, you can configure remote access to one or more DS8900F systems or
other IBM storage systems.
A simple AOS connection to the DS8000 is shown in Figure 12-7. For more information about
AOS, prerequisites, and installation, see IBM Assist On-site for Storage Overview,
REDP-4889.
The IBM SSR configures AOS during the installation or at a later point by entering information
that is provided in the inbound remote support worksheet. The worksheets can be found in
IBM DS8900F Introduction and Planning Guide, SC27-9560, or in the “Planning” section of
the DS8900F IBM Documentation.
In addition, your firewall must allow outbound traffic from the HMC to the AOS infrastructure.
The inbound remote support worksheet provides information about the required firewall
changes.
For more information about AOS, see IBM Assist On-site for Storage Overview, REDP-4889.
Access to the DS8000 by using RSC is controlled by using either the DS GUI or DS CLI. For
more information about RSC, contact your IBM SSR.
Figure 12-8 Controlling service access through the DS Storage Manager GUI
Using DS CLI to manage service access
You can manage service access to the DS8900F by using DS CLI commands. The following
user access security commands are available:
manageaccess: This command manages the security protocol access settings of an MC for
all communications to and from the DS8000 system. You can also use the manageaccess
command to start or stop outbound virtual private network (VPN) connections instead of
using the setvpn command.
chaccess: This command changes one or more access settings of an HMC. Only users
with administrator authority can access this command. See the command output in
Example 12-25.
chaccess [-commandline enable | disable] [-wui enable | disable] [-aos enable |
disable] [-rsc enable | disable] hmc1 | hmc2
lsaccess: This command displays the access settings of the primary and backup MCs:
lsaccess [hmc1 | hmc2]
See the output in Example 12-26.
Important: The hmc1 value specifies the primary HMC, and the hmc2 value specifies the
secondary HMC, regardless of how -hmc 1 and -hmc 2 were specified during DS CLI start.
A DS CLI connection might succeed even if a user inadvertently specifies a primary HMC
by using -hmc 2 and the secondary backup HMC by using -hmc 1 at DS CLI start.
This on-demand audit log mechanism is sufficient for client security requirements for HMC
remote access notification.
In addition to the audit log, email notifications and SNMP traps also can be configured at the
MC to send notifications in a remote support connection.
The DS CLI offloadauditlog command provides clients with the ability to offload the audit
logs to the client’s DS CLI workstation into a directory of their choice, as shown in
Example 12-27.
The audit log can be exported by using the DS GUI on the Events window by clicking the
Download icon and then selecting Export Audit Log, as shown in Figure 12-11.
The downloaded audit log is a text file that provides information about when a remote access
session started and ended, and the remote authority level that was applied. A portion of the
downloaded file is shown in Example 12-28.
Example 12-28 Audit log entries that relate to a remote support event
MST,,1,IBM.2107-75ZA570,N,8036,Authority_to_root,Challenge Key = 'Fy31@C37';
Authority_upgrade_to_root,,,
U,2021/10/02 12:09:49:000
MST,customer,1,IBM.2107-75ZA570,N,8020,WUI_session_started,,,,
U,2021/10/02 13:35:30:000
MST,customer,1,IBM.2107-75ZA570,N,8022,WUI_session_logoff,WUI_session_ended_logged
off,,,
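Because the downloaded audit log is plain text, it can be scanned with any text-processing tool. The following Python sketch (the local file name is hypothetical) lists only the entries that relate to remote access, such as WUI session start and end events and root authority upgrades.

# Minimal sketch: filter a downloaded DS8900F audit log for entries that
# relate to remote support access. The file name is an example only.
KEYWORDS = ("WUI_session_started", "WUI_session_logoff", "Authority_to_root")

with open("ds8900f_audit.log", encoding="utf-8") as audit_log:
    for line in audit_log:
        if any(keyword in line for keyword in KEYWORDS):
            print(line.rstrip())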
The challenge key that is presented to the IBM SSR is a part of a two-factor authentication
method that is enforced on the MC. It is a token that is shown to the IBM SSR who connects
to the DS8900F. The IBM SSR must use the challenge key in an IBM internal system to
generate a response key that is given to the HMC. The response key acts as a one-time
authorization to the features of the HMC. The challenge and response keys change when a
remote connection is made.
The challenge-response process must be repeated if the SSR needs higher privileges to
access the MC command-line environment. No direct user login and no root login are
available on a DS8900F.
Entries are added to the audit file only after the operation completes, when all information
about the request and its completion status is known. A single entry is used to log the
request and response information. It is possible, though unlikely, that an operation does not complete
because of an operation timeout. In this case, no entry is made in the log.
Audit logs are automatically trimmed (first in, first out (FIFO)) by the subsystem so that they
do not use more than 50 MB of disk storage.
By combining features such as Call Home, data collectors, a streamlined ticketing process,
and proactive support, problem resolution gains speed, and the stability, capacity, and
performance of the DS8900F can be managed more efficiently.
If a problem occurs, you receive help promptly through the unified support experience by
completing the following tasks:
Open IBM Support tickets for a resource and automatically add a log package to the ticket.
Update tickets with a new log package.
View the ticket history of open and closed tickets for a device.
A lightweight data collector is installed in your data center to stream performance, capacity,
asset, and configuration metadata to your IBM Cloud instance.
The metadata flows in one direction, that is, from your data center to IBM Cloud over HTTPS.
In IBM Cloud, your metadata is protected by physical, organizational, access, and security
controls.
2. Install a data collector in your data center to stream performance, capacity, and
configuration metadata about storage systems to IBM Storage Insights. Select
Configuration Data Collectors, click Deploy Data Collectors, and get started, as shown
in Figure 12-12:
a. Choose your preferred OS to download the data collector (Windows, Linux, or AIX).
b. Extract the contents of the data collector file onto the virtual machine (VM) or server
where you want it to run. 1 GB of memory and 1 GB of disk space are required.
c. Run installDataCollectorService.sh (Linux or AIX) or
installDataCollectorService.bat (Windows).
An example of an IBM Storage Insights Pro resources view is the detailed volume information
that is shown in Figure 12-14.
By using IBM Storage Insights, IBM Remote Support personnel can collect log packages from
a device. By default, this feature is not enabled. After the device is added, you must enable
the option for IBM Support to collect logs from the device.
To enable this option for IBM Support, select Configuration → Settings → IBM Support
Log Permission, and then select Edit.
For more information about IBM Storage Insights, see IBM Documentation.
2. A new window opens with two options from which to select. Click Create Ticket, as shown
in Figure 12-17 on page 451.
Figure 12-17 IBM Storage Insights: Create Ticket
Figure 12-18 shows the process of collecting the needed ticket information, which includes
the details of the DS8000 storage system.
4. Click Next. In the window that is shown in Figure 12-20, select the severity of the ticket.
5. In the window that is shown in Figure 12-21, select either Software/I don’t know or
Hardware, depending on whether you have a software problem or a hardware problem.
6. Review and verify the details of the IBM ticket that you are about to open, as shown in
Figure 12-22. Provide the name of the contact person, along with a valid contact phone
number and email address.
3. In the window that is shown in Figure 12-24, you can either select one of the open tickets
for this machine or type in the ticket number manually.
4. After you select the ticket number, an automatic process begins that generates a log
package.
5. In the window that is shown in Figure 12-25, you can leave a message for the IBM Support
personnel working on the ticket. Also, you can attach any files that are related to the ticket.
6. In the window that is shown in Figure 12-26, you see a summary of the update that is
about to be added to the ticket. To complete the process, click Update Ticket.
To set alert policies, select Configuration → Alert Policies, as shown in Figure 12-27.
There are many predefined default policies, but you also can create a policy by selecting
Create Policy. This action opens a window where you define the policy name, and select the
policy type and type of storage system, as shown in Figure 12-28.
Select Create, and the policy is created. Then, define the alert definitions and save the
changes, as shown in Figure 12-29 on page 457.
Figure 12-29 IBM Storage Insights: Alert Definitions
To work with the application, you must first register for IBM Call Home Connect Cloud. After
you have access to IBM Call Home Connect Cloud, register each of your IBM assets by
providing its machine type, model, serial number, customer number, and country code. After
you complete this task for each device, you can see the devices in the mobile application.
Figure 12-30 IBM Call Home Connect Anywhere: iOS mobile device showing a DS8910F ticket
Abbreviations and acronyms
AC active
ACK acknowledgment
AIXPCM AIX Multipath I/O Path Control Module
AMP Adaptive Multi-stream Prefetching
ANSI American National Standards Institute
AOS Assist On-site
API application programming interface
ASIC application-specific integrated circuit
ATS Atomic Test and Set
BF Base Function
BOS base operating system
BPM Backup Power Module
BTU British thermal unit
CA certificate authority
CCL concurrent code load
CDA Code Distribution and Activation
CEC Central Electronics Complex
CIM Common Information Model
CKD Count Key Data
CMA cable management arm
CPC central processor complex
CRC cyclic redundancy check
CS Copy Services
CSI Container Storage Interface
CSR certificate signing request
CSV comma-separated values
CUADD control unit address
CUIR control-unit initiated reconfiguration
DA device adapter
DC-UPS direct current uninterruptible power supply
DDM disk drive module
DIF Data Integrity Field
DIMM dual inline memory module
DMA direct memory access
DPR Dynamic Path Reconnect
DPS Dynamic Path Selection
DR disaster recovery
DRP disaster recovery planning
DSFA data storage feature activation
DSNI DS Network Interface
DVE Dynamic Volume Expansion
DVR Dynamic Volume Relocation
DWDM dense wavelength division multiplexing
EAM extent allocation method
EAV extended address volume
ECC error correction code
eDRAM embedded dynamic random access memory
EGP Exterior Gateway Protocol
EKM Encryption Key Manager
EOS end of service
EPOW emergency power-off warning
ER Energy Report
ESCC IBM EMEA Storage Competence Center
ESE extent space efficient
ESM Enclosure Service Module
FB Fixed-Block
FC Fibre Channel
FC-AL Fibre Channel Arbitrated Loop
FCIC Fibre Channel Interface Card
FCP Fibre Channel Protocol
FDE Full Disk Encryption
FEC Forward Error Correction
FFDC first-failure data capture
FICON Fibre Channel connection
FIDR FICON Dynamic Routing
FPCCL fast path concurrent code load
FQDN fully qualified domain name
FRU field-replaceable unit
FSP flexible service processor
GbE gigabit Ethernet
GC Global Copy
GDPS Geographically Dispersed Parallel Sysplex
GFC gigabit Fibre Channel
GiB gibibyte
RAID redundant array of independent disks
RAS reliability, availability, and serviceability
RC reference code
RCD register clock driver
RCL Remote Code Load
RDP Read Diagnostic Parameters
REST Representational State Transfer
RMZ Remote Mirror for z/OS
RPC Remote Procedure Call
RPCC rack power control card
RPM revolutions per minute
RPQ Request for Price Quotation
RSC Remote Support Center
RSCN registered state change notification
SAM storage allocation method
SAN storage area network
SARC Sequential Adaptive Replacement Cache
SCM single-chip module
SCORE Storage Customer Opportunity Request
SPOF single point of failure
SR suspension reason
SRA Storage Replication Adapter
SRC system reference code
SSD solid-state drive
SSH Secure Shell
SSIC System Storage Interoperation Center
SSID subsystem identifier
SSL Secure Sockets Layer
SSO single sign-on
SW shortwave
TCE Translation Control Entry
TCO total cost of ownership
TCT Transparent Cloud Tiering
TLS Transport Layer Security
TSO Time Sharing Option
UA unit address
UCB unit control block
UPS uninterruptible power supply
URI Unified Resource Identifier
VAAI vStorage APIs for Array Integration
The publications that are listed in this section are considered suitable for a more detailed
description of the topics that are covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Some publications that are referenced in this list might be available in softcopy
only.
Best Practices for DS8000 and z/OS HyperSwap with Copy Services Manager,
SG24-8431
DS8000 Global Mirror Best Practices, REDP-5246
DS8870 Easy Tier Application, REDP-5014
Exploring the DS8870 RESTful API Implementation, REDP-5187
Getting Started with IBM zHyperLink for z/OS, REDP-5493
Getting Started with IBM Z Cyber Vault, SG24-8511
Getting started with z/OS Container Extensions and Docker, SG24-8457
IBM Assist On-site for Storage Overview, REDP-4889
IBM DS8000 Copy Services: Updated for IBM DS8000 Release 9.1, SG24-8367
IBM DS8000 Easy Tier (Updated for DS8000 R9.0), REDP-4667
IBM DS8000 Encryption for Data at Rest, Transparent Cloud Tiering, and Endpoint
Security (DS8000 Release 9.2), REDP-4500
IBM DS8000 High-Performance Flash Enclosure Gen2 (DS8000 R9.0), REDP-5422
IBM DS8900F and IBM Z Synergy: DS8900F Release 9.3 and z/OS 2.5, REDP-5186
IBM DS8000 Safeguarded Copy (Updated for DS8000 R9.2), REDP-5506
IBM DS8000 Transparent Cloud Tiering (DS8000 Release 9.2), SG24-8381
IBM DS8870 Easy Tier Heat Map Transfer, REDP-5015
IBM DS8870 and NIST SP 800-131a Compliance, REDP-5069
IBM DS8880 Thin Provisioning (Updated for Release 8.5), REDP-5343
IBM DS8900F Performance Best Practices and Monitoring, SG24-8501
IBM DS8900F Product Guide Release 9.3, REDP-5554
IBM DS8910F Model 993 Rack-Mounted Storage System Release 9.1, REDP-5566
IBM Fibre Channel Endpoint Security for IBM DS8900F and IBM Z, SG24-8455
IBM Power Systems S922, S914, and S924 Technical Overview and Introduction,
REDP-5497
IBM System Storage DS8000: Remote Pair FlashCopy (Preserve Mirror), REDP-4504
IBM System Storage DS8000: z/OS Distributed Data Backup, REDP-4701
IBM z15 (8561) Technical Guide, SG24-8851
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, web docs, drafts, and additional materials at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
IBM DS8000 Host Systems Attachment Guide, SC27-9563
IBM DS8000 Series Command-Line Interface User’s Guide, SC27-9562
IBM DS8900F Introduction and Planning Guide, SC27-9560
Online resources
These websites are also relevant as further information sources:
DS8000 IBM Documentation:
https://www.ibm.com/docs/en/ds8900
DS8000 Series Copy Services Fibre Channel Extension Support Matrix:
https://www.ibm.com/support/pages/ds8000-series-copy-services-fibre-channel-extension-support-matrix
DS8900F Code Bundle Information (includes Release Notes):
https://www.ibm.com/support/pages/node/1072022
IBM DS8000 Code Recommendations:
https://www.ibm.com/support/pages/ds8000-code-recommendation
IBM Fix Central:
https://www.ibm.com/support/fixcentral/swg/selectFixes?parent=Enterprise%20Storage%20Servers&product=ibm/Storage_Disk/DS8900F
IBM Remote Code Load:
https://www.ibm.com/support/pages/ibm-remote-code-load
IBM System Storage Interoperation Center (SSIC) for DS8000:
https://www.ibm.com/systems/support/storage/ssic/
Help from IBM
IBM Support and downloads
ibm.com/support
SG24-8456-03
ISBN 0738460753
Printed in U.S.A.
ibm.com/redbooks