IBM Virtualization Engine TS7700 with R2.0
Karan Singh
Søren Aakjær
John Khazraee
Tom Koudstaal
Aderson J.C. Pacini
Patrick Wolf
ibm.com/redbooks
International Technical Support Organization
November 2011
SG24-7975-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page xiii.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Summary of contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
2.5.1 Data integrity by volume ownership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
2.5.2 Allocation Assistance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.5.3 Copy policy management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
2.5.4 High availability and disaster recovery configurations . . . . . . . . . . . . . . . . . . . . 103
2.5.5 Selective Write Protect for Disaster Recovery testing. . . . . . . . . . . . . . . . . . . . . 111
2.5.6 Removal of a cluster from a grid and cluster clean-up . . . . . . . . . . . . . . . . . . . . 111
2.5.7 Joining of clusters at different code release levels . . . . . . . . . . . . . . . . . . . . . . . 115
5.2.3 HCD considerations for multi-cluster grid operation . . . . . . . . . . . . . . . . . . . . . . 296
5.2.4 More HCD and IOCP examples with a two-cluster grid . . . . . . . . . . . . . . . . . . . 298
5.2.5 Display and control your settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
5.2.6 Set values for the Missing Interrupt Handler . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
5.3 TS7700 Virtualization Engine software definitions for z/OS . . . . . . . . . . . . . . . . . . . . 306
5.3.1 z/OS and DFSMS/MVS system-managed tape . . . . . . . . . . . . . . . . . . . . . . . . . 306
5.3.2 Implementing Copy Export . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
5.4 Implementing Selective Device Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
5.4.1 Implementation of SDAC in z/OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
5.4.2 Implementation of SDAC from MI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
5.5 TS7700 SETTING function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
5.6 Software implementation in z/VM and z/VSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
5.6.1 General support information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
5.6.2 z/VM native support using DFSMS/VM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
5.6.3 Native z/VSE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
5.6.4 VM/ESA and z/VM guest support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
5.6.5 z/VSE as a z/VM guest using a VSE Guest Server (VGS) . . . . . . . . . . . . . . . . . 334
5.7 Software implementation in Transaction Processing Facility . . . . . . . . . . . . . . . . . . . 336
5.7.1 Usage considerations for TS7700 with TPF . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
5.7.2 Performance considerations for TS7700 multi-cluster grids with TPF . . . . . . . . 338
8.5.4 MVS System commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
8.6 Basic operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
8.6.1 Clock and time setting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
8.6.2 Library online/offline to host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
8.6.3 Library in Pause mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
8.6.4 Preparing for service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 615
8.6.5 TS3500 Tape Library inventory. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
8.6.6 Inventory upload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
8.7 Tape cartridge management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
8.7.1 3592 tape cartridges and labels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
8.7.2 Manual insertion of stacked cartridges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619
8.7.3 Exception conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 620
8.8 Managing logical volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
8.8.1 Scratch volume recovery for logical volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
8.8.2 Ejecting logical volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
8.9 Messages and displays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
8.9.1 Console name message routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
8.9.2 Alert setting messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
8.9.3 Grid messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 624
8.9.4 Display grid status. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
8.9.5 Warning link status degraded messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 627
8.9.6 Warning VTS operation degraded messages . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
8.9.7 Warning cache use capacity (TS7720 Virtualization Engine) . . . . . . . . . . . . . . . 628
8.10 Recovery scenarios. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
8.10.1 Hardware conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
8.10.2 TS7700 Virtualization Engine software failure . . . . . . . . . . . . . . . . . . . . . . . . . 632
8.11 TS7720 Virtualization Engine operation considerations . . . . . . . . . . . . . . . . . . . . . . 632
8.11.1 Management interface for TS7720 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 633
10.4.3 Performing Copy Export Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 769
10.4.4 Restoring the host and library environments. . . . . . . . . . . . . . . . . . . . . . . . . . . 775
10.5 Geographically Dispersed Parallel Sysplex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
10.5.1 GDPS functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
10.5.2 GDPS considerations in a TS7740 grid configuration. . . . . . . . . . . . . . . . . . . . 777
10.6 Disaster recovery testing considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
10.6.1 The test environment represents a point in time . . . . . . . . . . . . . . . . . . . . . . . . 779
10.6.2 Breaking the interconnects between the TS7700 Virtualization Engines . . . . . 779
10.6.3 Writing data during the test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
10.6.4 Protecting production volumes with Selective Write Protect . . . . . . . . . . . . . . . 781
10.6.5 Protecting production volumes with DFSMSrmm . . . . . . . . . . . . . . . . . . . . . . . 782
10.6.6 Control of copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 784
10.6.7 Return-to-scratch processing and test use of older volumes . . . . . . . . . . . . . . 785
10.6.8 Copies flushed or kept as LRU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
10.6.9 Ownership takeover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 786
10.7 Disaster recovery testing detailed procedures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 787
10.7.1 TS7700 two-cluster grid using Selective Write Protect . . . . . . . . . . . . . . . . . . . 788
10.7.2 TS7700 two-cluster grid not using Selective Write Protect . . . . . . . . . . . . . . . . 791
10.7.3 TS7700 two-cluster grid not using Selective Write Protect . . . . . . . . . . . . . . . . 794
10.7.4 TS7700 three-cluster grid not using Selective Write Protect. . . . . . . . . . . . . . . 797
10.8 A real disaster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 801
JCL to change volumes in RMM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
REXX EXEC to update the library name. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 903
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 931
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information about the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, AS/400®, CICS®, DB2®, DS8000®, EnergyScale™, ESCON®, eServer™, FICON®, GDPS®,
Geographically Dispersed Parallel Sysplex™, Global Technology Services®, IBM®, IMS™, MVS™,
Netfinity®, OS/400®, Parallel Sysplex®, POWER®, POWER7®, pSeries®, RACF®, RETAIN®, Redbooks®,
Redbooks (logo)®, RS/6000®, S/390®, System i®, System p®, System Storage®, System x®, System z®,
System z9®, System z10®, Tivoli®, VM/ESA®, WebSphere®, z/OS®, z/VM®, z/VSE®, z9®, z10™,
zEnterprise™, zSeries®
LTO, Ultrium, the LTO Logo and the Ultrium logo are trademarks of HP, IBM Corp. and Quantum in the U.S.
and other countries.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other
countries.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel
SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
This IBM® Redbooks® publication highlights TS7700 Virtualization Engine Release 2.0. It is
intended for system architects who want to integrate their storage systems for smoother
operation. The IBM Virtualization Engine TS7700 offers a modular, scalable, and
high-performing architecture for mainframe tape virtualization for the IBM System z®
environment. It integrates 3592 Tape Drives, high-performance disks, and the new IBM
System p® server into a storage hierarchy. This storage hierarchy is managed by robust
storage management firmware with extensive self-management capability. It includes the
following advanced functions:
Policy management to control physical volume pooling
Cache management
Dual copy, including across a grid network
Copy mode control
The TS7700 Virtualization Engine offers enhanced statistical reporting. It also includes a
standards-based management interface for TS7700 Virtualization Engine management.
The new IBM Virtualization Engine TS7700 Release 2.0 introduces the next generation of
TS7700 Virtualization Engine servers for System z tape:
IBM Virtualization Engine TS7720 Server Model VEB
IBM Virtualization Engine TS7740 Server Model V07
These Virtualization Engines are based on IBM POWER7® technology. They offer improved
performance for most System z tape workloads compared to the first generation of TS7700
Virtualization Engine servers.
TS7700 Virtualization Engine Release 2.0 builds on the existing capabilities of the TS7700
family. It also introduces the following capabilities:
Up to 2,000,000 logical volumes per grid domain
Four active 1-Gb Ethernet links for grid communications
Selective Device Access Control restricting access for defined volume ranges to specified
device addresses
8-Gbps IBM FICON® Channel interfaces for connections to tape volume cache disk arrays
and to the switches supporting physical tape drives
Two longwave Ethernet links for grid communications (only available on TS7720 Server
model VEB and TS7740 Server model V07)
Summary of contents
This book contains valuable information about the TS7700 Virtualization Engine for anyone
interested in this product. The following summary helps you understand the structure of this
book and decide which chapters are of most interest to you.
In addition to the material in this book, other IBM publications are available to help you better
understand the TS7700 Virtualization Engine. They are referenced throughout this book.
If you need more in-depth knowledge, a series of technical documents and white papers describing
many aspects of the TS7700 Virtualization Engine is available. Although the basics of the
product are described in this book, more detailed descriptions are provided in those
documents. For that reason, most of the detailed record descriptions are not included in this book;
instead, you are directed to the appropriate technical document. For these additional
technical documents, go to IBM Techdocs
(http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs) and search for
TS7700.
For a short description of all available technical documents, see “Technical documents on the
IBM Techdocs website” on page 926.
Familiarize yourself with the contents of Chapter 1, “Introducing the IBM Virtualization Engine
TS7700” on page 3, and Chapter 2, “Architecture, components, and functional
characteristics” on page 15. These chapters provide a functional description of all major
features of the product, and are a prerequisite for understanding the other chapters.
If you are planning for the TS7700 Virtualization Engine, see Chapter 3, “Preinstallation
planning and sizing” on page 117. If you already have a TS7700 Virtualization Engine or even
a 3494 VTS installed, see Chapter 6, “Upgrade considerations” on page 341. Chapter 4,
“Hardware implementation” on page 189 describes the hardware implementation aspects.
Chapter 5, “Software implementation” on page 283 describes the major aspects of the
software considerations for the TS7700 Virtualization Engine. For more information about
software, see Chapter 3, “Preinstallation planning and sizing” on page 117, and Chapter 7,
“Migration aspects” on page 385.
Chapter 8, “Operation” on page 451 provides the operational aspects of the TS7700
Virtualization Engine. This information includes the layout of the management interface
windows to help with daily operation tasks.
If you have a special interest in the performance and monitoring tasks as part of your
operational responsibilities, see Chapter 9, “Performance and Monitoring” on page 635.
Although this chapter gives a good overview, more information is available in the technical
documents on the Techdocs website.
For availability and disaster recovery specialists, and those involved in the planning and
operation in relation to availability and disaster recovery, see Chapter 10, “Failover and
disaster recovery scenarios” on page 749.
Søren Aakjær works as Storage Solution Architect for IBM in GTS Services Delivery. He is
responsible for designing and implementing disk and tape storage solutions for customers in
Denmark. He has specialized in tape products since the launch of IBM Virtual Tape B18, and
has participated in many disaster recovery, technology implementation, and data center
relocation projects.
John Khazraee is a Client Technical Specialist providing primary support to IBM Business
Partners for the entire east region of the United States. He performs tape analysis for client
storage infrastructures, hosts education seminars for IBM Business Partners on IBM Storage
products, and provides consultative support within the Mainframe and Open System tape
arena. John has work experience in the aerospace and defense sectors working for the
Boeing Company, developing Lean processes and cost improvement Value Stream Maps,
and mentoring teams as a Site Team Facilitator. He has also worked within a special program
at NASA (National Aeronautics and Space Administration) as a Project Team Lead.
Tom Koudstaal is a senior IT Specialist working for IBM Global Technology Services® (GTS)
- Storage Implementation Services in IBM Netherlands. He joined IBM in 1969 as a systems
programmer. He is currently working as a technical consultant and project leader in systems
and storage management projects with a focus on tape technology-related implementations.
He was involved in most of the 3495, 3494, and TS7700 installations in the mainframe area in
the Netherlands.
Aderson J.C. Pacini works in the Tape Support Group in the IBM Brazil Hardware Resolution
Center. He is responsible for providing second-level support for tape products in Brazil.
Aderson has extensive experience servicing a broad range of IBM products. He has
installed, implemented, and supported all the IBM Tape Virtualization Servers from the Virtual
Tape Server (VTS) B16 to the TS7700 Virtualization Engine. Aderson joined IBM in 1976 as a
Service Representative and his entire career has been in IBM Services.
Patrick Wolf is an IBM Senior Accredited IT Specialist at the European Storage Competence
Center in Mainz, Germany. He has 14 years of experience in tape storage systems and virtual
tape solutions. He is leading the European team of Product Field Engineers focused on
Enterprise tape solutions. He has experience with various IBM tape drive technologies and
tape libraries, and all generations of the IBM Tape Virtualization Servers.
Figure 1 Patrick, Aderson, John, Karan, Søren, Tom
Carl Bauske
Jim Fisher
Advanced Technical Skills (ATS), IBM Americas
David Reich
IBM Systems & Technology Group, Development Ops & Tech Support
Gary Anna
Wayne C. Carlson
Khanh M. Ly
Corie D. Neri
Kerri R. Shotwell
Takeshi Sohda
Joseph M. (Joe) Swingler
William H. (Bill) Travis
Roman Yusufov
IBM Systems & Technology Group, Systems Hardware Development
Erika Dawson
IBM Systems & Technology Group, Systems Software Development
Ann Lund
Karen Orlando
Alex Osuna
ITSO
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
The TS7700 Virtualization Engine is a modular, scalable, and high performing architecture for
mainframe tape virtualization. It incorporates extensive self-management capabilities
consistent with IBM Information Infrastructure initiatives. These capabilities can improve
performance and capacity. Better performance and capacity help lower the total cost of
ownership for tape processing and avoid human error. A TS7700 Virtualization Engine can
improve the efficiency of mainframe tape operations by efficiently using disk storage, tape
capacity, and tape speed. It can also improve efficiency by providing many tape addresses.
The TS7700 Virtualization Engine uses outboard policy management functions to manage
the following features:
Cache and volume pools
Selective dual copy
Dual copy across a grid network
Copy mode control
Encryption
Copy export
The engine also includes a new standards-based management interface and enhanced
statistical reporting.
TS7700 Virtualization Engine provides tape virtualization for IBM System z servers. Tape
virtualization can help satisfy the following requirements in a data processing environment:
Improved reliability
Reduction in the time needed for the backup and restore process
Reduction of services downtime caused by physical tape drives and library outages
More efficient procedures for managing daily backup and restore processing
Infrastructure simplification through reduction of the number of physical tape libraries,
drives, and media.
The TS7740 Virtualization Engine and TS7720 Virtualization Engine are members of the
TS7700 Virtualization Engine family. Most functions of the family apply to both models.
However, several specific functions exist for each model. Table 1-1 shows which engines are
referred to by each name.
TS7700 Virtualization Engine: refers to both the TS7740 Virtualization Engine and the TS7720 Virtualization Engine
The TS7700 Virtualization Engine creates a storage hierarchy through the integration of the
following components:
For the TS7740 Virtualization Engine (Table 1-2):
– One TS7740 Virtualization Engine Server Model V07
– One TS7740 Virtualization Engine Cache Controller Model CC8
– 600 GB, 15 K RPM, 4-Gbps Fibre Channel disk drives in the Model CC8
– Zero, one, or three TS7740 Virtualization Engine Cache Drawers Model CX7s
Tip: IBM 3494 Tape Library is no longer supported with TS7740 R2.0. R1.6 and
R1.7 code level support for 3494 Tape Library attachment is available only through a
request for price quotation (RPQ).
Table 1-2 shows the cache configuration and capacity for the TS7740 for specific controllers
and drawers.
Figure 1-1 illustrates the TS7740 Cache Controller 3956-CC8 front view.
The TS7700 Server consists of a server and two drawers for I/O adapters.
The TS7700 Server controls virtualization processes such as host connectivity and device
virtualization. It also controls hierarchical storage management (HSM) functions such as
storage, replication, and organization of data across physical media and libraries.
The TS7720 Virtualization Engine has the following components (Table 1-3):
One TS7720 Virtualization Engine Server Model VEB
One TS7720 Virtualization Engine SATA Cache Controller Model CS8
Zero to six TS7720 Virtualization Engine Cache Drawers Model XS7s within base frame
unit
With the Storage Expansion Frame, and assuming a fully populated base frame with six TS7720
Cache Drawers (3956-XS7), the capacities are as follows:
Expansion frame cache drawers (3956-XS7) (a)   Total cache units (b)   Capacity
 2    11    250.85 TB
 3    12    274.84 TB
 4    13    298.84 TB
 5    14    322.84 TB
 6    15    346.83 TB
 7    16    370.83 TB
 8    17    394.82 TB
 9    18    418.82 TB
10    19    442.82 TB
a. The lower controller must control at most one more drawer than the upper controller.
b. The term “Total cache units” refers to the combination of cache controllers and cache drawers.
The TS7720 Virtualization Engine does not attach to any tape library or drives.
A TS7700 Virtualization Engine with Grid Enablement features must be installed on all
TS7700 clusters in a grid configuration.
Each TS7700 Virtualization Engine supports a maximum of 256 3490E virtual tape
drives.
A grid (TS7700 Virtualization Engine single or multi-cluster) supports up to 2,000,000
logical volumes. Each logical volume has a maximum capacity of 1.2 GB to 18 GB
assuming a 3:1 compression ratio and using the 400 to 6000-MB volume sizes. This
capacity is only available on systems with 16 GB of physical memory.
(Figure: System z hosts attached through FICON channels; one tape volume cache is SATA-based and the other is backed by 3592 tape drives.)
Emulated tape drives are also called virtual drives. To the host, virtual 3490E tape drives look
the same as physical 3490E tape drives. Emulation is not apparent to the host and
applications. The host always writes to and reads from virtual tape drives. It never accesses
the physical tape drives (commonly referred to as the back end) attached to TS7740
Virtualization Engine configurations. In fact, it does not need to know that these tape drives
exist. Even an application that supports only 3490E tape technology can use the TS7700
Virtualization Engine without any changes. Therefore, the application benefits from the high
capacity and high performance tape drives in the back-end. For TS7720 Virtualization Engine
configurations, no physical tape attachment exists. However, the virtual tape drives work the
same for the host.
When the host requests a volume that is still in cache, the volume is virtually mounted. No
physical mount is required. After the virtual mount is complete, the host can access the data
at disk speed. Mounting of scratch tapes is also virtual and does not require a physical mount.
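To make the mount flow concrete, the following Python sketch is an illustration only (the class and function names are assumptions, not TS7700 firmware). It models how a mount request is satisfied from the tape volume cache when the volume is resident, and otherwise triggers a recall from a stacked volume.

# Illustrative sketch of TS7740-style mount handling (not actual product code).
# Assumption: a volume already resident in the tape volume cache (TVC) is
# mounted virtually; a migrated volume must first be recalled from a stacked
# (physical) volume before the virtual mount completes.

class TapeVolumeCache:
    def __init__(self):
        self.resident = {}                      # volser -> volume data in disk cache

    def recall(self, volser, stacked_volumes):
        # stacked_volumes: list of dicts, each mapping volser -> logical volume data
        for stacked in stacked_volumes:
            if volser in stacked:
                self.resident[volser] = stacked[volser]
                return
        raise LookupError(volser + " not found on any stacked volume")

def mount(tvc, volser, stacked_volumes, scratch=False):
    """Satisfy a host mount request and return the volume data."""
    if scratch:
        tvc.resident[volser] = bytearray()      # Fast Ready: created directly in cache
    elif volser not in tvc.resident:
        tvc.recall(volser, stacked_volumes)     # cache miss: physical recall (TS7740 only)
    return tvc.resident[volser]                 # host now reads and writes at disk speed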
Although you define maximum sizes for your volumes, a virtual volume takes up just the
space in cache that the data on the volume actually requires. For this reason, tape
virtualization makes efficient use of disk capacity. In TS7740 Virtualization Engine
configurations, the virtual volumes are copied from disk to tape. They also need only the
amount of tape capacity occupied by the data, making efficient use of disk and tape capacity.
Another benefit from tape virtualization is the large number of drives available to applications.
Each TS7700 Virtualization Engine provides you with a maximum of 256 virtual tape devices.
Often applications are contending for tape drives and jobs must wait because no physical
tape drive is available. Tape virtualization efficiently addresses these issues by providing
many virtual tape drives.
The TS7740 Virtualization Engine manages the physical tape drives and physical volumes in
the tape library. It also controls the movement of data between physical and virtual volumes.
In the TS7740 Virtualization Engine, data written from the host into the tape volume cache is
scheduled for copying to tape later. The process of copying data to tape that still exists in
cache is called premigration. When a volume is copied from cache to tape, the volume on the
tape is called a logical volume. A physical volume can contain many logical volumes. The
process of putting several logical volumes on one physical tape is called stacking. A physical
tape containing logical volumes is therefore referred to as a stacked volume. This concept
does not apply to TS7720 Virtualization Engine because no physical tape devices are
attached to it.
Because many applications are unable to fill the high capacity media of modern tape
technology, you can end up with many underused cartridges. This wastes much space and
requires an excessive number of cartridge slots. Tape virtualization reduces the space
required by volumes and fully uses the capacity of current tape technology. Tape virtualization
allows you to use the full potential of modern tape drive and tape media technology. In
addition, it does so without changes to your applications or JCL.
When space is required in the tape volume cache for new data, volumes that already have
been copied to tape are removed from the cache. By default, removal is based on a least
recently used (LRU) algorithm. Using this algorithm ensures that no new data or recently
accessed data is removed from cache. The process of copying volumes from cache to tape
and then deleting them is called migration. Volumes that have been deleted in the cache and
exist only on tape are called migrated volumes. In a TS7720 Virtualization Engine
configuration, no migrated volumes exist because there is no physical tape attachment.
Instead, logical volumes are maintained in disk until they expire. For this reason, cache
capacity for the TS7720 Virtualization Engine is larger than the capacity for the TS7740
Virtualization Engine.
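As an illustration of the LRU-based removal described above, the following Python sketch is a deliberate simplification (the data structures are assumptions, not the actual cache manager). It removes the least recently used volumes that have already been premigrated until enough cache space is freed.

# Simplified sketch of LRU cache space management (illustrative only).
# Only volumes already copied (premigrated) to physical tape are candidates;
# removing a copied volume from cache is what the text calls migration.

def free_cache_space(cache_volumes, space_needed):
    """cache_volumes: list of dicts such as
       {"volser": "A00001", "size": 800, "last_access": 1700000000, "premigrated": True}
       Returns the volsers migrated (removed from cache)."""
    candidates = [v for v in cache_volumes if v["premigrated"]]
    candidates.sort(key=lambda v: v["last_access"])       # least recently used first
    freed, migrated = 0, []
    for vol in candidates:
        if freed >= space_needed:
            break
        freed += vol["size"]
        migrated.append(vol["volser"])                    # now exists only on tape
    return migrated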
Host
Write
Read
Stacked
Volumes
TVC
X
ALS124
Recall
Virtual Volumes
Logical Volumes
Another benefit of tape virtualization is the data replication functionality. Two, three, four, five
and six TS7700 Virtualization Engines can be interconnected. The connections can be
through one of the following sets of links:
Two 1-Gbps copper or shortwave Ethernet links
Four 1-Gbps copper or shortwave Ethernet links
Two 10-Gbps longwave Ethernet links
These sets of links form a multi-cluster grid configuration. Adapter types cannot be mixed in
a cluster. They can vary within a grid depending on your network infrastructure. Logical
volume attributes and data are replicated across the clusters in a grid. Any data replicated
between the clusters is accessible through any other cluster in the grid configuration. Through
remote volume access, you can reach any virtual volume through any virtual device. You can
reach volumes even if a replication has not been made.
Setting policies on the TS7700 Virtualization Engines defines where and when you have
multiple copies of your data. You can also specify for certain kinds of data, such as test data,
that you do not need a secondary or tertiary copy.
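The following Python sketch uses hypothetical names to illustrate this idea of a per-cluster copy policy; the real behavior is governed by Management Class copy consistency points, described in 2.5.3, “Copy policy management” on page 90.

# Illustrative copy policy sketch (hypothetical structure, not the TS7700
# Management Class definition).  Each cluster is given a consistency setting:
#   "RUN"      - a copy must exist by rewind/unload (immediate copy)
#   "Deferred" - the copy is made asynchronously after unload
#   "None"     - no copy is kept on that cluster (for example, test data)

def plan_copies(copy_policy):
    """copy_policy: dict such as
       {"Cluster0": "RUN", "Cluster1": "Deferred", "Cluster2": "None"}.
       Returns which clusters receive immediate and deferred copies."""
    immediate = [c for c, mode in copy_policy.items() if mode == "RUN"]
    deferred = [c for c, mode in copy_policy.items() if mode == "Deferred"]
    return {"immediate": immediate, "deferred": deferred}

# Example: test data that needs no secondary or tertiary copy could use
# plan_copies({"Cluster0": "RUN", "Cluster1": "None", "Cluster2": "None"})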
You can group clusters within a grid into families. Grouping allows you to make improved
decisions for tasks such as replication or tape volume cache selection.
A multi-cluster grid configuration presents itself to the attached hosts as one large library with
the following maximums:
512 virtual devices for a two-cluster grid
768 virtual tape devices for a three-cluster grid
1024 virtual tape devices for a four-cluster grid
1280 virtual tape devices for a five-cluster grid
1536 virtual devices for a six-cluster grid
The copying of the volumes in a grid configuration is handled by the clusters, and is not
apparent to the host. By intermixing TS7720 and TS7740 Virtualization Engines, you can
build a hybrid two, three, four, five, or six-cluster grid.
Figure 1-5 shows multiple TS7700 Virtualization Engine Grids attached to the same host
system, but operating independently of one another.
(Figure 1-5: hosts and TS7700 systems at a production site connected over a WAN to TS7700 systems at a disaster recovery site.)
For TS7740 Virtualization Engine Grid configurations, each TS7740 Virtualization Engine
manages its own set of physical volumes. The TS7740 maintains the
relationship between logical volumes and the physical volumes on which they are located.
IBM offers an extraordinary range of systems, storage, software, and services that are based
on decades of innovation. This range is designed to help you get the right information to the
right person at the right time. It also manages challenges such as exploding data growth, new
applications, dynamic workloads, and new regulations.
IBM Information Infrastructure solutions are designed to help you manage this information
explosion. They also address challenges regarding information compliance, availability,
retention, and security. This approach helps your company move toward improved
productivity and reduced risk without driving up costs.
The TS7700 Virtualization Engine is part of the IBM Information Infrastructure. This strategy
delivers information availability, supporting continuous and reliable access to data. It also
delivers information retention, supporting responses to legal, regulatory, or investigatory
inquiries for information.
The TS7700 Virtualization Engine can be the answer to the following challenges:
Growing storage requirements
Shrinking backup windows
The need for continuous access to data
The following are the main benefits you can expect from tape virtualization:
Brings efficiency to the tape operation environment
Reduces batch window
Provides high availability and disaster recovery configurations
Provides fast access to data through caching on disk
Provides utilization of current tape drive, tape media, and tape automation technology
Provides the capability of filling high capacity media to 100%
Provides many tape drives for concurrent use
Provides data consolidation, protection, and sharing
Decimal units, such as KB, MB, GB, and TB, are commonly used to express data storage
values. However, these values are more accurately expressed using binary units such as KiB,
MiB, GiB, and TiB. At the kilobyte level, the difference between decimal and binary units of
measurement is relatively small (2.4%). This difference grows as data storage values
increase. When values reach terabyte levels, the difference between decimal and binary units
approaches 10%.
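The percentage differences quoted above can be verified with a few lines of Python (a simple illustration, not part of any product tooling):

# Difference between decimal (powers of 10) and binary (powers of 2) units.
units = {"KB vs KiB": (10**3, 2**10),
         "MB vs MiB": (10**6, 2**20),
         "GB vs GiB": (10**9, 2**30),
         "TB vs TiB": (10**12, 2**40)}

for name, (decimal, binary) in units.items():
    diff = (binary - decimal) / decimal * 100
    print(f"{name}: {diff:.1f}% difference")

# Output: roughly 2.4% at the kilobyte level, growing to about 10% at the
# terabyte level, which matches the percentages given in the text.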
Both decimal and binary units are available throughout the TS7700 Tape Library
documentation. Table 1-5 compares the names, symbols, and values of the binary and
decimal units.
Table 1-5 Names, symbols, and values of the binary and decimal units
Table 1-6 shows the increasing percentage of difference between binary and decimal units.
Table 1-6 Increasing percentage of difference between binary and decimal units
Cluster families
User security and user access enhancements
Grid network support for two or four copper/SW 1 Gbps-links, or two LW 10-Gbps links
Immediate copy failure reporting on rewind/unload response
Underlying concepts of tape virtualization within the TS7700 Virtualization Engine
Functional characteristics of the TS7700 Virtualization Engine
Logical WORM support
Enhanced Cache Removal policies for grids containing one or more TS7720 clusters
Selective Write Protect for Disaster Recovery Testing
Device allocation assistance (DAA)
Scratch allocation assistance (SAA)
Selective device access control (SDAC)
On-demand support of up to two million logical volumes
Nodes
Nodes are the most basic components in the TS7700 Virtualization Engine architecture. A
node has a separate name depending on the role associated with it. There are three types of
nodes:
Virtualization nodes
Hierarchical data storage management nodes
General nodes
(Figure: vNodes and hNodes run on controllers; a vNode and an hNode combined on one controller form a gNode.)
Cluster
The TS7700 Virtualization Engine cluster combines the TS7700 Virtualization Engine server
with one or more external (from the server’s perspective) disk subsystems. This subsystem is
the TS7700 Virtualization Engine cache controller. This architecture permits expansion of
disk cache capacity. It also allows the addition of v or h nodes in future offerings to enhance
the capabilities of the Tape Virtualization System.
(Figure: TS7700 cluster with controller, hNode, and cache expansion.)
A TS7700 Virtualization Engine cluster provides FICON host attachment and 256 virtual tape
devices. The TS7740 Virtualization Engine cluster also includes the assigned TS3500 Tape
Library partition, fiber switches, and tape drives. The TS7720 Virtualization Engine can
include an optional cache expansion frame.
(Figure: cluster I/O drawers attached through Fibre Channel to a TS3500 with 4 to 16 back-end tape drives that hold the logical volumes.)
The TS7700 Cache Controller consists of a redundant array of independent disks (RAID)
controller and associated disk storage media. These items act as cache storage for data. The
TS7700 Cache Controller contains 16 disk drive modules (DDMs). The capacity of each DDM
depends on your configuration. The TS7700 Cache Drawer acts as an expansion unit for the
TS7700 Cache Controller. The drawer and controller collectively are called the TS7700
Cache. The amount of cache available per TS7700 Cache Drawer depends on your
configuration.
The TS7740 Virtualization Engine Cache provides RAID-5 protected virtual volume storage to
temporarily hold data before writing to physical tape. It then caches the data to allow fast
retrieval from disk. The cache consists of a 3956-CC7/CC8 controller, which can contain one
of the following configurations:
16 DDMs of 300 GB (279.39 GiB) in 3956-CC7 model
16 DDMs of 600 GB (558.78 GiB) in 3956-CC8 model
The TS7740 Virtualization Engine Cache can be expanded to a maximum of four cache units:
one Model CC7/CC8 Cache Controller with three Model CX7 Cache Drawers. This configuration
provides 28 TB of cache capacity when all units are equipped with
600 GB DDMs.
Tip: The 3957-V06 Virtualization Engine equipped with 3956-CC6/CX6 cache was
withdrawn in February, 2009. It had a maximum of one 3956-CC6 Cache controller and five
3956-CX6 expansion drawers. The configuration totaled 9 TB of cache capacity.
The TS7720 Storage Expansion Frame adds two TS7720 Cache Controllers (3956-CS8).
Each controller can attach to a maximum of five TS7720 Cache Drawers (2 TB 3956-XS7).
Each TS7720 Cache Controller and its attached TS7720 Cache Drawers are called a string.
The TS7720 Cache Controller acts as the head of the string. The TS7720 Virtualization
Engine cache subsystems can be configured with a maximum of three strings:
One string of a CS7/CS8 cache controller and its XS7 drawers in the TS7720 base frame
Two additional CS8/XS7 strings in an optional Storage Expansion Frame
The TS7720 Virtualization Engine Base Frame must be fully populated (one Model CS7 or
CS8 Cache Controller and six Model XS7 Cache Drawers) before the TS7720 Storage
Expansion Frame can be implemented. The maximum configuration of the TS7720 Storage
Expansion Frame has two Model CS8 Cache Controllers and ten Model XS7 Cache Drawers.
The minimum configuration of the Storage Expansion Frame has two CS8 cache controllers.
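A minimal Python sketch (an illustration under the constraint noted in Table 1-3, footnote a: the lower controller controls at most one more drawer than the upper controller) shows how a given number of expansion frame cache drawers could be divided between the frame's two cache controllers:

# Illustrative split of TS7720 Storage Expansion Frame cache drawers (3956-XS7)
# between the frame's two cache controllers (3956-CS8).  Assumption: drawers
# are balanced so the lower controller has at most one more than the upper.

def split_expansion_drawers(total_drawers):
    if not 0 <= total_drawers <= 10:
        raise ValueError("The expansion frame holds 0 to 10 cache drawers")
    lower = (total_drawers + 1) // 2        # lower string may take the odd drawer
    upper = total_drawers - lower
    return {"lower controller": lower, "upper controller": upper}

# Example: split_expansion_drawers(7) -> {'lower controller': 4, 'upper controller': 3}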
Grid
The TS7700 Virtualization Engine R2.0 grid configuration is a series of two, three, four, five,
or six clusters. These clusters are connected by a network to form a disaster recovery and
highly available solution. For high data availability, the logical volume attributes and data are
replicated across one or more clusters joined by the grid network. This data and attribute
replication ensures the continuation of production work should a cluster become unavailable.
Any data replicated between clusters is accessible through any available cluster in a grid
configuration. A logical volume can be mounted through any virtual device in any cluster in
the grid. Access is independent of where the copy of the logical volumes exists.
A grid configuration looks like a single storage subsystem to the hosts. Whether a
stand-alone cluster or multi-cluster configuration, the entire subsystem appears as a single
tape library to the attached hosts. This can be described as a composite library with
underlying distributed libraries. Distributed and composite libraries are explained in more
detail in 2.1.2, “Multi-cluster grid terms” on page 27.
Grid Ethernet links can vary in number of links, media, and speed. The 1-Gbps links (two or
four links) can be copper or shortwave (SW). The 10-Gbps links (only two) are long-wave
(LW). Four 1-Gbps links or LW links are supported only by the 3957 V07/VEB server. Clusters
with 1-Gbps and 10-Gbps links can still be interconnected through your network infrastructure.
If you directly connect a two-cluster grid, the grid adapter types must match.
Multiple TS7700 Virtualization Engine Grid configurations can be attached to host systems
and operate independently of one another. Currently, a TS7700 Virtualization Engine R2.0
allows for multi-clustering up to six clusters in a grid. Five and six-cluster grids should be
configured through RPQ. RPQ provides assessment and usage recommendations on the
new configuration.
(Figure: two-cluster grid; each TS7740 cluster attaches to Ethernet switches connected over a LAN/WAN.)
Figure 2-5 on page 23 shows a three-cluster grid. As with the two-cluster grid, each cluster
contains a TS7720 Virtualization Engine. The figure shows clusters configured with different
numbers of links.
(Figure 2-5: three-cluster TS7720 grid, including LIB001 and LIB003, interconnected through Ethernet switches over a LAN/WAN; the backup zSeries attachment has its devices offline.)
Figure 2-6 shows a four-cluster hybrid grid. The configuration consists of two TS7720
Virtualization Engines and two TS7740 Virtualization Engines.
(Figure 2-6: four-cluster hybrid grid of two TS7720 and two TS7740 clusters, including LIB002 and LIB003, interconnected through Ethernet switches over a LAN/WAN; the backup zSeries attachment has its devices offline.)
During the logical volume mount process, the best tape volume cache is selected. You can
favor a specific cluster or clusters using the Scratch Allocation Assist (SAA) function. For
non-Fast Ready mount processing, Device Allocation Assist (DAA) selects the best available
cluster for the mount point. A tape volume cache is designated as the I/O tape volume cache.
All I/O operations associated with the virtual tape drive are routed to and from its vNode to the
I/O tape volume cache. For more information, see “I/O tape volume cache selection” on
page 96.
Cluster family
The concept of grouping clusters together into families has been around for some time now.
By grouping clusters into families, you can define a common purpose or role to a subset of
clusters within a grid configuration. The TS7700 Virtualization Engine uses the family
mapping to make improved decisions for tasks such as replication and tape volume cache
selection. For example, you can group the clusters in the primary site together in a
Production family, and the clusters in the disaster recovery site in a Remote family. In this
way, clusters within the Production family favor each other for tape volume cache selection.
Also, for replication, a member of the Remote family can use as its source a volume already
replicated by a peer family cluster. Sourcing within the family saves the bandwidth needed to
cross to a cluster in the other family.
For more information, see Chapter 4, “Hardware implementation” on page 189 and
Chapter 8, “Operation” on page 451.
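As a rough illustration of family-aware selection (hypothetical structures; the real algorithm also weighs consistency, availability, and performance), the following Python sketch prefers clusters in the requesting cluster's own family when choosing a tape volume cache or replication source:

# Illustrative cluster-family preference sketch (not the actual grid algorithm).
# families: dict mapping cluster name -> family name, for example
#   {"LIB001": "Production", "LIB002": "Production", "LIB003": "Remote"}

def choose_source(families, requesting_cluster, candidates):
    """Pick a source cluster for TVC selection or replication, favoring
    members of the requesting cluster's own family."""
    my_family = families[requesting_cluster]
    same_family = [c for c in candidates if families.get(c) == my_family]
    # Prefer a family member; otherwise fall back to any available candidate.
    return same_family[0] if same_family else candidates[0]

# Example: a Remote-family cluster copies from its family peer if that peer
# already holds a replica, instead of pulling the volume across the WAN again.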
User roles
Users of the TS7700 Virtualization Engine can be assigned one or more roles. User roles are
levels of access, assigned by the administrator, that allow users to perform certain functions.
User roles are created using the TS7700 Virtualization Engine management interface. When
an administrator creates a new user account, the administrator must specify an initial
password for the account. Multiple roles cannot be assigned to a single user.
Administrators can assign the following roles when defining new users:
Operator The operator has access to monitoring information, but is restricted
from the following activities:
Changing settings for performance, network configuration, feature
licenses, user accounts, and custom roles
Inserting and deleting logical volumes
Lead operator The lead operator has almost all of the same permissions as the
administrator. However, they cannot change network configuration,
feature licenses, user accounts, and custom roles.
Administrator The administrator has the highest level of authority, including the
authority to add or remove user accounts. The administrator has
access to all service functions and TS7700 Virtualization Engine
resources.
Manager The manager has access to health and monitoring information, jobs
and processes information, and performance data and functions.
However, the manager is restricted from changing most settings,
including those for logical volume management, network configuration,
and feature licenses.
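The role descriptions above can be summarized as a simple permission table. The Python sketch below is only an illustration; the role and restriction names are assumptions drawn from the text, not the management interface API.

# Illustrative summary of TS7700 management interface user roles (assumed names).
ROLE_RESTRICTIONS = {
    "Operator": {"change performance settings", "network configuration",
                 "feature licenses", "user accounts", "custom roles",
                 "insert/delete logical volumes"},
    "Lead operator": {"network configuration", "feature licenses",
                      "user accounts", "custom roles"},
    "Administrator": set(),            # highest authority, no restrictions
    "Manager": {"logical volume management", "network configuration",
                "feature licenses"},   # monitoring oriented, most settings blocked
}

def can_perform(role, action):
    """Return True if the given role is not restricted from the action."""
    return action not in ROLE_RESTRICTIONS[role]

# Example: can_perform("Lead operator", "feature licenses") -> False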
Composite library
The composite library is the logical image of the grid that is presented to the host. It is
distinctly different from the IBM Virtual Tape Server. A stand-alone cluster TS7700
Virtualization Engine has a five-character hexadecimal Composite Library ID defined. A
composite library is presented to the host with both TS7700 Virtualization Engine stand-alone
and multi-cluster grid configurations. In the case of a stand-alone TS7700 Virtualization
Engine, the host sees a logical tape library with sixteen 3490E tape control units. These units
each have sixteen IBM 3490E tape drives, and are attached through two or four FICON
channel attachments.
In the case of a multi-cluster grid, the host sees a logical tape library with sixteen 3490E tape
controllers per cluster. Each controller has sixteen IBM 3490E tape drives, and is attached
through four FICON channel attachments per cluster.
(Figure: three-cluster grid in which each cluster is a distributed library with its own sequence number, A1111, B2222, and C3333; Hosts A, B, and C attach through device addresses (UCBs) 0x3000-0x30FF defined in their SCDS.)
Important: A Composite Library ID must be defined both for a multi-cluster grid and a
stand-alone cluster. For a stand-alone cluster, the Composite Library ID must not be the
same as the Distributed Library ID. For a multi-cluster grid configuration, the Composite Library
ID must differ from any of the unique Distributed Library IDs. Both the Composite Library ID
and Distributed Library ID are five-digit hexadecimal strings.
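A small Python sketch (an illustration of the rule stated in the note above, not a product tool) checks that the Composite Library ID is a five-digit hexadecimal string that differs from every Distributed Library ID:

import re

# Illustrative check of the library ID rules quoted in the note above.
HEX5 = re.compile(r"^[0-9A-Fa-f]{5}$")

def validate_library_ids(composite_id, distributed_ids):
    """Raise ValueError if the IDs violate the rules in the note."""
    for lib_id in [composite_id, *distributed_ids]:
        if not HEX5.match(lib_id):
            raise ValueError(lib_id + " is not a five-digit hexadecimal string")
    if composite_id.upper() in {d.upper() for d in distributed_ids}:
        raise ValueError("Composite Library ID must differ from every "
                         "Distributed Library ID")
    return True

# Example: validate_library_ids("CA010", ["A1111", "B2222", "C3333"]) -> True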
Distributed library
Each cluster in a grid is a distributed library, which consists of a TS7700 Virtualization Engine.
In the case of a TS7740 Virtualization Engine, it is also attached to a physical TS3500 tape
library.
On each cluster in a multi-cluster grid, a copy consistency point setting is specified for the
local cluster and one for each of the other clusters. The settings can be different on each
cluster in the grid. When a volume is mounted on a virtual tape device, the copy consistency
point policy of the cluster to which the virtual device belongs is honored.
For more information, see 2.5.3, “Copy policy management” on page 90.
When a TS7740 Virtualization Engine virtual volume in the tape volume cache is closed and
demounted, it is scheduled to be copied to a stacked volume. Volumes that have previously
been copied to a stacked volume become candidates for removal when cache space is needed.
You control how these volumes are treated by the TS7700 using a set of management
policies. By default, candidates for removal from cache are selected using a least recently
used (LRU) algorithm.
Virtual volumes in a TS7720 Virtualization Engine configuration always remain in the tape
volume cache in a stand-alone engine. They remain in tape volume cache because no
physical tape drives are attached to the TS7720 Virtualization Engine. However, cache
management policies can be configured to manage virtual volumes in a multi-cluster
configuration. For more information, see 2.4.2, “Tape Volume Cache Management” on
page 60.
Virtual drives
From a host perspective, each TS7700 Virtualization Engine appears as sixteen logical IBM
3490E tape control units. Each control unit has sixteen unique drives attached through
FICON channels. Virtual tape drives and control units are defined just like physical IBM 3490s
through hardware configuration definition (HCD). Defining a preferred path for the virtual
drives gives you no benefit. There is no advantage because the IBM 3490 control unit
functions inside the TS7700 Virtualization Engine are emulated to the host.
Each virtual drive has the following characteristics of physical tape drives:
Uses host device addressing
Is included in the I/O generation for the system
Is varied online or offline to the host
Signals when a virtual volume is loaded
Responds and processes all IBM 3490E I/O commands
Becomes not ready when a virtual volume is rewound and unloaded
For software transparency reasons, the functionality of the 3490E integrated cartridge loader
(ICL) is also included in the virtual drive's capability. All virtual drives indicate that they have
an ICL. For scratch mounts, using the emulated ICL in the TS7700 Virtualization Engine to
preload virtual cartridges is of no benefit. When the Fast Ready attribute is set, the virtual
volume is created directly in the tape volume cache. You do not need to copy any data from a
stacked volume. No mechanical operation is required to mount a logical scratch volume.
Physical drives
The physical tape drives used by a TS7740 Virtualization Engine are installed in a TS3500
Tape Library. The physical tape drives are not addressable by any attached host system, and
are controlled by the TS7740 Virtualization Engine. The TS7740 Virtualization Engine
supports IBM 3592-J1A, TS1120, and TS1130 physical tape drives. For more information,
see 2.3.3, “TS7740 Virtualization Engine components” on page 48.
Remember: Do not change the assignment of physical tape drives attached to a TS7740
in the Drive Assignment window of the TS3500 IBM Tape Library - Advanced Library
Management System (ALMS) web interface. Consult your IBM System Services Representative
(SSR) for configuration changes.
Virtual volumes
A virtual volume is created in the tape volume cache when the host writes data to the TS7700
Virtualization Engine subsystem. All host interaction with tape data in a TS7700 Virtualization
Engine is through virtual volumes and virtual tape drives.
Tip: Volumes of different sizes have proportionally different recall and copy times. Also, the
cache occupancy profile can change depending on the size of the logical volumes. Select the
best choice for your installation.
Data compression is based on the IBMLZ1 algorithm and is performed by the FICON channel card in a
TS7700 Virtualization Engine node. The actual host data stored on a virtual CST or ECCST volume
can vary from 1.2 GB up to 18 GB (assuming a 3:1 compression ratio). The default logical
volume sizes of 400 MiB or 800 MiB are defined at insert time. These volume sizes can be
overwritten at every individual scratch mount using a Data Class construct.
Virtual volumes can exist only in a TS7700 Virtualization Engine. You can direct data to a
virtual tape drive by directing it to a system-managed storage (SMS) tape Storage Group of
the TS7700 Virtualization Engine. Use the automatic class selection (ACS) routines of a
system-managed tape environment. SMS passes the Data Class, Management Class, Storage
Class, and Storage Group names to the TS7700 as part of the mount operation. The TS7700
Virtualization Engine uses these constructs outboard to further manage the volume. This
process uses the same policy management constructs defined through the ACS routines.
Beginning with TS7700 Virtualization Engine R2.0, a maximum of 2,000,000 virtual volumes
per stand-alone cluster or multi-cluster grid is now supported. The default maximum number
of supported logical volumes is still 1,000,000 per grid. Support for additional logical volumes
can be added in increments of 200,000 volumes using FC5270. The VOLSERs for the logical
volumes are defined through the management interface.
To speed up the scratch mount process, associate a “Fast Ready” attribute with a scratch
category of TS7700 Virtualization Engine virtual volumes. Although you might want to use
larger logical volume sizes, define CST or ECCST emulated cartridges to the TS7700
Virtualization Engine. These simulate MEDIA1 with 400 MiB capacity or MEDIA2 with 800
MiB capacity. Virtual volumes go through the same cartridge entry processing as native
cartridges inserted in a library.
The TS7700 Virtualization Engine emulates a 3490E tape of a specific size. However, the
space used in the tape volume cache is the number of bytes of data written to the virtual
volume after compression. When the TS7740 Virtualization Engine virtual volume is written to
the physical tape, it uses only the space occupied by the compressed data. The tape volume
cache and physical tapes do not preallocate space in 400 MiB, 800 MiB, 1000 MiB, 2000 MiB,
4000 MiB, or 6000 MiB segments.
Because virtual volumes are copied from the tape volume cache to a physical cartridge, they
are stacked on the cartridge end to end. Therefore, they take up only the space written by the
host application after compression. This arrangement maximizes use of a cartridge's storage
capacity. The storage management software within the TS7740 Virtualization Engine node
manages the location of the logical volumes on the physical cartridges. You can influence the
location of the data by using volume pooling. For more information, see “Physical volume
pooling” on page 68. A logical volume that cannot fit in the current filling stacked volume will
not span across two or more physical cartridges. Instead, the stacked volume is marked full
and the logical volume is written on another stacked volume from the assigned pool.
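To illustrate the no-spanning rule described above, the following Python sketch (hypothetical structures, not the TS7740 storage management software) appends compressed logical volumes to a filling stacked volume and marks it full when the next volume does not fit:

# Illustrative stacking of compressed logical volumes onto physical (stacked)
# volumes.  A logical volume never spans two cartridges: if it does not fit,
# the current stacked volume is marked full and a new one is started.

def stack_logical_volumes(logical_volumes, cartridge_capacity):
    """logical_volumes: list of (volser, compressed_size) tuples.
       Returns a list of stacked volumes, each a dict of volser -> size."""
    stacked_volumes, current, used = [], {}, 0
    for volser, size in logical_volumes:
        if used + size > cartridge_capacity and current:
            stacked_volumes.append(current)     # mark the cartridge full
            current, used = {}, 0               # take a new cartridge from the assigned pool
        current[volser] = size                  # written end to end, no wasted gaps
        used += size
    if current:
        stacked_volumes.append(current)
    return stacked_volumes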
Through the Data Class construct, which is applied to a volume during a scratch mount
request, you can select additional logical volume sizes. The default logical volume sizes of
400 and 800 MiB can be extended to 1000, 2000, 4000, or 6000 MiB. The default logical volume
sizes used at insert time can be overridden during a scratch mount, where the size is changed
to the size specified for the volume's currently assigned Data Class. Any write from Beginning
of Tape (BOT) also inherits the currently assigned Data Class size. All size definitions are
outboard, and are defined through the TS7700 Virtualization Engine management interface.
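As an illustration of how a host Data Class name can request a larger logical volume size, the
following ACS fragment assigns a hypothetical Data Class named DC6000V, which is assumed to be
defined on the TS7700 management interface with a 6000 MiB logical volume size. Both the Data
Class name and the data set filter are assumptions, not values from this book.

   PROC DATACLAS
     /* DC6000V is a hypothetical Data Class whose 6000 MiB    */
     /* logical volume size is defined outboard on the TS7700  */
     /* management interface.                                  */
     FILTLIST BIGTAPE INCLUDE(HSM.DUMP.**,FULLVOL.BACKUP.**)
     SELECT
       WHEN (&DSN = &BIGTAPE)
         SET &DATACLAS = 'DC6000V'
     END
   END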
Stacked volume
Physical cartridges used by the TS7740 Virtualization Engine to store logical volumes are
under the control of the TS7740 Virtualization Engine node. The cartridges are not known to
the hosts. Physical volumes are called stacked volumes. Stacked volumes must have unique,
system-readable VOLSERs and external labels like any other cartridges in a tape library.
Remember: Stacked volumes do not need to be initialized before inserting them into the
TS3500. However, the internal labels must match the external labels if they were
previously initialized.
Through the TS3500 Tape Library Specialist, define which physical cartridges are to be used
by the TS7740 Virtualization Engine. Logical volumes stored on those cartridges are mapped
by the TS7740 Virtualization Engine internal storage management software. When you use
pooling, your stacked volumes can be assigned to individual pools. Logical volumes can then
be assigned to specific stacked volume pools. In out-of-scratch scenarios, pools can be set
up to enable “borrowing” from other pools. The methodology and configuration of volume
pooling is covered in “Physical volume pooling” on page 68.
Tokens are internal data structures that are not directly visible to you. However, they can be
retrieved through reports generated with the Bulk Volume Information Retrieval (BVIR) facility.
Service prep
The transition of a cluster into Service mode is called service prep. Service prep allows a
cluster to be gracefully and temporarily removed as an active member of the grid. The
remaining sites can acquire ownership of the volumes while the site is away from the grid. If a
volume owned by the service cluster is not accessed during the outage, ownership is retained
by the original cluster. Operations that target the distributed library entering service are
completed by the site going into service before transition to service completes. Other
distributed libraries within the composite library will remain available. The host device
addresses associated with the site in service send Device State Change alerts to the host.
When service prep completes and the cluster enters Service mode, nodes at the site in
Service mode remain online. However, the nodes are prevented from communicating with
other sites. This stoppage allows service personnel to perform maintenance tasks on the
site’s nodes, run hardware diagnostics, and so on, without impacting other sites.
Only one service prep can occur within a composite library at a time. If a second service prep
is attempted at the same time, it will fail. After service prep is complete and the cluster is in
Service mode, another cluster can be placed in service prep.
A site in service prep automatically cancels service prep and reverts to an ONLINE state if any
ONLINE peer in the grid experiences an unexpected outage. The last ONLINE cluster in a
multi-cluster configuration cannot enter the service prep state. This restriction includes a
stand-alone cluster. Service prep can be cancelled using the Management Interface or by the
IBM System Services Representative (SSR) at the end of the maintenance procedure.
Cancelling service prep returns the subsystem to a normal state.
After a cluster completes service prep and enters service mode, it remains in this state. The
cluster must be explicitly taken out of service mode by the operator or the IBM SSR.
Place only one cluster into service mode at a time to avoid creating a single point of failure
within the grid. Each cluster in a grid can be upgraded in a serial manner. Two or more
clusters can be placed in service mode at the same time, but do this only for special cases.
The NTP server address is configured into system VPD on a system-wide scope. Therefore,
all nodes will access the same NTP server. All clusters in a grid need to be able to
communicate with the same NTP server defined in VPD. In the absence of an NTP server, all
nodes coordinate time with Node 0 of the cluster that has the lowest available cluster index:
Cluster 0 if it is available, or the next available cluster otherwise.
This section addresses the architectural design of the TS7700 Virtualization Engine and its
potential capabilities. It includes a short description of the VTS architecture to help you
understand the differences.
All these concerns have been addressed in the architectural design of the TS7700
Virtualization Engine.
The TS7700 Virtualization Engine is built on a distributed node architecture. This architecture
is composed of nodes, clusters, and grid configurations. The elements communicate with
each other through standards-based interfaces. In the current implementation, the vNode and
hNode are combined into a gNode, running on a single pServer. The tape volume cache is
built on one or more high-performance RAID disk controllers. It has redundant components
for high availability, and is attached through Fibre Channel to the Virtualization Engine.
A TS7700 Virtualization Engine and the previous VTS design are shown in Figure 2-12.
In a multi-cluster grid with more than two clusters, copy consistency point policies are
important. This is especially true in combination with Override settings. You must plan where
you want a copy of your data to reside and when to initiate it.
In addition to completely new statistical record formats, major changes have been made to
the following characteristics:
The frequency of collection
Data storage
Host data retrieval
Reporting tools
The TS7700 Virtualization Engine provides two types of statistical information that can be
useful for performance tuning and capacity planning. This information helps you evaluate how
the TS7700 Virtualization Engine subsystem is performing in your environment.
Point-in-Time Statistics are performance related. The TS7700 Virtualization Engine
updates these statistics every 15 seconds, but does not retain them. Each 15-second
update overwrites the prior data. You can retrieve the Point-in-Time statistics at any time
using the BVIR facility. A subset of Point-In-Time statistics is also available through the
TS7700 Virtualization Engine management interface.
Historical Statistics encompass a wide selection of performance and capacity planning
information. Historical Statistics are collected by the TS7700 Virtualization Engine every
15 minutes. This information is stored for 90 days in a TS7700 Virtualization Engine
database. For long term retention, you can retrieve these records with the Bulk Volume
Information Retrieval (BVIR) function.
For more information, see 9.9, “Bulk Volume Information Retrieval” on page 711.
Additionally, starting with TS7700 Virtualization Engine R2.0, you can create and display a
24-hour chart. This chart shows the performance of a selected cluster directly on the
Management Interface using the Historical Summary window. You can select the period of
time and the variables you are interested in. This data can be downloaded from the TS7700
Virtualization Engine Management Interface as a comma-separated values (CSV) file. The
following data types are available in 15-minute periods:
Tape Drive throughput
Host Channel Read and Write data throughput, both compressed and raw
Grid network throughput
Reclaim mounts
Status information is transmitted to the IBM Support Center for problem evaluation. An IBM
SSR can be dispatched to the installation site if maintenance is required. Call Home is part of
the service strategy adopted in the TS7700 family. It is also used in a broad range of tape
products, including VTS models and Tape Controllers like 3592-C06.
After the call home is received by the assigned IBM support group, the associated information
is examined and interpreted. Following analysis, an appropriate course of action is defined to
resolve the problem. For instance, an IBM System Services Representative (SSR) might be
sent to the site to take corrective actions. Or the problem might be repaired or resolved
remotely by IBM support personnel through a broadband (if available) or telephone
connection.
The TS3000 System Console (TSSC) is the subsystem component responsible for placing
the service call or call home whenever necessary. The call itself can go through a telephone
line or can be placed over a broadband connection, if available. The TS3000 System Console
is equipped with an internal or external modem, depending on the model.
This section describes the hardware components that are part of the TS7700 Virtualization
Engine. These components include the TS7720 Virtualization Engine disk-only solution and
the TS7740 Virtualization Engine with its TS3500 tape library.
2.3.1 Common components for the TS7720 Virtualization Engine and TS7740
Virtualization Engine models
The components highlighted in this section are common for both models of the TS7700
Virtualization Engine.
In a TS7700 Virtualization Engine configuration, the 3952 Tape Base Frame is used for the
installation of the following components:
The TS7700 Virtualization Engine node
The TS7700 Virtualization Engine Cache Controller
The TS7700 Virtualization Engine Cache Modules
Two Ethernet switches
Optionally, the TS3000 System Console
These components are discussed in detail for the TS7700 Virtualization Engine specific
models in the following sections.
If using a new TS3000 System Console, install it in the TS7700 Virtualization Engine 3952-F05
Base Frame or in an existing rack.
When a TS3000 System Console is ordered with a TS7700 Virtualization Engine, it can be
pre-installed in the 3952-F05 frame. The TS3000 System Console is a 1U server that
includes a keyboard, display, mouse, and one 16-port Ethernet switch.
Ethernet switches
Previous Ethernet routers are replaced by new 1 Gb Ethernet switches in all new TS7700
Virtualization Engines. The new switches (two for redundancy) are used in the TS7700
internal network communications.
Communication with the external network now uses a set of dedicated Ethernet ports. These
communications, such as Management Interface access and encryption key management,
were previously handled by the routers.
When replacing an existing TS7700 Virtualization Engine model V06/VEA with a new
V07/VEB model, the old routers stay in place. They are, however, reconfigured and used
solely as regular switches. The existing external network connections are reconfigured and
connected directly to the new V07/VEB server.
Figure 2-14 shows the new 1 Gb switch and the old Ethernet router for reference.
With the TS7700 Virtualization Engine 3957-V07/VEB server, the grid adapters have been
moved to the Expansion Drawers, slot 1. The grid adapters for the V06/VEA engine are plugged
into slots 4 and 5. The dual-ported 1 Gbps Ethernet adapters can be copper RJ45 or optical
fiber (short wave). These optical adapters have an LC duplex connector.
For improved bandwidth and additional availability, TS7700 Virtualization Engine R2.0 now
supports two or four 1-Gb links. Feature Code 1034 is needed to enable the second
connection port in each grid adapter. This port can be either fiber Short-Wave or copper. With
the new V07/VEB server hardware platform, there is a choice of two Long-Wave single-ported
Optical Ethernet adapters (FC 1035) for two 10-Gb links. Your network infrastructure needs to
support 10 Gbps for the LW option.
The Ethernet adapters cannot be intermixed within a cluster. Both grid adapters in a TS7700
Virtualization Engine must be the same feature code.
Expanded memory
This feature was introduced by TS7700 Virtualization Engine Release 1.7 microcode. It
provided capabilities to support additional physical memory for R1.7 systems. All new
3957-V06/VEA systems are being shipped with 16 GB of physical memory installed. This
started with Hydra 1.7 Licensed Internal Code (LIC).
With the new TS7700 Virtualization Engine R2.0 Licensed Internal Code (LIC), 16 GB of
physical memory is a requirement. TS7700 R2.0 LIC does not run with the previous 8 GB of
physical memory. If you have a 3957-V06/VEA server and plan to upgrade it to R2.0 LIC,
order Feature Code 3461 (Memory Upgrade). You must install it before the microcode
upgrade, or during the upgrade itself as an intermediate step. Consult with the IBM Customer
Engineer planning this upgrade to see how this fits your installation.
The new 3957-V07/VEB server features 16 GB of physical memory in its basic configuration.
Figure 2-15 shows the front view of the new TS7700 Virtualization Engine Server
3957-V07/VEB.
The TS7700 Virtualization Engine Server V07-VEB offers the following features:
Rack-mount (4U) configuration.
One 3.0 GHz 8-core processor card.
16 GB of 1066 MHz ECC (error checking and correcting) memory: The 3957-V07/VEB
server provides scalable processing power and performance. It does so through pluggable
processor and memory cards. Fully configured, it can go up to 32 processors and 256 GB
of DDR3 physical memory. Therefore, you can increase processing power and capacity on
demand. This makes it ready for future enhancements.
Each new Expansion Unit I/O adapter drawer offers six additional PCI-X or PCI Express
adapter slots:
One or two 4-Gb FICON adapters (PCI-X)
Grid Ethernet card (PCI Express)
Fibre Channel to Disk Cache (PCI Express)
Tape connection card in a TS7740, or Expansion frame connection card in a TS7720 (Fibre
Channel, PCI Express)
Software configuration and support differences are transparent to IBM z/OS, IBM z/VM®, IBM
z/VSE®, and IBM z/TPF. They are also transparent to IBM and third-party tape management
software.
The TS7720 Virtualization Engine is configured with different server models and different disk
cache models than the TS7740 Virtualization Engine. A TS7720 Virtualization Engine consists
of a 3952 Model F05 Tape Base Frame and, optionally, a 3952 Model F05 Storage Expansion
Frame.
The 3952 Model F05 Tape Base Frame houses the following components:
One TS7720 Virtualization Engine Server, 3957 Model VEB
One TS7720 Virtualization Engine SATA Cache Controller, 3956 Model CS8. The
controller has zero to six TS7720 Virtualization Engine SATA Cache Drawers, 3956 Model
XS7
Two Ethernet switches
The 3952 Model F05 Storage Expansion Frame houses the following components:
Two TS7720 Virtualization Engine SATA Cache Controllers, 3956 Model CS8. Each
controller can have zero to ten TS7720 Virtualization Engine SATA Cache Drawers, 3956
Model XS7
Each TS7720 Virtualization Engine SATA Cache Controller, 3956 Model CS8 using 2-TB
drives provides approximately 19.8 TB of capacity after RAID 6 formatting. The TS7720
Virtualization Engine SATA Cache Drawers, 3956 Model XS7 using 2 TB drives provides
approximately 23.8 TB of capacity after RAID 6 formatting. The TS7720 uses global spares,
allowing all expansion drawers to share a common set of spares in the RAID 6 configuration.
The base frame cache controller can have a different capacity from the other two controllers
in the expansion frame. This configuration depends on characteristics such as the number of
expansion drawers in the expansion frame and the disk size in the base frame.
Using 2-TB disk drives, the maximum configurable capacity of the TS7720 Virtualization
Engine at R2.0 with the 3952 Model F05 Storage Expansion Frame is 442 TB of data before
compression. This works out to 1.33 PB of host data, assuming a 3:1 compression rate.
Figure 2-19 TS7720 Virtualization Engine Cache Controller, 3956-CS8 (front view and rear view)
The TS7720 Virtualization Engine Cache Controller provides RAID-6-protected virtual volume
disk storage for fast retrieval of data from cache. The TS7720 Virtualization Engine Cache
Controller offers the following features:
Two 8 Gbps Fibre Channel processor cards
Two battery backup units (one for each processor card)
Two power supplies with embedded enclosure cooling units
Figure 2-20 TS7720 Virtualization Engine Cache Drawer (front view and rear view)
The TS7720 Virtualization Engine Cache Drawer expands the capacity of the TS7720
Virtualization Engine Cache Controller. It does so by providing additional RAID-6-protected
disk storage. Each TS7720 Virtualization Engine Cache Drawer offers the following features:
Two Fibre Channel processor cards
Two power supplies with embedded enclosure cooling units
Sixteen DDMs, each with a storage capacity of 2 TB, for a usable capacity of 23.84 TB per
drawer
The total usable capacity of a TS7740 Virtualization Engine with one 3956 Model CC8 and
three 3956 model CX7s is approximately 28.17 TB before compression.
The Model CX7s can be installed at the plant or in an existing TS7740 Virtualization Engine.
Figure 2-22 shows the front and rear view of the TS7740 Virtualization Engine Model CC8
Cache Controller.
Figure 2-22 TS7740 Virtualization Engine Cache Controller (front and rear view)
The TS7740 Virtualization Engine Cache Controller provides RAID-5-protected virtual volume
disk storage. This storage temporarily holds data from the host before writing it to physical
tape. It then caches the data to allow fast retrieval from the disk. The TS7740 Virtualization
Engine Cache Controller offers the following features:
Two 8 Gbps Fibre Channel processor cards
Two battery backup units (one for each processor card)
Two power supplies with embedded enclosure cooling units
16 DDMs, each possessing 600 GB of storage capacity, for a usable capacity of 7.04 TB
Optional attachment to one or three TS7740 Virtualization Engine Cache Drawers
Figure 2-24 shows the front view and the rear view of the TS7740 Virtualization Engine Model
CX7 Cache Drawer.
Figure 2-24 TS7740 Virtualization Engine Cache Drawer (front and rear view)
The TS7740 Virtualization Engine Cache Drawer expands the capacity of the TS7740
Virtualization Engine Cache Controller by providing additional RAID 5 disk storage. Each
TS7740 Virtualization Engine Cache Drawer offers the following features:
Two Fibre Channel processor cards
Two power supplies with embedded enclosure cooling units
16 DDMs, each with 600 GB of storage capacity, for a total usable capacity of 7.04 TB per
drawer
Attachment to the TS7740 Virtualization Engine Cache Controller
Tape libraries
The TS3500 Tape Library is the only library supported with TS7740 Virtualization Engine
Release 2.0 Licensed Internal Code. To support a TS7740, the TS3500 Tape Library must
include a frame model L23 or D23 equipped with the TS7740 backend Fiber Switches.
Remember: Each TS7740 Virtualization Engine requires two separate fibre switches.
The TS7740 Virtualization Engine Release 2.0 supports 4-Gb and 8-Gb Fibre Channel
switches. Feature Code 4872 provides two TS7700 4-Gb Fibre Channel backend switches.
Feature Code 4875 provides one 8-Gb Fibre Channel switch. The TS7740 requires two
switches per frame.
Tape drives
The TS7740 Virtualization Engine supports the following tape drives inside a TS3500 Tape
Library:
IBM 3592 Model J1A Tape Drive: However, for maximum benefit from the TS7740
Virtualization Engine, use more recent generations of the 3592 Tape Drive. The 3592
Model J1A Tape Drives cannot be intermixed with TS1130 Tape Drives. The 3592 Model
J1A Tape Drives can be intermixed with TS1120 Tape Drives. However, they can only be
intermixed when the TS1120 Tape Drives are set to J1A emulation mode.
TS1120 Tape Drive (native mode or emulating 3592-J1A Tape Drives): Tape drive types
cannot be intermixed except for 3592-J1A Tape Drives and TS1120 Tape Drives operating
in 3592-J1A emulation mode.
TS1130 Tape Drive: TS7740 Virtualization Engine Release 1.6 and later include support
for TS1130 Tape Drives. When TS1130 Tape Drives are attached to a TS7740
Virtualization Engine, all attached drives must be TS1130 Tape Drives. Intermixing with
3592-J1A and TS1120 Tape Drives is not supported. TS1130 Tape Drives can read data
written by either of the prior generation 3592 Tape Drives. Tapes written in E05 format are
appended to in E05 format. The first write to supported tapes is written in the E06
format.
If TS1130 Tape Drives are detected and other generation 3592 Tape Drives are also
detected, the TS1130 Tape Drives will not be configured.
If Feature Code 9900 is installed, or if you plan to use tape drive encryption with the
TS7740 Virtualization Engine, ensure that the installed tape drives support encryption and are
enabled for System Managed Encryption using the TS3500 Library Specialist. By default,
TS1130 Tape Drives are encryption capable. TS1120 Tape Drives with the encryption module
are also encryption capable. Encryption is not supported on 3592 Model J1A Tape Drives.
For more information, see 2.4.5, “Encryption” on page 79 and 3.7, “Planning for encryption in
the TS7740 Virtualization Engine” on page 169.
Table 2-1 Summary of capacity and performance for media type and format

Media type   E06 format capacity and     E05 format capacity and     J1A format capacity and
             data rate (min - max)       data rate (min - max)       data rate (min - max)

JB           1 TB                        700 GB                      N/A
             40 MBps - 160 MBps          40 MBps - 150 MBps

JJ           128 GB                      100 GB                      60 GB
             40 MBps - 140 MBps          40 MBps - 140 MBps          30 MBps - 70 MBps
For more information, see IBM TS3500 Tape Library with System z Attachment A Practical
Guide to Enterprise Tape Drives and TS3500 Tape Automation, SG24-6789.
The TS3500 Tape Library was originally delivered in 2000 with Linear Tape-Open (LTO)
Ultrium technology. It offers a robust enterprise library solution for mid-range and high-end
open systems. Since its introduction, the library has been enhanced to accommodate newer
drive types and operating platforms, including the attachment of System z hosts and tape
drive controllers.
Proven reliable tape handling and functional enhancements result in a robust enterprise
solution with outstanding retrieval performance. Typical cartridge move time is less than three
seconds. For optimal results, use TS1120, TS1130, or LTO Ultrium high density cartridge
technology. The TS3500 Tape Library provides a powerful and robust tape storage solution
with a minimal footprint.
Further Information: See the TS3500 documentation for more information about the
TS3500 Tape Library features and capabilities.
The TS3500 Tape Library's scalability allows you to increase capacity from a single base
frame by adding up to fifteen additional storage units, called expansion frames. This section
highlights the storage units that can be attached to a TS7740 Virtualization Engine.
Figure 2-26 shows the TS3500 Tape Library High Density frame.
The Lxx frames also contain an I/O station for 16 cartridges. If both LTO and IBM 3592 Tape
Drives are installed inside the TS3500 Tape Library, the optional second I/O station is
required for the second media format. The second I/O station is installed below the first I/O
station. The drive type in the Lxx frame determines which I/O station is in the top position. In
an L23 frame (equipped with 3592 tape drives), the I/O station for 3592 cartridges is in the top
position.
All currently available frame models can be intermixed in the same TS3500 Tape Library as
previously installed frame models. Previous frame models include the L22, L32, L52, D22,
D32, and D52.
The TS3500 Tape Library also houses the Fiber Switches belonging to the attached TS7740
Virtualization Engine. To support this configuration, the TS3500 must include a frame model
L23 or D23. This frame model must be equipped with the TS7740 backend switches. Use FC
4872 for 4 Gb FC switches or two FC 4875 for 8 Gb FC switches. Each TS7740 Virtualization
Engine must have its own set of Fiber Switches. You can have more than one TS7740
attached to the same TS3500 Tape Library.
The following tape drive types are currently supported for System z attachment in the TS3500
Tape Library:
3592 Model J1A
TS1120
TS1130
Although these tape drives also support Open Systems host attachment, differences exist
between System z and Open Systems attachment.
The TS7740 Virtualization Engine requires a minimum of four dedicated physical tape drives
and supports a maximum of 16 drives. These drives can reside in the TS3500 Tape Library
Model L23/L22 Base Frame and Model D23/D22 Drive Frames. Up to twelve drives can be
installed in one frame. TS7740 Virtualization Engine attached drives do not have to be
installed in contiguous positions. For availability and performance reasons, spread the drives
evenly across two frames in close proximity to each other.
TS7740 Virtualization Engine attached drives cannot be shared with other systems. However,
they can share a frame with tape drives attached to the following items:
Other TS7740 Virtualization Engines
Tape controllers
Open Systems hosts
The TS7740 Virtualization Engine supports IBM 3592 Model J1A, TS1120, and TS1130 Tape
Drives. The IBM 3592-J1A Tape Drive has been withdrawn from marketing, but existing drives
can be used for TS7740 Virtualization Engine attachment. When TS1120 drives are
intermixed with 3592-J1A drives on the same TS7740 Virtualization Engine, the TS1120
drives must run in J1A Emulation mode. When only TS1120 drives are attached to a TS7740
Virtualization Engine, set them to native E05 mode. TS1130 drives cannot be intermixed with
either TS1120 drives or J1A tape drives.
The TS3500 Tape Library takes Multi-Path Architecture to the next level by implementing the
Advanced Library Management System (ALMS). ALMS is an optional feature for the TS3500
Tape Library in general. However, in a System z host attached library like the one described
here, ALMS is mandatory. ALMS provides improved flexibility with these features in a
user-friendly web interface:
Defining logical libraries
Allocating resources
Managing multiple logical libraries
Multiple logical libraries, each connected to different resources, can coexist in the same
TS3500 Tape Library. These can include the following resources:
Other TS7740 Virtualization Engines
Virtual Tape Server (VTS) and Native Tape Controllers (using IBM 3953 F05 frame)
Open System hosts
The TS7740 Virtualization Engine must have its own logical library partition within the
TS3500 Tape Library.
For details of the TS3500 Tape Library, see IBM TS3500 Tape Library with System z
Attachment A Practical Guide to Enterprise Tape Drives and TS3500 Tape Automation,
SG24-6789.
Tip: The IBM TotalStorage 3592-J1A has been withdrawn. Its replacements are the
TS1120 Tape Drive and the TS1130 Tape Drive.
The IBM TotalStorage 3592-J1A does not support Media Type JB.
When intermixed with 3592-J1A Tape Drives, the TS1120 Model E05 Tape Drive always
operates in J1A Emulation mode. This is true whether the drives are behind the same TS7740
Virtualization Engine or another controller. To use the full capacity and functionality of the
TS1120 Model E05, do not intermix it with J1A Tape Drives.
The TS1130 Tape Drive is available in two 3592 models: E06 and EU6. Model EU6 is only
available as an upgrade of an existing TS1120 Tape Drive Model E05 to the TS1130 Tape
drive. The TS1130 Tape Drive supports the following capabilities:
Data encryption and key management
Downward read compatible (n-2) to the 3592 Model J1A
Downward write compatible (n-1) to the 3592 Model E05 formats.
The TS1130 Tape Drive uses the same IBM 3592 Cartridges as the TS1120 and 3592-J1A.
Attachments to System z and Open Systems platforms are maintained.
The TS1130 shares the following enhancements with the previous model numbers:
Redundant power supplies
Larger, 1.1 GB (1 GiB) internal buffer on Model E06/EU6, 536.9 MB (512 MiB) for Model
E05, 134.2 MB (128 MiB) for Model J1A
Dynamic digital speed matching, individual read/write data channel calibration, and
increased search speed
Streaming Lossless Data Compression (SLDC) algorithm
AES 256-bit data encryption capability increases security with minimal performance
impact.
Up to 160 MBps native data rate for the Models E06 and EU6, four times faster than
the Model J1A at 40 MBps (up to 100 MBps for the Model E05)
Up to 1073.7 GB (1000 GiB) native cartridge capacity for the Models E06 and EU6 using
the IBM TotalStorage Enterprise Tape Cartridge 3592 Extended (3221.2 GB [3000 GiB] at
3:1 compression), more than a threefold increase over the maximum 322.1 GB (300 GiB)
native tape cartridge capacity (966.3 GB [900 GiB] at 3:1 compression) of Model J1A.
See IBM TS3500 Tape Library with System z Attachment A Practical Guide to Enterprise
Tape Drives and TS3500 Tape Automation, SG24-6789 for more information regarding IBM
TS1130 Tape Drives.
The TS7700 Virtualization Engine Management Interface is accessed by the user through a
standard web browser directed to the IP address assigned to the cluster (or clusters) during
installation. The TS7700 Virtualization Engine must be connected to your LAN using the
supplied Ethernet connection for the Management Interface to be fully accessible and
operational to the user.
See 8.2, “TS3500 Tape Library Specialist” on page 453 for more details about the
management interface.
The TS7720 Virtualization Engine accepts all storage constructs that are provided by the host
to make configuration and support easier. The TS7700 Virtualization Engine provides enhanced
cache management options for performance tuning, and enhanced automatic removal policies
in multi-cluster grid configurations where at least one cluster is a TS7720.
The TS7700 Virtualization Engine tape volume cache is a disk buffer to which emulated tape
volumes are written in 3490E format. For TS7740 Virtualization Engine configurations, the
emulated tape volumes are then copied to physical tape cartridges. The host operating
system sees virtual IBM 3490E Tape Drives, and the 3490 tape volumes are represented by
storage space in a fault-tolerant disk subsystem. All host interaction is through the virtual
control unit presented by the vNode. The host never writes directly to the physical tape drives
attached to a TS7740 Virtualization Engine.
While resident in the tape volume cache, the user data is protected by RAID 5 (Fibre Channel
drives) for TS7740 Virtualization Engine configurations, and RAID 6 (SATA drives) for TS7720
Virtualization Engine configurations. These RAID configurations provide continuous data
availability to users. If one data disk in a RAID group becomes unavailable, the user data can
be recreated dynamically from the remaining disks using parity data provided by the RAID
implementation. The RAID groups contain global hot spare disks to take the place of a failed
hard disk. Using parity, the RAID controller rebuilds the data from the failed disk onto the hot
spare as a background task. This allows the TS7700 Virtualization Engine to continue
working while the IBM SSR replaces the failed hard disk in the TS7700 Virtualization Engine
Cache Controller or Cache Drawer.
In addition to enabling the full use of high-capacity tape cartridges in a TS7740 Virtualization
Engine, the following benefits are offered:
Emulated 3490E volumes are accessed at disk speed. Tape commands such as space,
locate, rewind, and unload are mapped into disk commands that are completed in tens of
milliseconds rather than the tens of seconds required for traditional tape commands.
Multiple emulated 3490E volumes can be accessed in parallel because they physically
reside in the tape volume cache. To ensure data integrity, a single virtual volume cannot
be shared by different jobs or systems at the same time.
Preference level 0
Preference level 0 (Preference Group 0 or PG0) is assigned to volumes that are unlikely to be
accessed after being created, for example, volumes holding DASD image copies. There is no
need to keep them in cache any longer than is necessary to copy them to physical tape.
Informal studies suggest that the proportion of data that is unlikely to be accessed can be as
high as 80%.
When a volume is assigned preference level 0, the TS7740 Virtualization Engine gives it
preference to be copied to physical tape. When space is needed in the tape volume cache,
the TS7740 Virtualization Engine will first select a preference level 0 volume that has been
copied to a physical volume and delete it from cache. Preference level 0 volumes are selected
by largest size first, independent of how long they have been resident in cache. If no
preference level 0 volumes have been copied to physical volumes to remove, the TS7740
Virtualization Engine will select preference level 1 (PG1) volumes.
In addition to removing preference level 0 volumes from cache when space is needed, the
TS7740 Virtualization Engine will also remove them if the subsystem is relatively idle. There
is a small amount of internal processing impact to removing a volume from cache, so there is
some benefit in removing them when extra processing capacity is available. In the case where
the TS7740 Virtualization Engine removes PG0 volumes during idle times, it selects them by
smallest size first.
Preference level 1
Preference level 1 (Preference Group 1 or PG1) is assigned to volumes that are likely to be
accessed after being created. An example is volumes that contain master files.
When a volume is assigned preference level 1, the TS7740 Virtualization Engine adds it to
the queue of volumes to be copied to physical tape after a four minute time delay and after
any volumes assigned to preference level 0. The four minute time delay is to prevent
unnecessary copies from being performed when a volume is created, then quickly remounted
and appended to again. When space is needed in cache, the TS7740 Virtualization Engine
will first determine whether there are any preference level 0 volumes that can be removed. If
not, the TS7740 Virtualization Engine selects preference level 1 volumes to be removed based
on a “least recently used” algorithm. As a result, volumes that have been copied to physical
tape and have been in cache the longest without access are removed first.
When a preference level has been assigned to a volume, that assignment is persistent until
the volume is reused for scratch and a new preference level is assigned, or until the policy is
changed and a mount/demount occurs, at which point the new policy takes effect. This means
that a volume that is assigned preference level 0 maintains that preference level when it is
subsequently recalled into cache.
Figure 2-28 TS7740 Tape Volume Cache Management through Storage Class
Through the management interface, you can define one or more Storage Class names and
assign preference level 0 or 1 to them. To be compatible with the IART method of setting the
preference level, the Storage Class definition also allows a Use IART selection to be assigned.
Even before Outboard Policy Management was made available for the previous generation
VTS, you could assign a preference level to virtual volumes by using the Initial Access
Response Time Seconds (IART) attribute of the Storage Class. The IART is a Storage Class
attribute that was originally added to specify the desired response time (in seconds) for an
object using the Object Access Method (OAM). If you want a virtual volume to remain in
cache, you assign the volume a Storage Class whose IART value is 99 seconds or less.
Conversely, if you want to give a virtual volume preference to be out of cache, you assign the
volume a Storage Class whose IART value is 100 seconds or more.
Assuming that the Use IART selection has not been specified, the TS7700 Virtualization
Engine sets the preference level for the volume based on the preference level 0 or 1 of the
Storage Class assigned to the volume.
If the host passes a previously undefined Storage Class name to the TS7700 Virtualization
Engine during a scratch mount request, the TS7700 Virtualization Engine will add the name
using the definitions for the default Storage Class.
Because the TS7720 Virtualization Engine has a maximum capacity that is the size of its tape
volume cache, after this cache fills, the Volume Removal Policy allows logical volumes to be
automatically removed from the TS7720 tape volume cache as long as an appropriate number
of copies exist on peer clusters in the grid.
In addition, when the automatic removal is performed, it implies an override to the current
copy consistency policy in place, resulting in a lower number of consistency points than in the
original configuration defined by the user. When the automatic removal starts, all volumes in
the fast-ready category are removed first because these volumes are scratch volumes. To
account for any mistake where private volumes are returned to scratch, fast-ready volumes
must meet the same copy count criteria in a grid as non-fast-ready volumes. The pinning
option and minimum duration time criteria discussed below are ignored for fast-ready
volumes. You are also given control over which volumes are removed and when.
To guarantee that data always resides in a TS7720 Virtualization Engine, or resides there for at
least a minimal amount of time, a pinning time must be associated with each removal policy.
This pin time, in hours, allows volumes to remain in a TS7720 Virtualization Engine tape
volume cache for a certain period of time before they become candidates for removal, and can
vary between 0 and 65,536 hours. A pinning time of zero means no minimal pinning
requirement. In addition to pin time, three policies are available for each volume within a
TS7720 Virtualization Engine. These policies are as follows:
Pinned
The copy of the volume is never removed from this TS7720 cluster. The pinning duration is
not applicable and is implied as infinite. After a pinned volume is moved to scratch, it
becomes a priority candidate for removal similarly to the next two policies. This policy must
be used cautiously to prevent TS7720 Cache overruns.
Prefer Remove - When Space is Needed Group 0 (LRU)
The copy of a private volume is removed as long as:
a. An appropriate number of copies exist on peer clusters
b. The pinning duration (in number of hours) has elapsed since last access
c. The available free space on the cluster has fallen below the removal threshold
The order of which volumes are removed under this policy is based on their least recently
used (LRU) access times. Volumes in Group 0 are removed before the removal of volumes
in Group 1 except for any volumes in Fast Ready categories that are always removed first.
Archive and backup data would be a good candidate for this removal group because it will
not likely be accessed after it is written.
Prefer Keep - When Space is needed Group 1 (LRU)
The copy of a private volume is removed as long as:
a. An appropriate number of copies exists on peer clusters
b. The pinning duration (in number of hours) has elapsed since last access
c. The available free space on the cluster has fallen below removal threshold
d. Volumes with the Prefer Remove (LRU Group 0) policy have been exhausted
The order of which volumes are removed under this policy is based on their least recently
used (LRU) access times. Volumes in Group 0 are removed before the removal of volumes
in Group 1 except for any volumes in Fast Ready categories which are always removed
first.
Prefer Remove and Prefer Keep policies are similar to cache preference groups PG0 and
PG1 with the exception that removal treats both groups as LRU versus using the volume size.
Host Command Line Query capabilities are supported that can override automatic removal
behaviors and disable automatic removal within a TS7720 cluster. See the IBM
Virtualization Engine TS7700 Series z/OS Host Command Line Request User’s Guide on
Techdocs for more information. It is available at the following URL:
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/Techdocs
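As a simple illustration, the cache state of a distributed library can be queried from the z/OS
console with the LIBRARY REQUEST (LI REQ) command. The distributed library name DISTLIB1
is a hypothetical placeholder, and the exact keywords and response content should be verified
against the User's Guide referenced above for your code level:

   LI REQ,DISTLIB1,CACHE

The removal-related overrides themselves are set through SETTING keywords documented in the
same guide.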
For multi-cluster grid configurations where one or more TS7720 are present, a TS7720 Out of
Cache Resources event will cause mount redirection so that an alternate TS7720 or TS7740
(tape volume cache) can be chosen.
During this emergency, if a volume mount is requested to the affected cluster, all tape volume
cache candidates are considered, even when the mount point cluster is in the Out of Cache
Resources state. The grid function will choose an alternate TS7700 cluster with a valid
consistency point and, if dealing with a TS7720, available cache space. Fast-ready mounts
involving a tape volume cache candidate that is Out of Cache Resource will fail only if no
other TS7700 cluster is eligible to be a tape volume cache candidate. Non-Fast Ready
mounts will only be directed to a tape volume cache in an Out of Cache Resources state if
there is no other eligible (tape volume cache) candidate.
When all tape volume caches within the grid are in the Out of Cache Resources state,
non-Fast Ready mounts will be mounted with read-only access.
When all tape volume cache candidates are in the Paused, Out of Physical Scratch, or Out of
Cache Resources state, the mount process enters a queued state. The mount remains queued
until the host issues a demount or one of the distributed libraries exits the undesired state.
Any mount issued to a cluster that is in the Out of Cache Resources state and also has Copy
Policy Override set to Force Local Copy will be failed. The Force Local Copy setting excludes
all other candidates from tape volume cache selection.
Tip: Out of Cache Resources is a highly undesirable state that should be avoided by
all means. Make sure that Enhanced Removal Policies, Copy Consistency Policies, and
Threshold levels are applied and adjusted to your needs.
These default settings can be changed according to your needs. Work with your IBM SSR if
you need to modify them.
For example, in a two-cluster grid, when you have set up a copy consistency point policy of
RUN, RUN and the host has access to all virtual devices in the grid, the selection of virtual
devices combined with I/O tape volume cache selection criteria automatically balances the
distribution of original volumes and copied volumes across the tape volume caches. The
original volumes (newly created or modified) will be preferred to reside in cache, while the
copies will be preferred to be removed from cache. The result is that each tape volume cache
is filled with unique newly created or modified volumes, thereby roughly doubling the effective
amount of cache available to host operations.
For a multi-cluster grid that is used for remote business continuation, particularly when the
local clusters are used for all I/O (remote virtual devices varied offline), the default cache
management method might not be desired. In the case where the remote cluster of the grid is
used for recovery, the recovery time is minimized by having most of the needed volumes
already in cache. What is needed is to have the most recently copied volumes remain in
cache, not be preferred out of cache.
Based on your requirements, you can set or modify this control through the z/OS Host
Console Request function for the remote cluster:
When off, which is the default, logical volumes copied into the cache from a peer TS7700
Virtualization Engine are managed as PG0 (preferred to be removed from cache).
When on, logical volumes copied into the cache from a peer TS7700 Virtualization Engine
are managed using the actions defined for the Storage Class construct associated with
the volume as defined at the TS7700 Virtualization Engine receiving the copy.
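A sketch of the corresponding console command follows. The SETTING,CACHE,COPYFSC keyword
names are an assumption based on the z/OS Host Command Line Request User's Guide, and the
distributed library name DISTLIB1 is hypothetical; verify the keywords for your code level
before use.

   LI REQ,DISTLIB1,SETTING,CACHE,COPYFSC,ENABLE     (copies follow the Storage Class actions)
   LI REQ,DISTLIB1,SETTING,CACHE,COPYFSC,DISABLE    (default: copies managed as PG0)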
If you define a common preference group at all clusters for a given Storage Class construct
name while also setting the above control to “on”, the preference group assigned to all copies
remains the same. For example, the Storage Class constructs can be defined as
SCBACKUP=Preference Group 1 and SCARCHIV=Preference Group 0. All logical volumes written
that specify SCARCHIV are treated as PG0 in both the local and remote (copy) caches. All
logical volumes written that specify SCBACKUP are treated as PG1 in both the local and remote
caches.
Volumes that are written to an I/O tape volume cache that is configured for PG0 have priority
with respect to peer TS7700 Virtualization Engine replication priority. Therefore, copy queues
within TS7700 Virtualization Engine Clusters will handle volumes with I/O tape volume cache
PG0 assignments before volumes configured as PG1 within the I/O tape volume cache. This
behavior is designed to allow those volumes marked as PG0 to be flushed from cache as
quickly as possible and therefore not left resident for replication purposes. This behavior
overrides a pure first-in first-out (FIFO) ordered queue. A new setting in the
Management Interface (MI) under Copy Policy Override, Ignore cache Preference Groups
for copy priority, disables this function. When selected, it causes all PG0 and PG1
volumes to be treated in a true first-in first-out order.
Tip: These settings in the Copy Policy Override window override default TS7700
Virtualization Engine behavior and can be different for every cluster in a grid.
However, it is unlikely that all of the volumes used for the restore will be resident in the
cache, so recalls will be required. Unless you can explicitly control the sequence of volumes
used during the restore, recalled volumes will likely displace cached volumes that have not yet
been restored from, resulting in further recalls later in the recovery process. After a restore
process is completed from a recalled volume, that volume is no longer needed. A method is
needed to remove the recalled volumes from the cache after they have been accessed so that
there is minimal displacement of other volumes in the cache.
Based on your requirements, you can set or modify this control through the z/OS Host
Console Request function on the remote cluster:
When OFF, which is the default, logical volumes that are recalled into cache are managed
by using the actions defined for the Storage Class construct associated with the volume as
defined at the TS7700 Virtualization Engine.
When ON, logical volumes that are recalled into cache are managed as PG0 (prefer to be
removed from cache). This control overrides the actions that are defined for the Storage
Class associated with the recalled volume.
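A similar sketch applies to this recall control. Again, the SETTING,CACHE,RECLPG0 keyword
names and the distributed library name DISTLIB1 are assumptions to be verified against the
z/OS Host Command Line Request User's Guide for your code level.

   LI REQ,DISTLIB1,SETTING,CACHE,RECLPG0,ENABLE     (recalled volumes managed as PG0)
   LI REQ,DISTLIB1,SETTING,CACHE,RECLPG0,DISABLE    (default: recalls follow the Storage Class actions)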
Figure 2-30 TS7740 Logical volume allocation to specific physical volume pool flow
This definition can be set through the management interface. Each TS7740 Virtualization
Engine that is attached to a TS3500 Tape Library has its own set of pools. A Common
Scratch Pool (Pool 00) is a reserved pool containing only scratch stacked volumes. There are
also 32 general purpose pools (Pools 01-32).
By default, there is one pool, Pool 01, and the TS7740 Virtualization Engine stores virtual
volumes on any stacked volume available to it. This creates an intermix of logical volumes
from differing sources, for example, different LPARs and applications, on a physical cartridge.
The user cannot influence the physical location of a logical volume within a pool. Having all
logical volumes end up in a single group of stacked volumes is not always optimal.
Volume pooling allows the administrator to define pools of stacked volumes within the TS7740
Virtualization Engine, and you can direct virtual volumes to these pools through the use of
SMS constructs. There can be up to 32 general purpose pools (01-32) and one common pool
(Pool 00).
Physical VOLSER ranges can be defined with a home pool at insert time. Changing the home
pool of a range has no effect on existing volumes in the library. When borrow/return is also
disabled, this provides a method to have a specific range of volumes used exclusively by
a specific pool.
Using this facility, you can also perform the following tasks:
Move stacked volumes to separate pools
Set a reclamation threshold at the pool level
Set Forced Reclamation policies for stacked volumes
Eject stacked volumes from specific pools
Intermix or segregate media types
Map separate Storage Groups to the same primary pools
Set up specific pools for Copy Export
Set up a pool or pools for encryption
Tip: Primary Pool 01 is the default private pool for TS7740 Virtualization Engine stacked
volumes.
With borrowing, stacked volumes can move from pool to pool and back again to the original
pool. In this way, the TS7740 Virtualization Engine can manage out of scratch and low scratch
scenarios, which can occur within any TS7740 Virtualization Engine from time to time.
Remember: Pools that have borrow/return enabled that contain no active data will
eventually return all scratch volumes to the common scratch pool after 48 to 72 hours of
inactivity.
With the Selective Dual Copy function, storage administrators have the option to selectively
create two copies of logical volumes within two pools of a TS7740 Virtualization Engine. In a
grid environment, logical volumes that are copied to peer TS7740 Virtualization Engine
clusters can also be duplexed to back-end tape.
The Selective Dual Copy function can be used along with the Copy Export function to provide
a secondary off site physical copy for disaster recovery purposes. See 2.4.4, “Copy Export”
on page 76 for more details concerning Copy Export.
The second copy of the logical volume is created in a separate physical pool ensuring
physical cartridge separation. Control of Dual Copy is through the Management Class
construct (see “Management Classes” on page 240). The second copy is created when the
original volume is pre-migrated.
Important: Make sure that reclamation in the secondary physical volume pool is
self-contained (the secondary volume pool reclaims onto itself) to keep secondary pool
cartridges isolated from the others. Otherwise, copy export disaster recovery capabilities
can be compromised.
The second copy created through the Selective Dual Copy function is only available when the
primary volume cannot be recalled or is inaccessible. It cannot be accessed separately and
cannot be used if the primary volume is being used by another operation. In other words, the
second copy provides a backup in the event the primary volume is damaged or is
inaccessible.
Selective Dual Copy is defined to the TS7740 Virtualization Engine and has the following
characteristics:
The copy feature is enabled by the Management Class setting through the management
interface where you define the secondary pool.
Secondary and primary pools can be intermixed:
– A primary pool for one logical volume can be the secondary pool for another logical
volume unless the secondary pool is used as a Copy Export pool.
– Multiple primary pools can use the same secondary pool.
When a request for a scratch volume is issued to the TS7700 Virtualization Engine, the request
specifies a mount category. The TS7700 Virtualization Engine selects a virtual VOLSER from
the candidate list of scratch volumes in the category.
Scratch volumes chosen at the mounting cluster are chosen using the following priority order:
1. All volumes in the source or alternate source category that are owned by the local cluster,
not currently mounted, and do not have pending reconcile changes against a peer cluster.
2. All volumes in the source or alternate source category that are owned by any available
cluster, not currently mounted, and do not have pending reconcile changes against a peer
cluster.
3. All volumes in the source or alternate source category that are owned by any available
cluster and not currently mounted.
4. All volumes in the source or alternate source category that can be taken over from an
unavailable cluster that has an explicit or implied takeover mode enabled.
Volumes chosen in the above steps favor those that have been contained in the source
category the longest. Volume serials are also toggled between odd and even serials for each
volume selection.
For all scratch mounts, the volume is temporarily initialized as though the volume had been
initialized using the EDGINERS or IEHINITT program, and will have an IBM standard label
consisting of a VOL1 record, an HDR1 record, and a tape mark. If the volume is modified, the
temporary header information is applied to a file in the tape volume cache. If the volume is not
modified, the temporary header information is discarded and any previously written content (if
it exists) is not modified. Besides choosing a volume, tape volume cache selection processing
is used to choose which tape volume cache will act as the I/O tape volume cache as
described in “I/O tape volume cache selection” on page 96.
The TS7700 Virtualization Engine with Scratch Allocation Assistance (SAA) function
activated attempts, in conjunction with z/OS host software capabilities, to determine the
cluster that will provide the best mount point for a scratch mount request.
If the chosen I/O tape volume cache in the TS7740 Virtualization Engine contains a copy of
the logical volume, a physical tape mount is not required. In this case, the mount is
signaled as complete and the host can access the data immediately.
If the volume has already been removed from the tape volume cache in the TS7740
Virtualization Engine and migrated to a stacked volume, the logical volume must be recalled
from the stacked volume into the tape volume cache before the host can directly access the
volume. A recall operation typically requires a physical mount unless the stacked volume is
already mounted from a previous volume recall. Mount completion is signaled to the host
system only after the entire volume is available in the tape volume cache. The virtual volume
remains in the tape volume cache until it becomes the least-recently used (LRU) volume,
unless the volume was assigned a Preference Group of 0 or the Recalls Preferred to be
Removed from Cache override is enabled using the TS7700 Library Request command. If
modification of the virtual volume did not occur when it was mounted, the TS7740
Virtualization Engine does not schedule another copy operation and the current copy of the
logical volume on the original stacked volume remains active. Furthermore, copies to remote
TS7700 Virtualization Engine Clusters are not required if modifications were not made.
In a z/OS environment, to mount a specific volume in the TS7700 Virtualization Engine, that
volume must reside in a private category within the library. The tape management system
prevents a scratch volume from being mounted in response to a specific mount request.
Alternatively, the TS7700 Virtualization Engine treats any specific mount that targets a
volume that is currently assigned to a category, which is also configured through the
management interface as Fast Ready, as a host scratch mount. If this occurs, the temporary
tape header is created and no recalls will take place, as described in “Mounting a scratch
virtual volume” on page 72. In this case, DFSMS allocations will fail the mount operation
because the expected last written data set for the private volume was not found. Because no
write operation occurs, the original volume’s contents should be left intact, which accounts for
categories being incorrectly configured as Fast Ready within the management interface.
The TS7700 Virtualization Engine with the activated Allocation Assistance functions Device
Allocation Assistance (DAA) for private volumes and Scratch Allocation Assistance (SAA) for
scratch volumes, attempts in conjunction with z/OS host software capabilities to determine
the cluster that will provide the best performance for a given volume mount request.
For a detailed description on Allocation Assistance, see 9.4, “Virtual Device Allocation in z/OS
with JES2” on page 653.
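As an assumption-based sketch, the allocation assistance functions are also controlled through
the Host Console Request SETTING interface. The DEVALLOC keywords and the composite library
name COMPLIB1 shown here are illustrative only and must be confirmed against the z/OS Host
Command Line Request User's Guide for your code level.

   LI REQ,COMPLIB1,SETTING,DEVALLOC,SCRATCH,ENABLE    (activate SAA for scratch mounts)
   LI REQ,COMPLIB1,SETTING,DEVALLOC,PRIVATE,ENABLE    (activate DAA for private mounts)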
Important: Disregarding the Delete Expired Volumes setting can lead to an out-of-cache state
in a TS7720 Virtualization Engine. With a TS7740 Virtualization Engine, it can cause
excessive tape usage or, in an extreme condition, an out-of-scratch state.
The disadvantage of not having this option enabled is that scratched volumes needlessly
consume tape volume cache and physical stacked volume resources, therefore demanding
more tape volume cache active space while also requiring more physical stacked volumes in
a TS7740 Virtualization Engine. The time it takes a physical volume to fall below the
reclamation threshold is also increased because the data is still considered active. This delay
in data deletion also causes scratched stale volumes to be moved from one stacked volume
to another during reclamation.
With expired volume management, you can set a “grace period” for expired volumes ranging
from one hour to approximately 144 weeks (default is 24 hours). After that period has
elapsed, expired volumes become candidates for deletion. The deletion of expired logical
volume data eliminates the need for the TS7700 Virtualization Engine to manage logical
volume data that has been expired at the host. For details about expired volume
management, see 4.3.6, “Defining the logical volume expiration time” on page 234.
An additional option, referred to as Expire Hold, can also be enabled if Delete Expired is
enabled. When this option is also enabled, the volume cannot be accessed by any
host-initiated command until the grace period has elapsed. This option is provided to prevent
malicious or unintended overwriting of scratched data before the grace period elapses. After
the grace period expires, the volume is simultaneously removed from the held state and
made a deletion candidate.
Restriction: Volumes in expire-hold state are excluded from DFSMS OAM scratch counts
and are not candidates for TS7700 scratch mounts.
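A minimal sketch of these timing rules follows, assuming the default 24-hour grace period. The function and field names are illustrative only, not TS7700 internals; the point is that an Expire Hold volume is neither host accessible nor counted as scratch until the grace period elapses, at which time it becomes a deletion candidate.

from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(hours=24)          # default grace period described above

def volume_state(scratched_at, now, expire_hold=False):
    held = expire_hold and now < scratched_at + GRACE_PERIOD
    deletion_candidate = now >= scratched_at + GRACE_PERIOD
    return {
        "host_accessible": not held,        # held volumes reject host commands
        "deletion_candidate": deletion_candidate,
        "counts_as_scratch": not held,      # held volumes excluded from scratch counts
    }

t0 = datetime(2011, 11, 1, 8, 0)
print(volume_state(t0, t0 + timedelta(hours=2), expire_hold=True))
print(volume_state(t0, t0 + timedelta(hours=30), expire_hold=True))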
Expired data on a physical volume remains readable through salvage processing until the
volume has been completely overwritten with new data.
The reconciliation process checks for invalidated volumes. Reconciliation is the period of
activity during which the TS7740 Virtualization Engine determines the most recent instance
of a logical volume to be the active one and deletes all other instances of that volume from
the active volume list. This process automatically adjusts the amount of active data recorded
for any stacked volumes that hold invalidated logical volumes.
The data that is associated with a logical volume is considered invalidated if any of the
following statements are true:
A host has assigned the logical volume to a scratch category. The volume is subsequently
selected for a scratch mount and data is written to the volume. The older version of the
volume is now invalid.
The TS7740 Virtualization Engine keeps track of the amount of active data on a physical
volume, starting at 100% when the volume becomes full. The TS7740 Virtualization Engine
tracks the percentage in increments of one tenth of one percent and rounds down, so even
one byte of inactive data drops the percentage to 99.9%. The TS7740 Virtualization Engine
keeps track of the time at which the physical volume went from 100% full to less than 100%
full by performing the following tasks:
Checking on an hourly basis for volumes in a pool with a non-zero setting
Comparing this time against the current time to determine if the volume is eligible for
reclamation
Reclamation, which consolidates active data and frees stacked volumes for return-to-scratch
use, is part of the internal management functions of a TS7740 Virtualization Engine.
Physical tape volumes become eligible for space reclamation when their active data falls
below the reclaim threshold specified by the administrator in the definition of the home pool
to which those volumes belong. The reclaim threshold is set for each pool individually,
according to the specific needs of that client, and is expressed as a percentage (%) of tape
utilization.
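As a simple illustration of this eligibility test (not actual TS7740 logic), the following sketch compares a full volume's remaining active-data percentage against the reclaim threshold defined for its home pool; the 35% threshold is an arbitrary example.

def eligible_for_reclamation(active_data_pct, pool_reclaim_threshold_pct):
    """A full stacked volume becomes a reclamation candidate once its
    active-data percentage drops below the threshold set for its home pool."""
    return active_data_pct < pool_reclaim_threshold_pct

pool_threshold = 35.0                       # for example, reclaim below 35% utilization
for pct in (99.9, 40.0, 34.9, 10.0):
    print(pct, eligible_for_reclamation(pct, pool_threshold))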
Volume reclamation can be concatenated with a Secure Data Erase for that volume if
required. This causes the volume to be erased after the reclamation has finished. See
“Secure Data Erase” on page 75 for more details.
Reclamation should not coincide with the production peak period in a TS7740 cluster,
leaving all physical drives available for priority tasks such as recalls and migrations. Choose
the best period for reclamation based on the workload profile of that TS7740 cluster, and
inhibit reclamation during the busiest period for the machine.
A physical volume that is being ejected from the library is also reclaimed in a similar way before
being allowed to be ejected. The active logical volumes contained in the cartridge are moved
to another physical volume, according to the policies defined in the volume’s home pool,
before the physical volume is ejected from the library.
Reclamation can also be used to expedite migration of older data from one pool to another,
but only by targeting a separate, specific pool as the destination for the reclaimed data.
The Secure Data Erase function supports the erasure of a physical volume as part of the
reclamation process. At the end of the reclamation task, a random data pattern is written on
the physical volume being reclaimed so that the logical volumes written on it are no longer
readable. As part of this data erase function, an additional reclaim policy is added. The policy
specifies the number of days a physical volume can contain invalid logical volume data before
the physical volume becomes eligible to be reclaimed.
When a physical volume contains encrypted data, the TS7740 Virtualization Engine can
perform a fast erase of the data under certain circumstances by erasing only the encryption
keys on the cartridge. Basically, it applies the same Secure Data Erase (SDE) process only
to the portion of the tape where the key information is stored. Without the key information,
the rest of the tape cannot be read. This method significantly reduces the erasure time. The
first erasure of an encrypted volume causes the keys to be securely erased and a pattern to
be written to the tape.
This data erase function is enabled on a volume pool basis. It is enabled when a non-zero
value is specified for the data erase reclamation policy. When enabled, all physical volumes in
the pool are erased as part of the reclamation process, independent of the reclamation policy
under which the volume became eligible for reclamation.
Any physical volume that has a status of read-only is not subject to this function and is not
designated for erasure as part of read-only recovery.
If you use the eject stacked volume function, the data on the volume is not erased before
ejecting. The control of expired data on an ejected volume is your responsibility.
Volumes tagged for erasure cannot be moved to another pool until erased, but they can be
ejected from the library because such a volume is usually removed for recovery actions.
Using the Move function will also cause a physical volume to be erased, even though the
number of days specified has not yet elapsed. This includes returning borrowed volumes.
The Copy Export function is supported on all configurations of the TS7740 Virtualization
Engine, including grid configurations. In a grid configuration, each TS7740 Virtualization
Engine is considered a separate source TS7740 Virtualization Engine. This means that only
the physical volumes exported from a source TS7740 Virtualization Engine can be used for
the recovery of a source TS7740 Virtualization Engine. Physical volumes from more than one
source TS7740 Virtualization Engine in a grid configuration cannot be combined for recovery
use. Recovery is only to a stand-alone cluster configuration. After recovery, the Grid MES
offering can be applied to recreate a grid configuration.
The Copy Export function allows a copy of selected logical volumes written to secondary
pools within the TS7740 Virtualization Engine to be removed and taken off site for disaster
recovery purposes. The benefits of volume stacking, which places many logical volumes on a
physical volume, are retained with this function. Because the physical volumes being
exported are from a secondary physical pool, the primary logical volume remains accessible
to the production host systems.
During a Copy Export operation, all of the physical volumes with active data on them in a
specified secondary pool are removed from the library associated with the TS7740
Virtualization Engine. Only the logical volumes that are valid on that TS7740 Virtualization
Engine are considered during the execution of the operation. If the virtual volumes are in the
tape volume cache, but have not yet been migrated to the secondary pool, pre-migrations are
made as part of the Copy Export operation. If the TS7740 Virtualization Engine is in a grid
configuration, copies that have not yet been completed to the TS7740 Virtualization Engine
performing the Copy Export are not considered during the execution of the operation.
It is expected that Copy Export operations will be run on a periodic basis, resulting in multiple
groups of physical volumes that contain copies of logical volumes from the TS7740
Virtualization Engine. Logical volumes currently mounted during a copy export operation are
excluded from the export set.
During the Copy Export operation, a copy of the current TS7740 Virtualization Engine’s
database is written to the exported physical volumes. To restore access to the data on the
removed physical volumes, all exported physical volumes for a source TS7740 Virtualization
Engine are placed into a library that is attached to an empty TS7740 Virtualization Engine. A
disaster recovery procedure is then performed that restores access using the latest copy of
the database.
The physical volumes exported during a Copy Export operation continue to be managed by
the source TS7740 Virtualization Engine with regard to space management. As logical
volumes that are resident on the exported physical volumes expire, are rewritten, or otherwise
invalidated, the amount of valid data on a physical volume will decrease until the physical
volume becomes eligible for reclamation based on the criteria you provided for its pool.
Exported physical volumes that are to be reclaimed are not brought back to the source
TS7740 Virtualization Engine for processing. Instead, a new secondary copy of the remaining
valid logical volumes is made using the primary logical volume copy as a source.
The next time the Copy Export operation is performed, the physical volumes with the new
copies are also exported. The physical volumes that were reclaimed (which are off site) no
longer are considered to have valid data and can be returned to the source TS7740
Virtualization Engine to be used as new scratch volumes.
The host that initiates the Copy Export operation first creates a dedicated export list volume
on the TS7740 Virtualization Engine that will perform the operation. The export list volume
contains instructions regarding the execution of the operation and a reserved file that the
TS7740 Virtualization Engine will use to provide completion status and export operation
information. As part of the Copy Export operation, the TS7740 Virtualization Engine creates
response records in the reserved file that lists the logical volumes exported and the physical
volume on which they reside. This information can be used as a record for what data is off
site. The TS7740 Virtualization Engine also writes records in the reserved file on the export
list volume that provides the current status for all physical volumes with a state of Copy
Exported.
Use of the Copy Export function is shown in Figure 2-31, which shows the flow of the Copy
Export volumes and how they are used in a disaster recovery. In a disaster recovery scenario
where the production site is lost, the latest set of Copy Export volumes is used to restore the
TS7740 Virtualization Engine at a remote site.
[Figure 2-31: flow of Copy Export volumes from the production site (production host, tape volume cache, pool 09) to the recovery site (recovery host, recovery TS7700, database)]
Storage Group and Management Class constructs are defined to use separate pools for the
primary (pool 01) and secondary (pool 09) copies of the logical volume. The existing
Management Class construct, which is part of Advanced Policy Management (APM), is used
to create a second copy of the data to be Copy Exported. The Management Class actions are
configured through the TS7700 Virtualization Engine management interface. An option on the
management interface window allows designation of a secondary pool as a Copy Export pool.
As logical volumes are written, the secondary copy of the data is pre-migrated to stacked
volumes in the Copy Export pool. The example uses pool 09 as the Copy Export pool.
When the time comes to initiate a Copy Export, a Copy Export job is run from the production
host. The TS7740 Virtualization Engine will pre-migrate any logical volumes in the Copy
Export pool that have not been pre-migrated. Any new logical volumes written after the Copy
Export operation is initiated will not be included in the Copy Export set of physical volumes.
The TS7740 Virtualization Engine then writes a complete TS7740 Virtualization Engine
database to each of the physical volumes in the Copy Export set.
The Copy Export job can specify whether the stacked volumes in the Copy Export set should
be ejected immediately or placed into the export-hold category. When Copy Export is used
with the export-hold category, you will need to manually request that the export-hold volumes
be ejected. The choice to eject as part of the Copy Export job or to eject them later from the
export-hold category will be based on your operational procedures. The ejected Copy Export
volumes are then transported to an offsite location for disaster recovery purposes.
Tip: If a copy export hold volume is reclaimed while it is still present in the Tape Library, it is
automatically moved back to the common scratch pool (or the defined reclamation pool)
after the next copy export operation completes.
A schedule is implemented for rotation of the Copy Export sets at the disaster recovery site.
This will periodically return sets of stacked volumes to the production system TS7740
Virtualization Engine, where they are reintegrated into the library. The TS7740 Virtualization
Engine will reconcile the logical volumes on the returning stacked volumes based on activity
in the Composite Library while they were exported to the disaster recovery site.
In the event of a disaster, a recovery process is performed through the TS7740 Virtualization
Engine management interface on an empty TS7740 Virtualization Engine. As part of that
recovery process, all of the source TS7740 Virtualization Engine Copy Exported stacked
volumes are inserted into the target library. If there are multiple pools that have been
exported, one pool is recovered at a time. The stacked volume with the latest copy of the
source TS7740 Virtualization Engine's database needs to be identified to the target TS7740
Virtualization Engine. The target TS7740 Virtualization Engine then restores the database
from this stacked volume. At this point, the disaster recovery host can start operations.
Clarification: The recovery process can be run in a test mode for DR testing purposes. This
allows a test restore without compromising the contents of the Copy Export sets. See the
Copy Export white paper available at the following URL:
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101092
2.4.5 Encryption
The importance of data protection has become increasingly apparent with news reports of
security breaches, loss and theft of personal and financial information, and government
regulation. Encrypting the stacked cartridges minimizes the risk of unauthorized data access
without excessive security management burdens or subsystem performance issues.
For the remainder of this section, Key Manager is used in place of EKM, TKLM, or ISKLM.
The key manager is the central point from which all encryption key information is managed
and served to the various subsystems. The Key Manager server communicates with the
TS7740 Virtualization Engine, tape libraries, control units, and Open Systems device
drivers.
Encryption on the TS7740 Virtualization Engine is controlled on a storage pool basis. The
Storage Group DFSMS construct that is specified for a logical tape volume determines which
storage pool is used for the primary and optional secondary copies in the TS7740
Virtualization Engine. The storage pools were originally created for management of physical
media, and have been enhanced to include encryption characteristics. Storage pool
encryption parameters are configured through the TS7740 Virtualization Engine management
interface under Physical Volume Pools.
For encryption support, all drives that are attached to the TS7740 Virtualization Engine must
be encryption-capable and encryption must be enabled. If the TS7740 Virtualization Engine
uses TS1120 Tape Drives, they must also be enabled to run in their native E05 format. The
management of encryption is performed on a physical volume pool basis. Through the
management interface, one or more of the 32 pools can be enabled for encryption.
Each pool can be defined to use specific encryption keys or the default encryption keys
defined at the Key Manager server:
Specific encryption keys
Each pool that is defined in the TS7740 Virtualization Engine can have its own unique
encryption key. As part of enabling a pool for encryption, enter two key labels for the pool
and an associated key mode. The two keys might or might not be the same. Two keys are
required by the Key Manager servers during a key exchange with the drive. A key label
can be up to 64 characters. Key labels do not have to be unique per pool: The
management interface provides the capability to assign the same key label to multiple
pools. For each key, a key mode can be specified. The supported key modes are Label
and Hash. As part of the encryption configuration through the management interface, you
provide IP addresses for a primary and an optional secondary key manager.
Default encryption keys
The TS7740 Virtualization Engine encryption supports the use of a default key. This
support simplifies the management of the encryption infrastructure because no future
changes are required at the TS7740 Virtualization Engine. After a pool is defined to use
the default key, the management of encryption parameters is performed at the key
manager:
– Creation and management of encryption certificates
– Device authorization for key manager services
– Global Default key definitions
– Drive level Default key definitions
– Default key changes as required by security policies
Figure 2-32 illustrates that the method for communicating with a key manager is through the
same Ethernet interface that is used to connect the TS7740 Virtualization Engine to your
network for access to the management interface. The request for an encryption key is
directed to the IP address of the primary key manager. Responses are passed through the
TS7740 Virtualization Engine to the drive. If the primary key manager did not respond to the
key management request, the optional secondary key manager IP address is used. After the
TS1120 or TS1130 drive has completed the key management communication with the key
manager, it accepts data from the tape volume cache.
When a logical volume needs to be read from a physical volume in a pool enabled for
encryption, either as part of a recall or reclamation operation, the TS7740 Virtualization
Engine uses the key manager to obtain the necessary information to decrypt the data.
[Figure 2-32: key manager communication path; the hosts attach to the TS7700 through FICON, and the key manager is reached through the customer network]
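The following sketch illustrates the pool-level encryption choices and the primary/secondary key manager fallback described above. It is not the TS7740 configuration schema: the PoolEncryption fields, key labels, and IP addresses are assumptions made for the example.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PoolEncryption:
    pool: int
    enabled: bool
    key_label_1: Optional[str] = None       # up to 64 characters, or None for the default key
    key_label_2: Optional[str] = None
    key_mode: str = "Label"                  # "Label" or "Hash"

def request_key(pool_cfg, primary_km, secondary_km, reachable):
    """Direct the key request to the primary key manager, falling back to the
    optional secondary key manager if the primary does not respond."""
    if not pool_cfg.enabled:
        return None
    for km in (primary_km, secondary_km):
        if km and km in reachable:
            return f"key for pool {pool_cfg.pool} served by {km}"
    raise RuntimeError("no key manager reachable; encrypted write cannot proceed")

pool09 = PoolEncryption(pool=9, enabled=True, key_label_1="PROD.EXPORT.KEY",
                        key_label_2="PROD.EXPORT.KEY", key_mode="Label")
print(request_key(pool09, "10.0.0.10", "10.0.0.11", reachable={"10.0.0.11"}))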
As a result of supporting 3592 WORM compatible drives and media, the logical WORM
enhancement can duplicate most of the 3592 WORM behavior, allowing ease of host
integration. The host views the TS7700 Virtualization Engine as an LWORM-compliant library
that contains WORM-compliant 3490E logical drives and media.
The LWORM implementation of the TS7700 Virtualization Engine emulates physical 3490E
WORM tape drives and media, and uses the existing host software support of physical
WORM media. The TS7700 Virtualization Engine provides the following functions:
Provides an advanced function Data Class construct property that allows volumes to be
assigned as LWORM-compliant during the volume’s first mount, where a write operation
from BOT is required, or during a volume’s reuse from scratch, where a write from BOT is
required.
Generates, during the assignment of LWORM to a volume’s characteristics, a temporary
World Wide Identifier that is surfaced to the host software during open and close
processing and then bound to the volume during the first write from BOT.
Generates and maintains a persistent Write-Mount Count for each LWORM volume and
keeps the value synchronized with host software.
Allows only appends to LWORM volumes using physical WORM append guidelines.
Provides a mechanism through which host software commands can discover LWORM
attributes for a given mounted volume.
Clarification: Cohasset Associates, Inc. has assessed the logical WORM capability of the
TS7700 Virtualization Engine. The conclusion is that the TS7700 Virtualization Engine
meets all SEC requirements in Rule 17a-4(f), which expressly allows records to be
retained on electronic storage media.
These functions operate through the Host Console Request Function to modify management
controls and to set alert thresholds for many of the resources managed by the TS7700.
For each managed TS7700 resource, two alert thresholds can be set. For each threshold, a
message is sent to all attached hosts when the threshold is exceeded and again when the
resource falls back into an acceptable range. The following resources are monitored:
The amount of data in cache that needs to be copied to a peer
The amount of data resident in the cache (for a TS7740, this is also the amount of data
that needs to be copied to physical tape)
Number of logical scratch volumes
Number of physical scratch volumes
Number of available physical tape drives
Figure 2-33 shows a diagram of the monitored resource and threshold levels.
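The two-threshold behavior can be illustrated with the following sketch. The resource name, threshold values, and sample data are examples only and do not correspond to actual SETTING or ALERT keywords; the sketch only shows that one message is produced when a threshold is exceeded and another when the resource falls back into the acceptable range.

def check_resource(name, value, warn_at, ok_below, in_alert):
    """Return (new_alert_state, message_or_None) for one monitored resource."""
    if not in_alert and value >= warn_at:
        return True, f"ALERT: {name} exceeded threshold ({value} >= {warn_at})"
    if in_alert and value <= ok_below:
        return False, f"OK: {name} back in acceptable range ({value} <= {ok_below})"
    return in_alert, None

state = False
for sample in (70, 92, 95, 60):             # e.g. % of cache data awaiting copy to a peer
    state, msg = check_resource("cache data to copy", sample,
                                warn_at=90, ok_below=65, in_alert=state)
    if msg:
        print(msg)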
See 8.5.3, “Host Console Request function” on page 589 and IBM Virtualization Engine
TS7700 Series z/OS Host Command Line Request User’s Guide available at the following
URL for details about how to use the command and threshold levels:
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101091
This facility can also be used in conjunction with automation software to obtain and analyze
operational information, helping you or the storage administrators notice any anomaly or
abnormal trend in time.
The Library Request command with the provided set of parameters is a powerful tool that
allows you to monitor, control and modify the operational behavior of your grid or individual
clusters without using the web-based interface. It gives you access to the inner controls of
your TS7700 Virtualization Engine cluster or grid, allowing you to check or modify your
TS7700 subsystem behavior to meet your requirements.
See 8.5.3, “Host Console Request function” on page 589 and IBM Virtualization Engine
TS7700 Series z/OS Host Command Line Request User’s Guide available at the following
URL for more details concerning the Host Console Request function:
http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101091
Hard Partitioning is a way to give a fixed number of Logical Control Units to a defined Host
Group and connect the units to a range of Logical Volumes dedicated to a particular host or
hosts. SDAC is a useful function when multiple partitions have:
Separate volume ranges
Separate Tape Management system
Separate Tape Configuration Database
SDAC allows you to define a subset of all the logical devices per host (Control Units in ranges
of 16 devices) and enables exclusive control on host initiated mounts, ejects, and attribute or
category changes. The implementation of SDAC is described in more detail in 5.4,
“Implementing Selective Device Access Control” on page 323. Implementing SDAC will
require planning and orchestration with other system areas to map the desired access for the
device ranges from individual servers or logical partitions and consolidate this information in a
coherent IODF (HCD). From the TS7700 subsystem standpoint, SDAC definitions are set up
using the TS7700 Virtualization Management Interface.
To configure a multi cluster grid, the grid enablement feature must be installed on all TS7700
Virtualization Engines in the grid, and an Ethernet connection must be established between
the clusters. Each cluster has two dual-port copper RJ45 1 Gbps Ethernet adapters, two
1 Gbps short wave optical fiber adapters, or one 10 Gbps long wave optical fiber adapter, as shown in
2.3.1, “Common components for the TS7720 Virtualization Engine and TS7740 Virtualization
Engine models” on page 40. Earlier TS7740 Virtualization Engines might still have the
single-port adapters for the copper connections and short wave 1 Gbps connections.
The cluster that has ownership of a specific logical volume can change dynamically. When
required, the TS7700 Virtualization Engine node transfers the ownership of the logical volume
as part of mount processing. This action ensures that the cluster with the virtual device
associated with the mount has ownership. When a mount request is received on a virtual
device address, the TS7700 Virtualization Engine Cluster for that virtual device must have
ownership of the volume to be mounted or must obtain the ownership from the cluster that
currently owns it.
If the TS7700 Virtualization Engine Clusters in a grid and the communication paths between
them are operational, the change of ownership and the processing of logical volume-related
commands are transparent to the host. If a TS7700 Virtualization Engine Cluster has a host
request for a logical volume that it does not own and it cannot communicate with the owning
cluster, the operation against that volume will fail unless additional direction is given. In other
words, clusters will not automatically assume or take ownership of a logical volume without
being directed.
The volume ownership protects the volume from being accessed or modified by multiple
clusters simultaneously. If more than one cluster has ownership of a volume, it can result in
the volume's data or attributes being changed differently on each cluster, resulting in a data
integrity issue with the volume. If a TS7700 Virtualization Engine Cluster has failed or is
known to be unavailable (for example, it is being serviced), its ownership of logical volumes is
transferred to another TS7700 Virtualization Engine Cluster as follows:
Read-only Ownership Takeover
When Read-only Ownership Takeover (ROT) is enabled for a failed cluster, ownership of a
volume is taken from the failed TS7700 Virtualization Engine Cluster. Only read access to
the volume is allowed through the other TS7700 Virtualization Engine Clusters in the grid.
Scratch mounts that target volumes that are owned by the failed cluster are failed. Scratch
mounts that target pre-owned volumes will succeed. After ownership for a volume has
been taken in this mode, any operation attempting to modify data on that volume or
change its attributes fails. The mode for the failed cluster remains in place until another
mode is selected or the failed cluster has been restored.
You can set the level of ownership takeover, Read-only or Write Ownership, through the
TS7700 Virtualization Engine management interface. Note that you cannot set a cluster in
service preparation after it has already failed.
AOTM uses the TS3000 System Console (TSSC) associated with each TS7700 Virtualization
Engine in a grid, to provide an alternate path to check the status of a peer TS7700
Virtualization Engine. This is why each TS7700 Virtualization Engine in a grid must be
connected to a TSSC. Through a network that is preferably separate from the grid, the TSSCs
communicate with each other to determine if a cluster is down or if the grid network between
the clusters is down. The TS7700 Virtualization Engine at each site can be configured to
automatically enable Volume Ownership takeover through AOTM.
Without AOTM, an operator must determine if one of the TS7700 Virtualization Engine
Clusters has failed and then enable one of the ownership takeover modes. This is required to
access the logical volumes owned by the failed cluster. It is important that write ownership
takeover be enabled only when a cluster has failed, and not when there is only a problem with
communication between the TS7700 Virtualization Engine Clusters. If it is enabled and the
cluster in question continues to operate, data might be modified independently on other
clusters, resulting in corruption of data. Although there is no data corruption issue with the
read ownership takeover mode, it is possible that the remaining clusters will not have the
latest version of the logical volume and present down-level data.
Even if AOTM is not enabled, it is a best practice to configure it. This provides protection from
a manual takeover mode being selected when the cluster is actually functional. This
additional TS3000 System Console (TSSC) path is used to determine if an unavailable cluster
is still operational or not. This path is used to prevent the user from forcing a cluster online
when it should not be, or enabling a takeover mode that can result in dual volume use.
With AOTM, one of the takeover modes is enabled if normal communication between the
clusters is disrupted and the cluster performing the takeover can verify that the other cluster
has failed or is otherwise not operational. When a cluster loses communication with another
peer cluster, it will ask the TS3000 System Console to which it is attached to confirm that the
other cluster has failed. If the TSSC validates that the other cluster has indeed failed, it
replies back and the requesting TS7700 Virtualization Engine enters the user-configured
AOTM ownership takeover mode. If the failure cannot be validated, or if the system consoles
cannot communicate with each other, no ownership takeover mode is entered automatically.
To take advantage of AOTM, you must provide an IP communication path between the
TS3000 System Consoles at the cluster sites. For AOTM to function properly, it must not
share the same paths as the grid interconnection between the TS7700 Virtualization Engines.
[Figure 2-34: Cluster 0 and Cluster 1 with the grid WAN link between them broken]
If the TSSC is unable to ascertain the health of the remote site, then Autonomic Ownership
Takeover is not performed (Figure 2-35).
[Figure 2-35: the TSSC-to-TSSC path cannot verify the health of the remote cluster, so ownership takeover is not enabled]
Through options provided through SMIT menus, the IBM SSR can enable or disable this
function and select which ownership takeover mode is to be entered when another TS7700
Virtualization Engine is determined not to be operational.
With a three-cluster or higher grid, an additional vote is available for peer outage detection
(Figure 2-36). For example, if Cluster 0 detects that Cluster 2 is unavailable, but Cluster 1
disagrees, then a network issue must be present. In this case, the AOTM request will fail and
no takeover mode will be enabled. If the other available clusters and their TSSCs agree that
a peer cluster is unavailable, then ownership takeover can occur.
[Figure 2-36: Cluster 0 and Cluster 1, each with its own TSSC, connected through a separate network (WAN2, 10 Mbit or faster) in addition to the grid links]
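The following sketch illustrates, in simplified form, the AOTM decision just described; it is not IBM code, and the function and parameter names are assumptions. A takeover mode is entered only when the TSSC confirms the peer failure and, in grids of three or more clusters, when the other available clusters agree that the peer is down.

def aotm_decision(tssc_confirms_failure, peer_votes, configured_mode="read-only"):
    """peer_votes: True where another available cluster also sees the peer as down."""
    if not tssc_confirms_failure:
        return None                          # cannot validate: no automatic takeover
    if peer_votes and not all(peer_votes):
        return None                          # clusters disagree: assume a network issue
    return configured_mode                   # enter the configured takeover mode

print(aotm_decision(True,  [True, True]))    # -> 'read-only' takeover enabled
print(aotm_decision(True,  [True, False]))   # -> None (likely a network problem)
print(aotm_decision(False, [True, True]))    # -> None (TSSC could not confirm)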
Based on your requirements, an IBM SSR enables or disables AOTM and also sets the
default ownership takeover mode that is to be invoked.
Remember: In Figure 2-36, each cluster requires a dedicated TSSC if the clusters belong
to the same grid. Clusters from different grids can be attached to the same TSSC if the
clusters are in close proximity to each other.
If the mount is directed to a cluster without a valid copy, then a remote mount is the result.
Thus, in special cases, even if DAA is enabled, remote mounts and recalls can still occur.
Subsequently, host processing will attempt to allocate a device from the first cluster returned
in the list. If an online non-active device is not available within that cluster, it will move to the
next cluster in the list and try again until a device is chosen. This allows the host to direct the
mount request to the cluster that would result in the fastest mount, typically the cluster that
has the logical volume resident in cache.
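A hedged sketch of this host-side behavior follows. The data shapes are hypothetical: the TS7700 returns an ordered list of clusters, and the host walks the list until it finds an online, non-active device.

def choose_device(ordered_clusters, devices_by_cluster):
    """ordered_clusters: cluster names, best candidate first (from DAA).
    devices_by_cluster: cluster -> list of (device, online, active)."""
    for cluster in ordered_clusters:
        for device, online, active in devices_by_cluster.get(cluster, []):
            if online and not active:
                return cluster, device       # allocate here; avoids a cross-cluster mount
    return None                              # no usable device in any candidate cluster

devices = {"CL0": [("0A40", True, True)],    # all devices busy
           "CL1": [("0B40", True, False)]}   # free device available
print(choose_device(["CL0", "CL1"], devices))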
DAA improves a grid’s performance by reducing the number of cross-cluster mounts. This
feature is especially important when copied volumes are treated as Preference Group 0
(removed from cache first) and when copies are not made between locally attached clusters
of a common grid. In conjunction with DAA, using the copy policy overrides to prefer the local
tape volume cache for Fast Ready mounts provides the best overall performance.
Configurations that include the TS7720’s deep cache will dramatically increase their cache
hit ratio.
Without DAA, configuring the cache management of replicated data as PG1 (prefer to be kept
in cache with a LRU algorithm) is the best way to improve non-fast ready mount performance
by minimizing cross cluster mounts. However, this performance gain comes with a reduction
in the effective grid cache size because multiple clusters are maintaining a copy of a logical
volume. To regain the same level of effective grid cache size, an increase in physical cache
capacity might be required.
DAA requires updates in host software (APAR OA24966 for z/OS V1R8, V1R9 and V1R10).
DAA functionality is included in z/OS V1R11 and later.
The function Scratch Allocation Assistance (SAA) extends the capabilities of Device
Allocation Assistance (DAA) to the scratch mount requests. SAA filters the list of clusters in a
grid to return to the host a smaller list of candidate clusters specifically designated as scratch
mount candidates. By identifying a subset of clusters in the grid as sole candidates for scratch
mounts, SAA optimizes scratch mounts to a TS7700 grid.
[Figure: Scratch Allocation Assistance in a multi-cluster grid (JES2 only); the archive workload is directed to the TS7740 clusters with their drives and library, and the primary workload is directed to the TS7720 cluster, all connected through the LAN/WAN]
A cluster is designated as a candidate for scratch mounts using the Scratch Mount Candidate
option on the management class construct, accessible from the TS7700 Management
Interface. Only those clusters specified through the assigned management class are
considered for the scratch mount request. When queried by the host preparing to issue a
scratch mount, the TS7700 Virtualization Engine considers the candidate list associated with
the management class, along with cluster availability. The TS7700 Virtualization Engine then
returns to the host a filtered, but unordered, list of candidate clusters suitable for the scratch
mount operation.
The z/OS allocation process then randomly chooses a device from among those candidate
clusters to receive the scratch mount. In the event all candidate clusters are unavailable or in
service, all clusters within the grid become candidates. In addition, if the filtered list returns
clusters that have no devices configured within z/OS, all clusters in the grid become
candidates.
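The following sketch illustrates this candidate filtering; the cluster names and data structures are assumptions for the example. Note the fallback: if every candidate is unavailable, or a returned cluster has no devices configured in z/OS, all clusters in the grid become candidates, and z/OS then picks a device at random among the candidate clusters.

import random

def scratch_candidates(mc_candidates, available, clusters_with_devices, all_clusters):
    candidates = [c for c in mc_candidates if c in available]
    # If every candidate is unavailable, or a candidate has no devices defined
    # to z/OS, fall back to treating every cluster in the grid as a candidate.
    if not candidates or any(c not in clusters_with_devices for c in candidates):
        return list(all_clusters)
    return candidates

grid = ["CL0", "CL1", "CL2"]
cands = scratch_candidates(mc_candidates=["CL2"], available={"CL0", "CL1", "CL2"},
                           clusters_with_devices={"CL0", "CL1", "CL2"}, all_clusters=grid)
print(random.choice(cands))                  # z/OS-style random pick among candidates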
A new LIBRARY REQUEST option is introduced to allow the function to be globally enabled
or disabled across the entire multi-cluster grid. Only when this option is enabled will the z/OS
software execute the additional routines needed to obtain the candidate list of mount clusters
from a given composite library. This function is disabled by default.
All clusters in the multi-cluster grid must be at the release R2.0 level before SAA becomes
operational. The supporting z/OS APAR OA32957 is required to use Scratch Allocation
Assistance in a z/OS JES2 environment. z/OS environments with earlier code can coexist,
but they will continue to function in the traditional way with respect to scratch allocations.
In the TS7700 Virtualization Engine architecture, the grid can directly access the data within
the tape volume cache of any cluster. A copy of the data does not have to be in the tape
volume cache of the same cluster as the vNode used by the host for data access. With the
TS7700 Virtualization Engine, the determination of which cluster's tape volume cache will act
as I/O tape volume cache is based upon the copy requirements of the Management Class,
internal algorithms, and settings.
Copy management
With the TS7700 Virtualization Engine's architectural capability of having more than two
clusters peered together, copy policy management requirements are different than those of
the prior generation PTP VTS. In a TS7700 Virtualization Engine Grid, you might want to
have multiple copies of a virtual volume on separate clusters. You might also want to specify
when the copies are performed relative to the job that has written to a virtual volume and have
that be unique for each cluster.
Copy management is controlled through the Management Class storage construct. Using the
management interface, you can create Management Classes and define where copies reside
and when they will be synchronized relative to the host job that created them. Depending on
your business needs for more than one copy of a logical volume, multiple Management
Classes, each with a separate set of definitions, can be created. The following key questions
help to determine copy management in the TS7700 Virtualization Engine:
Where do I want my copies to reside?
When do I want my copies to become consistent with the originating data?
Do I want logical volume copy mode retained across all grid mount points?
A portion of the Management Class definition window will include the cluster name and allow
a copy consistency point to be specified for each cluster. If a copy is to reside on a cluster's
tape volume cache, then you indicate a copy consistency point. If you do not want a cluster to
have a copy of the data, then you specify the No Copy option.
Tip: The architecture of the TS7700 Virtualization Engine allows for multiple clusters in a
grid. The current version, Release 2.0 supports up to six clusters in a multi-cluster grid
configuration.
All host read and write operations are routed to the I/O tape volume cache, which can be in a
separate cluster than the mount vNode. The mount vNode is the cluster providing the virtual
tape drive address against which the I/O operations from the host are being issued. To the
application writing data to a volume, the data on the I/O tape volume cache is always
consistent with the application. Based on the policies defined for the Management Class
assigned to the volume, the data from the I/O tape volume cache is replicated to other tape
volume caches associated with the other clusters in the grid.
If a copy consistency point of Rewind/Unload is defined for a cluster in the Management Class
assigned to the volume, a consistent copy of the data must reside in that cluster’s tape
volume cache before command completion is indicated for the Rewind/Unload command.
If multiple clusters have a copy consistency point of Rewind/Unload, all of their associated
tape volume caches must have a copy of the data before command completion is indicated for
the Rewind/Unload command. Options are available to override this requirement for
performance tuning purposes. See “Override settings” on page 98 for a description of the
option that indicates command completion of the Rewind/Unload command when multiple
clusters specify the Rewind/Unload copy consistency point.
If a copy consistency point of Deferred is defined, the copy to that cluster’s tape volume
cache can occur any time after the Rewind/Unload command has been processed for the I/O
tape volume cache. A mixture of copy consistency points can be defined for a Management
Class, allowing each cluster to have a unique consistency point.
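As an illustration of how a copy consistency point setting such as RNND (used in the example that follows) can be read, the following sketch classifies each cluster by its consistency point. The string representation and the function name are assumptions made for the example, not the TS7700 internal format.

def interpret_ccp(ccp, cluster_names):
    """'R' (Rewind/Unload) copies must be consistent before Rewind/Unload completes;
    'D' (Deferred) copies may be replicated later; 'N' means no copy."""
    at_run, deferred, no_copy = [], [], []
    for name, point in zip(cluster_names, ccp):
        {"R": at_run, "D": deferred, "N": no_copy}[point].append(name)
    return {"consistent_at_rewind_unload": at_run,
            "copied_later": deferred,
            "never_copied": no_copy}

print(interpret_ccp("RNND", ["Cluster0", "Cluster1", "Cluster2", "Cluster3"]))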
[Figure: four-cluster grid with three TS7720 clusters (Cluster 0, Cluster 1, and Cluster 2) at the production site attached to the host and one TS7740 cluster at the disaster recovery site, connected through the LAN/WAN; the copy consistency point settings (RNND, NRND, NNRD, DDDD) are listed below]
The grid for this example spans two sites. The production site has three TS7720 Virtualization
Engines that are attached to the same host. The fourth cluster in the grid is a TS7740
Virtualization Engine that is located at a disaster recovery site with no direct access to the
production host. The goal is to maintain only two copies of a logical volume in the grid at any
given time. The CCP array is set for each TS7720 Virtualization Engine to create the initial
copy at the mounting cluster and to send a deferred copy to the TS7740 Virtualization Engine
at the disaster recovery site. No copies are made between the TS7720 Virtualization Engine
clusters. The CCP policies of the clusters would be as follows, where R is Run (immediate
mode), N is No Copy, and D is Deferred:
Production site TS7720 Cluster 0: R N N D
Production site TS7720 Cluster 1: N R N D
Production site TS7720 Cluster 2: N N R D
Disaster recovery site TS7740 Cluster 3: D D D D
Under optimum operational conditions in the grid, and using JES2 Device Allocation
Assistance (DAA), the intended goal of producing two copies of a logical volume is met.
During allocation processing of non-Fast Ready logical volumes, DAA is used to determine
which cluster is best for the specific mount request. Usually this is the cluster with the logical
volume resident in cache. The host then attempts to allocate a device from the best cluster.
JES3 does not support Device Allocation Assistance. Hence, using the configuration example,
the host can allocate to clusters that do not have a copy in cache. Without Retain Copy Mode,
additional copies of volumes can result.
Clarification: When families are defined, clusters set to Deferred within the local family
will be preferred over RUN clusters outside the family.
The default Management Class is Deferred at all configured clusters, including the local
cluster. The default settings are applied whenever a new construct is defined through the
management interface or used in a mount command where the management class has not
been previously defined.
You might want to have two separate physical copies of your logical volumes on one of the
clusters and not on the others. Through the management interface associated with the cluster
where you want the second copy, specify a secondary pool when defining the Management
Class. For the Management Class definition on the other clusters, do not specify a secondary
pool. For example, you might want to use the Copy Export function to extract a copy of data
from the cluster to be taken to a disaster recovery site.
It is important to note that during mount processing, the copy consistency point information
that is used for a volume is taken from the Management Class definition for the cluster with
which the mount vNode is associated. A best practice is to define the copy consistency point
definitions of a Management Class to be the same on each cluster so as to avoid confusion
about where copies will reside. You can devise a scenario in which you define separate copy
consistency points for the same Management Class on each of the clusters. In this scenario,
the location of copies and when the copies are consistent with the host that created the data
will differ, depending on the cluster at which the mount is processed. In these scenarios, use the
retain copy mode option. When the retain copy mode is enabled against the currently defined
management class, the previously assigned copy modes are retained independent of what
the current management class definition states.
The TS7700 Virtualization Engine will consider elements such as data validity, cluster
availability, family affinities, type of mount, cluster usage, local preferences, and overrides for
settings in the I/O tape volume cache selection. The many factors are evaluated and an I/O
tape volume cache is selected. The TS7700 Virtualization Engine also takes into
consideration the copy consistency points defined in the Management Class associated with
the volume being mounted. A cluster with a Rewind/Unload copy consistency point
requirement is weighted heavier than one with a Deferred copy consistency point.
For example, Management Class MCPROD01 has been defined within both clusters with the
following copy consistency points:
LIBRARY1 Rewind/Unload
LIBRARY2 Deferred
For example, a Management Class MCPROD02 can be defined within both clusters with the
same data consistency points as follows:
LIBRARY1 Rewind/Unload
LIBRARY2 Rewind/Unload
If a scratch mount specifying MCPROD02 is received for a vNode in cluster LIBRARY2, the
tape volume cache associated with LIBRARY2 is selected because it meets the cluster
consistency point requirement, and is expected to have the best performance because the
tape volume cache is local. The TS7700 Virtualization Engine estimates the “best” performing
tape volume cache based on many factors, such as how busy a cluster is with respect to other
clusters within a grid configuration. Because this is a scratch mount, other factors like
consistency and cache residency are not considered. It is possible that the tape volume
cache in LIBRARY2 is currently handling a large quantity of host I/O operations. In that case,
the tape volume cache in LIBRARY1 would likely be selected to avoid job throttling.
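The weighting idea in this example can be sketched as follows. The numeric weights are invented for illustration, and the real selection considers many more factors (data validity, families, overrides), but the sketch shows why a Rewind/Unload cluster normally outweighs a Deferred one and why a heavily loaded local cluster can lose to a less busy peer.

def score_cluster(ccp_point, is_local, busy_pct):
    score = {"R": 100, "D": 50, "N": float("-inf")}[ccp_point]  # N: never selectable
    if is_local:
        score += 25                           # locality preference
    score -= busy_pct * 0.5                   # penalize a busy cluster
    return score

def select_io_tvc(clusters):
    """clusters: dict name -> (ccp_point, is_local, busy_pct)."""
    return max(clusters, key=lambda n: score_cluster(*clusters[n]))

# MCPROD02 (R,R): a very busy local LIBRARY2 can lose to LIBRARY1.
print(select_io_tvc({"LIBRARY1": ("R", False, 10), "LIBRARY2": ("R", True, 95)}))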
Copy consistency points can be used to choose which specific cluster is used for the I/O tape
volume cache. Consider an example of a three-cluster grid consisting of LIBRARY1,
LIBRARY2, and LIBRARY3. Suppose that you want the data to reside only on the cluster that
is associated with LIBRARY2. You define a Management Class within all clusters with the
same copy consistency point as Rewind/Unload only for LIBRARY2. LIBRARY1 and
LIBRARY3 are specified as No Copy. Regardless of which vNode receives a mount using that
Management Class (remember that you have defined the Management Class within all
clusters the same), the tape volume cache associated with cluster LIBRARY2 is selected. If
the cluster LIBRARY2 is not available, the mount will fail. This unpleasant event can be easily
avoided by defining another copy consistency point as Deferred in a separate cluster (either
LIBRARY1 or LIBRARY3) in addition to the Rewind/Unload copy consistency point of
LIBRARY2. With that Management Class definition (NRD or DRN), in the case that
LIBRARY2 is not available, the tape volume cache associated with LIBRARY3 (NRD) or
LIBRARY1 (DRN) will be selected and the mount will be performed successfully. A deferred copy
will still be made to LIBRARY2 as soon as it is available again.
Grouping clusters into families also influences I/O tape volume cache selection. The
improved tape volume cache selection is a set of special rules that take effect when cluster
families are defined. With this algorithm, clusters within the same family are favored over the
others when all other factors are equal across clusters. Deferred clusters within the mount
point family are preferred over RUN clusters outside the family. This results in a better
utilization of the cache resources within the multi-cluster grid.
Tip: The copy consistency point is considered for both scratch and specific mounts.
Override settings
With the prior generation of PTP VTS, there were several optional override settings that
influenced how an individual VTC selected a VTS to perform the I/O operations for a mounted
tape volume. The override settings are determined by you, but set by an IBM SSR. With the
TS7700 Virtualization Engine, you define and set the optional override settings that influence
the selection of the I/O tape volume cache and replication responses through the
management interface.
TS7700 Virtualization Engine overrides I/O tape volume cache selection and
replication response
The settings are specific to a cluster, meaning that each cluster can have separate settings if
desired. The settings take effect for any mount requests received after the settings were
saved. All mounts, independent of which management class is used, will use the same
override settings. Mounts already in progress are not affected by a change in the settings.
The following override settings are supported:
Prefer Local Cache for Fast Ready Mount Requests
This override will prefer the mount point cluster as the I/O tape volume cache for scratch
mounts if it is available and contains a valid copy consistency definition other than No Copy.
In a GDPS environment, you must set the Force Local TVC override to ensure that the local
tape volume cache is selected for all I/O. This setting includes:
Prefer Local for Fast Ready Mounts
Prefer Local for non-Fast Ready Mounts
[Figure 2-40: Host A at Site A attached through FICON to TS7700 Cluster 0, Host B at Site B attached to TS7700 Cluster 1, with the clusters connected through Ethernet]
Next, look at what happens when Host A mounts a logical volume on a virtual device in
Cluster 0. The mount vNode of Cluster 0 has access to both its local tape volume cache and
to the tape volume cache of remote Cluster 1, so the local tape volume cache will not
necessarily be selected as I/O tape volume cache (although it is heavily preferred). The
cluster copy consistency point settings are the same for both clusters, so there is no
preference as far as copy consistency is concerned. In this case, locality will tip the scales in
favor of the local tape volume cache because the Fibre Channel attached drives of the local
tape volume cache give you a better performance than the Ethernet connected remote tape
volume cache under normal circumstances. Be aware that selection of the remote tape
volume cache as I/O tape volume cache is still possible because numerous other factors,
such as cache residency or validity of data, are also taken into account and can overrule the
locality factor. The same considerations apply to Host B and Cluster 1.
If you want to ensure that the local tape volume cache is always selected as the I/O tape
volume cache, you might have to use the override settings. See “TS7700 Virtualization
Engine overrides I/O tape volume cache selection and replication response” on page 98.
[Figure: a single host at Site A attached through FICON to TS7700 Cluster 0, with TS7700 Cluster 1 at Site B connected through Ethernet; all virtual devices in Cluster 0 and Cluster 1 are online to the host]
Assume that a copy of the data is required on both clusters. Contrary to the previous
example, you need a valid copy of the data at Rewind/Unload only on Cluster 0 in the local
site. The data can be copied to Cluster 1 at a later time. Therefore, define the copy
consistency points as RUN for Cluster 0 and Deferred for Cluster 1. It is assumed that Scratch
Allocation Assistance (SAA) is not enabled. When you look at a Fast Ready Mount, for
example, you now must differentiate between two cases:
The host allocates a virtual drive on Cluster 0.
During the mount process, the vNode of Cluster 0 selects the I/O tape volume cache. One
of the factors it takes into consideration for this decision is the copy consistency points
defined in the Management Class associated with the volume being mounted. The RUN
setting for Cluster 0 weighs more than the Deferred setting for Cluster 1. The vNode of
Cluster 0 will therefore select its local tape volume cache as I/O tape volume cache.
Locality also favors the tape volume cache of Cluster 0, but the decisive factors here are
the copy consistency points.
The host allocates a virtual drive on Cluster 1.
During the mount process, the mount vNode in Cluster 1 selects the I/O tape volume
cache. It evaluates the copy consistency points for the decision, and again the tape
volume cache of Cluster 0 is selected as I/O tape volume cache. The difference is that
now the remote tape volume cache is selected (remote from the perspective of the
selecting vNode). Locality favors the local tape volume cache as the I/O tape volume cache,
but copy consistency point settings weigh heavier than locality. So data is written to the
tape volume cache of Cluster 0 first, using the Ethernet connection between the two
clusters. The copy to the tape volume cache of Cluster 1 happens at a later time.
This scenario (Figure 2-42) is much the same as the scenario in example 1 (Figure 2-40 on
page 100). The scenarios differ only in the copy consistency point settings. Here there are
separate copy consistency point settings defined on Cluster 0 and Cluster 1 of the grid
configuration.
[Figure 2-42: Host A at Site A attached to TS7700 Cluster 0 and Host B at Site B attached to TS7700 Cluster 1 through FICON, with the clusters connected through Ethernet]
The hosts in this scenario have access to the virtual devices of their local cluster only
because the remote virtual devices are varied offline to the hosts. Alternatively Scratch
Allocation Assistance can be used here if Host A and Host B are using different management
classes. Host A management classes can be scratch mount candidates for cluster 0 and Host
B management classes can be scratch mount candidates for cluster 1. Device Allocation
Assistance ensures that the local cluster is selected for non-scratch mounts. When a host
mounts a logical volume on one of these virtual devices, you want to see that the respective
local tape volume cache is selected as the I/O tape volume cache, that a copy exists in the
local tape volume cache before Rewind/Unload is indicated to the host, and that a deferred
copy will be made to the tape volume cache of the remote cluster at a later time.
To achieve this goal, set the copy consistency points on Cluster 0 to RUN and to Deferred for
Cluster 1 (RD in Figure 2-42). On Cluster 1, reverse these settings and define Deferred for
Cluster 0 and RUN for Cluster 1 (DR in Figure 2-42). Regardless of the site where a virtual
volume is mounted, the local tape volume cache is preferred to the remote tape volume cache.
Remember: This setting of RD and DR might cause an unexpected behavior when you
read data from the opposite Cluster 1 immediately after writing the volume to Cluster 0. If
your workload requires immediate access from the other cluster, you should set the copy
consistency points on both clusters to DD.
Although no R values exist, they are not required. The mounting vNode simply chooses
one of the Ds as an I/O tape volume cache and internally views it as a RUN site. In addition
to DD-DD, set “Prefer local Fast Ready” so that Fast Ready mounts stay local. For this
example, it then works as follows:
Cluster 0 Fast Ready mount: Because “prefer local” is set, Cluster 0 is selected as tape
volume cache and is internally treated like a RUN site.
Cluster 1 mount for read: Because Cluster 1 has not yet completed its copy, Cluster 0 is
still chosen as I/O tape volume cache (remote mount).
When the volume is demounted, device end will not be held up because no R values
exist. The copy will remain queued to Cluster 1 and will eventually complete.
In the same scenario, if you wanted to always have two valid copies at Rewind/Unload for
mounts on Cluster 0, and just one valid copy in the local tape volume cache for mounts on
Cluster 1, set the copy consistency points as follows:
Cluster 0: RUN for Cluster 0, RUN for Cluster 1
Cluster 1: No Copy for Cluster 0, RUN for Cluster 1
Two-cluster grid
With a two-cluster grid, you can configure the grid for disaster recovery, high availability, or
both. The following sections describe configuration considerations for two-cluster grids. The
scenarios presented are typical configurations. Other configurations are possible and might
be better suited for your environment.
A natural or human-caused event has made the local site's TS7700 Virtualization Engine
Cluster unavailable. The two TS7700 Virtualization Engine Clusters reside in separate
locations, separated by a distance dictated by your company's requirements for disaster
recovery. The only connection between the local site and the disaster recovery site are the
grid interconnections. There is no host connectivity between the local hosts and the disaster
recovery site's TS7700 Virtualization Engine.
[Figure: production host(s) at the local site and a backup DR host at the disaster recovery site, with the two clusters of the DR grid connected through the WAN]
In a high availability configuration, both TS7700 Virtualization Engine Clusters are located
within metro distance of each other. These clusters are connected through a local area
network. If one of them becomes unavailable because it has failed, or is undergoing service
or being updated, data can be accessed through the other TS7700 Virtualization Engine
Cluster until the unavailable cluster is made available.
[Figure: high availability configuration with the host and both TS7700 clusters (Cluster 0 and Cluster 1) at the local site, connected through Ethernet, with copy consistency points of RR defined on both clusters]
The assumption is that the two TS7700 Virtualization Engine Clusters will reside in separate
locations, separated by a distance dictated by your company's requirements for disaster
recovery. In addition to the configuration considerations for disaster recovery, you need to
plan for the following items:
Access to the FICON channels on the TS7700 Virtualization Engine Cluster located at the
disaster recovery site from your local site's hosts. This can involve connections using
DWDM or channel extender, depending on the distance separating the two sites. If the
local TS7700 Virtualization Engine Cluster becomes unavailable, you use this remote
access to continue your operations using the remote TS7700 Virtualization Engine
Cluster.
[Figure: the backup DR host site's TS7700 cluster is reachable from the local site through FICON channels extended over DWDM, in addition to the grid WAN connection]
Three-cluster grid
With a three-cluster grid, you can configure the grid for disaster recovery and high availability
or use dual production sites that share a common disaster recovery site. The following
sections describe configuration considerations for three-cluster grids. The scenarios
presented are typical configurations. Other configurations are possible and might be better
suited for your environment.
The planning considerations for a two-cluster grid also apply to a three-cluster grid.
The copy consistency points at the disaster recovery site (NNR) are set to only create a copy
of host data at Cluster 2. Copies of data are not made to Clusters 0 and 1. This allows for
disaster recovery testing at Cluster 2 without replicating to the production site clusters.
Figure 2-46 shows an optional host connection that can be established to remote Cluster 2
using DWDM or channel extenders. With this configuration, you need to define an additional
256 virtual devices at the host for a total of 768 devices.
All virtual devices in Clusters 0 and 1 are online to the host; Cluster 2 devices are offline.
Figure 2-47 shows an optional host connection that can be established to remote Cluster 2
using DWDM or channel extenders.
This particular configuration provides 442 TB of high performance production cache if you choose to run in balanced mode with three copies (R-R-D for both Cluster 0 and Cluster 1). Alternatively, you can choose to keep only one copy at the production site, doubling the cache capacity available for production. In this case, the copy mode would be R-N-D for Cluster 0 and N-R-D for Cluster 1.
Another variation of this model uses a TS7720 and a TS7740 at the production site, as shown in Figure 2-49, both replicating to a remote TS7740.
Figure 2-49 Three-cluster High Availability and Disaster Recovery with two TS7740s and one TS7720
In both models, if a TS7720 reaches the upper threshold of utilization, the oldest data that has already been replicated to the TS7740 is removed from the TS7720 cache.
In the example shown in Figure 2-49, you can have particular workloads favoring the TS7740,
and others favoring the TS7720, suiting a specific workload to the cluster best equipped to
perform it.
Copy export (shown as optional in both figures) can be used to have a second copy of the
migrated data, if required.
Four-cluster grid
This section covers a four-cluster grid that can have both sites for dual purpose. Both sites are
equal players within the grid, and any site can play the role Production or Disaster Recovery
as required.
You can balance the host workload across both production clusters (Cluster 0 and Cluster 1 in the figure). The logical volumes written to a particular cluster are replicated to only one remote cluster. In Figure 2-50, Cluster 0 replicates to Cluster 2 and Cluster 1 replicates to Cluster 3. This ‘partitioning’ is accomplished using copy policies. For the described behavior, the copy mode is RNDN for Cluster 0 and NRND for Cluster 1.
This configuration delivers High Availability at both sites, Production and Disaster Recovery,
without four copies of the same tape logical volume throughout the grid.
Figure 2-51 shows the four-cluster grid reaction to a cluster outage. In this example, Cluster 0 goes down because of an electric power outage. You lose all logical drives emulated by Cluster 0.
The host uses the remaining addresses emulated by Cluster 1 for the entire production
workload.
Figure 2-51 Four-cluster grid High Availability and Disaster Recovery - Cluster 0 outage
During the outage of Cluster 0 in the example, new jobs for write use only one half of the configuration (the unaffected ‘partition’, in the bottom part of the picture). Jobs for read can still be satisfied through the grid, using remote mounts to access copies held on the other clusters.
In a Disaster Recovery situation, the backup host at the Disaster Recovery site would operate from the second High Availability pair, Cluster 2 and Cluster 3 in the figure. In this case, copy policies can be DNRN for Cluster 2 and NDNR for Cluster 3, reversing the direction of the replication (green arrow in Figure 2-50 on page 110).
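The copy mode strings can be easier to read when expanded into the clusters that actually receive a copy. The following Python sketch does exactly that for normal operation and for the disaster recovery reversal described above; the helper function and the dictionary layout are invented for illustration.

# Illustrative sketch only: one letter per cluster, in cluster order
# (R = RUN, D = Deferred, N = No Copy), expanded into the clusters that
# receive a copy for mounts on each cluster.

def copies_for(copy_mode):
    """Return {cluster: consistency point} for clusters that get a copy."""
    return {cluster: point for cluster, point in enumerate(copy_mode)
            if point != "N"}

normal_operation = {
    0: "RNDN",   # volumes written on Cluster 0 replicate (deferred) to Cluster 2
    1: "NRND",   # volumes written on Cluster 1 replicate (deferred) to Cluster 3
}
dr_operation = {
    2: "DNRN",   # DR host writes on Cluster 2 replicate back to Cluster 0
    3: "NDNR",   # DR host writes on Cluster 3 replicate back to Cluster 1
}

for mounting_cluster, mode in {**normal_operation, **dr_operation}.items():
    print(f"Mounts on Cluster {mounting_cluster} ({mode}): "
          f"copies at clusters {sorted(copies_for(mode))}")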
One exception to the write protect is those volumes in the insert category. To allow a volume
to be moved from the insert category to a write protect excluded category, the source
category of insert cannot be write protected. Thus, the insert category is always a member of
the excluded categories.
When planning for a DR test, enable expire hold processing to prevent the reuse of production volumes that have been returned to scratch, and be sure that you have enough scratch volumes to cover the hold period. Suspending the volumes’ return-to-scratch processing for the duration of the Disaster Recovery test is also advisable.
See 10.1, “TS7700 Virtualization Engine grid failover principles” on page 750 for more details.
The following features can be used if you are planning a data center consolidation or if you
need to “ungrid” a TS7700 Virtualization Engine to meet a business objective:
Removal of a cluster from a grid
Cluster cleanup
Removal of a cluster from a grid is a feature that delivers instructions for a one-time process to remove or unjoin a cluster from a grid configuration. This function can be used, for example, when consolidating data centers or when a cluster must be redeployed elsewhere.
Removal of a cluster from a grid requires that all data in the cluster that is going to be
removed be copied to a remaining peer, ejected, or returned to scratch before the cluster
removal. After the removal, the cluster is disabled and cannot be used, including any access
to its data it contained before the removal. A cluster cleanup must then occur on the removed
cluster before it can be used elsewhere as an empty cluster.
Cluster cleanup cleans the database, logically deletes volumes from the tape volume cache,
and removes the configuration data for host connections from a TS7700 Virtualization Engine
Cluster. However, cluster cleanup does not secure erase or delete (“shred”) user data from
the tape volume cache or from the back-end stacked tape cartridges of a TS7740
Virtualization Engine.
The cluster cleanup feature is required so that the removed cluster can be reused. This
feature is a single use feature, and returns the removed cluster to a state similar to one
received from manufacturing.
Clarification: These procedures are performed by IBM as a service offering. Contact your
System Service Representative (SSR) about this offering and how to request the service.
In this scenario, you potentially have two clusters at the primary data center for high
availability. The third cluster is located at a remote data center from which you want to move
data. Figure 2-52 shows this initial status.
Cluster 2 is added to the grid, and data can be copied from the remote data center to the primary data center.
After all of your data has been copied to the primary data center TS7700 Virtualization
Engines, you can remove Cluster 2 from the remote data center. This TS7700 Virtualization
Engine can now be relocated and the process can be repeated.
An important aspect to mention is that the cleanup process does not include physical cluster
discontinuance or reinstallation. If you are planning to reuse a TS7700 Virtualization Engine
after it is removed from a grid, you must have a service contract for physical discontinuance
services and the subsequent reinstall of the cluster.
During planning you must determine how to handle the volumes that have a copy consistency
point only at the cluster that is being removed. You can eject them, move them to the scratch
category, or activate a Management Class change on a mount/demount operation to get a
copy on another cluster. This step must be done before starting the remove process. A Bulk
Volume Information Retrieval (BVIR) option, Copy Audit, is provided for generating the list of
inconsistent volumes to help you plan your activities. See Chapter 8, “Operation” on page 451
for details about BVIR.
The removal of the cluster from the grid is concurrent with normal operations on the
remaining clusters, except for certain steps where inserts, ejects, and exports will be
inhibited. Therefore, complete the removal of a cluster from the grid during a period of
minimized utilization of the remaining clusters.
No secure erase or low level format will be done on the tapes or the cache as part of the
removal of a cluster from a grid and cluster cleanup process.
Generally speaking, the functional characteristics available to the composite library are defined by the grid cluster with the lowest release level. After all TS7700 Virtualization Engines in a multi-cluster configuration are upgraded to the same level, all licensed functional characteristics provided in the release are enabled. In a grid configuration that has clusters at different release levels, the TS7700 Virtualization Engine R2.0 clusters will be able to use their full cache capacity but will not be able to provide new capabilities such as:
Scratch Allocation Assistance
Selective Device Access Control
Increased Logical Volumes (FC 5270)
All licensed functional characteristics provided in the release will be available after all of the TS7700 Virtualization Engines in the multi-cluster configuration have been upgraded to the Release 2.0 level.
Keep in mind that the joining cluster must have at least the capabilities of the current grid,
such as number of logical volume increments (FC 5270).
To illustrate the capability to join clusters at different code levels, consider an upgrade scenario in which there is an existing two-cluster grid at TS7700 Virtualization Engine Release R1.7.
The existing grid configuration consists of Cluster 0, a TS7740 Virtualization Engine R1.7 with
the latest internal release level, and Cluster 1, a TS7720 Virtualization Engine R1.7 at the
same R1.7 PGA level. You will be joining a TS7720 Virtualization Engine R2.0 as Cluster 2
and a TS7740 Virtualization Engine R2.0 as Cluster 3.
You initially join Cluster 2 to Cluster 1 because both Cluster 0 and 1 are at the same level. You
have now established a three cluster grid configuration. The cluster with the highest release
level in the grid is now Cluster 2. You then initiate a join of Cluster 3 to Cluster 2 to complete
your goal of a four-cluster grid configuration. You have now completed the formation of your
four-cluster grid configuration in the required order.
Remember: For this chapter, the term “Tape Library” refers to the IBM System Storage
TS3500 Tape Library.
Enterprise Library Controller (ELC) functions and associated components are integrated into the TS7700 Virtualization Engine frame, and include the TS3000 Service Console.
See 2.3.4, “TS3500 Tape Library” on page 52 for more information about the TS3500 Tape Library, and 6.3, “TS7700 Virtualization Engine upgrade to Release 2.0” on page 356 for more information about the Licensed Internal Code upgrade to R2.0.
Tip: IBM 3494 Tape Library attachment is not supported with TS7740 Release 2.0.
All components are installed in IBM 3952 Tape Frame(s). The Virtualization Engine is
connected to the host through FICON channels. In a multi-cluster grid configuration, where
you can have up to six TS7700 Virtualization Engines, two or four independent 1 Gb copper
(RJ-45) or shortwave fiber Ethernet links (single- or dual-ported), or alternatively two
longwave fiber Ethernet links per TS7700 Virtualization Engine are used to interconnect the
clusters.
The TS3000 System Console (FC2730 and FC2719) or TS3000 System Console
with Internal Modem (FC2732 and FC2733) can be installed on 3952 Tape Frame
Model F05, connecting to the TS7740 Server using Console Expansion (FC2714) or
Console Attachment (FC2715).
When feature numbers FC2714 or FC2715 are installed on 3957 Model V07, the
Console Upgrade (FC2719) is required on the machine type model where feature
FC2718, FC2720, FC2721, or FC2730 is installed.
One TS7740 Cache Controller (3956 Model CC8) is required with the following required
features:
• 9.6 TB Fibre Storage (FC7123)
• Plant Install in 3952 F05 (FC9352)
Two 4 Gb or 8 Gb Fibre Channel switches are required:
– Both switches must be the same type. Mixing one 4 Gb switch with one 8 Gb switch is
not supported.
– To install new switches in a 3584 L23 or D23 frame or reinstall switches removed from
a separate subsystem, the following features are required:
• FC4871 TS7700 BE SW Mounting Hardware
• FC1950 Power Distribution Units
• One power cord feature FC9954 through FC9959 or FC9966
– Two new 8 Gb FC switches can be ordered for the 3584 frames using feature FC4875,
BE 8 Gb Switches.
Tip: If the TS7740 was previously attached to the TS3500 Tape Library through the
3953 Model L05 Library Manager, you can choose to leave the FC switches in the
3953 Model F05 Tape Frame, or the switches can be removed and reinstalled in the
TS3500 Tape Library.
The TS3000 System Console (FC2721) can be installed on 3953 Model F05,
connecting to the TS7740 Server using Console Expansion (FC2714) or Console
Attachment (FC2715).
The TS3000 System Console (FC2730 and FC2719) or TS3000 System Console
with Internal Modem (FC2732 and FC2733) can be installed on 3952 Tape Frame
Model F05, connecting to the TS7740 Server using Console Expansion (FC2714) or
Console Attachment (FC2715).
When feature codes FC2714 or FC2715 are installed on 3957 Model V06, the
Console Upgrade (FC2719) is required on the machine type model where feature
FC2718, FC2720, FC2721, or FC2730 is installed.
Tip: If the TS7740 was previously attached to the TS3500 Tape Library through the
3953 Model L05 Library Manager, you can choose to leave the FC switches in the
3953 Model F05 Tape Frame, or the switches can be removed and reinstalled in the
TS3500 Tape Library.
– If the switches remain in the 3953 Model F05, two of feature FC3488, 4 Gb Fibre
Channel Switches, or FC4897, Reinstall 4 Gb Fibre Channel Switches, provide the
switches
– To remove the switches from the 3953 Model F05, order feature FC4748, Remove
4 Gb Switch, for each switch to be removed
– The switches removed from the 3953 Model F05 can be installed in the 3584 Model
L23 or D23 frame using feature number FC4873, Reinstall TS7700 BE Switches
One or more 3584 Model L23 or D23 frames with:
– From four to sixteen 3592 tape drives: All attached tape drives must operate in the same mode. Therefore, 3592 Model E05 tape drives operating in native mode cannot be intermixed with 3592 Model J1A tape drives.
Restriction: If all cluster members are before the 8.7.0.134 level of code, then all
must be at the same code level before the unjoin is started.
If one of the remaining clusters is at the 8.7.0.134 or later code level, it can be used
to drive the unjoin even with a mixed code level on clusters within the grid.
– Cluster Cleanup (FC4017) facilitates a one time cluster cleanup. If the cluster is
configured with FC4015, Remove Cluster from a Grid (FC4016) must be performed
before Cluster Cleanup. Each instance of FC4017 provides a single cleanup operation.
If the cluster is returned to production, and requires cleanup in the future, another
instance of FC4017 must be purchased. FC4017 is for Field Install only. Up to 99
instances of FC4017 can be ordered for a single TS7740 server.
– Enable Dual Port Grid Connections (FC1034) activates the second port on the dual
port grid adapters (FC1032 or FC1033) to provide four active 1 Gb Ethernet links for
grid communications.
– Two additional FICON attachments can be installed to provide a total of four FICON
attachments on the TS7700 Server. Valid total quantities of FICON Shortwave
Attachment (FC3441), FICON Longwave Attachment (FC3442), and FICON 10km
Longwave Attachment (FC3443) are two or four.
– 8 GB Memory Upgrade - Field (FC3461) can be installed on Model V06 servers without
feature FC9461, 8 GB Memory Upgrade - Plant. See 3.1.7, “Expanded memory” on
page 137 for more details about the expanded memory option.
– Additional 1 TB Cache Enablement (FC5267) can be installed, up to a maximum
quantity of 28 based on the 3957-V06 attached cache drawers.
– Additional 100 MBps Increment (FC5268) can be installed, up to a maximum quantity
of 6.
– Additional increments of 200,000 logical volumes, Increased Logical Volumes
(FC5270), can be installed, up to a maximum quantity of 5.
Clarification: If adding FC5270 to a cluster in the grid, the grid will support the
number of logical volumes the cluster with the least amount of FC5270s installed
supports.
See 6.1.4, “TS7740 Virtualization Engine Cache Upgrade options” on page 349 for more
details.
The TS3000 System Console (FC2721) can be installed on 3953 Model F05,
connecting to the TS7720 Server using Console Expansion (FC2714) or Console
Attachment (FC2715).
The TS3000 System Console (FC2730 and FC2719) or TS3000 System Console
with Internal Modem (FC2732 and FC2733) can be installed on 3952 Tape Frame
Model F05, connecting to the TS7720 Server using Console Expansion (FC2714) or
Console Attachment (FC2715).
When feature FC2714 or FC2715 is installed on 3957 Model VEA, the Console
Upgrade (FC2719) is required on the machine type model where feature FC2718,
FC2720, FC2721, or FC2730 is installed.
One TS7720 SATA Cache Controller (3956 Model CS8) with the following required
features:
– 32 TB SATA Storage (FC7114)
– Plant Install in 3952 F05 (FC9352)
Tip: If all cluster members are before the 8.7.0.134 level of code, then all must be at
the same code level before the unjoin is started.
If one of the remaining clusters is at the 8.7.0.134 or later code level, it can be used
to drive the unjoin even with a mixed code level on clusters within the grid.
– Cluster Cleanup (FC4017) facilitates a one time cluster cleanup. If the cluster is
configured with FC4015, Remove Cluster from a Grid (FC4016) must be performed
before Cluster Cleanup. Each instance of FC4017 provides a single cleanup operation.
If the cluster is returned to production, and requires cleanup in the future, another
instance of FC4017 must be purchased. FC4017 is for Field Install only. Up to 99 instances of FC4017 can be ordered for a single TS7720 server.
– Enable Dual Port Grid Connections (FC1034) activates the second port on the dual
port grid adapters (FC1032 or FC1033) to provide four active 1 Gb Ethernet links for
grid communications.
– Two additional FICON attachments can be installed to provide a total of four FICON
attachments on the TS7700 Server. Valid total quantities of FICON Shortwave
Attachment (FC3441), FICON Longwave Attachment (FC3442), and FICON 10km
Longwave Attachment (FC3443) are two or four.
– 8 GB Memory Upgrade - Field (FC3461) can be installed on Model VEA servers
without feature FC9461, 8 GB Memory Upgrade - Plant. See 3.1.7, “Expanded
memory” on page 137 for more details about the expanded memory option.
– Up to five 100 MBps Increment (FC5268) can be installed.
– Additional increments of 200,000 logical volumes, Increased Logical Volumes
(FC5270), can be installed, up to a maximum quantity of 5.
Important: If adding FC5270 to a cluster in the grid, the grid will support the number
of logical volumes the cluster with the least amount of FC5270s installed supports.
3.1.3 Cables
This section lists cable feature codes for attachment to TS7700 Virtualization Engine and
additional cables, fabric components, and cabling solutions.
A TS7700 Virtualization Engine Server with the FICON Attachment features (FC3441,
FC3442, or FC3443) can attach to FICON channels of an IBM System z mainframe, IBM
zSeries® server, or IBM S/390® server using FICON cable features ordered on the TS7700
Virtualization Engine Server. A maximum of four FICON cables, each 31 meters in length, can
be ordered.
One cable should be ordered for each Host System Attachment by using the following cable
features:
4-Gbps FICON Long-Wavelength Attachment feature (FC3442 and FC3443): The FICON
long wavelength adapter shipped with feature FC3442 (4-Gbps FICON Long-Wavelength
Attachment) or feature FC3443 (4-Gbps FICON 10 km Long-Wavelength Attachment) has an
LC Duplex connector, and can connect to FICON long wavelength channels of IBM
zEnterprise™, IBM System z9®, IBM System z10®, or S/390 servers utilizing a 9-micron
single-mode fibre cable. If host attachment cables are desired, they can be specified with the
following feature code: FC0201 - 9 Micron LC/LC 31 meter Fibre Cable.
Additional cables, fabric components, and cabling solutions: Conversion cables from SC
Duplex to LC Duplex are available as features on the System z servers if you are currently
using cables with SC Duplex connectors that now require attachment to fibre components
with LC Duplex connections. Additional cable options, along with product support services
such as installation, are offered by IBM Global Services' Networking Services. See the IBM
Virtualization Engine TS7700 Introduction and Planning Guide (GA32-0568) for Fibre
Channel cable planning information. If Grid Enablement FC4015 is ordered, Ethernet cables
are required for the copper/optical 1 Gbps and optical longwave adapters to attach to the
communication grid.
3.1.4 Limitations
Consider the following limitations when performing the TS7700 Virtualization Engine
preinstallation and planning:
Each TS7700 Virtualization Engine node (3957 Model V06 or VEA) must be located within
100 feet of the TS3000 System Console (TSSC).
Intermix of 3592 Model E06 / EU6 tape drives with other 3592 tape drives is not
supported.
The 3592 back end tape drives for a TS7740 cluster must be installed in a 3584 Tape
Library. Connections to 3494 Tape Libraries are no longer supported as of R2.0 machine
code.
Clusters running R2.0 machine code can only be joined in a grid with clusters running
either R1.7 or R2.0 machine code.
The TS7740 Virtualization Engine uses the native capacity of TS1120 (3592-E05) and
TS1130 (3592-E06) tape drives with the following restrictions:
If any TS1130 tape drives are installed, then no 3592-J1A drives or TS1120 drives can
exist in the TS7740 Virtualization Engine configuration.
If any TS1120 tape drives are installed and configured in their native mode, then no
3592-J1A drives can exist in the TS7740 Virtualization Engine configuration.
For encryption, all tape drives attached to the TS7740 Virtualization Engine must be
TS1130 or TS1120 tape drives. The combination of encryption-capable and
non-encryption-capable drives is not supported on an encryption enabled TS7740
Virtualization Engine.
To address compatibility issues, the TS1120 tape drives can operate in native 3592 E05 mode or in 3592-J1A emulation mode. This allows established configurations to add more
back-end TS1120 drives when 3592-J1A drives already exist. However, all TS7740
Virtualization Engine back-end drives must operate in the same mode. Therefore, in a mixed
configuration of TS1120 and 3592-J1A drives, the TS1120 tape drives must run in 3592-J1A
emulation mode to provide a homogenous back-end tape format. For operation in TS1120
native mode, all back-end drives must be TS1120 tape drives. To use the TS1120 tape drives
in emulation mode, the back-end drives must be configured by the IBM System Services
Representative (SSR).
Restriction: An intermix of TS1130 E06 tape drives and prior 3592 drive types is not
supported.
Restriction: Physical Write Once Read Many (WORM) cartridges are not supported on
TS7700 Virtualization Engine attached tape drives.
3.1.5 TS3500 Tape Library High Density frame for a TS7740 Virtualization
Engine configuration
A TS3500 Tape Library High Density frame is supported by a TS7740 Virtualization Engine
and holds the stacked volumes.
If you are sizing the physical volume cartridge storage for a TS7740 Virtualization Engine, this high density (HD) frame can be a solution in terms of floor space reduction.
The TS3500 Tape Library offers high-density, storage-only frame models (HD frames)
designed to greatly increase storage capacity without increasing frame size or required floor
space.
Figure 3-1 TS3500 Tape Library HD frame (left) and top-down view
HD frame Model S24 provides storage for up to 1,000 3592 tape cartridges.
The base capacity of Model S24 is 600 cartridges, which are stored in Tiers 0, 1, and 2. To
increase capacity to the maximum for each frame, it is necessary to purchase the High
Density Capacity on Demand (HD CoD) feature. This feature provides a license key that
enables you to use the storage space available in the remaining tiers.
Licensed Internal Code upgrades from levels earlier than Release 2.0 might also require a
hardware reconfiguration scenario. See 6.3, “TS7700 Virtualization Engine upgrade to
Release 2.0” on page 356 if you currently have a TS7700 Virtualization Engine with a
microcode release before R2.0.
Exception: If all clusters in a grid configuration already have FC4015, Grid enablement
installed, contact your IBM Service Representative to properly set up, connect, and
configure the grid environment.
Restriction: This feature is available only with the TS7740 Virtualization Engine.
TS7700 Virtualization Engine Model V07 and VEB only feature codes
Starting with Release 2.0 the following feature codes are available for TS7700 Virtualization
Engine Models V07 and VEB only:
FC1035 Grid optical LW connection
FC1035 Grid optical LW connection provides a single port, 10-Gbps Ethernet longwave
adapter for grid communication between TS7700 Virtualization Engines. This adapter has
an SC Duplex connector for attaching 9-micron, single-mode fibre cable. This is a
standard longwave (1,310 nm) adapter that conforms to the IEEE 802.3ae standards. It
supports distances up to 10 km.
Restriction: These adapters cannot negotiate down to run at 1 Gb. They must be
connected to a 10-Gb network device or light point.
Restriction: On a Model V06, FC4743 is mutually exclusive with FC5638 (Plant install 3956-CC6). This means that the V06 must NOT have a 3956-CC6 cache.
More details about the FC4743 Remove 3957-V06/VEA and FC5627 Install 3957-VEB /
FC5629 Install 3957-V07 options are provided in Chapter 7, “Migration aspects” on page 385.
There are also frame replacement procedures available to replace the entire 3952 F05 Frame containing a 3957-V06 with a new frame containing the new 3957-V07, and to replace the entire frame containing a 3957-VEA with a frame containing the new 3957-VEB. See Chapter 7, “Migration aspects” on
page 385 for more details about those migration options.
See the IBM Virtualization Engine TS7700 Series Introduction and Planning Guide,
GA32-0567, for a detailed listing of system requirements.
The 3952 Tape Frame has a unit height of 36U. See the IBM Virtualization Engine TS7700 Series Introduction and Planning Guide, GA32-0567, for the maximum configurations for the TS7720 Virtualization Engine and the TS7740 Virtualization Engine.
Operating environment (high altitude): 10°C to 28°C (50°F to 82.4°F), 5001 ft. to 7000 ft. amsl, 20% to 60% relative humidity, 23°C (73°F) maximum wet bulb temperature.
For a complete list of system requirements, see IBM Virtualization Engine TS7700 Series
Introduction and Planning Guide, GA32-0567.
Power considerations
Your facility must have an available power supply to meet the input voltage requirements for
the TS7700 Virtualization Engine.
The 3952 Tape Frame houses the components of the TS7700 Virtualization Engine. The
standard 3952 Tape Frame ships with one internal power distribution unit. However, FC1903,
Dual AC power, is required to provide two power distribution units to support the high
availability characteristics of the TS7700 Virtualization Engine. The 3952 Tape Expansion
Frame has two power distribution units and requires two power feeds.
Current: 20 A
The TS7700 grid TCP/IP network infrastructure must be in place before the grid is activated,
so that the clusters can communicate with one another as soon as they are online. Two or
four 1 Gb Ethernet, or two 10 Gb Ethernet connections must be in place before grid
installation and activation, including the following equipment:
Internal Ethernet Routers
Ethernet routers are used to connect the network to management interface services
operating on a 3957-VEA or 3957-V06. These routers are still present if the TS7700
Virtualization Engine Server is upgraded/replaced by a 3957-VEB or 3957-V07, but they
are configured as switches.
Internal Ethernet switches
Starting with Release 2.0 Ethernet switches are used primarily for private communication
between components within a cluster. Ethernet switches are available from manufacturing
with 3957-VEB or 3957-V07 model TS7700 Virtualization Engine Servers.
The default configuration for a TS7700 Server from manufacturing (3957-VEB or 3957-V07) is two dual-ported PCIe 1 Gb Ethernet adapters. You can use FC1035, Grid optical LW connection, to add support for two 10 Gb optical longwave Ethernet adapters. This feature improves data copy replication while providing minimum bandwidth redundancy. Clusters configured with two 10 Gb, four 1 Gb, or two 1 Gb links can be interconnected within the same TS7700 grid, although explicit same port-to-port communications still apply.
Important: Identify, order, and install any new equipment to fulfill grid installation and activation requirements. Before grid activation, you must test the connectivity and performance of the Ethernet connections. Ensure that the installation and testing of this network infrastructure is complete before grid activation.
The network between the TS7700 Virtualization Engines should have sufficient bandwidth to
account for the total replication traffic. If you are sharing network switches among multiple
TS7700 Virtualization Engine paths or with other network traffic, the sum total of bandwidth
on that network should be sufficient to account for all of the network traffic.
The TS7700 Virtualization Engine uses the TCP/IP protocol for moving data between each
cluster. Bandwidth is a key factor that affects throughput for the TS7700 Virtualization Engine.
Other key factors that can affect throughput include:
Latency between the TS7700 Virtualization Engines
Network efficiency (packet loss, packet sequencing, and bit error rates)
Network switch capabilities
Flow control to pace the data from the TS7700 Virtualization Engines
Inter-switch link capabilities (flow control, buffering, and performance)
The TS7700 Virtualization Engine attempts to drive the network links at the full 1 Gb rate, which might exceed the network infrastructure capabilities. The TS7700 Virtualization Engine supports IP flow control frames so that the network can pace the rate at which the TS7700 Virtualization Engine attempts to drive the network. The best performance is achieved when the network infrastructure can sustain the rate at which the TS7700 Virtualization Engine drives the links.
Remember: When the system exceeds the network capabilities, packets are lost. This
causes TCP to stop, resync, and resend data, resulting in a much less efficient use of the
network.
In short, latency between the sites is the primary factor. However, packet loss due to bit error
rates or insufficient network capabilities can cause TCP to resend data, thus multiplying the
effect of the latency.
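As a rough planning aid only, the following Python sketch compares the average throughput needed to complete a day's replication within a window against a derated estimate of the usable grid link bandwidth. The efficiency factor and all input values are illustrative assumptions, not IBM sizing guidance; measure your own network before finalizing a design.

# Back-of-envelope sketch: is the grid link bandwidth enough to absorb the
# daily replication workload within a given window? The derating factor
# for latency, packet loss, and TCP behavior is assumed, not measured.

def required_mbps(replicated_gb_per_day, window_hours):
    """Average throughput (MB/s) needed to finish replication in the window."""
    return replicated_gb_per_day * 1024 / (window_hours * 3600)

def available_mbps(links, link_gbps=1.0, efficiency=0.6):
    """Usable throughput (MB/s) across all grid links after derating."""
    return links * link_gbps * 1000 / 8 * efficiency

need = required_mbps(replicated_gb_per_day=4000, window_hours=8)
have = available_mbps(links=4, link_gbps=1.0, efficiency=0.6)
print(f"Need about {need:.0f} MB/s, have about {have:.0f} MB/s usable")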
The TS7700 Virtualization Engine uses your LAN/WAN to replicate logical volumes, access
logical volumes remotely, and perform cross-site messaging. The LAN/WAN should have
adequate bandwidth to deliver the throughput necessary for your data storage requirements.
The cross-site grid network is 1 Gb Ethernet with either copper (RJ-45) or shortwave fibre
(single- or dual-ported) links. For copper networks, Cat 5E or 6 Ethernet cabling can be used,
but Cat 6 cabling is preferable to achieve the highest throughput. Alternatively two 10 Gb
longwave fiber Ethernet links can be provided. For additional information, see “FC1036 1-Gb
grid dual port copper connection” on page 136, “FC1037 1-Gb dual port optical SW
connection” on page 136 and “FC1035 Grid optical LW connection” on page 136. The
TS7700 Virtualization Engine does not encrypt the data it sends over the LAN/WAN.
Important: To avoid any network conflicts, the following subnets must not be used for
LAN/WAN IP addresses or management interface primary, secondary, or virtual IP
addresses:
192.168.251.xxx
192.168.250.xxx
172.31.1.xxx
Network redundancy
The TS7700 Virtualization Engine provides up to four independent 1 Gb copper (RJ-45) or shortwave fibre Ethernet links for grid network connectivity. Connect each link through an independent WAN interconnection to be protected from a single point of failure that would disrupt service to both WAN paths from a node.
Tip: If FC 9900, Encryption configuration, is installed, this same connection is used for
communications between the TS7740 Virtualization Engine and the Encryption Key Server
or Tivoli Key Lifecycle Manager (TKLM). Because encryption occurs on attached physical
tape drives, the TS7720 Virtualization Engine does not support encryption and the virtual
connection is used exclusively to create redundant paths.
Important: All three provided IP addresses will be assigned to one TS7700 Virtualization
Engine cluster for the management interface access.
For the list of required TCP/IP port assignments, see Table 3-9 on page 147.
Tip: In a TS7700 Virtualization Engine multi-cluster grid environment, you need to supply
two (single port connection) or four (dual port connection) IP addresses per cluster for the
physical links required by the TS7700 Virtualization Engine grid for cross-site replication.
Tip: The IP address used for the TSSC network must be entered as an increment of 10
between 10 and 240. The router/switch configuration uses this address and the next nine
sequential addresses for TSSC configuration. To prevent network problems, do NOT
configure these addresses on this master console for another system.
To allow connection to a TSSC network, the IP addresses used by the TS7700 Virtualization
Engine must not conflict with any other TS7700 Virtualization Engine connected to a common
TSSC. Any IP addresses used are TSSC-assigned. Each TS7700 Virtualization Engine is
assigned a range of ten IP addresses, where X is the lowest value in the IP address range
and all components within a TS7700 Virtualization Engine are then assigned IP addresses as offsets from X.
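As a small illustration of the addressing rule in the preceding tip, the following Python sketch validates a candidate base address X and lists the ten addresses reserved for one TS7700 Virtualization Engine. The helper function is invented for illustration only.

# Illustrative sketch: X must be an increment of 10 between 10 and 240, and
# the router/switch configuration reserves X through X+9 for that TS7700.

def tssc_reserved_range(x):
    """Return the ten host addresses reserved when X is the assigned base."""
    if x < 10 or x > 240 or x % 10 != 0:
        raise ValueError("X must be an increment of 10 between 10 and 240")
    return list(range(x, x + 10))

print(tssc_reserved_range(170))   # [170, 171, ..., 179]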
Table 3-7 displays the TCP/IP configuration for a TS7740 Virtualization Engine attached to a
TS3500 Tape Library.
Table 3-8 displays the TCP/IP configuration for a TS7720 Virtualization Engine.
Within the TS7720 Virtualization Engine, the internal component addresses are assigned as offsets from X; for example, Controller 0 (bottom) of CEC 0 in the 3952 Storage Expansion Frame is assigned 172.31.1.(X+3), and Controller 0 (bottom) of CEC 1 is assigned 172.31.1.(X+4).
Clarification: These requirements apply only to the grid LAN/WAN; the TS7700
Virtualization Engine network is managed and controlled by internal code.
Table 3-9 displays TCP/IP port assignments for the grid WAN.
7: Echo/Ping port
350: TS7700 Virtualization Engine file replication (distributed library file transfer)
The following ports should remain open for easier service of the subsystem:
20: FTP data
21: FTP control
23: Telnet
You can also use Dense Wave Division Multiplexers (DWDMs) or FICON channel extenders
between the System z host and the TS7700 Virtualization Engine. See Figure 3-3 on
page 150 for more details about the distances supported.
Native shortwave Fibre Channel transmitters have a maximum distance of 500 m with 50
micron diameter, multi-mode, optical fiber. Although 62.5 micron, multi-mode fiber can be
used, the larger core diameter has a greater dB loss and maximum distances are shortened
to 300 m. Native longwave Fibre Channel transmitters have a maximum distance of 10 km
when used with 9-micron diameter single-mode optical fiber.
Buffer credit configuration is extremely important for storage devices, which do not handle dropped or
out-of-sequence records. When two Fibre Channel ports begin a conversation, they
exchange information about their number of supported buffer credits. A Fibre Channel port
will send only the number of buffer frames for which the receiving port has given credit. This
approach both avoids overruns and provides a way to maintain performance over distance by
filling the “pipe” with in-flight frames or buffers. The maximum distance that can be achieved
at full performance depends on the capabilities of the Fibre Channel node that is attached at
either end of the link extenders.
This relationship is vendor-specific. There should be a match between the buffer credit
capability of the nodes at either end of the extenders. A host bus adapter (HBA) with a buffer
credit of 64 communicating with a switch port with only eight buffer credits is able to read at
full performance over a greater distance than it is able to write. This is because, on the writes,
the HBA can send a maximum of only eight buffers to the switch port. On the reads, the
switch can send up to 64 buffers to the HBA. Until recently, a rule of thumb has been to
allocate one buffer credit for every 2 km to maintain full performance.
Buffer credits within the switches and directors have a large part to play in the distance
equation. The buffer credits in the sending and receiving nodes heavily influence the
throughput that is attained in the Fibre Channel. Fibre Channel architecture is based on a flow
control that ensures a constant stream of data to fill the available pipe. Generally, to maintain
acceptable performance, one buffer credit is required for every 2 km distance covered. See
Introduction to SAN Distance Solutions, SG24-6408 for more information.
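The rule of thumb of one buffer credit per 2 km can be turned into a quick estimate, as in the following Python sketch. The helper names are invented, and the rule itself is only an approximation; consult your switch and HBA documentation for exact values.

# Illustrative sketch of the buffer credit rule of thumb: roughly one buffer
# credit per 2 km of link distance to keep the pipe full at full performance.

def credits_needed(distance_km, km_per_credit=2.0):
    """Approximate buffer credits required to sustain full performance."""
    return max(1, round(distance_km / km_per_credit))

def max_full_speed_distance_km(buffer_credits, km_per_credit=2.0):
    """Approximate distance the given credits can cover at full performance."""
    return buffer_credits * km_per_credit

print(credits_needed(100))               # about 50 credits for 100 km
print(max_full_speed_distance_km(8))     # a port with 8 credits: about 16 km
print(max_full_speed_distance_km(64))    # an HBA with 64 credits: about 128 km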
Supported distances
When directly attaching to the host, the TS7700 Virtualization Engine can be installed at a
distance of up to 10 km from the host. With FICON Switches, also called FICON Directors or
Dense Wave Division Multiplexers (DWDMs), the TS7700 Virtualization Engine can be
installed at extended distances from the host.
Figure 3-3 illustrates the supported distances between the System z host and the TS7700 Virtualization Engine:
FICON with cascaded switches: up to 100 km
FICON with cascaded switches and DWDMs: up to 250 km
FICON with cascaded switches and channel extenders: more than 250 km
In a multi-cluster grid configuration, the TS7700 Virtualization Engines are also connected
through TCP/IP connections. These network connections are not as sensitive as FICON to
long distances when sufficient bandwidth is available.
You cannot mix FICON Directors from different vendors, such as Brocade (which includes McData, CNT, and Inrange) and Cisco, but you can mix models from one vendor. See the System Storage Interoperation Center (SSIC) for the specific intermix combinations supported. You can find the SSIC at the following URL:
http://www.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
Using the frame shuttle or tunnel mode, the extender receives and forwards FICON frames
without doing any special channel or control unit emulation processing. The performance is
limited to the distance between the sites and the normal round trip delays in FICON channel
programs.
Emulation mode can go unlimited distances, and it monitors the I/O activity to devices. The
channel extender interfaces emulate a control unit by presenting command responses and
CE/DE status ahead of the controller and emulating the channel when running the
pre-acknowledged write operations to the real remote tape device. Thus, data is accepted
early and forwarded to the remote device to maintain a full pipe throughout the write channel
program.
The supported channel extenders between the System z host and the TS7700 Virtualization Engine are listed in the same matrix as the FICON switch support at the following URL (see the FICON Channel Extenders section):
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FQ116133
Port type and port-to-port connections are defined using the Director’s GUI setup.
High availability means being able to provide continuous access to logical volumes through
planned and unplanned outages with as little user impact or intervention as possible. It does
not mean that all potential for user impact or action is eliminated. The basic guidelines for
establishing a grid configuration for high availability are as follows:
The production systems (sysplexes and LPARs) have FICON channel connectivity to both
clusters in the grid. This means that DFSMS library definitions and IODF have been
established and the appropriate FICON Directors, DWDM attachments, and fiber are in
place. Virtual tape devices in both clusters in the grid configuration are varied online to the
production systems. If virtual tape device addresses are not normally varied on to both
clusters, the virtual tape devices to the standby cluster need to be varied on in the event of
a planned or unplanned outage to allow production to continue.
The workload placed on the grid configuration should be such that when using only one of
the clusters, performance throughput is sufficient to meet service level agreements. If both
clusters are normally used by the production systems (the virtual devices in both clusters
are varied online to production), then in the case where one of the clusters is unavailable,
the available performance capacity of the grid configuration is reduced to half.
For all data that is critical for high availability, assign a Management Class whose copy
consistency point definition has both clusters having a copy consistency point of RUN.
This means that each cluster is to have a copy of the data when the volume is closed and
unloaded from the source cluster.
The distance of grid links between the clusters should be no more than 50-100 km using
low-latency directors, switches, or DWDMs. This minimizes the impact to job execution
times because of volume copying between clusters at volume close time. Network Quality
of Service (QoS) or other network sharing methods should be avoided because they can
introduce packet loss, which directly reduces the effective replication bandwidth between
the clusters.
By following these guidelines, the TS7700 Virtualization Engine grid configuration supports
the availability and performance goals of your workloads by minimizing the impact of the
following outages:
Planned outages in a grid configuration, such as microcode or hardware updates to a
cluster. While one cluster is being serviced, production work continues with the other
cluster in the grid configuration after virtual tape device addresses are online to the
cluster.
Unplanned outage of a cluster. Because of the copy policy to make a copy of a volume
when it is being closed, all jobs that completed before the outage will have a copy of their
data available on the other cluster. For jobs that were in progress on the cluster that failed,
they can be re-issued after virtual tape device addresses are online on the other cluster (if
they were not already online) and an ownership takeover mode has been established
(either manually or through AOTM). More details about AOTM can be found in “Autonomic
Ownership Takeover Manager” on page 86. For jobs that were writing data, the written
data is not accessible and the job must start again.
Important: Fast Ready categories and data classes work at the system level and are
unique for all clusters in a grid. This means that if you modify them in one cluster, it will
apply to all clusters on that grid.
LIBRARY-ID
In a grid configuration used with the TS7700 Virtualization Engine, each virtual device
attached to a System z host reports back the same five character hexadecimal library
sequence number, known as the Composite Library ID. With the Composite Library ID, the
host considers the grid configuration as a single library.
Each cluster in a grid configuration possesses a unique five character hexadecimal Library
Sequence Number, known as the Distributed Library ID, which identifies it among the
clusters in its grid. This Distributed Library ID is reported to the System z host upon request
and is used to distinguish one cluster from another in a given grid.
LIBPORT-ID
Each logical control unit, or 16-device group, must present a unique subsystem identification
to the System z host. This ID is a 1-byte field that uniquely identifies each logical control unit
within the cluster and is called the LIBPORT-ID. The value of this ID cannot be 0. Table 3-10
shows the definitions of the LIBPORT-IDs in a multi-cluster grid.
For more details, see 5.1.4, “Library names, library IDs, and port IDs” on page 286.
Cluster 0: logical control units 0-F, LIBPORT-IDs 01-10
Cluster 1: logical control units 0-F, LIBPORT-IDs 41-50
Cluster 2: logical control units 0-F, LIBPORT-IDs 81-90
Cluster 3: logical control units 0-F, LIBPORT-IDs C1-D0
Cluster 4: logical control units 0-F, LIBPORT-IDs 21-30
Cluster 5: logical control units 0-F, LIBPORT-IDs 61-70
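The pattern in the preceding LIBPORT-ID assignments can be expressed as a per-cluster base value plus the logical control unit number, as the following Python sketch shows. The helper function and constant names are invented for illustration; the base values are taken from the assignments above.

# Illustrative sketch: each cluster owns 16 logical control units (LCUs 0-F),
# and its 1-byte LIBPORT-ID is a per-cluster base value plus the LCU number.

CLUSTER_BASE = {0: 0x01, 1: 0x41, 2: 0x81, 3: 0xC1, 4: 0x21, 5: 0x61}

def libport_id(cluster, lcu):
    """LIBPORT-ID (hex) for logical control unit 0x0-0xF in a cluster."""
    if not 0 <= lcu <= 0xF:
        raise ValueError("LCU must be in the range 0-F")
    return CLUSTER_BASE[cluster] + lcu

print(f"{libport_id(0, 0x0):02X}")   # 01  (Cluster 0, LCU 0)
print(f"{libport_id(0, 0xF):02X}")   # 10  (Cluster 0, LCU F)
print(f"{libport_id(3, 0xF):02X}")   # D0  (Cluster 3, LCU F)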
Each TS7700 Virtualization Engine cluster provides 256 virtual devices. When two clusters
are connected to a grid configuration, the grid provides 512 virtual devices. When three
clusters are connected to a grid configuration, the grid provides up to 768 virtual devices and
so on up to a six-cluster grid configuration which can provide 1536 virtual devices.
The Automatic Class Selection (ACS) routines process every new tape allocation in the
system-managed storage (SMS) address space. The production ACS routines are stored in
the active control data set (ACDS). These routines allocate to each volume a set of classes
that reflect your installation’s policies for the data on that volume. The ACS routines are
invoked for every new allocation. Tape allocations are passed to the Object Access Method
(OAM), which uses its Library Control System (LCS) component to communicate with the
Integrated Library Manager.
For SMS-managed requests, the Storage Group routine assigns the request to a Storage
Group. The assigned Storage Group determines which logical partitions in the tape library are
to be used. Through the Storage Group construct, you can direct logical volumes to specific
physical volume pools.
Use a FICON Director when connecting the TS7700 Virtualization Engine to more than one
system.
The TS7700 Virtualization Engine places no limitations on the number of hosts that can use
those channel paths, or the types of hosts, or their operating system environments. This is the
same as for any tape technologies that are supported in IBM Tape Libraries.
An operating environment, however, through its implementation, does impose limits. z/OS
DFSMS can support up to 32 systems or groups of systems.
Basically, anything that can be done with native drives in an IBM tape library can be done with
the virtual drives in a TS7700 Virtualization Engine.
The TS7700 Virtualization Engine attaches to the host system or systems through two or four
FICON channels. Each FICON channel provides 256 logical paths (starting with TS7700
For more information, see 5.1.3, “Logical path considerations” on page 285.
Important: Fast Ready Categories and Data Classes work at the system level and are
unique for all clusters in a grid. Therefore, if you modify them in one cluster, the
modification will apply to all clusters on that grid.
You can use SDAC to configure hard partitions at the LIBPORT-ID level for independent host
logical partitions or system complexes. Hard partitioning prevents a host logical partition or
system complex with an independent tape management configuration from inadvertently
modifying or removing data owned by another host. It also prevents applications and users on
one system from accessing active data on volumes owned by another system.
SDAC is enabled using FC 5271, Selective Device Access Control. See “TS7700
Virtualization Engine common feature codes” on page 134 for additional information about
this feature. This feature license key must be installed on all clusters in the grid before SDAC
is enabled. You can specify one or more LIBPORT-IDs per SDAC group. Each access group is
given a name and assigned mutually exclusive VOLSER ranges. Use the Library Port Access
Groups window on the TS7700 Virtualization Engine management interface to create and
configure Library Port Access Groups for use with SDAC. Access control is imposed as soon
as a VOLSER range is defined. As a result, selective device protection applies retroactively to
pre-existing data. See 5.4, “Implementing Selective Device Access Control” on page 323 for
detailed implementation information.
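As a conceptual illustration of this hard partitioning, the following Python sketch associates invented access group names with LIBPORT-IDs and mutually exclusive VOLSER ranges, then checks whether a mount request is allowed. The group names, ranges, and the default handling of volumes outside any defined range are assumptions for illustration; the real checks are enforced by the TS7700 Virtualization Engine itself.

# Illustrative sketch only: each access group owns a set of LIBPORT-IDs and
# one or more mutually exclusive VOLSER ranges.

ACCESS_GROUPS = {
    "PRODPLEX": {"libport_ids": {0x01, 0x02},
                 "volser_ranges": [("A00000", "AZZZZZ")]},
    "TESTPLEX": {"libport_ids": {0x41, 0x42},
                 "volser_ranges": [("B00000", "BZZZZZ")]},
}

def mount_allowed(libport_id, volser):
    """True if the requesting LIBPORT-ID belongs to a group owning the VOLSER."""
    for group in ACCESS_GROUPS.values():
        if any(low <= volser <= high for low, high in group["volser_ranges"]):
            return libport_id in group["libport_ids"]
    return True   # volumes outside any defined range are not protected here (assumed)

print(mount_allowed(0x01, "A12345"))   # True  - production host, its own volume
print(mount_allowed(0x41, "A12345"))   # False - test host blocked from production data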
A case study about sharing and partitioning the TS7700 Virtualization Engine can be found in
Appendix E, “Case study for logical partitioning of a two-cluster grid” on page 863.
The VOLSERs for logical volumes and physical volumes must be unique.
The VOLSERs must be unique throughout an SMSplex and throughout all storage
hierarchies, such as DASD, tape, and optical storage media. They must also be unique across
logical partitions connected to the grid.
Consider the size of your logical volumes, the number of scratch volumes you will need per
day, the time that is required for return-to-scratch processing, how often scratch processing is
performed, and whether you need to define logical WORM volumes.
Tip: Support for a 25,000 MB maximum logical volume size is available by RPQ only and requires a code level of 8.7.0.143 or higher.
Depending on the logical volume sizes that you choose, you might see the number of
volumes required to store your data grow or shrink depending on the media size from which
you are converting. If you have data sets that fill native 3590 volumes, even with 6,000 MB
logical volumes, you will need more TS7700 Virtualization Engine logical volumes to store the
data, which will be stored as multi-volume data sets.
The 400 MB CST-emulated cartridges or 800 MB with ECCST-emulated cartridges are the
two types you can specify when adding volumes to the TS7700 Virtualization Engine. You can
use these sizes directly or use policy management to override them to provide for the 1,000,
2,000, 4,000, or 6,000 MB sizes.
A logical volume size can be set by VOLSER and can change dynamically using the DFSMS
Data Class storage construct when a job requires a scratch volume. The amount of data
copied to the stacked cartridge is only the amount of data that has been written to a logical
volume. The choice between all available logical volume sizes does not affect the real space
used in either the TS7700 Virtualization Engine cache or the stacked volume.
In general, unless you have a special need for CST emulation (400 MB), specify the ECCST
media type when inserting volumes in the TS7700 Virtualization Engine.
In planning for the number of logical volumes needed, first determine the number of private
volumes that make up the current workload you will be migrating. One way to do this is by
looking at the amount of data on your current volumes and then matching that to the
supported logical volume sizes. Match the volume sizes, taking into account the
compressibility of your data. If you do not know the average ratio, use the conservative value
of 2:1.
If you choose to only use the 800 MB volume size, the total number needed might increase
depending whether current volumes that contain more than 800 MB compressed need to
expand to a multi-volume set. Take that into account for planning the number of logical
volumes required.
Now that you know the number of volumes you need for your current data, you can estimate
the number of empty scratch logical volumes you must add. Based on your current
operations, determine a nominal number of scratch volumes from your nightly use. If you have
an existing VTS installed, you might have already determined this number and are able to set
a scratch media threshold with that value through the ISMF Library Define window.
For example, assuming the current volume requirements (using all the available volume sizes), use of 2,500 scratch volumes per night, and return-to-scratch processing every other day, you need to plan on the following number of logical volumes in the TS7700 Virtualization Engine:
75,000 (current, rounded up) + 2,500 + 2,500 × (1 + 1) = 82,500 logical volumes
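The same arithmetic can be captured in a short Python sketch for experimenting with your own numbers. The function and parameter names are invented for illustration and simply restate the calculation above.

# Illustrative sketch of the logical volume planning calculation: private
# volumes plus a nightly scratch buffer plus the volumes held between
# return-to-scratch runs.

def logical_volumes_needed(current_private, scratch_per_night,
                           nights_between_return_to_scratch):
    """Estimate the number of logical volumes to define."""
    nightly_buffer = scratch_per_night
    held_until_returned = scratch_per_night * nights_between_return_to_scratch
    return current_private + nightly_buffer + held_until_returned

# The example from the text: 75,000 current volumes, 2,500 scratch per night,
# return-to-scratch processing every other day.
print(logical_volumes_needed(75_000, 2_500, 2))   # 82,500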
If you define more volumes than you need, you can always delete the additional volumes.
Unused logical volumes do not consume space.
The default number of logical volumes supported by the TS7700 Virtualization Engine is
1,000,000. You can add support for additional logical volumes in 200,000 volume increments,
up to a total of 2,000,000 logical volumes. This is the maximum number either in a stand-alone
or in a grid configuration.
See FC 5270, Increased logical volumes in “TS7700 Virtualization Engine common feature
codes” on page 134 to achieve this upgrade.
Consideration: Up to 10,000 logical volumes can be inserted at one time. This applies to
both inserting a range of logical volumes and inserting a quantity of logical volumes.
Attempting to insert amounts over 10,000 will return an error.
Return-to-scratch processing
Return-to-scratch processing involves running a set of tape management tools that identify
the logical volumes that no longer contain active data, and then communicating with the
TS7700 Virtualization Engine to change the status of those volumes from private to scratch.
The amount of time the process takes depends on the type of tape management system
being employed, how busy the TS7700 Virtualization Engine is when it is processing the
volume status change requests, and whether a grid configuration is being used.
If the number of logical volumes used on a daily basis is small (less than a few thousand), you
might choose to only perform return-to-scratch processing every few days. A good rule is to
plan for no more than a 4-hour time period to run return to scratch. By ensuring a nominal run
time of four hours, enough time exists during first shift to run the process twice should
problems be encountered during the first attempt. Unless there are specific reasons, execute
return-to-scratch processing once per day.
With z/OS V1.9 or later, return to scratch in DFSMSrmm has been enhanced to speed up this
process. To reduce time required for housekeeping, it is now possible to run several
return-to-scratch processes in parallel. For additional information about the enhanced
return-to-scratch process, see the z/OS DFSMSrmm Implementation and Customization
Guide, SC26-7405.
The TS7700 Virtualization Engine determines how to establish increments of VOLSER values
based on whether the character in a particular position is a number or a letter. For example,
inserts starting with ABC000 and ending with ABC999 will add logical volumes with
VOLSERs of ABC000, ABC001, ABC002…ABC998, and ABC999 into the inventory of the
TS7700 Virtualization Engine. You might find it helpful to plan for growth by reserving multiple
ranges for each TS7700 Virtualization Engine you expect to install.
If you have multiple partitions, it is better to plan in advance which ranges will be used in
which partitions, for example, A* for first sysplex, B* for second sysplex, and so on. If you
need more than a range, you can select A* and B* for first sysplex, C* and D* for second
sysplex, and so on.
For critical data that only resides on tape, you can currently make two copies of the data on
physically separate tape volumes either manually through additional job steps or through their
applications. Within a TS7700 Virtualization Engine, you might need to be able to control
where a second copy of your data is placed so that it is not stacked on the same physical tape
as the primary copy. Although this can be accomplished through the logical volume affinity
functions, it simplifies the management of the tape data and makes better use of the host CPU
resources to have a single command to the TS7700 Virtualization Engine subsystem direct it
to selectively make two copies of the data contained on a logical volume.
If you activate Dual Copy for a group of data or a specific pool, consider that all tasks and
properties, which are connected to this pool, are duplicated:
The number of reclamation tasks
The number of physical drives used
You must plan for additional throughput and capacity. You do not need more logical volumes
because the second copy uses an internal volume ID.
With the Delete Expired Volume Data setting, the data associated with volumes that have been returned to scratch is deleted after a time period and the space it occupied is released. For
example, assume that you have 20,000 logical volumes in scratch status at any point in time,
the average amount of data on a logical volume is 400 MB, and the data compresses at a
2:1 ratio. The space occupied by the data on those scratch volumes is 4,000,000 MB, or the
equivalent of 14 JA cartridges (when using J1A Emulation mode). By using the Delete
Expired Volume Data setting, you can reduce the number of cartridges required in this
example by 14.
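The arithmetic behind this example can be sketched as follows. The 300,000 MB JA capacity in J1A Emulation mode is taken from the capacity assumptions listed later in this section, and the function and variable names are illustrative only.

import math

def scratch_space_estimate(scratch_volumes, avg_mb_per_volume, compression_ratio,
                           cartridge_capacity_mb):
    # Space still occupied by expired scratch volumes, and the approximate number
    # of stacked cartridges that space corresponds to.
    occupied_mb = scratch_volumes * avg_mb_per_volume / compression_ratio
    cartridges = math.ceil(occupied_mb / cartridge_capacity_mb)
    return occupied_mb, cartridges

# Values from the example: 20,000 scratch volumes, 400 MB average, 2:1 compression,
# 300,000 MB per JA cartridge in J1A Emulation mode.
mb, carts = scratch_space_estimate(20_000, 400, 2, 300_000)
print(f"{mb:,.0f} MB, about {carts} JA cartridges")   # 4,000,000 MB, 14 cartridges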
Having too low a setting results in more physical volumes being needed. Having too high a
setting might impact the ability of the TS7740 Virtualization Engine to perform host workload,
because it is using its resources to perform reclamation. You might need to experiment to find
a threshold that matches your needs.
As a good starting point, use 35%. This will accommodate most installations.
The suggested formula to calculate the number of physical volumes needed is as follows:
Pv = (((Lv + Lc) × Ls)/Cr)/(Pc × (((100 + Rp)/100)/2))
To this number, you then add scratch physical volumes based on the common media pool and
the number of physical volume pools you plan on using. For example, use the following
assumptions:
Lv: 82,500 (see “Number of logical volumes” on page 162)
Lc: 10,000 (logical volumes)
Ls: 400 MB
Cr: 2
Rp: 10
Pc: 300,000 MB (capacity of a 3592 J1A written JA volume)
    500,000 MB (capacity of a TS1120 written JA volume)
    700,000 MB (capacity of a TS1120 written JB volume)
    640,000 MB (capacity of a TS1130 written JA volume)
    1,000,000 MB (capacity of a TS1130 written JB volume)
Common scratch pool: 10
Volume pools: 5 (with three volumes per pool)
Important: The calculated number is the required minimum value. It does not include any
spare volumes to allow data growth from the first installation phase.
Therefore, add enough extra scratch volumes for future data growth.
Using the suggested formula and the assumptions, plan to use the following number of
physical volumes in your TS7740 Virtualization Engine:
(((82,500 + 10,000) × 400 MB)/2)/(300,000 × (((100 + 10)/100)/2)) + 10 + 5 × 3 =
137 physical volumes
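As a cross-check of the worked example, the formula can be coded as a small Python sketch. The parameter interpretations (Lv and Lc as logical volume counts, Ls as average logical volume size in MB, Cr as compression ratio, Rp as the reclamation threshold percentage, and Pc as physical cartridge capacity in MB) follow the assumptions above; the function name and rounding are illustrative only.

def physical_volumes_needed(lv, lc, ls_mb, cr, rp_percent, pc_mb,
                            common_scratch=0, pools=0, volumes_per_pool=0):
    # Pv = (((Lv + Lc) x Ls)/Cr) / (Pc x ((100 + Rp)/100)/2), plus scratch volumes
    # for the common scratch pool and each physical volume pool.
    data_mb = (lv + lc) * ls_mb / cr
    effective_capacity_mb = pc_mb * ((100 + rp_percent) / 100) / 2
    return data_mb / effective_capacity_mb + common_scratch + pools * volumes_per_pool

result = physical_volumes_needed(82_500, 10_000, 400, 2, 10, 300_000,
                                 common_scratch=10, pools=5, volumes_per_pool=3)
print(round(result))   # about 137, matching the worked example above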
If you insert more physical volumes in the TS7740 Virtualization Engine than you need, you
can eject them at a later time.
See “Out of Physical Volumes” on page 620 for more information about how to handle an out
of scratch situation. For more information about BVIR, see 9.9.5, “Interpreting the BVIR
response data” on page 718. For more information about the Host Console Request function,
see 2.4.6, “Logical WORM (LWORM) support and characteristics” on page 82.
Important: To achieve the optimum throughput, verify your definitions to make sure that
you have specified compression for data written to the TS7700 Virtualization Engine.
TS7700 Virtualization Engine Release 1.3 added physical volume erasure on a physical
volume pool basis controlled by an additional reclamation policy. It uses the Outboard Policy
Management (OPM) feature, which is standard on the TS7700 Virtualization Engine. With the
Secure Data Erase function, all reclaimed physical volumes in that pool are erased by writing
a random pattern across the whole tape before being reused. In the case of a physical volume
that has encrypted data, the erase might involve just “shredding” the encryption keys on the
volume to accomplish the erasure. A physical cartridge is not available as a scratch cartridge
as long as its data is not erased.
As part of this data erase function, an additional reclamation policy is added. The policy
specifies the number of days a physical volume can contain invalid logical volume data before
the physical volume becomes eligible to be reclaimed. The data associated with a logical
volume is considered invalidated as follows:
A host has assigned the logical volume to a scratch category. The volume is subsequently
selected for a scratch mount and data is written to the volume. The older version of the
volume is now invalid.
A host has assigned the logical volume to a scratch category that has the fast-ready
attribute set, the category has a non-zero delete expired data parameter value, the
parameter value has been exceeded, and the TS7740 Virtualization Engine has deleted
the logical volume.
A host has modified the contents of the volume. This can be a complete rewrite of the
volume or appending to it. The new version of the logical volume will be migrated to a
separate physical location, and the older version is now invalid.
The TS7740 Virtualization Engine keeps track of the amount of active data on a physical
volume. It starts at 100% when a volume becomes full. Although the TS7740 Virtualization
Engine tracks the percentage of active data at a granularity of 0.01%, it rounds down, so even
one byte of inactive data drops the percentage below 100%.
The TS7740 Virtualization Engine keeps track of the time that the physical volume went from
100% full to less than 100% full by doing the following tasks:
Checking, on an hourly basis, for volumes in a pool that have a non-zero setting
This data erase function is enabled on a per-pool basis. It is enabled when a non-zero value is
specified for the data erase reclamation policy. When enabled, all physical volumes in the
pool are erased as part of the reclamation process, independent of the reclamation policy
under which the volume became eligible for reclamation.
Any physical volume that has a status of read-only is not subject to this function and is not
designated for erasure as part of read-only recovery.
If you use the eject stacked volume function, no attempt is made to erase the data on the
volume before ejecting the cartridge. The control of expired data on an ejected volume
becomes your responsibility.
Volumes that are tagged for erasure cannot be moved to another pool until erased, but they
can be ejected from the library because such volumes are usually removed for recovery
actions.
Using the Move function of the Integrated Library Manager will also cause a physical volume to
be erased, even though the number of days specified has not yet elapsed. This includes
returning borrowed volumes.
The TS7740 Virtualization Engine historical statistics are updated with the number of physical
mounts for data erasure. The pool statistics are updated with the number of volumes waiting
to be erased and the value for the days (number of days) until erasure reclamation policy.
If you implement Secure Data Erase for a limited group of data that is separated into a
dedicated pool, the number of reclamation tasks plus data erase tasks increases. Fewer
physical drives might be available, even during times when you have inhibited reclamation.
The Inhibit Reclaim Schedule specification only partially applies to Secure Data Erase:
No new cartridges are reclaimed during this time.
Cartridges already reclaimed can be erased during this time.
This means that, although you do not allow reclamation during your peak hours to have all
your drives available for recall and premigration, Secure Data Erase will not honor your
settings and thus will run up to two concurrent erasure operations per physical drive type as
long as there are physical volumes to be erased.
Because the first logical volume that expires triggers the physical volume to be erased, an
almost full physical cartridge will be first reclaimed and then erased.
Group logical volumes that require secure erase after they are expired so that no
unnecessary reclamation and subsequent erasure operations take place. Pooling by
expiration date might help reduce unnecessary reclamation. Although proper grouping
reduces the amount of reclamation that needs to be done, it will not eliminate the erasure
step.
Use the available settings to meet specific performance and availability objectives. You can
view and modify these settings from the TS7700 Virtualization Engine management interface
by clicking Configuration → Copy Policy Override, as shown in Figure 3-6.
The settings you can select in the MI window shown in Figure 3-6 are:
Prefer local cache for fast ready mount requests
This override causes the local cluster to satisfy a fast ready mount request as long as the
cluster is available and a copy consistency point of No Copy is not specified for that cluster
in the Management Class for the mount. The cluster is not required to have a valid copy of
the data.
Prefer local cache for non-fast ready mount requests
This override causes the local cluster to satisfy the mount request as long as the cluster is
available and the cluster has a valid copy of the data, even if that data is only resident on
physical tape. If the local cluster does not have a valid copy of the data, then default
cluster selection criteria applies.
Force volumes mounted on this cluster to be copied to the local cache
For a non-fast ready mount, this override causes a copy to be performed on the local
cluster as part of mount processing. For a fast ready mount, this setting has the effect of
overriding the specified Management Class with a copy consistency point of
rewind/unload for the cluster. This setting does not change the definition of the
Management Class, but serves to influence the replication policy.
Allow fewer RUN consistent copies before reporting RUN command complete
Important: In an IBM Geographically Dispersed Parallel Sysplex® (IBM GDPS), all three
Copy Policy Override settings must be selected on each cluster to ensure that wherever
the GDPS primary site is, this TS7700 Virtualization Engine Cluster is preferred for all I/O
operations.
If the TS7700 Virtualization Engine cluster of the GDPS primary site fails, perform the
following recovery actions:
1. Vary virtual devices from a remote TS7700 Virtualization Engine cluster online from the
primary site of the GDPS host.
2. Manually invoke, through the TS7700 Virtualization Engine management interface, a
Read/Write Ownership Takeover, unless Automated Ownership Takeover Manager
(AOTM) has already transferred ownership.
TS7740 Virtualization Engine back-end drive encryption was introduced with Release 1.2 of
the TS7740 Virtualization Engine. There are no host software updates required for this
function because the TS7740 Virtualization Engine controls all aspects of the encryption
solution.
Tapes encrypted in the TS7740 Virtualization Engine backstore use a “wrapped key” model.
The data on each cartridge is encrypted with a random 256-bit Advanced Encryption
Standard (AES-256) Data Key (DK). The Data Key is stored on the cartridge in an encrypted,
or “wrapped”, form. Four instances of these wrapped data keys are stored on each cartridge.
Use the window shown in Figure 3-7 on page 171 for viewing and modifying storage pool
encryption settings on the TS7740 Virtualization Engine. The TS7740 Virtualization Engine
allows encryption by storage pool. If you are planning to enable encryption for one or more
pools, use this window to modify the encryption settings as follows:
1. Select the Encryption Settings tab at the physical volume pool properties. An encryption
settings overview will be displayed for all pools. Select the pool number you want to
modify, then select Modify Encryption Settings in the Select Action drop-down menu
and click Go.
2. Enable and modify your selected pool encryption settings as shown in Figure 3-8.
Although you can run with a single EKM, you should have two EKMs for use by the TS7740
Virtualization Engine. Each EKM should have all the required keys in their respective
keystores. The EKMs should have independent power and network connections to maximize
the chances that at least one of them is reachable from the TS7740 Virtualization Engine
when needed. If the TS7740 Virtualization Engine is unable to contact either EKM when
required, you might temporarily lose access to migrated logical volumes and will not be able
to move logical volumes in encryption-enabled storage pools out of cache.
See IBM Encryption Key Manager Component for the Java Platform Introduction, Planning,
and User's Guide, GA76-0418 for details about installing and configuring your EKMs.
Because the TS7740 Virtualization Engine maintains TCP/IP connections with the EKMs at
all times, the EKM configuration file should have the following setting to prevent the EKM from
timing out on these always-on connections:
TransportListener.tcp.timeout = 0
For additional information about the Tivoli Key Lifecycle Manager, see the following URL:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=/com.ibm.tk
lm.doc/welcome.htm
TS1130 Model E06 tape drives can also be attached to the TS7740 Virtualization Engine.
Intermixing with TS1120 tape drives is not allowed.
To enable encryption on a TS7740 Virtualization Engine with 3592 Model J1A drives
attached, plan your back-end drive capacity with the expectation that these drives will not be
used. The 3592 Model J1A tape drives are not encryption capable and should be detached.
The TS7740 Virtualization Engine does not allow a mixture of drive types to be used. The J1A
drives can be redeployed in other subsystems or used as direct-attached drives for
open-systems hosts. If you have a mixture of J1A and E05 drives attached to your TS7740
Virtualization Engine and cannot detach the J1A drives right away, you can proceed as long
as you have a minimum of four encryption-capable E05 drives attached. Be aware, however,
that the J1A drives will not be used by the TS7740 Virtualization Engine after the E05 drives
are put into native mode.
All TS1120 tape drives with feature FC5592 or 9592 are encryption-capable.
The TS7740 Virtualization Engine must not be configured to force the TS1120 drives into
“J1A” mode. This setting may only be changed by your IBM System Service Representative.
If you need to update the microcode level, be sure the IBM System Service Representative
checks and changes this setting if needed.
The window shown in Figure 3-9 on page 174 is used to set the encryption key manager
addresses in the TS7700 Virtualization Engine.
A video tutorial is available from this window. The tutorial presents the properties of the
Encryption Key Manager. Click the View tutorial link.
This page is visible but disabled on the TS7700 Virtualization Engine management interface if
the grid possesses a physical library but the selected cluster does not. The following
message is displayed:
The cluster is not attached to a physical tape library.
This page is not visible on the TS7700 Virtualization Engine management interface if the
cluster does not possess a physical library.
The encryption key server assists encryption-enabled tape drives in generating, protecting,
storing, and maintaining encryption keys that are used to encrypt information being written to
and decrypt information being read from tape media (tape and cartridge formats).
The following settings are used to configure the IBM Virtualization Engine TS7700 connection
to an encryption key server.
Primary key server address
The key server name or IP address that is primarily used to access the encryption key
server. This address can be a fully qualified host name or an IP address in IPv4 format.
This field is not required if you do not want to connect to an encryption key server.
Primary key server port
The port number of the primary key server. Valid values are any whole number between 0
and 65535; the default value is 3801. This field is only required if a primary key address is
used.
Secondary key server address
The key server name or IP address that is used to access the encryption key server when
the primary key server is unavailable. This address can be a fully qualified host name or
an IP address in IPv4 format. This field is not required if you do not want to connect to an
encryption key server.
Secondary key server port
The port number of the secondary key server. Valid values are any whole number between
0 and 65535; the default value is 3801. This field is only required if a secondary key
address is used.
Use the Ping Test buttons to check the cluster's network connection to a key server after changing
a cluster's address or port. If you change a key server address or port and do not submit the
change before using the Ping Test button, you will receive the following warning:
to perform a ping test you must first submit your address and/or port changes.
Important: The two key manager servers must be set up on separate machines to provide
redundancy. Connection to a key manager is required to read encrypted data.
The TS7740 Virtualization Engine provides the means to manage the use of encryption and
which keys are used on a storage pool basis. The Storage Group
DFSMS construct specified for a logical tape volume determines which storage pool is used
for the primary and optional secondary copies in the TS7740 Virtualization Engine. Storage
pool encryption parameters are configured through the TS7740 Virtualization Engine
management interface under Physical Volume Pools.
With TS7700 Virtualization Engine R1.7 and later, each pool can be defined to use Specific
Encryption Keys or the Default Encryption Keys, which are defined at the key manager
server:
Specific Encryption Keys
Each pool defined in the TS7740 Virtualization Engine can have its own unique
encryption key. As part of enabling a pool for encryption, enter one or two key labels for
the pool and a key mode. A key label can be up to 64 characters. Key labels do not have to
be unique per pool, and the management interface provides the capability to assign the
same key label to multiple pools. For each key, a key mode can be specified. The
supported key modes are Label and Hash. As part of the encryption configuration through
the management interface, provide IP addresses for a primary and an optional secondary
key manager.
Default Encryption Keys
As an extension of TS7740 Virtualization Engine encryption capabilities, TS7740
Virtualization Engine R1.7 and later supports the use of a default key. This key simplifies
the management of the encryption infrastructure because no future changes are required
at the TS7740 Virtualization Engine. After a pool is defined to use the default key, the
management of encryption parameters is performed at the key manager.
For additional details about encryption and setting up your encryption key management
solution, see IBM System Storage Data Encryption, SG24-7797.
An essential step is that you configure the tape drives for System-Managed encryption. The
TS7740 Virtualization Engine uses the drives in this mode only, and does not support
Library-Managed or Application-Managed encryption.
Important: After the TS7740 Virtualization Engine is using drives for encrypted physical
tape volumes, it will put drives that are not properly enabled for encryption offline to the
subsystem. TS3500 Tape Library operators must be made aware that they should leave
attached TS7740 Virtualization Engine drives in System-Managed encryption mode at all
times so that drives are not taken offline.
A page opens to a list of TXT, PDF, and EXE files. To start, open the OVERVIEW.PDF file to see
a brief description of all the various tool jobs. All jobs are found in the IBMTOOLS.EXE file, which
is a self-extracting compressed file that will, after it has been downloaded to your PC, expand
into four separate files:
IBMJCL.XMI: JCL for current tape analysis tools
IBMCNTL.XMI: Parameters needed for job execution
Two areas of investigation can assist you in tuning your current tape environment by
identifying factors that negatively influence the overall performance of the TS7700
Virtualization Engine: small block sizes (smaller than 16 KB) and low compression ratios.
The examples in Table 3-12 show the types of reports that can be created from SMF
data. The examples should be viewed primarily as suggestions to assist in beginning to plan
SMF reports.
(The table includes, for example, subtype 04, Step End, and subtype 05, Job End, records.)
The following job stream has been created to help analyze these records. See the installation
procedure in the $$INDEX member:
EREPMDR: JCL to extract MDR records from EREP history file
TAPECOMP: A program that reads either SYS1.LOGREC or the EREP history file and
produces reports on current compression ratios and MB transferred per hour.
The SMF 21 records contain both channel-byte and device-byte information.
The TapeWise tool calculates data compression ratios for each volume. The following reports
show compression ratios:
HRS
DSN
MBS
VOL
TAPEWISE
The TAPEWISE tool is available from the IBM Tape Tools FTP site. TAPEWISE can, based on
input parameters, generate several reports that can help with various items:
Tape activity analysis
Mounts and MBs processed by hour
Input and output mounts by hour
Mounts by SYSID during an hour
Concurrent open drives used
Long VTS mounts (recalls)
The following job stream has been created to help analyze these records. See the installation
procedure in the $$INDEX member:
EREPMDR: JCL to extract MDR records from EREP history file.
BADBLKSZ: A program that reads either SYS1.LOGREC or the EREP history file, finds
volumes writing small block sizes, then gathers the job name and data set name from a
tape management system (TMS) copy.
Collect the stated SMF records for all z/OS systems that share the current tape configuration
and might have data migrated to the TS7700 Virtualization Engine. The data collected should
span one month (to cover any month-end processing peaks) or at least those days that
represent the peak load in your current tape environment. Check the SMFPRMxx member in
SYS1.PARMLIB to see whether the required records are being collected. If they are not,
arrange for collection.
(Figure: TMS data and SMF data are processed in step 1 by the FORMCATS and SORTSMF
(SMFILTER) jobs into the ...FORMCATS.TMCATLG and ...SMFDATA.SORTED data sets,
which are packed in step 2 by the BMPACKT and BMPACKS (BMPACK) jobs.)
In addition to the extract file, the following information is useful for sizing the TS7700
Virtualization Engine:
Number of volumes in current tape library
This number includes all the tapes (located within automated libraries, on shelves, and off
site). If the unloaded Tape Management Catalog data is provided, there is no need to
collect the number of volumes.
Criteria for identifying volumes
Because volumes are transferred off site to be used as backup, their identification is
important. Identifiers such as high-level qualifiers (HLQ), program names, or job names,
must be documented for easier reference.
Number and type of tape control units installed
This information provides a good understanding of the current configuration and will help
identify the reasons for any apparent workload bottlenecks.
3.8.2 BatchMagic
The BatchMagic tool provides a comprehensive view of the current tape environment and
predictive modeling of workloads and technologies. The general methodology behind this tool
involves analyzing SMF type 14, 15, 21 and 30 records, and data extracted from the tape
management system. The TMS data is required only if you want to make a precise forecast of
the cartridges to be ordered based on the current cartridge utilization that is stored in the
TMS catalog.
Running BatchMagic involves data extraction, grouping data into workloads, and then
targeting workloads to individual or multiple IBM tape technologies. BatchMagic examines
Tape Management System catalogs and projects cartridges required with new technology,
and it models the operation of a TS7700 Virtualization Engine and 3592 drives (for TS7740
Virtualization Engine) and projects required resources. The reports from BatchMagic give a
clear understanding of your current tape activities, and even more important, make
projections for a TS7700 Virtualization Engine solution together with its major components,
such as 3592 drives, which cover your overall sustained and peak throughput requirements.
BatchMagic is specifically for IBM internal and IBM Business Partner use.
In this example, the TS7700 Virtualization Engine Cache hit results in a savings in tape
processing elapsed time of 40 seconds.
Notes:
The term local means the TS7700 Virtualization Engine cluster that is performing
the logical mount to the host.
The term remote means any other TS7700 Virtualization Engine that is participating
in the same grid as the local cluster.
With the wide range of capabilities that the TS7700 Virtualization Engine provides, unless the
data sets are very large or require interchange, the TS7700 Virtualization Engine is likely a
suitable place to store data.
Storage administrators and system programmers should also receive the same training as the
operations staff, plus:
Software choices and how they affect the TS7700 Virtualization Engine
Disaster recovery considerations
For additional information about this topic, see IBM TS3500 Tape Library with System z
Attachment A Practical Guide to Enterprise Tape Drives and TS3500 Tape Automation,
SG24-6789.
These services help you learn about, plan, install, manage, or optimize your IT infrastructure
to be an On Demand business. They can help you integrate your high-speed networks,
storage systems, application servers, wireless protocols, and an array of platforms,
middleware, and communications software for IBM and many non-IBM offerings.
References in this publication to IBM products or services do not imply that IBM intends to
make them available in all countries in which IBM operates.
Table 3-13 can help you with planning TS7700 Virtualization Engine preinstallation and
sizing. Use the table as a checklist for the main tasks needed to complete the TS7700
Virtualization Engine installation.
The checklist includes tasks such as the initial meeting, specialist training, cutover to
production, and post-installation tasks (if any; see 9.10, “IBM Tape Tools” on page 727).
It covers all implementation steps that relate to the setup of the following products:
IBM System Storage TS3500 Tape Library
TS7700 Virtualization Engine
Important: IBM 3494 Tape Library attachment is not supported at Release 2.0.
The TS7720 Virtualization Engine does not have a tape library attached, so the
implementation steps related to a physical tape library, TS3500 Tape Library, do not apply to
the TS7720 Virtualization Engine.
You can install the TS7740 Virtualization Engine together with your existing TS3500 Tape
Library. Because the Library Manager functions reside inside the TS7700 Virtualization Engine
microcode, the TS7740 itself manages all necessary operations, so the IBM 3953 Tape
System is no longer required to attach a TS7740 Virtualization Engine.
Important: System z attachment of native 3592 tape drives through a tape controller might
still require the IBM 3953 Tape System. See IBM TS3500 Tape Library with System z
Attachment A Practical Guide to Enterprise Tape Drives and TS3500 Tape Automation,
SG24-6789, for more information.
You can also install a new TS3500 Tape Library and a new TS7740 Virtualization Engine at
the same time.
These three groups of implementation tasks can be done in parallel or sequentially. The HCD
and host definitions can be completed before or after the actual hardware installation.
Important: IBM 3494 Tape Library attachment is not supported at Release 2.0.
If you are installing a TS7740 Virtualization Engine in an existing TS3500 Tape Library
environment, some of these tasks might not apply to you:
Hardware related activities (completed by your IBM System Service Representative):
– Install the IBM TS3500 Tape Library.
– Install any native drives that will not be TS7740 Virtualization Engine controlled.
– Install the TS7740 Virtualization Engine Frame and additional D2x Frame(s) in the
TS3500 Tape Library.
Define drives to hosts:
– z/OS
– z/VM
– z/VSE
– TPF and z/TPF
Software related activities:
– Apply maintenance for the TS3500 Tape Library.
– Apply maintenance for the TS7700 Virtualization Engine.
– Verify or update exits for the tape management system (if applicable) and define logical
volumes to it.
– See z/VM V5R4.0 DFSMS/VM Removable Media Services, SC24-6090, and 5.6,
“Software implementation in z/VM and z/VSE” on page 329.
Specific TS7700 Virtualization Engine activities:
– Define policies and constructs using the TS7700 Virtualization Engine management
interface (MI).
– Define the logical VOLSER ranges of the logical volumes through the TS7700
Virtualization Engine MI.
Important: Encryption does not work with tape drives in emulation mode. Tape drives
must be set to Native mode.
These tasks are further described, including the suggested order of events, later in this
chapter.
After your TS7740 Virtualization Engine is installed on the TS3500 Tape Library, perform the
following post-installation tasks:
Schedule and complete operator training.
Schedule and complete storage administrator training.
Your IBM System Service Representative (SSR) performs the hardware installation of the
TS7740 Virtualization Engine, its associated tape library, and the frames. This installation
does not require your involvement other than the appropriate planning. See Chapter 3,
“Preinstallation planning and sizing” on page 117 for details.
After the SSR has physically installed the library hardware, you can use the TS3500 Tape
Library Specialist to set up the logical library, which is attached to the System z host.
The TS3500 Tape Library Specialist is required to define a logical library and perform the
following tasks. Therefore, you should make sure that it is set up properly and working. For
access through a standards-based web browser, an IP address must be configured, which will
be done initially by the SSR during hardware installation at the TS3500 Tape Library operator
window.
Important:
Each TS7740 Virtualization Engine requires its own Logical Library in a TS3500 Tape
Library.
The Advanced Library Management System (ALMS) feature must be installed and
enabled to define a logical library partition in the TS3500 Tape Library.
Figure 4-1 TS3500 Tape Library Specialist System Summary and Advanced Library Management System windows
As you can see in the ALMS window (at the bottom of the picture), ALMS is enabled for this
TS3500 Tape Library.
When ALMS is enabled for the first time in a partitioned TS3500 Tape Library, the contents of
each partition will be migrated to ALMS logical libraries. When enabling ALMS in a
non-partitioned TS3500 Tape Library, cartridges that already reside in the library are migrated
to the new ALMS single logical library.
Tip: You can create or remove a logical library from the TS3500 Tape Library by using the
Tape Library Specialist web interface.
1. From the main section of the TS3500 Tape Library Specialist Welcome window, go to the work
items on the left side of the window and select Library → Logical Libraries, as shown in
Figure 4-2.
2. From the Select Action drop-down menu, select Create and click Go.
3. Type in the logical library name (up to 15 characters), select the media type (3592 for
TS7740), and then click Apply. The new logical library is created and will appear in the
logical library list when the window is refreshed.
After the logical library is created, you can display its characteristics by selecting Library →
Logical Libraries under work items on the left side of the window, as shown in Figure 4-4 on
page 197. From the Select Action drop-down menu, select Detail and then click Go.
In the Logical Library Details window, you see the element address range. The starting
element address of each newly created logical library starts one element higher, such as:
Logical Library 1: Starting SCSI element address is 1025.
Logical Library 2: Starting SCSI element address is 1026.
Logical Library 3: Starting SCSI element address is 1027.
If the new logical library does not show 8 in the Volser column, correct the information:
1. Select a logical library.
2. Click the Selection Action drop-down menu and select Modify Volser Report.
3. Click Go.
Figure 4-7 illustrates the sequence.
This link takes you to a filtering window where you can select to have the drives displayed by
drive element or by Logical Library. Upon your selection, a window opens so that you can add
a drive to or remove it from a library configuration. It also enables you to share a drive
between Logical Libraries and define a drive as a control path.
Restriction: Do not share drives belonging to a TS7740 (or any other Control Unit). They
must be exclusive.
Figure 4-9 on page 201 shows the drive assignment window of a logical library that has all
drives assigned.
Unassigned drives would appear in the Unassigned column with the box checked, so to
assign them, check the appropriate drive box under the logical library name and click Apply.
In a multi-platform environment, you see logical libraries as shown in Figure 4-9, and you can
reassign physical tape drives from one logical library to another. You can easily do this for the
Open Systems environment, where the tape drives attach directly to the host systems without
a tape controller or VTS/TS7700 Virtualization Engine.
In a System z environment, a tape drive always attaches to one tape control unit or TS7740
Virtualization Engine only. If you reassign a tape drive from a TS7740 Virtualization Engine or
an IBM 3953 Library Manager partition to an Open Systems partition temporarily, you must
also physically detach the tape drive from the TS7740 Virtualization Engine or tape controller
first, and then attach the tape drive to the Open Systems host. Only IBM System Service
Representatives (SSR) should perform these tasks to protect your tape operation from
unplanned outages.
Important: In a System z environment, use the Drive Assignment window only to:
Initially assign the tape drives from TS3500 Tape Library Web Specialist to a logical
partition.
Assign additional tape drives after they have been attached to the TS7740
Virtualization Engine or a tape controller.
Remove physical tape drives from the configuration after they are physically detached
from the TS7740 Virtualization Engine or tape controller.
In addition, never disable ALMS at the TS3500 Tape Library after it has been enabled for
System z host support and System z tape drive attachment.
In a logical library, you can designate any dedicated drive to become a control path drive. A
drive that is loaded with a cartridge cannot become a control path until you remove the
cartridge. Similarly, any drive that is a control path cannot be disabled until you remove the
cartridge that it contains.
The definition of the control path drive is specified on the Drive Assignment window shown in
Figure 4-10. Notice that drives, defined as control paths, are identified by the symbol on the
left side of the drive box. You can change the control path drive definition by selecting or
deselecting this symbol.
4.2.4 Defining the Encryption Method for the new logical library
After adding tape drives to the new logical library, you must specify the Encryption Method for
the new Logical Library (if applicable).
Reminders:
When using encryption, tape drives must be set to Native mode.
To activate encryption, FC9900 must have been ordered for the TS7740 and the license key
factory installed. Also, the associated tape drives must be encryption-capable
3592-E05 or 3592-E06 drives (although supported, the 3592-J1A is not able to encrypt data).
2. If necessary, change the drive mode to Native mode: In the Drive Summary window, select
a drive and select Change Emulation Mode, as shown in Figure 4-12.
3. In the next window that opens, select the native mode for the drive. After drives are at the
desired mode, proceed with the Encryption Method definition.
To make encryption fully operational in the TS7740 configuration, additional steps are
necessary. Work with your IBM System Service Representative to configure the encryption
parameters in the TS7740 during the installation process.
Select Cartridge Assignment Policy from the Cartridges work items to add, change, and
remove policies. The maximum quantity of Cartridge Assignment Policies for the entire
TS3500 Tape Library must not exceed 300.
Figure 4-15 shows the VOLSER ranges defined for logical libraries.
The TS3500 Tape Library allows duplicate VOLSER ranges for different media types only. For
example, Logical Library 1 and Logical Library 2 contain LTO media, and Logical Library 3
contains IBM 3592 media. Logical Library 1 has a Cartridge Assignment Policy of
ABC100-ABC200. The library will reject an attempt to add a Cartridge Assignment Policy of
ABC000-ABC300 to Logical Library 2 because the media type is the same (both LTO).
However, the library does allow an attempt to add a Cartridge Assignment Policy of
ABC000-ABC300 to Logical Library 3 because the media (3592) is different.
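The media-type rule just described can be illustrated with a small sketch. The data shapes, library names, and the overlap test are assumptions for illustration; the TS3500 Tape Library itself enforces this check when a Cartridge Assignment Policy is added.

def ranges_overlap(a, b):
    # True if two (first, last) VOLSER ranges share at least one VOLSER
    # (simple string comparison on six-character VOLSERs).
    return a[0] <= b[1] and b[0] <= a[1]

def cap_addition_allowed(new_range, new_media, existing_policies):
    # Reject the new Cartridge Assignment Policy only when it overlaps an
    # existing policy of the *same* media type, as described above.
    return all(not (new_media == media and ranges_overlap(new_range, rng))
               for rng, media in existing_policies)

existing = [(("ABC100", "ABC200"), "LTO")]   # Logical Library 1
print(cap_addition_allowed(("ABC000", "ABC300"), "LTO", existing))    # False: same media type
print(cap_addition_allowed(("ABC000", "ABC300"), "3592", existing))   # True: different media type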
In an SMS-managed z/OS environment, all VOLSER identifiers across all storage hierarchies
are required to be unique. Follow the same rules across host platforms also, whether or not
you are sharing a TS3500 Tape Library between System z and Open Systems hosts.
Tip: The Cartridge Assignment Policy does not reassign an already assigned tape
cartridge. If needed, you must first make it unassigned, and then manually reassign it.
Important: Before inserting TS7740 Virtualization Engine physical volumes into the tape
library, make sure that the VOLSER ranges are defined correctly at the TS7740
Management Interface. See 4.3.3, “Defining VOLSER ranges for physical volumes” on
page 217.
These procedures ensure that TS7700 Virtualization Engine back-end cartridges will never
be assigned to a host by accident. Figure 4-16 shows the flow of physical cartridge insertion
and assignment to logical libraries for TS7740 Virtualization Engine R1.6.
Important: With the Advanced Library Manager System (ALMS) enabled, cartridges that
are not in a cartridge assignment policy will not be added to any logical library.
After completing the new media insertion, close the doors. After approximately 15 seconds,
the TS3500 automatically inventories the frame or frames of the door you opened. During the
inventory, the message INITIALIZING is displayed in the Activity area of the operator window.
When the inventory completes, the TS3500 operator window displays a Ready state. The TS7740
Virtualization Engine uploads its Logical Library inventory and updates its Integrated Library
Manager inventory accordingly. After completing this operation, the TS7740 Virtualization
Engine Library reaches the Auto state.
Place cartridges only in a frame that has an open front door. Do not add or remove cartridges
from an adjacent frame.
With virtual I/O enabled, the TS3500 moves cartridges from the physical I/O station into the
physical library by itself. First, the cartridge leaves the physical I/O station and goes into a
slot mapped as a virtual I/O slot, a SCSI element between 769 (X’301’) and 1023 (X’3FF’), for
the logical library selected by the Cartridge Assignment Policy (CAP). Each logical library has
its own set of up to 256 VIO slots. This number is defined at logical library creation and can
be altered later if needed.
With virtual I/O disabled, the TS3500 does not move cartridges from the physical I/O station
unless commanded to do so by the TS7740 Virtualization Engine or any other host in control.
In both cases, TS3500 detects the volumes inserted when the I/O station door is closed and
scans all I/O cells using the barcode reader. CAP decides to which Logical Library those
cartridges belong and then performs one of the following tasks:
Moves them to that logical library’s virtual I/O slots, if virtual I/O is enabled.
Waits for a host command in this logical partition. Cartridges stay in the I/O station after
barcode scan.
If the inserted cartridges belong to a range defined in the CAP of this logical library, and
those ranges were also defined in the TS7740 Virtualization Engine physical volume ranges as
explained in 4.3.3, “Defining VOLSER ranges for physical volumes” on page 217, the
cartridges are assigned to this logical library. If any VOLSER is not in a range defined by the
CAP, the operator must identify the correct logical library as the destination by using the Insert
Notification window at the operator window. If Insert Notification is not answered, the volume
remains unassigned.
You can then assign the cartridges to the TS7740 Virtualization Engine logical library partition
by following the procedure in 4.2.7, “Assigning cartridges in TS3500 Tape Library to logical
library partition” on page 210.
Important: Unassigned cartridges can exist in the TS3500 Tape Library, and in the
TS7700 Virtualization Engine MI. But “unassigned” has separate meanings and requires
separate actions from the operator in each system.
Clarifications:
Insert Notification is not supported in a high-density library. CAP must be correctly
configured to provide automated assignment of all inserted cartridges.
A cartridge that has been manually assigned to the TS7740 logical library does not
show up automatically in the TS7740 inventory. An Inventory Upload is needed to
refresh it. The Inventory Upload function is available on the Physical Volume Ranges
menu, as shown in Figure 4-18.
Cartridge assignment to a logical library is available only through the TS3500 Tape
Library Specialist web interface. The operator window does not provide this function.
Remember:
The Advanced Library Management System (ALMS) must be enabled in a Library that
is connected to a TS7740 Virtualization Engine. As a result, the cleaning mode is set to
automatic and the library will manage drive cleaning.
A cleaning cartridge is good for 50 cleaning actions.
The process to insert cleaning cartridges varies depending on the setup of the TS3500 Tape
Library. A cleaning cartridge can be inserted by using the web interface or from the operator
window. As many as 100 cleaning cartridges can be inserted in a TS3500 Tape Library.
To insert a cleaning cartridge using the TS3500 Tape Library Specialist, perform the following
steps:
1. Open the door of the I/O station and insert the cleaning cartridge.
2. Close the door of the I/O station.
3. Type the Ethernet IP address on the URL line of the browser and press Enter. The
Welcome Page opens.
4. Click Cartridges → I/O Station. The I/O Station window opens.
5. Follow the instructions in the window.
This section describes the definitions and settings that apply to the TS7700 Virtualization
Engine. The major tasks are as follows:
Definitions that are made by the IBM System Service Representative (SSR) during the
installation of the TS7700 Virtualization Engine, at your request
Definitions that are made through the TS7740 Virtualization Engine MI
Insertion of logical volumes through the TS7740 Virtualization Engine MI
As Figure 4-19 shows, even if you are accessing a stand-alone grid TS7700 Virtualization
Engine, it always displays as a grid, which in this case is a stand-alone grid.
Observe that the Composite Library sequence number, the Distributed Library sequence
number, and the cluster number are shown in this window, whether it is a stand-alone cluster
or a multi-cluster configuration.
Composite and Distributed Library sequence number, and Cluster number values are defined
to each TS7700 cluster by the IBM System Service Representative (SSR) during hardware
installation.
Remember: The Composite and Distributed Library sequence numbers must use
hexadecimal characters.
Name your grid and cluster (or clusters) using the management interface. Be sure to use
meaningful names to make resource identification as easy as possible to anyone who might
be managing or monitoring this grid through the MI.
Tip: A best practice is to make the grid name the same as the composite library name that
was defined through DFSMS.
To set the cluster names, click Configuration → Cluster Identification Properties. See
Figure 4-22 on page 216 for the Cluster Identification Properties window.
Tip: Use the same name for the cluster nickname as for the DFSMS distributed library
name of the same cluster.
Both the cluster and grid nicknames must be one to eight characters in length, composed of
alphanumeric characters. Blank spaces and the following special characters are also allowed:
@ . - +
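A minimal sketch of this naming rule follows, assuming a simple character-class check is sufficient (the management interface performs its own validation):

import re

# One to eight characters: alphanumerics, blank spaces, and the special characters @ . - +
NICKNAME_PATTERN = re.compile(r"^[A-Za-z0-9@.\-+ ]{1,8}$")

def valid_nickname(name):
    return bool(NICKNAME_PATTERN.match(name))

print(valid_nickname("GRID0"))     # True
print(valid_nickname("CLUSTR_1"))  # False: the underscore is not an allowed character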
In the Grid description or Cluster description field, a brief description (up to 63 characters)
is enough.
Both composite and distributed library sequence numbers are set into each TS7700
Virtualization Engine cluster by the IBM System Service Representative during installation.
The composite library sequence number defined in the TS7700 must match the HCD
LIBRARY-ID definitions, and the TS7700 distributed library sequence number must match the
LIBRARY-ID listed in the ISMF Tape Library definition windows.
Restriction: Do not use distributed library names starting with the letter V because on the
z/OS host, a library name cannot start with the letter V.
Important:
When using a TS3500 Tape Library, you must assign CAP at the library hardware level
before using the library with System z hosts.
When using a TS3500 Tape Library with the TS7740 Virtualization Engine, physical
volumes must fall within ranges that are assigned by the CAP to this TS7740 Virtualization
Engine logical library in the TS3500 Tape Library.
Use the window shown in Figure 4-23 on page 218 to add, modify, or delete physical volume
ranges. Unassigned physical volumes are listed in this window. When you observe an
unassigned volume that belongs to this TS7740 Virtualization Engine, add a range that
includes that volume to fix it. If an unassigned volume does not belong to this TS7740
Virtualization Engine, you must eject it and reassign it to the proper logical library in the
TS3500 Tape Library. In this case, recheck the CAP for the proper range definition in the
TS7700 Virtualization Engine.
Click the Inventory Upload button to upload the inventory from TS3500 and update any
range or ranges of physical volumes that were recently assigned to that logical library.
The VOLSER Ranges Table displays the list of defined VOLSER ranges for a given
component.
You can use the VOLSER Ranges Table to create a new VOLSER range, or modify or delete
a predefined VOLSER range.
Figure 4-23 shows status information that is displayed in the VOLSER Ranges Table as
follows:
From: The first VOLSER in a defined range
To: The last VOLSER in a defined range
Media Type: The media type for all volumes in a given VOLSER range. Possible values are
as follows:
– JA-ETC: Enterprise Tape Cartridge
– JB-ETCL: Enterprise Extended-Length Tape Cartridge
– JJ-EETC: Enterprise Economy Tape Cartridge
Home Pool: The home pool to which the VOLSER range is assigned
Use the drop-down menu in the VOLSER Ranges Table to add a new VOLSER range, or
modify or delete a predefined range:
Important: Modifying a predefined VOLSER range has no effect on physical volumes that
are already inserted and assigned to the TS7740 Virtualization Engine. Only physical
volumes that are inserted after the VOLSER range modification are affected.
The VOLSER entry fields must contain six characters. The characters can be letters,
numerals, or a space. The two VOLSERs must be entered in the same format. Corresponding
characters in each VOLSER must both be either alphabetic or numeric. For example,
AAA998 and AAB004 are of the same form, but AA9998 and AAB004 are not. The VOLSERs
that fall within a range are determined as follows: the VOLSER range is expanded such that
alphabetic characters are incremented alphabetically and numeric characters are incremented
numerically. For example, the VOLSER range ABC000 - ABD999 results in a range of 2000
VOLSERs (ABC000 - ABC999 and ABD000 - ABD999).
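The form check and range expansion described above can be sketched as follows. The code ignores the space character case for brevity, and the function names are illustrative only; the TS7700 Virtualization Engine performs the actual expansion.

import string

ALPHA, DIGITS = string.ascii_uppercase, string.digits

def same_form(first, last):
    # Both VOLSERs must be six characters, with corresponding positions
    # either both alphabetic or both numeric.
    return (len(first) == len(last) == 6 and
            all((x in ALPHA and y in ALPHA) or (x in DIGITS and y in DIGITS)
                for x, y in zip(first, last)))

def volser_count(first, last):
    # Count the VOLSERs in a range: letters advance alphabetically (base 26),
    # digits advance numerically (base 10), rightmost position varying fastest.
    assert same_form(first, last), "VOLSERs are not of the same form"
    total, span = 0, 1
    for x, y in zip(reversed(first), reversed(last)):
        charset = ALPHA if x in ALPHA else DIGITS
        total += (charset.index(y) - charset.index(x)) * span
        span *= len(charset)
    return total + 1

print(volser_count("ABC000", "ABD999"))   # 2000, as in the example above
print(same_form("AA9998", "AAB004"))      # False: mixed alphabetic and numeric positions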
Restriction: The VOLSER ranges you define on the IBM TS3500 Tape Library apply to
physical cartridges only. You can define logical volumes only through the TS7700
Virtualization Engine management interface. See 4.3.12, “Inserting logical volumes” on
page 254 for more information.
For the TS7700 Virtualization Engine, no additional definitions are required at the hardware
level other than setting up the proper VOLSER ranges at the TS3500 library.
Although you could now enter cartridges into the TS3500 library, complete the required
definitions at the host before you insert any physical cartridges into the Tape Library.
The process of inserting logical volumes into the TS7700 Virtualization Engine is described in
4.3.12, “Inserting logical volumes” on page 254.
Pooling physical volumes allows you to separate your data into distinct sets of physical
media, treating each media group in a specific way. For instance, you might want to segregate
production data from test data, or encrypt part of your data. All this can be accomplished by
defining physical volume pools appropriately. Also, you can define the reclaim parameters for
each specific pool to best suit specific needs. The TS7700 Virtualization Engine management
interface is used for pool property definitions.
Items under Physical Volumes in the management interface apply only to clusters with an
associated tape library (TS7740 Virtualization Engine). Trying to access those windows from
a TS7720 results in the HYDME0995E message, indicating that the cluster is not attached to
a physical tape library.
The Physical Volume Pool Properties table displays the encryption setting and media
properties for every physical volume pool defined for a given cluster in the grid.
You can use the Physical Volume Pool Properties table to view encryption and media settings
for all installed physical volume pools. To view and modify additional details of pool properties,
select a pool or pools from this table and then select either Pool Properties or Encryption
Settings from the drop-down menu.
Tip: Pools 1 - 32 are preinstalled. Pool 1 functions as the default pool and is used if no
other pool is selected. All other pools must be defined before they can be selected.
The Physical Volume Pool Properties table displays the media properties and encryption
settings for every physical volume pool defined for a given cluster in the grid. This table
contains two tabs: Pool Properties and Encryption Settings.
Under the Pool Properties tab:
– Pool: The pool number, which is a whole number in the range of 1 - 32, inclusive.
– Media Class: The supported media class of the storage pool, which is 3592.
– First Media (Primary): The primary media type that the pool can borrow or return to the
common scratch pool (Pool 0). Possible values are as follows:
Any 3592: Any 3592 media
JA: Enterprise Tape Cartridge (ETC)
JB: Enterprise Extended-Length Tape Cartridge (ETCL)
JJ: Enterprise Economy Tape Cartridge (EETC)
To modify pool properties, select the check box next to one or more pools listed in the
Physical Volume Pool Properties table and select Properties from the drop-down menu.
To modify encryption settings for one or more physical volume pools, use the following steps
(see Figure 4-25 and Figure 4-26 on page 224 for reference):
1. Open the Physical Volume Pools page (Figure 4-25).
Tip: A tutorial is available at the Physical Volume Pools page showing how to modify
encryption properties.
In this window, you can modify values for any of the following controls:
Encryption:
This field is the encryption state of the pool and can have the following values:
– Enabled: Encryption is enabled on the pool.
– Disabled: Encryption is not enabled on the pool.
When this value is selected, key modes, key labels, and check boxes are disabled.
Use Encryption Key Manager default key
Select this check box to populate the Key Label field by using a default key provided by the
encryption key manager.
Restriction: Your encryption key manager software must support default keys to use
this option.
This check box appears before both the Key Label 1 and Key Label 2 fields; you must select this
check box for each label to be defined using the default key.
To complete the operation, click OK. To abandon the operation and return to the Physical
Volume Pools page, click Cancel.
Reclaim thresholds
To optimize utilization of the subsystem resources, such as CPU cycles and tape drive usage,
you can inhibit space reclamation during predictable busy periods of time and adjust
reclamation thresholds to the optimum point in your TS7740 through the management
interface. The reclaim threshold is the percentage used to determine when to perform
reclamation of free space in a stacked volume. When the amount of active data on a physical
stacked volume drops below this percentage, the volume becomes eligible for reclamation.
Reclamation threshold values can be in the range of 5 - 95%, and the default value is 10%. Clicking to clear
the check box deactivates this function.
Throughout the data life cycle, new logical volumes are created and old logical volumes
become obsolete. Logical volumes are migrated to physical volumes, occupying real space
there. When a logical volume becomes obsolete, that space becomes wasted capacity on the
physical tape. In other words, the active data level of that volume decreases over time.
Reclamation copies the active data from that volume to another stacked volume in the same
pool. When the copy finishes and the volume becomes empty, it is returned to scratch status.
The cartridge is then available for use again and is returned to the common scratch pool or
directed to the specified reclaim pool, according to the Physical Volume Pool definition.
Clarification: Each reclamation task uses two tape drives (source and target) in a
tape-to-tape copy function. The TS7740 tape volume cache is not used for reclamation.
Multiple reclamation processes can run in parallel. The maximum number of reclaim tasks is
limited by the TS7740, based on the number of available back-end drives, as shown in
Table 4-1.
Table 4-1   Number of available drives and maximum reclaim tasks
Available back-end drives    Maximum reclaim tasks
3                            1
4                            1
5                            1
6                            2
7                            2
8                            3
9                            3
10                           4
11                           4
12                           5
13                           5
14                           6
15                           6
16                           7
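For planning purposes, the table can be treated as a simple lookup, as in the sketch below; the dictionary reproduces the Table 4-1 values and the function name is illustrative only.

# Maximum concurrent reclaim tasks by number of available back-end drives (Table 4-1).
MAX_RECLAIM_TASKS = {3: 1, 4: 1, 5: 1, 6: 2, 7: 2, 8: 3, 9: 3, 10: 4,
                     11: 4, 12: 5, 13: 5, 14: 6, 15: 6, 16: 7}

def max_reclaim_tasks(available_drives):
    # Each reclaim task ties up two physical drives (source and target).
    return MAX_RECLAIM_TASKS.get(available_drives, 0)

print(max_reclaim_tasks(12))   # 5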
Select a pool and click Modify Pool Properties in the drop-down menu to set reclamation
level and other policies for that pool. See Figure 4-28 on page 228.
In this example, the reclamation threshold is set to 40%, the borrow-return policy is in effect
for this pool, and reclaimed physical cartridges stay in the same Pool 5, except for borrowed
volumes, which are returned to the original pool.
Reclamation enablement
To minimize any impact on TS7700 Virtualization Engine activity, the storage management
software monitors resource utilization in the TS7700 Virtualization Engine, and enables or
disables reclamation as appropriate. You can optionally prevent reclamation activity at
specific times of day by specifying an Inhibit Reclaim Schedule in the TS7740 Virtualization
Engine management interface (Figure 4-29 on page 231 shows an example). However, the
TS7740 Virtualization Engine determines whether reclamation is to be enabled or disabled
once an hour depending on the number of available scratch cartridges and will ignore the
Inhibit Reclaim Schedule if the TS7740 Virtualization Engine is almost out of scratch
cartridges.
Using the Bulk Volume Information Retrieval (BVIR) process, you can run the query for
PHYSICAL MEDIA POOLS to monitor the amount of active data on stacked volumes to help
you plan for a reasonable and effective reclaim threshold percentage. You can also use the
Host Console Request function to obtain the physical volume counts.
Pooling is enabled as a standard feature of the TS7700 Virtualization Engine, even if you are
only using one pool. Reclamation can occur on multiple volume pools at the same time and
can process multiple tasks for the same pool. One of the reclamation methods selects the
volumes for processing based on the percentage of active data. For example, if the reclaim
threshold was set to 30% generically across all volume pools, the TS7700 Virtualization
Engine would select all the stacked volumes from 0% to 29% of remaining active data. The
reclaim tasks would then process the volumes from least full (0%) to most full (29%) up to the
defined reclaim threshold of 30%.
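The selection order described here can be sketched as follows; the (VOLSER, active-data percentage) tuples are an assumed data shape for illustration only.

def reclaim_candidates(stacked_volumes, threshold_percent=30):
    # Keep only volumes whose active data is below the reclaim threshold,
    # ordered from least full to most full, as described above.
    eligible = [v for v in stacked_volumes if v[1] < threshold_percent]
    return sorted(eligible, key=lambda v: v[1])

pool = [("P00001", 12.5), ("P00002", 87.0), ("P00003", 0.0), ("P00004", 29.9)]
print(reclaim_candidates(pool))   # P00003 (0.0), then P00001 (12.5), then P00004 (29.9)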
Individual pools can have separate reclaim policies set. The number of pools can also
influence the reclamation process because the TS7740 Virtualization Engine always
evaluates the stacked media starting with Pool 1.
The scratch count for physical cartridges also affects reclamation. The scratch state of pools
is assessed as follows:
1. A pool enters a Low scratch state when it has access to fewer than 50, but two or more,
empty stacked volumes.
2. A pool enters a Panic scratch state when it has access to less than two empty stacked
volumes.
“Access to” includes any borrowing capability, which means that if the pool is configured for
borrowing, and if there are more than 50 cartridges in the common scratch pool, the pool will
not enter the Low scratch state.
Whether borrowing is configured or not, as long as each pool has two scratch cartridges, the
Panic Reclamation mode is not entered. Panic Reclamation mode is entered when a pool has
less than two scratch cartridges and no more scratch cartridges can be borrowed from any
other pool defined for borrowing. Borrowing is described in “Physical volume pooling” on
page 68.
Important: Be aware that a physical volume pool running out of scratch might stop mounts
in the TS7740, impacting your operation. Mistakes in pool configuration (media type,
borrow and return, home pool, and so on) or operating with an empty common scratch pool
might lead to this situation.
Consider that one reclaim task consumes two drives for the data move, and CPU cycles.
When a reclamation starts, these drives are busy until the volume being reclaimed is empty. If
you raise the reclamation threshold level too high, the result is a larger amount of data to be
moved, with a resultant penalty in resources that are also needed for recalls and premigration.
The default setting for the reclamation threshold level is 10%, and generally you should operate
with a reclamation threshold level in the range of 10 - 30%. Also see Chapter 8, “Operation” on
page 451 to fine-tune this function for your peak load by using the newer host functions.
Pools in either scratch state (Low or Panic state) get priority for reclamation.
Note: For a priority move initiated from the TS7740 management interface, the operator is
given the option to honor the Inhibit Reclaim schedule.
Tips:
A physical drive is considered idle when no activity has occurred for the previous ten
minutes.
The Inhibit Reclaim Schedule is not honored by the Secure Data Erase function for a
volume that has no active data.
The Schedules table (Figure 4-30) displays the day, time, and duration of any scheduled
reclamation interruption. All inhibit reclaim dates and times are first displayed in Coordinated
Universal Time (UTC) and then in local time. Use the drop-down menu on the Schedules
table to add a new Reclaim Inhibit schedule, or modify or delete an existing schedule, as
shown in Figure 4-29.
The Reclaim Threshold Percentage is initially set at 10%. This percentage directly affects the
amount of data to be moved by the reclaim operation. As a general rule, try not to go above
30%. You can set a Reclaim Threshold Percentage of 0% to prevent reclamation in this pool
(assuming that all of the other reclamation criteria are also set to 0).
Be aware that a multiple of two drives are involved in the reclamation process and because of
this resource usage, you should not specify high percentages. BVIR reports can help you
adjust the percentages. See 9.9, “Bulk Volume Information Retrieval” on page 711 for details.
The MOUNT FROM CATEGORY command is not exclusively used for scratch mounts.
Therefore, the TS7700 Virtualization Engine cannot assume that any MOUNT FROM
CATEGORY is for a scratch volume.
The Fast Ready attribute identifies a category that is used to supply scratch mounts. For
z/OS, the category numbers used depend on your definitions, as described below. The Fast
Ready definition is done through the management interface. Figure 4-32 shows the Fast
Ready Categories window.
The actual category hexadecimal number depends on the software environment and on the
definitions in the SYS1.PARMLIB member DEVSUPxx for library partitioning. Also, the
DEVSUPxx member must be referenced in IEASYSxx member to be activated.
Restriction: You cannot use category name 0000 or FFxx (where xx equals 0 - 9 or A - F)
because 0000 represents a null value, and “FFxx” is reserved for hardware.
Figure 4-33 shows the Add Category window (which opens by selecting Add in the Action
drop-down menu shown in Figure 4-32 on page 233).
When defining a Fast Ready category, define whether volumes in this category will expire or
not. If Set expiration is selected, define the expiration time for this category (a value of 00 is
rejected). Selecting the Expire Hold check box puts the non-expired volumes in this category
in a hold state, meaning that they cannot be mounted or assigned to other categories until the
expiration time is reached. See 4.3.6, “Defining the logical volume expiration time” on page 234 for
details concerning this aspect of Fast Ready categories.
Tip: Add a comment to DEVSUPnn to make sure that the Fast Ready categories are
updated when the category values in DEVSUPnn are changed. They need to be in
sync at all times.
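A simple way to catch drift between the two lists is a cross-check like the sketch below. The category values shown are hypothetical examples only; substitute the values from your DEVSUPxx member and from your Fast Ready list on the management interface:

devsup_categories = {"0012", "0022"}   # hypothetical values from DEVSUPxx
fast_ready_categories = {"0012"}       # hypothetical values from the MI Fast Ready list

missing = devsup_categories - fast_ready_categories
if missing:
    print("Scratch categories in DEVSUPxx without a Fast Ready definition:",
          sorted(missing))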
See Appendix G, “Library Manager volume categories” on page 905 for the scratch mount
category for each software platform. In addition to the z/OS DFSMS default value for the
scratch mount category, you can define your own scratch category to the TS7700
Virtualization Engine. In this case, you should also add your own scratch mount category to
the Fast Ready category list.
For example, assume that you have 20,000 logical volumes in scratch status at any point in
time, that the average amount of data on a logical volume is 400 MB, and that the data
compresses at a 2:1 ratio. The space occupied by the data on those scratch volumes is
4,000,000 MB or the equivalent of 14 3592 JA cartridges. By using the Delete Expired
Volume Data setting, you could reduce the number of cartridges required in this example by
14. The Expire Time parameter specifies the amount of time, in hours, days, or weeks, that the
data continues to be managed by the TS7700 Virtualization Engine after a logical volume is
returned to scratch before the data associated with the logical volume is deleted. A minimum
of 1 and a maximum of 32,767 hours (approximately 195 weeks) can be specified.
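The arithmetic in this example can be verified with a short calculation. The 300 GB native capacity per 3592 JA cartridge is an assumption used here for illustration; it is consistent with the 14-cartridge figure above:

import math

volumes = 20_000
avg_volume_mb = 400
compression_ratio = 2
ja_capacity_mb = 300_000            # assumed native JA capacity (~300 GB)

compressed_mb = volumes * avg_volume_mb / compression_ratio
print(compressed_mb)                              # 4000000.0 MB
print(math.ceil(compressed_mb / ja_capacity_mb))  # 14 cartridges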
Remember:
Fast-Ready categories are global settings within a multi-cluster grid. Therefore, each
defined Fast-Ready category and the associated delete expire settings are valid on
each cluster of the grid.
The Delete Expired Volume Data setting applies also to TS7720 clusters. If it is not
used, logical volumes that have been returned to scratch will still be considered active
data, allocating physical space in the tape volume cache. Thus, setting an expiration
time on TS7720 is important to maintain an effective cache usage by deleting expired
data.
In older code levels, specifying a value of zero worked as the No Expiration option. Zero in this
field now causes an error message, as shown in Figure 4-34 on page 236, because with no
expiration the data associated with the volume is managed as it was before the addition of this
option, meaning that it is never deleted. In essence, specifying a value (other than zero)
provides a “grace period” from when the logical volume is returned to scratch until its
associated data is eligible for deletion. A separate Expire Time can be set for each category
defined as Fast Ready.
Expire Time
Figure 4-33 on page 234 shows the number of hours or days in which logical volume data
categorized as Fast Ready will expire. If the field is set to 0, the categorized data will never
expire. The minimum Expire Time is 1 hour and the maximum Expire Time is 195 weeks,
1365 days, or 32,767 hours. The Expire Time default value is 24 hours.
Establishing the Expire Time for a volume occurs as a result of specific events or actions. The
possible events or actions and their effect on the Expire Time of a volume are as follows:
A volume is mounted.
The data that is associated with a logical volume will not be deleted, even if it is eligible, if
the volume is mounted. Its Expire Time is set to zero, meaning it will not be deleted. It will
be re-evaluated for deletion when its category is subsequently assigned.
A volume's category is changed.
Whenever a volume is assigned to a category, including assignment to the same category
that it currently is in, it is re-evaluated for deletion.
Expiration.
If the category has a non-zero Expire Time, the volume's data is eligible for deletion after
the specified time period, even if its previous category had a different non-zero Expire
Time.
No action.
If the volume's previous category had a non-zero Expire Time or even if the volume was
already eligible for deletion (but has not yet been selected to be deleted) and the category
it is assigned to has an Expire Time of zero, the volume's data is no longer eligible for
deletion. Its Expire Time is set to zero.
After a volume's Expire Time has been reached, it is eligible for deletion. Not all data that is
eligible for deletion will be deleted in the hour it is first eligible. Once an hour, the TS7700
Virtualization Engine selects up to 500 eligible volumes for data deletion. The volumes are
selected based on the time that they became eligible, with the oldest ones being selected
first. Up to 500 eligible volumes for the TS7700 Virtualization Engine in the library are
selected first.
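The hourly selection can be pictured as taking the oldest eligible volumes first, up to the limit of 500. The sketch below shows the assumed ordering, not the TS7700 internals:

import heapq

def select_for_deletion(eligible, limit=500):
    # eligible: list of (eligible_since_timestamp, volser) tuples.
    # Oldest eligibility times are selected first, up to the hourly limit.
    return heapq.nsmallest(limit, eligible)

queue = [(1_700_003_600, "L00002"), (1_700_000_000, "L00001"), (1_700_007_200, "L00003")]
print([volser for _, volser in select_for_deletion(queue, limit=2)])   # ['L00001', 'L00002']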
Expire Hold
When Expire Hold is set for a volume and its expire time has not yet passed, the volume is
held. An unexpired volume in a category with the hold attribute set will not be selected for a
mount.
Checking Expire Hold (see Figure 4-33 on page 234) means that volumes in this Fast Ready
category cannot be moved to another category or mounted before their data expires. Not
checking it means no expire hold, allowing volumes within this Fast Ready category to be
freely mounted or moved to other categories before their data expires.
These construct names are passed down from the z/OS host and stored with the logical
volume. The actions defined for each construct are performed by the TS7700 Virtualization
Engine. For non-z/OS hosts, the constructs can be assigned manually to logical volume
ranges.
Storage Groups
On the z/OS host, the Storage Group construct determines into which tape library a logical
volume is written. Within the TS7740 Virtualization Engine, the Storage Group construct
allows you to define the storage pool to which you want the logical volume to be premigrated.
Even before you define the first storage group, there is always at least one storage group
present: The default storage group, which is identified by eight dashes (--------). This
storage group cannot be deleted, but you can modify it to point to another Storage Pool. You
can define up to 256 storage groups, including the default.
The Storage Groups table displays all existing storage groups available for a given cluster.
You can use the Storage Groups table to create a new storage group, modify an existing
storage group, or delete a storage group. The following status information is listed in the
Storage Groups table:
Name: The name of the storage group
Each storage group within a cluster must have a unique name. Valid characters for this
field are as follows:
A-Z Alphabetic characters
0-9 Numerals
$ Dollar sign
@ At sign
* Asterisk
# Number sign
% Percent
Primary Pool: The primary pool for premigration
Only validated physical primary pools can be selected. If the cluster does not possess a
physical library, this column will not be visible and the management interface will
categorize newly created storage groups using pool 1.
Description: A description of the storage group
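The character rules listed above can be checked before a name is typed into the management interface. The following sketch assumes a maximum length of eight characters (matching SMS storage group naming); the management interface performs its own validation:

import re

STORAGE_GROUP_NAME = re.compile(r"^[A-Z0-9$@*#%]{1,8}$")

def is_valid_storage_group_name(name):
    return bool(STORAGE_GROUP_NAME.match(name))

print(is_valid_storage_group_name("BACKUP$1"))   # True
print(is_valid_storage_group_name("backup-1"))   # False (lowercase and hyphen not allowed)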
Use the drop-down menu in the Storage Groups table to add a new storage group, or
modify or delete an existing storage group.
Restriction: If the cluster is not attached to a physical library, the Primary Pool field will not
be available in the Add or Modify menu options.
To modify an existing storage group, click the radio button from the Select column that
appears adjacent to the name of the storage group you want to modify. Select Modify from
the drop-down menu. Complete the fields for information that will be displayed in the Storage
Groups table.
To delete an existing storage group, select the button in the Select column next to the name of
the storage group you want to delete. Select Delete from the drop-down menu. You are
prompted to confirm your decision to delete a storage group. If you select Yes, the storage
group will be deleted. If you select No, your request to delete is cancelled.
Important: Do not delete any existing storage group if there are still logical volumes
assigned to this storage group.
Management Classes
You can define, through the Management Class, whether you want to have a dual copy of a
logical volume within the same TS7700 Virtualization Engine. In a grid configuration, you will
most likely choose to copy logical volumes over to the other TS7700 cluster instead of
creating a second copy in the same TS7700 Virtualization Engine. In a stand-alone
configuration, however, you might want to protect against media failures by using the dual
copy capability. The second copy of a volume can be in a pool that is designated as an Export
Copy pool. See 2.4.4, “Copy Export” on page 76 for more information.
If you want to have dual copies of selected logical volumes, you must use at least two Storage
Pools because the copies cannot be written to the same Storage Pool as the original logical
volumes.
The Current Copy Policy table displays the copy policy in force for each component of the
grid. If no Management Class is selected, this table will not be visible. You must select a
Management Class from the Management Classes Table to view copy policy details.
The Management Classes Table (Figure 4-37) displays defined Management Class copy
policies that can be applied to a cluster.
You can use the Management Classes Table to create a new Management Class, modify an
existing Management Class, or delete one or more existing Management Classes. The
default Management Class can be modified, but cannot be deleted. The default name of
Management Class uses eight dashes (--------).
Use the drop-down menu in the Management Classes table to add a new Management Class,
modify an existing Management Class, or delete one or more existing Management Classes.
To add a new Management Class, select Add from the drop-down menu and click Go.
Complete the fields for information that will be displayed in the Management Classes Table.
You can create up to 256 Management Classes per TS7700 Virtualization Engine Grid.
Tip: If the cluster is not attached to a physical library, the Secondary Pool field will not be
available in the Add option.
The Copy Action drop-down menu is adjacent to each cluster in the TS7700 Virtualization
Engine Grid. Use the Copy Action menu to select, for each component, the copy mode to be
used in volume duplication. Actions available from this menu are as follows:
No Copy: No volume duplication will occur if this action is selected.
Rewind/Unload (RUN): Volume duplication occurs when the Rewind Unload command is
received. The command returns only after the volume duplication completes successfully.
Deferred: Volume duplication occurs at a later time based on the internal schedule of the
copy engine.
To modify an existing Management Class, select the check box in the Select column that is in
the same row as the name of the Management Class you want to modify. You can modify only
one Management Class at a time. Select Modify from the drop-down menu and click Go. Of
the fields listed in the Management Classes Table, or available from the Copy Action
drop-down menu, you are able to change all of them except the Management Class name.
To delete one or more existing Management Classes, select the check box in the Select
column that is in the same row as the name of the Management Class you want to delete.
Select multiple check boxes to delete multiple Management Classes. Select Delete from the
drop-down menu and click Go.
Tip: Do not delete any existing management class if there are still logical volumes
assigned to this management class.
Storage Classes
By using the Storage Class construct, you can influence when a logical volume is removed
from cache.
A default Storage Class is always available. It is identified by eight dashes (--------) and
cannot be deleted. Use the window shown in Figure 4-40 on page 244 to define, modify, or
delete a storage class used by the TS7700 Virtualization Engine to automate storage
management through classification of data sets and objects.
The Storage Classes table lists storage classes that are defined for each component of the
grid.
The Storage Classes table displays defined storage classes available to control data sets and
objects within a cluster. Although storage classes are visible from all TS7700 clusters, only
those clusters attached to a physical library can alter tape volume cache preferences.
TS7700 clusters that do not possess a physical library do not remove physical volumes from
the tape cache, so the tape volume cache preference for these clusters is always Preference
Level 1.
Use the Storage Classes table to create a new storage class, or modify or delete an existing
storage class. The default storage class can be modified, but cannot be deleted. The default
storage class uses eight dashes as the name (--------).
Important: Care must be taken when assigning volumes to this group to avoid
cache overruns.
Volume Copy Retention Time: The minimum amount of time (in hours) after a logical
volume copy was last accessed that the copy can be removed from cache.
The copy is said to be expired after this time has passed, and the copy then becomes a
candidate for removal. Possible values include any in the range of 0 to 65536. The default
is 0.
Tip: This field is only visible if the selected cluster does not attach to a physical library
and all the clusters in the grid operate at a microcode level of 8.7.0.xx or higher.
If the Volume Copy Retention Group displays a value of Pinned, this field is disabled.
Description: A description of the storage class definition
The value in this field must be 1 - 70 characters in length.
Use the drop-down menu in the Storage Classes Table to add a new storage class or modify
or delete an existing storage class.
To add a new storage class, select Add from the drop-down menu. Complete the fields for the
information that will be displayed in the Storage Classes Table. You can create up to 256
storage classes per TS7700 Virtualization Engine Grid.
To modify an existing storage class, click the radio button from the Select column that appears
in the same row as the name of the storage class you want to modify. Select Modify from the
drop-down menu.
To delete an existing storage class, click the radio button from the Select column that appears
in the same row as the name of the storage class you want to delete. Select Delete from the
drop-down menu. A dialog box opens where you confirm the storage class deletion. Select
Yes to delete the storage class, or select No to cancel the delete request.
Important: Do not delete any existing storage class if there are still logical volumes
assigned to this storage class.
Data Classes
From a z/OS perspective (SMS managed tape) the DFSMS Data Class defines the following
information:
Media type parameters
Recording technology parameters
Compaction parameters
For the TS7700 Virtualization Engine, only the Media type, Recording technology, and
Compaction parameters are used. The use of larger logical volume sizes is controlled through
Data Class.
A default Data Class is always available. It is identified by eight dashes (--------) and cannot
be deleted.
The Data Classes table (Figure 4-42) displays the list of Data Classes defined for each cluster
of the grid.
You can use the Data Classes table to create a new data class, or modify or delete an existing
Data Class. The default Data Class can be modified, but cannot be deleted. The default Data
Class shows the name as eight dashes (--------).
Restriction: Support for 25,000 MiB logical volume maximum size is available by RPQ
only.
Logical WORM (Yes or No): Whether Logical WORM (write once, read many) is set for
the Data Class. Logical WORM is the virtual equivalent of WORM tape media, achieved
through Licensed Internal Code emulation.
Description: A description of the Data Class definition
The value in this field must be 0 - 70 characters in length.
To add a new Data Class, select Add from the drop-down menu and click Go. Complete the
fields for information that will be displayed in the Data Classes Table.
Tip: You can create up to 256 Data Classes per TS7700 Virtualization Engine Grid.
To modify an existing Data Class, select the check box in the Select column that appears in
the same row as the name of the Data Class you want to modify. Select Modify from the
drop-down menu and click Go. Of the fields listed in the Data Classes Table, you will be able
to change all of them except the default Data Class name.
To delete an existing Data Class, click the radio button from the Select column that appears in
the same row as the name of the Data Class you want to delete. Select Delete from the
drop-down menu and click Go. A dialog box opens where you can confirm the Data Class
deletion. Select Yes to delete the Data Class, or select No to cancel the delete request.
Important: Do not delete any existing data class if there are still logical volumes assigned
to this data class.
Clarification: Cache enablement license key entry applies only on a TS7740 Virtualization
Engine configuration because a TS7720 Virtualization Engine does not have a 1-TB cache
enablement feature (FC5267).
The amount of disk cache capacity and performance capability are enabled using feature
license keys. You will receive feature license keys for the features that you have ordered.
Each feature increment allows you to tailor the subsystem to meet your disk cache and
performance needs.
Use the Feature Licenses window (Figure 4-44) to activate feature licenses in the TS7700
Virtualization Engine. To open the window, select Activate New Feature License from the list
and click Go. Enter the license key into the fields provided and select Activate.
To remove a license key, select the feature license to be removed, select Remove Selected
Feature License from the list, and click Go.
Important: Do not remove any installed peak data throughput features because removal
can affect host jobs.
The encryption key manager assists encryption-enabled tape drives in generating, protecting,
storing, and maintaining encryption keys that are used to encrypt information being written to
and decrypt information being read from tape media (tape and cartridge formats).
The following settings are used to configure the TS7700 Virtualization Engine connection to
an Encryption Key Manager (Figure 4-46 on page 250):
Primary key manager address: The key manager server name or IP address that is
primarily used.
Primary key manager port: The port number of the primary key manager. The default
value is 3801. This field is only required if a primary key address is used.
Secondary key manager address: The key manager server name or IP address that is
used when the primary key manager is unavailable.
Secondary key manager port: The port number of the secondary key manager. The
default value is 3801. This field is required only if a secondary key address is used.
Preferred DNS server: The Domain Name Server (DNS) that is primarily used. DNS
addresses are only needed if you specify a symbolic domain name for one of the key
manager addresses rather than a numeric IP address. If you need to specify a DNS, be
sure to specify both a primary and an alternate so you do not lose access to your
Encryption Key Manager because of one of the DNS servers being down or inaccessible.
This address can be in IPv4 format.
Alternate DNS server: The Domain Name Server that is used in case the preferred DNS
server is unavailable. If a preferred DNS server is specified, specify an alternate DNS also.
This address can be in IPv4 format.
Using the Ping Test: Use the Ping Test button to check the component network connection to
a key manager after changing a component's address or port. If you change a key manager
address or port, use the ping test to confirm that the key manager can still be reached.
Click the Submit button to save changes to any of the settings. To discard changes and
return the field settings to their current values, click the Reset button.
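The field relationships described above can be summarized in a small settings structure. This is an assumed model for planning purposes, not the management interface's own validation:

from dataclasses import dataclass
from typing import Optional

@dataclass
class KeyManagerSettings:
    primary_address: Optional[str] = None
    primary_port: int = 3801            # default port
    secondary_address: Optional[str] = None
    secondary_port: int = 3801
    preferred_dns: Optional[str] = None
    alternate_dns: Optional[str] = None

    def warnings(self):
        msgs = []
        if not (self.primary_address or self.secondary_address):
            msgs.append("No key manager address is configured.")
        if self.preferred_dns and not self.alternate_dns:
            msgs.append("Specify an alternate DNS server in addition to the preferred one.")
        return msgs

print(KeyManagerSettings(primary_address="ekm1.example.com",
                         preferred_dns="198.51.100.1").warnings())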
Use the window to configure SNMP traps that will log operation history events such as login
occurrences, configuration changes, status changes (vary on or off and service prep), shut
down, and code updates. SNMP is a networking protocol that allows an IBM Virtualization
Engine TS7700 to automatically gather and transmit information about alerts and status to
other entities in the network.
Destination Settings
Use the Destination Settings table to add, modify, or delete a destination for SNMP trap logs.
You can add, modify, or delete a maximum of 16 destination settings at one time. Settings that
can be configured are as follows:
IP Address: This setting is the IP address of the SNMP server in IPv4 format. A value in
this field is required.
Port: This port is where the SNMP trap logs are sent. This value must be a number
between 0 and 65535. A value in this field is required.
Restriction: A user with read-only permissions cannot modify the contents of the
Destination Settings table.
Use the Select Action drop-down menu in the Destination Settings table to add, modify, or
delete an SNMP trap destination. Destinations are changed in the VPD (vital product data) as
soon as they are added, modified, or deleted. These updates do not depend on selection of
the Submit Changes button:
Add SNMP destination: Select this menu item to add an SNMP trap destination for use in
the IBM Virtualization Engine TS7700 Grid.
Modify SNMP destination: Select this menu item to modify an SNMP trap destination that
is used in the IBM Virtualization Engine TS7700 Grid.
Confirm delete SNMP destination: Select this menu item to delete an SNMP trap
destination used in the IBM Virtualization Engine TS7700 Grid.
During logical volumes entry processing on z/OS, even if the library is online and operational
for a given host, at least one device needs to be online (or have been online) for that host for
the library to be able to send the volume entry attention interrupt to that host. If the library is
online and operational, but there are no online devices to a given host, that host will not
receive the attention interrupt from the library unless a device had previously been varied
online.
To work around this limitation, ensure that at least one device is online (or had been online) to
each host or use the LIBRARY RESET,CBRUXENT command to initiate cartridge entry
processing from the host. This task is especially important if you only have one host attached
to the library that owns the volumes being entered. In general, after you have entered
volumes into the library, if you do not see the expected CBR36xxI cartridge entry messages
being issued, you can use the LIBRARY RESET,CBRUXENT command from z/OS to initiate
cartridge entry processing.
Previously, as soon as OAM started, if volumes were in the Insert category, entry processing
started without giving you a chance to stop it the first time OAM was started.
Now, the LI DISABLE,CBRUXENT command can be used without starting the OAM address
space. This approach gives you the chance to stop entry processing before the OAM address
space initially starts.
The table at the top of Figure 4-49 on page 254 shows the current information about the
number of logical volumes in the TS7700 Virtualization Engine:
Currently Inserted: The total number of logical volumes inserted into the TS7700
Virtualization Engine
Maximum Allowed: The total maximum number of logical volumes that can be inserted
Available Slots: The available slots remaining for logical volumes to be inserted, which is
obtained by subtracting the Currently Inserted logical volumes from the Maximum Allowed
To view the current list of logical volume ranges in the TS7700 Virtualization Engine Grid,
enter a logical volume range and click Show.
Use the following fields if you want to insert a new logical volume range:
Starting VOLSER: This is the first logical volume to be inserted. The range for inserting
logical volumes begins with this VOLSER number.
Quantity: Select this option to insert a set amount of logical volumes beginning with
Starting VOLSER. Enter the quantity of logical volumes to be inserted in the adjacent text
field. You can insert up to 10,000 logical volumes at one time.
Ending VOLSER: Select this option to insert a range of logical volumes. Enter the ending
VOLSER number in the adjacent text field.
Initially owned by: Indicates the name of the cluster that will own the new logical volumes.
Select a cluster from the drop-down menu.
Media type: Indicates the media type of the logical volume (volumes). Possible values are
as follows:
– Cartridge System Tape (400 MiB)
– Enhanced Capacity Cartridge System Tape (800 MiB)
Set Constructs: Select this check box to specify constructs for the new logical volume (or
volumes), then use the drop-down menu below each construct to select a predefined
construct name. You can specify the use of any or all of the following constructs:
– Storage Group
– Storage Class
– Data Class
– Management Class
Important: When using z/OS, do not specify constructs when the volumes are added;
instead, they are assigned during job processing when a volume is mounted.
To insert a range of logical volumes, complete the fields listed and click Insert. You are
prompted to confirm your decision to insert logical volumes. To continue with the insert
operation, confirm the prompt.
Restriction: You can insert up to ten thousand (10,000) logical volumes at one time. This
applies to both inserting a range of logical volumes and inserting a quantity of logical
volumes.
These grid operational characteristics must have been carefully considered and thoroughly
planned by you and your IBM representative. The infrastructure needs must be addressed in
advance.
Tip: All clusters in a multi-cluster grid must have FC4015 (Grid Enablement) installed. This
includes existing stand-alone clusters that are being used to create a multi-cluster grid.
(Figure: two-cluster grid configuration examples. Each shows Cluster 0 and Cluster 1
attached to System z hosts through FICON and interconnected over a LAN/WAN: one
configuration with a TS7740 and tape library at each cluster, and a hybrid configuration with a
TS7740 and tape library at Cluster 0 and a TS7720 at Cluster 1.)
Each TS7700 cluster provides two or four FICON host attachments and 256 virtual tape
device addresses. The clusters in a grid configuration are connected together through two or
four 1 Gbps copper (RJ-45) or shortwave fiber Ethernet links (single- or dual-ported).
Alternatively two longwave fiber Ethernet links can be provided. The Ethernet links are used
for the replication of data between clusters, and passing control information and access
between a local cluster’s virtual tape device and a logical volume’s data in a remote cluster’s
TS7700 Cache.
The data consistency point is defined in the Management Classes construct definition
through the management interface. You can perform this task only for an existing grid system.
In a stand-alone cluster configuration, you will see only your stand-alone cluster in the Modify
Management class definition. Figure 4-52 on page 259 shows the Modify Management Class
window.
Remember: With a stand-alone cluster configuration, copy mode must be set to Rewind
Unload.
To open the Management Classes window (Figure 4-53), click Constructs > Management
Classes under the Welcome Admin menu. Select the Management Class name and select
Modify from the Select Action drop-down menu.
As shown in Figure 4-54, you can choose among three consistency points per cluster:
No Copy (NC): No copy is made to this cluster.
Rewind Unload (RUN): A valid version of the logical volume has been copied to this cluster
as part of the volume unload processing.
Deferred (DEF): A replication of the modified logical volume is made to this cluster after the
volume has been unloaded.
Table 4-3 provides an overview of the possible cluster Copy Data Consistency Points for the
Management Class in a two-cluster grid, and their consequences.
Table 4-3 Possible settings of the Data Consistency Points combinations: two-cluster grid
Cluster 0 / Cluster 1: Consequence
RUN / RUN: Use this setting to have both clusters maintain a consistent copy when the
Rewind/Unload is acknowledged back to the host. The grid will manage the utilization of each
cluster.
RUN / DEF: This ensures that Cluster 0's site will have a valid copy of the logical volume at job
completion. Cluster 1 might have a valid copy at job completion, or sometime after job
completion, depending on the virtual drive address selected and the override settings.
DEF / RUN: This ensures that Cluster 1's site will have a valid copy of the logical volume at job
completion. Cluster 0 might have a valid copy at job completion, or sometime after job
completion, depending on the virtual drive address selected and the override settings.
DEF / DEF: Use this setting if you do not care to which cluster the initial creation of the virtual
volume is directed. A copy will be made as soon as possible after Rewind/Unload is
acknowledged back to the host.
RUN / NC: Use this setting to specify Cluster 0 to have the initial valid copy when the
Rewind/Unload is acknowledged back to the host. No copy will be made to Cluster 1. Note
that the “force” override and virtual device address might have the system create a copy at
Cluster 1.
NC / RUN: Use this setting to specify Cluster 1 to have the initial valid copy when the
Rewind/Unload is acknowledged back to the host. No copy will be made to Cluster 0.
DEF / NC: Use this setting to specify Cluster 0 to have the initial valid copy when the
Rewind/Unload is acknowledged back to the host. No copy will be made to Cluster 1.
NC / DEF: Use this setting to specify Cluster 1 to have the initial valid copy when the
Rewind/Unload is acknowledged back to the host. No copy will be made to Cluster 0.
NC / NC: This setting is not supported because it implies that the volumes should not be
consistent anywhere. You will receive an error message when trying to specify this
combination on the TS7700 Virtualization Engine management interface.
All data consistency rules shown in Table 4-3 on page 260 also apply to three-, four-, five-,
and six-cluster grid configurations.
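The one combination that Table 4-3 rules out can be checked with a short validation, sketched below under the assumption that only the RUN, DEF, and NC modes are in play:

VALID_MODES = {"RUN", "DEF", "NC"}

def validate_copy_modes(modes_per_cluster):
    if not set(modes_per_cluster) <= VALID_MODES:
        raise ValueError("Copy mode must be RUN, DEF, or NC")
    if all(mode == "NC" for mode in modes_per_cluster):
        raise ValueError("At least one cluster must have a RUN or DEF consistency point")
    return True

print(validate_copy_modes(["RUN", "DEF"]))    # True
# validate_copy_modes(["NC", "NC"])           # raises ValueError, as the MI rejects it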
The Management Class window shows all clusters in the grid and the Data Consistency
settings for each defined management class. The following terminology applies:
RUN: The designated cluster has a consistent copy when rewind-unload operation is
acknowledged back to the host.
DEF: The designated cluster has a consistent copy sometime after job completion.
NC: No copy is made to the designated cluster.
Restriction: In a Geographically Dispersed Parallel Sysplex (GDPS), all three Copy Policy
Override settings (cluster overrides for certain I/O and copy operations) must be selected
on each cluster to ensure that wherever the GDPS primary site is, this TS7700
Virtualization Engine cluster is preferred for all I/O operations.
If the TS7700 Virtualization Engine cluster of the GDPS primary site fails, you must
perform the following recovery actions:
1. Vary virtual devices from a remote TS7700 Virtualization Engine cluster online from the
primary site of the GDPS host.
2. Manually invoke, through the TS7700 Virtualization Engine management interface, a
Read/Write Ownership Takeover, unless Automated Ownership Takeover Manager
(AOTM) has already transferred ownership.
If you have a grid with two or more clusters, you can define scratch mount candidates. For
example in a hybrid configuration, the SAA function can be used to direct certain scratch
allocations (workloads) to one or more TS7720 Virtualization Engines for fast access, while
other workloads can be directed to TS7740 Virtualization Engines for archival purposes.
Clusters not included in the list of scratch mount candidates are not used for scratch mounts
for the associated management class unless those clusters are the only clusters known to be
available and configured to the host.
Tip: JES3 does not support DAA or SAA. If the composite library is being shared
between JES2 and JES3, do not enable SAA through the Scratch Mount Candidate
option on the management classes assigned to JES3 jobs. This could cause job
abends to occur in JES3.
SAA is enabled with the host Library Request command using the following LIBRARY
REQUEST command:
LIBRARY REQUEST,library-name,SETTING,DEVALLOC,SCRATCH,ENABLE
where library-name = composite-library-name
Disabled is the default setting.
Ensure that an adequate number of devices connected to the scratch mount candidate
clusters are online at the host.
As shown in Figure 4-57, by default all clusters are chosen as scratch mount candidates.
Select which clusters are candidates by management class. If no clusters are checked, the
TS7700 will default to all clusters as candidates.
Figure 4-57 Scratch mount candidate list in Add Management Class window
Figure 4-59 Retain Copy Mode in the Add Management Class window
This function introduces a concept of grouping clusters together into families. Using cluster
families, you will be able to define a common purpose or role to a subset of clusters within a
grid configuration. The role assigned, for example production or archive, will be used by the
TS7700 microcode to make improved decisions for tasks such as replication and tape volume
cache selection. For example, clusters in a common family are favored for tape volume cache
selection, or replication can source volumes from other clusters within its family before using
clusters outside of its family.
To view or modify cluster family settings, first verify that these permissions are granted to your
assigned user role. If your user role includes cluster family permissions, click the Modify
button to perform the following actions:
Add a family
Move a cluster
Delete a family
Save changes
Tip: You cannot view or modify cluster family settings if permission to do so is not granted
to your assigned user role.
Add a family
Click Add to create a new cluster family. A new cluster family placeholder is created to the
right of any existing cluster families. Enter the name of the new cluster family in the active
Name text box. Cluster family names must be 1 - 8 characters in length and composed of
Unicode characters. Each family name must be unique. To add a cluster to the new cluster
family, move a cluster from the Unassigned Clusters area by following instructions in “Move a
cluster” on page 268.
Move a cluster
You can move one or more clusters between existing cluster families to a new cluster family
from the Unassigned Clusters area, or to the Unassigned Clusters area from an existing
cluster family:
Select a cluster: A selected cluster is identified by its highlighted border. Select a cluster
from its resident cluster family or the Unassigned Clusters area by using one of the
following ways:
– Clicking the cluster
– Pressing the Spacebar
– Pressing Shift while selecting clusters to select multiple clusters at one time
– Pressing Tab to switch between clusters before selecting a cluster
Move the selected cluster or clusters by one of the following ways:
– Clicking a cluster and dragging it to the destination cluster family or the Unassigned
Clusters area
– Using the keyboard arrow keys to move the selected cluster or clusters right or left
Restriction: An existing cluster family cannot be moved within the cluster families page.
Delete a family
You can delete an existing cluster family. Click the X in the top, right corner of the cluster
family you want to delete. If the cluster family you attempt to delete contains any clusters, a
warning message is displayed. Click OK to delete the cluster family and return its clusters to
the Unassigned Clusters area. Click Cancel to abandon the delete action and retain the
selected cluster family.
Save changes
Click Save to save any changes made to the Cluster families page and return it to read-only
mode.
Restriction: Each cluster family must contain at least one cluster. If you attempt to save
changes and a cluster family does not contain any clusters, an error message is displayed
and the Cluster families page remains in edit mode.
Clarification: Host writes to the TS7720 cluster and inbound copies continue during
this state.
Clarification: New host allocations do not choose a TS7720 cluster in this state as a
valid tape volume cache candidate. New host allocations issued to a TS7720 cluster in
this state choose a remote tape volume cache instead. If all valid clusters are in this
state or unable to accept mounts, the host allocations fail. Read mounts can choose the
TS7720 cluster in this state, but modify and write operations fail. Copies inbound to this
TS7720 cluster are queued as deferred until the TS7720 cluster exits this state.
Release 1.7 introduces enhancements that build upon the TS7700 Hybrid removal policy
introduced in release 1.6. These enhancements allow you more control over the removal of
content from a TS7720 as the active data reaches full capacity. To guarantee that data will
always reside in a TS7720 Virtualization Engine or will reside for at least a minimal amount of
time, a pinning time must be associated with each removal policy. This pin time, in hours,
allows volumes to remain in a TS7720 Virtualization Engine tape volume cache for at least x
hours before they become candidates for removal, where x is between 0 and 65,536. A pinning
time of zero means there is no minimal pinning requirement. In addition to pin time, three policies
are available for each volume within a TS7720 Virtualization Engine. These policies are as
follows:
Pinned
The copy of the volume is never removed from this TS7720 cluster. The pinning duration is
not applicable and is implied as infinite. After a pinned volume is moved to scratch, it
becomes a priority candidate for removal similarly to the next two policies. This policy must
be used cautiously to prevent TS7720 Cache overruns.
Prefer Remove - When Space is Needed Group 0 (LRU)
The copy of a private volume is removed as long as an appropriate number of copies
exists on peer clusters, the pinning duration (in x hours) has elapsed since last access,
and the available free space on the cluster has fallen below the removal threshold. The
order in which volumes are removed under this policy is based on their least recently used
(LRU) access times. Volumes in Group 0 are removed before the removal of volumes in
Group 1 except for any volumes in Fast Ready categories, which are always removed first.
Archive and backup data would be a good candidate for this removal group because it will
not likely be accessed after it is written.
Prefer Remove and Prefer Keep are similar to cache preference groups PG0 and PG1 with
the exception that removal treats both groups as LRU versus using the volume size.
In addition to these policies, volumes assigned to a Fast Ready category that have not been
previously delete-expired are also removed from cache when the free space on a cluster has
fallen below a threshold. Fast Ready category volumes, regardless of what their removal
policies are, are always removed before any other removal candidates in volume size
descending order. Pin time is also ignored for Fast Ready volumes. Only when the removal of
Fast Ready volumes does not satisfy the removal requirements will Group 0 and Group 1
candidates be analyzed for removal. The requirement for a Fast Ready removal is that an
appropriate number of volume copies exist elsewhere. If one or more peer copies cannot be
validated, the Fast Ready volume is not removed.
Only when all TS7700 Virtualization Engines within a grid are at level 1.7 or later will these
new policies be made visible within the management interface. All volumes created before
this time should maintain the default Removal Group 1 policy and be assigned a zero pin time
duration.
To add or change an existing Storage Class, select the appropriate action in the drop-down
menu and click Go. See Figure 4-63.
Removal Threshold
The Removal Threshold is used to prevent a cache overrun condition in a TS7720 cluster
configured as part of a grid. It is a fixed, 2 TB value that, when taken with the amount of used
cache, defines the upper limit of a TS7720 Cache size. Above this threshold, logical volumes
begin to be removed from a TS7720 Cache.
Logical volumes in a TS7720 Cache can be copied to one or more peer TS7700 clusters.
When the amount of used cache in a TS7720 Cache reaches a level that is 3 TB (fixed 2 TB
plus 1 TB) below full capacity, logical volumes begin to be removed. Logical volumes are
removed from a TS7720 Cache in this order:
1. Volumes in fast ready categories
2. Private volumes least recently used, using the enhanced removal policy definitions
A particular logical volume cannot be removed from a TS7720 Cache until the TS7720
Virtualization Engine verifies that a consistent copy exists on a peer cluster. If a peer cluster is
not available, or a volume copy has not yet completed, the logical volume is not a candidate
for removal until the appropriate number of copies can be verified at a later time.
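The removal ordering described in this section can be sketched as follows. This is an assumed simplification of the rules above: Fast Ready volumes first in descending size order, then Group 0 and Group 1 private volumes by least recently used access, and only volumes with verified peer copies:

def removal_order(volumes):
    # volumes: dicts with keys volser, fast_ready, group (0 or 1), size_mb,
    # last_access, pinned, pin_expired, peer_copy_ok.
    eligible = [v for v in volumes
                if v["peer_copy_ok"] and not v["pinned"]
                and (v["fast_ready"] or v["pin_expired"])]
    fast_ready = sorted((v for v in eligible if v["fast_ready"]),
                        key=lambda v: -v["size_mb"])                 # largest first
    private = sorted((v for v in eligible if not v["fast_ready"]),
                     key=lambda v: (v["group"], v["last_access"]))   # Group 0 LRU, then Group 1 LRU
    return [v["volser"] for v in fast_ready + private]

example = [
    {"volser": "L00001", "fast_ready": True, "group": 0, "size_mb": 800,
     "last_access": 100, "pinned": False, "pin_expired": True, "peer_copy_ok": True},
    {"volser": "L00002", "fast_ready": False, "group": 1, "size_mb": 400,
     "last_access": 50, "pinned": False, "pin_expired": True, "peer_copy_ok": True},
]
print(removal_order(example))   # ['L00001', 'L00002']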
Tip: This field is only visible if the selected cluster is a TS7720 Virtualization Engine in a
grid configuration.
Logical volumes might need to be removed before one or more clusters enter Service mode.
When a cluster in the grid enters Service mode, remaining clusters can lose their ability to
make or validate volume copies, preventing the removal of an adequate number of logical
volumes. This scenario can quickly lead to the TS7720 Cache reaching its maximum capacity.
The lower threshold creates additional free cache space, which allows the TS7720
Virtualization Engine to accept any host requests or copies during the service outage without
reaching its maximum cache capacity.
The Temporary Removal Threshold value must be equal to or greater than the expected
amount of compressed host workload written, copied, or both to the TS7720 Virtualization
Engine during the service outage. The default Temporary Removal Threshold is 4 TB
providing 5 TB (4 TB plus 1 TB) of free space exists. You can lower the threshold to any value
between 2 TB and full capacity minus 2 TB.
The Temporary Removal Threshold is set independently for each TS7720 in the grid by using
the management interface. Go to the tape volume cache in the TS7720 cluster, set the
appropriate Temporary Removal Threshold, and click Submit Changes. See Figure 4-64.
Tip: This field is only visible if the selected cluster is a TS7720 Virtualization Engine in a
grid configuration.
The Temporary Removal Threshold is initiated through the temporary removal process on the
Service Mode page. To initiate this process, you must enable it on the cluster that is expected
to enter the Service mode.
See Figure 4-65 for reference. The Service Mode window contains a Lower Threshold
button. If you click this button, the Confirm Lower Threshold window opens. If confirmed, the
volume removal on TS7720 begins.
Fast Path: See Chapter 8, “Operation” on page 451 for complete details of this window.
For a TS7740 Virtualization Engine running in a multi-cluster grid configuration used for
business continuance, particularly when all I/O is preferenced to the local tape volume cache,
this default management method might not be desired. In the case where the remote site of
the multi-cluster grid is used for recovery, the recovery time is minimized by having most of
the needed volumes already in cache. What is really needed is to have the most recent copy
volumes remain in the cache, not being preferred out of cache.
Based on your requirements, your IBM System Service Representative (SSR) can set or
modify this control through the TS7740 Virtualization Engine SMIT menu for the remote
TS7740 Virtualization Engine, where:
The default is off.
When set to off, copy files are managed as Preference Group 0 volumes (prefer out of
cache first by largest size).
When set to on, copy files are managed based on the Storage Class construct definition.
In the case where the remote TS7740 Virtualization Engine is used for recovery, the recovery
time is minimized by having most of the needed volumes in cache. However, it is not likely that
all of the volumes to restore will be resident in the cache and that some amount of recalls will
be required. Unless you can explicitly control the sequence of volumes to be restored, it is
likely that recalled volumes will displace cached volumes that have not yet been restored
from, resulting in further recalls at a later time in the recovery process.
After a restore has been completed from a recalled volume, that volume is no longer needed,
and such volumes should be removed from the cache after they have been accessed so that
they minimally displace other volumes in the cache.
Based on your requirements, the IBM System Service Representative (SSR) can set or
modify this control through the TS7700 Virtualization Engine SMIT menu of the remote
TS7740 Virtualization Engine, where:
When off (the default), recalls are managed as Preference Group 1 volumes (LRU).
When on, recalls are managed as Preference Group 0 volumes (prefer out of cache first
by largest size).
This control is independent of and not affected by cache management controlled through the
Storage Class SMS construct. Storage Class cache management affects only how the
volume is managed in the I/O tape volume cache.
TPF use of categories is flexible. TPF allows each drive to be assigned a scratch category.
Concerning private categories, each TPF system has its own category to which volumes are
assigned when they are mounted.
For more information about this topic, see the zTPF Information Center:
http://publib.boulder.ibm.com/infocenter/tpfhelp/current/index.jsp
Because the hosts do not know about constructs, they ignore static construct assignment,
and the assignment is kept even when the logical volume is returned to scratch. Static
assignment means that at insert time of logical volumes, they are assigned construct names
also. Construct names can also be assigned later at any time.
Tip: In a z/OS environment, OAM controls the construct assignment and will reset any
static assignment made before using the TS7700 Virtualization Engine management
interface. Construct assignments are also reset to blank when a logical volume is returned
to scratch.
Define groups of logical volumes with the same construct names assigned and, during insert
processing, direct them to separate volume categories so that all volumes in one LM volume
category have identical constructs assigned.
Host control is given through usage of the appropriate scratch pool. By requesting a scratch
mount from a specific scratch category, the actions defined for the constructs assigned to the
logical volumes in this category are executed at Rewind/Unload of the logical volume.
From a software perspective, differences exist between the IBM Virtualization Engine TS7740
and the IBM Virtualization Engine TS7720. If no specific differences are indicated, the
implementation steps apply to both. Otherwise the differences are explained in each relevant
step.
The host does not know whether it is dealing with “physical” 3490E tape drives or with the
virtual 3490E tape drives of the TS7700 Virtualization Engine. Therefore, the TS7700
Virtualization Engine with virtual 3490E tape drives is defined just like multiple physical IBM
3490-C2A controllers with 16 addresses through the hardware configuration definition (HCD)
interface.
Before you can use the TS7700 Virtualization Engine, you need to define it to the System z
host through HCD. Because the virtual tape drives of the TS7700 Virtualization Engine are
library resident, you must specify LIBRARY=YES in the define device parameters. If FICON
directors are being installed for the first time, the directors themselves can also be defined in
the IOCP and HCD input/output definition file (IODF).
In a z/OS environment, you then must define the TS7700 Virtualization Engine logical library
to SMS. Update the SMS constructs and ACS routines to direct mounts to the TS7700
Virtualization Engine. See 5.3, “TS7700 Virtualization Engine software definitions for z/OS”
on page 306 for more details about the implementation in a System z environment. You might
need to update Missing Interrupt Handler (MIH) values also, as described in 5.2.6, “Set
values for the Missing Interrupt Handler” on page 304.
The software implementation steps for z/VM and z/VSE are described in 5.6, “Software
implementation in z/VM and z/VSE” on page 329. For TPF-related implementation details,
see 5.7, “Software implementation in Transaction Processing Facility” on page 336.
After defining the TS7700 Virtualization Engine to a system by whatever method, verify that
the devices can be brought online. Also plan to update the expected IPL configuration and
validate that an IPL doesn’t generate production problems with the changed definitions.
APARs have been created that address the use of five- and six-cluster grids. Search for the
newest PSP buckets before installing new clusters. Recent APARs that address software
handling for the fifth and sixth distributed libraries are OA32957, OA33450, OA33459,
and OA33570.
Partitioning is also appropriate for the attachment to a z/OS logical partition (LPAR) for
testing. If there is a need to run a test environment with a date different from the actual
date, as was the case during Y2K tests, you should have a separate TCDB and tape
management system inventory for the test complex.
With the introduction of the Selective Device Access Control (SDAC) function, the partitioning
possibilities have been improved. For more information about SDAC, see 5.4, “Implementing
Selective Device Access Control” on page 323. A step-by-step description of partitioning is in
Appendix E, “Case study for logical partitioning of a two-cluster grid” on page 863.
To calculate the number of logical paths required in an installation, use the following formula:
Number of logical paths per FICON channel = number of LPARs x number of CUs
This formula assumes all LPARs access all control units in the TS7700 Virtualization Engine
with all channel paths.
LPARs   Control units   Logical paths required   Maximum logical paths per FICON channel
16      8               128                      256
16      16              256                      256
8       32              256                      256
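Applying the formula to the table rows gives the following quick check, where 256 is used as the assumed maximum number of logical paths per FICON port:

MAX_LOGICAL_PATHS_PER_PORT = 256

def logical_paths(lpars, control_units):
    return lpars * control_units

for lpars, cus in [(16, 8), (16, 16), (8, 32)]:
    needed = logical_paths(lpars, cus)
    print(lpars, cus, needed, needed <= MAX_LOGICAL_PATHS_PER_PORT)
# 16 8 128 True
# 16 16 256 True
# 8 32 256 True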
The FICON Planning and Implementation Guide, SG24-6497, covers the planning and
implementation of FICON channels and operating in FICON native (FC) mode. It also
discusses the FICON and Fibre Channel architectures, terminology, and supported
topologies.
Define one tape control unit (CU) in the HCD dialog for every 16 virtual devices. Up to eight
channel paths can be defined to each control unit. A logical path might be thought of as a
three element entity: A host port, a TS7700 Virtualization Engine port, and a logical control
unit in the TS7700 Virtualization Engine.
Remember: A reduction in the number of physical paths will reduce the throughput
capability of the TS7700 Virtualization Engine and the number of available logical paths. A
reduction in control units will reduce the number of virtual devices available for any
individual host.
On the host side, this means that several definitions must be made in HCD and others in
SMS. See Table 5-2 for an example, and create a similar one during your planning phase. It
will be used in later steps.
Table 5-2 lists examples of the library names and IDs needed in a z/OS implementation.
Table 5-2 Sample of library names and IDs: Four cluster TS7700 Virtualization Engine implementation
TS7700 Virtualization Engine virtual library names   SMS name   LIBRARY-ID   HCD Define   SMS Define
The Distributed Library name and the Composite Library name are not directly tied to
configuration parameters used by the IBM System Services Representative (SSR) during
installation of the TS7700 Virtualization Engine. These names are not defined to the TS7700
Virtualization Engine hardware. However, to make administration easier, it is useful to
associate the LIBRARY-IDs with the SMS Library names through the nickname setting in the
TS7700 Virtualization Engine management interface (MI).
Remember: Match the Distributed and Composite Library names entered at the host with
the aliases defined at the TS7700 Virtualization Engine MI. Although they do not have to
be the same, it will simplify management of the subsystem.
Tip: Specify the LIBRARY-ID and LIBPORT-ID in your HCD/IOCP definitions, even in a
stand-alone configuration. This reduces the likelihood of having to reactivate the IODF
when the library is not available at IPL, and provides enhanced error recovery in certain
cases. It might also eliminate the need to IPL when you make changes to your I/O
configuration. In a multi-cluster configuration, the LIBRARY-ID and LIBPORT-IDs must be
specified in HCD, as shown in Table 5-6 on page 297.
Distributed Library ID
During installation planning, each cluster is assigned a unique five-digit hexadecimal number
(that is, the sequence number). This number is used during subsystem installation procedures
by the IBM SSR; it is the Distributed Library ID. The sequence number is arbitrary and can be
selected by you. It can start with the letter D.
In addition to the letter D, you can use the last four digits of the hardware serial number if it
only consists of hexadecimal characters. For each Distributed Library ID, it would be the last
four digits of the TS7700 Virtualization Engine serial number.
If you are installing a new multi-cluster grid configuration, you might consider choosing
LIBRARY-IDs that clearly identify the cluster and the grid. The Distributed Library IDs of a
four-cluster grid configuration could be:
Cluster 0 DA01A
Cluster 1 DA01B
Cluster 2 DA01C
Cluster 3 DA01D
Important: Whether you are using your own or IBM nomenclature, the important point is
that the subsystem identification should be clear. Because the identifier that appears in all
system messages is the SMS library name, it is important to distinguish the source of the
message through the SMS Library name.
Composite Library ID
The Composite Library ID is defined during installation planning and is arbitrary. The
LIBRARY-ID is entered by the IBM SSR into the TS7700 Virtualization Engine configuration
during hardware installation. All TS7700 Virtualization Engines participating in a grid will have
the same Composite Library ID. In the example in “Distributed Library ID”, the Composite
Library ID starts with a “C” for this five hex-character sequence number. The last four
characters can be used to uniquely identify each Composite Library in a meaningful way. The
sequence number must match the LIBRARY-ID used in the HCD library definitions and the
LIBRARY-ID listed in the ISMF Tape Library definition windows.
LIBPORT-ID
The LIBPORT-ID reflects the order in which the tape control units are configured to the
TS7700 Virtualization Engine across all Distributed Libraries participating in a Composite
Library. It also provides the tape drive pool ID, which is transparent and only used by
allocation and JES3.
Clarification: The Distributed Library number or cluster index number for a given logical
drive can be determined with the DS QT command. As identified in Figure 5-1, the
response shows LIBPORT-ID 01 for logical drive 9600. LIBPORT-ID 01 is associated with
Cluster 0. The association between Distributed Libraries and LIBPORT-IDs is discussed in
5.2.1, “Defining devices through HCD for Cluster 0” on page 290.
From the DS QT command in Figure 5-1, you can derive the LIBRARY-ID for the Composite
Library and the LIBPORT-ID of the logical control unit presenting the logical device. The real
device type of the physical devices is unknown to the host and DEVSERV always shows 3592
as DEVTYPE. The LIBID field identifies the Composite Library ID associated with the device.
Tip: You can get the real device type from the Host Console Request function LIBRARY
REQUEST,<Distributed Library Name>,PDRIVE, issued against the Distributed Library.
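For example, the following command could be used to check a logical drive (device number 9600 is taken from Figure 5-1):
   DS QT,9600
The response includes the DEVTYPE, the LIBID of the Composite Library, and the LIBPORT-ID of the logical control unit presenting the device.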
CUADD   Unit addresses   LIBPORT-ID
0       00-0F            01
1       00-0F            02
2       00-0F            03
3       00-0F            04
4       00-0F            05
5       00-0F            06
6       00-0F            07
7       00-0F            08
8       00-0F            09
9       00-0F            0A
A       00-0F            0B
B       00-0F            0C
C       00-0F            0D
D       00-0F            0E
E       00-0F            0F
F       00-0F            10
Table 5-4 CUADD and LIBPORT-ID for the first set of 256 virtual devices
CU 1 2 3 4 5 6 7 8
CUADD 0 1 2 3 4 5 6 7
LIBPORT-ID 01 02 03 04 05 06 07 08
Table 5-5 CUADD and LIBPORT-ID for the second set of virtual devices
CU 9 10 11 12 13 14 15 16
CUADD 8 9 A B C D E F
LIBPORT-ID 09 0A 0B 0C 0D 0E 0F 10
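For orientation, the following IOCP-style sketch shows how the first logical control unit (CUADD=0, LIBPORT-ID 01) and its 16 devices might be coded. The CHPIDs, link addresses, control unit number, and device numbers are illustrative assumptions (shown here on single lines for readability), and the LIBRARY-ID and LIBPORT-ID values themselves are specified through HCD device parameters rather than in the IOCP statements:
   CNTLUNIT CUNUMBR=0440,PATH=(40,41,42,43),UNIT=3490,CUADD=0,UNITADD=((00,16)),LINK=(D6,D7,D8,D9)
   IODEVICE ADDRESS=(0A40,16),CUNUMBR=(0440),UNIT=3490,UNITADD=00
The remaining 15 control units of the cluster would follow the same pattern with CUADD=1 through F.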
Figure 5-2 and Figure 5-3 on page 292 show the two important windows for specifying a tape
control unit.
If connected to a switch:
Connected to switches . . . 01 01 01 01 __ __ __ __ +
Ports . . . . . . . . . . . D6 D7 D8 D9 __ __ __ __ +
Figure 5-2 Adding the first TS7700 Virtualization Engine control unit through HCD: Part 1
Unit address . . . . . . 00 __ __ __ __ __ __ __ +
Number of units . . . . 16 ___ ___ ___ ___ ___ ___ ___
Figure 5-3 Adding the first TS7700 Virtualization Engine control unit through HCD: Part 2
Tip: When the TS7700 Virtualization Engine is not attached through FICON directors, the
link address fields would be blank.
Repeating the previous process, you would define the second through sixteenth TS7700
Virtualization Engine virtual tape control units, specifying the logical unit address (CUADD)=1
to F, in the Add Control Unit windows. The Add Control Unit summary window is shown in
Figure 5-3.
Connected to CUs . . 0440 ____ ____ ____ ____ ____ ____ ____ +
After entering the required information, you can specify to which processors and operating
systems the devices are connected. Figure 5-5 shows the window used to update the
processor’s view of the device.
Preferred CHPID . . . . . . . . __ +
Explicit device candidate list . No (Yes or No)
Parameter/Feature   Value   P   Req.   Description
OFFLINE Yes Device considered online or offline at IPL
DYNAMIC Yes Device supports dynamic configuration
LOCANY No UCB can reside in 31 bit storage
LIBRARY Yes Device supports auto tape library
AUTOSWITCH No Device is automatically switchable
LIBRARY-ID CA010 5 digit library serial number
LIBPORT-ID 01 2 digit library string ID (port number)
MTL No Device supports manual tape library
SHARABLE No Device is Sharable between systems
COMPACT Yes Compaction
Tips:
If you are defining drives that are installed in a system-managed IBM Tape Library, such
as the TS7700 Virtualization Engine, you must specify LIBRARY=YES.
If more than one System z host will be sharing the virtual drives in the TS7700
Virtualization Engine, specify SHARABLE=YES. This will force OFFLINE to YES. It is
up to the installation to ensure proper serialization from all attached hosts.
For a stand-alone configuration, specify LIBRARY-ID and LIBPORT-ID. This will position
the site for possible future multi-cluster implementations.
You must use the Composite Library ID of the TS7700 Virtualization Engine in your
HCD definitions.
The Distributed Library IDs are not defined in HCD.
To define the remaining TS7700 Virtualization Engine 3490E virtual drives, you need to
repeat this process for each control unit in your implementation plan.
As an alternative to the procedures described below, you can always IPL the system.
You can check the details using the DEVSERV QTAPE command, which provides information
about Unit Control Block (UCB), UCB prefix, UCB common extension, Device Class
Extension (DCE), and Read Device Characteristics (RDC) and Read Configuration Data
(RCD) data, which are data buffers acquired directly from the device.
Tip: If you are just adding additional device address ranges to an existing TS7700
Virtualization Engine, you can use the same process as for a new tape library.
Alternatively, you can use the DS QL,nnnnn,DELETE command (where nnnnn is the LIBID) to
delete the library’s dynamic control blocks. If you have an IODF with LIBID and LIBPORT
already coded, perform the following steps:
1. Use QLIB LIST to see if the INACTIVE control blocks have been deleted.
2. Use ACTIVATE IODF to redefine the devices.
3. Use QLIB LIST to verify that the ACTIVE control blocks are properly defined.
If LIBRARY-ID (LIBID) and LIBPORT-ID are not coded, perform the following steps:
1. MVS VARY ONLINE the devices in the library. This will create some control blocks, and
you will see the following message:
IEA437I TAPE LIBRARY DEVICE(ddd), ACTIVATE IODF=xx, IS REQUIRED
2. Use QLIB LIST to verify that the ACTIVE control blocks are properly defined (see the sketch that follows).
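A sketch of the QLIB verification commands used in the previous steps follows (CA010 is the Composite Library ID used in this chapter; substitute your own LIBID):
   DS QL,LIST
   DS QL,CA010,DETAIL
The first command lists the library IDs for which control blocks exist; the second shows the devices and port IDs defined for library CA010.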
Using Figure 5-7 as an example, define Host 1 as having physical connections to Cluster 0
and Cluster 1. Cluster 2 and Host 2 might be far away in a disaster recovery site, and
Host 2 has physical connections only to Cluster 2. You then configure Host 1 with 512 (2 x 256)
3490E drives and Host 2 with 256 3490E drives. In HCD, you use the LIBRARY-ID from the
Composite Library (CA010). Host 1 and Host 2 have three Distributed Libraries and one
Composite Library defined to SMS. The three clusters are connected with an IP network.
Cluster   CUADD   LIBPORT-IDs
0         0-7     01-08
0         8-F     09-10
1         0-7     41-48
1         8-F     49-50
2         0-7     81-88
2         8-F     89-90
3         0-7     C1-C8
3         8-F     C9-D0
4         0-7     21-28
4         8-F     29-30
5         0-7     61-68
5         8-F     69-70
The definition steps are essentially the same as for a stand-alone grid configuration. The
important difference is that you need to specify the listed LIBPORT-IDs for all clusters forming
the multi-cluster grid.
The virtual device allocation for each cluster in a two-cluster grid configuration is managed by
the host. That means the host randomly picks a device from each cluster for an I/O operation
based upon a host device allocation algorithm. Referencing Figure 5-7 on page 296, if the
remote Cluster 2 is now attached through a limited bandwidth FICON connection to Host 1 or
Host 2, it might negatively affect I/O performance. The possibility would exist that the remote
Cluster 2 might be selected as the I/O cluster, even if the data is residing in the tape volume
cache of Cluster 0 or Cluster 1. To avoid those situations, vary the remote virtual devices
offline from each host’s point of view for normal operation. Only in the case of disaster recovery
should those remote virtual drives be varied online from the host. Other possibilities, such as
Device Allocation Assistance and Scratch Allocation Assistance, can also be used. For a
detailed description, see Chapter 9, “Performance and Monitoring” on page 635.
In your installation, you might have to review the FICON switch redundancy and performance
objectives. Policies are based on requirements for data availability and accessibility. Your local
policies might require that all FICON equipment must be attached through two FICON
switches, with half of the connections on each.
If you have two data centers and FICON switch equipment at both sites connected to the
hosts, use a cascading FICON switch configuration to attach to your tape subsystems. An
alternate solution could be to connect the FICON connections directly from your local CPU to
the switch in the remote center.
An example of how to do cascading FICON attachment with two sites can be found at the
following address:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS2844
Figure: Example of cascaded FICON attachment, showing System z FICON hosts attached through switch ports 06 and 08 (four FICON channels each) to four-channel TS7740 clusters, each presenting 256 virtual devices.
Remember: The scratch count of MEDIA2 does not necessarily match the number of
scratch volumes of your tape management system when you use the Expire Hold function
in the TS7700 Virtualization Engine. OAM displays the scratch count it receives from the
TS7700 Virtualization Engine.
Example 5-4 shows the sample output of a DISPLAY SMS,LIBRARY command against the
Composite Library. For a TS7700 Virtualization Engine, the Outboard Policy
Management (OPM) function is always supported by this type of library. The display shows
that MEDIA2 uses category 0002 and MEDIA1 uses category 0001.
Example 5-4 Display SMS,LIB from a Composite Library
D SMS,LIB(COMPLIB),DETAIL
F OAM,D,LIB,COMPLIB,L=ST6T10-Z
CBR1110I OAM LIBRARY STATUS: 141
TAPE LIB DEVICE TOT ONL AVL TOTAL EMPTY SCRTCH ON OP
LIBRARY TYP TYPE DRV DRV DRV SLOTS SLOTS VOLS
COMPLIB VCL 3957-V06 768 768 287 0 0 368298 Y Y
----------------------------------------------------------------------
MEDIA SCRATCH SCRATCH SCRATCH
TYPE COUNT THRESHOLD CATEGORY
MEDIA1 170345 0 0001
MEDIA2 197953 0 0002
----------------------------------------------------------------------
DISTRIBUTED LIBRARIES: DISTLIB0 DISTLIB1 DISTLIB2
----------------------------------------------------------------------
LIBRARY ID: 10001
OPERATIONAL STATE: AUTOMATED
ERROR CATEGORY SCRATCH COUNT: 33
CORRUPTED TOKEN VOLUME COUNT: 0
----------------------------------------------------------------------
LIBRARY SUPPORTS IMPORT/EXPORT.
LIBRARY SUPPORTS OUTBOARD POLICY MANAGEMENT.
Example 5-5 shows the sample output of a DISPLAY SMS,LIBRARY command against the
Distributed Library.
Example 5-5 Display SMS,LIB output from a Distributed Library
D SMS,LIB(DISTLIB1),DETAIL
F OAM,D,LIB,DISTLIB1,L=ST6T10-Z
CBR1110I OAM LIBRARY STATUS: 062
TAPE LIB DEVICE TOT ONL AVL TOTAL EMPTY SCRTCH ON OP
LIBRARY TYP TYPE DRV DRV DRV SLOTS SLOTS VOLS
DISTLIB1 VDL 3957-V06 0 0 0 1348 819 0 Y Y
----------------------------------------------------------------------
COMPOSITE LIBRARY: COMPLIB
----------------------------------------------------------------------
Four states can be reported (if the library is in the associated state) with the DISPLAY SMS
command. These are:
Limited Cache Free Space - Warning State (TS7720 Virtualization Engine only)
Out of Cache Resources - Critical State (TS7720 Virtualization Engine only)
Forced Pause Occurred
Grid Links Degraded
Example 5-6 Display SMS,LIB from a TS7720 Virtualization Engine composite library
D SMS,LIB(BARR60),DETAIL
CBR1110I OAM LIBRARY STATUS: 672
TAPE LIB DEVICE TOT ONL AVL TOTAL EMPTY SCRTCH ON OP
LIBRARY TYP TYPE DRV DRV DRV SLOTS SLOTS VOLS
BARR60 VCL 3957-VEA 512 0 0 0 0 495 Y Y
----------------------------------------------------------------------
MEDIA SCRATCH SCRATCH SCRATCH
TYPE COUNT THRESHOLD CATEGORY
MEDIA2 495 0 3002
----------------------------------------------------------------------
DISTRIBUTED LIBRARIES: BARR60A BARR60B
----------------------------------------------------------------------
LIBRARY ID: BA060
OPERATIONAL STATE: AUTOMATED
ERROR CATEGORY SCRATCH COUNT: 0
CORRUPTED TOKEN VOLUME COUNT: 0
----------------------------------------------------------------------
LIBRARY SUPPORTS OUTBOARD POLICY MANAGEMENT.
The only difference is in the Device Type field. In a grid consisting of TS7720 clusters only, it
is 3957-VEA or 3957-VEB. In a pure TS7740 grid, a device type of 3957-V06 or 3957-V07 is
displayed. In a multi-cluster hybrid grid, the device type is obtained from the library of the last
drive that was initialized in the grid and might be shown as 3957-VEB or as 3957-V07, which
are the newest models introduced with R2.0.
The important fields of the TS7720 Virtualization Engine Distributed Library are shown in the
output of D SMS,LIB(BARR60A),DETAIL for a distributed library (Example 5-7).
Example 5-7 The Display SMS,LIB from a TS7720 Virtualization Engine distributed library
D SMS,LIB(BARR60A),DETAIL
CBR1110I OAM LIBRARY STATUS: 088
The new DS QLIB,CATS command allows you to change logical VOLSER categories without
the need to IPL the system.
Example 5-8 shows how you would list all categories used in a system.
After you have the actual categories, you can change them. To perform that task, change the
first three digits of the category. The last digit must remain unchanged because it
represents the media type.
Example 5-9 shows the command that changes the first three digits of all categories to 111.
Ensure that this change is also made in the DEVSUPxx PARMLIB member. Otherwise, the
next initial program load (IPL) reverts the categories to the values specified in DEVSUPxx.
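For instance, with the categories shown in Example 5-4 (0001 for MEDIA1 and 0002 for MEDIA2) and a new prefix of 111, the corresponding DEVSUPxx entries might look like the following sketch (verify the category keywords against your z/OS level):
   MEDIA1=1111,
   MEDIA2=1112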
Example 5-11 shows a DEVSERV QLIB command output for Composite Library BA060. It
shows the virtual devices and the Distributed Libraries belonging to this Composite Library.
Example 5-12 shows a detailed list of a single library using the DS QL,<library-ID>,DETAIL
command. Check that no duplicate port IDs are listed and that each port has 16 devices. This
is the correct output for a TS7700 Virtualization Engine.
For a complete description of the QLIB command, see the following resources:
Appendix H, “DEVSERV QLIB command” on page 915
z/OS MVS System Commands, SA22-7627
You must specify the MIH timeout value for IBM 3490E devices. The value applies only to the
virtual 3490E drives and not to the real IBM TS1130/TS1120/3592 drives that the TS7740
Virtualization Engine manages in the back end. Remember that the host only knows about
logical 3490E devices.
Table 5-7 summarizes the minimum values, which might need to be increased, depending on
specific operational factors.
Device type                                                                    Minimum MIH value
3480, 3490 with less than eight devices per CU or low usage                   3 minutes
TS7700 Virtualization Engine stand-alone grid with 3490E emulation drives     20 minutes
TS7700 Virtualization Engine multi-cluster grid with 3490E emulation drives   45 minutes
Specify the MIH values in PARMLIB member IECIOSxx. Alternatively, you can also set the
MIH values through the System z operator command SETIOS. This setting is available until it
is manually changed or until the system is initialized.
Use the following statements in PARMLIB, or manual commands to display and set your MIH
values:
You can specify the MIH value in the IECIOSxx PARMLIB member:
MIH DEV=(0A40-0A7F),TIME=45:00
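Alternatively, the same value can be set and verified with operator commands; a sketch, using the same device range as the PARMLIB example:
   SETIOS MIH,DEV=(0A40-0A7F),TIME=45:00
   D IOS,MIH,DEV=0A40
The first command sets the primary MIH timeout for the device range to 45 minutes until it is changed again or the system is initialized; the second displays the MIH value currently in effect for device 0A40.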
The settings of the SETIOS and the MIH values in the IECIOSxx member change the value
for the primary timeouts, but you cannot change the secondary timeouts. Those are delivered
by the self-describing values from the device itself.
More information about MIH settings is available in MVS Initialization and Tuning Reference,
SA22-7592.
During IPL (if the device is defined to be ONLINE) or during the VARY ONLINE process,
some devices might present their own MIH timeout values through the primary/secondary
MIH timing enhancement contained in the self-describing data for the device. The primary
MIH timeout value is used for most I/O commands, but the secondary MIH timeout value can
be used for special operations, such as long-busy conditions or long running I/O operations.
Any time a user specifically sets a device or device class to have an MIH timeout value that is
different from the IBM-supplied default for the device class, that value will override the
device-established primary MIH timeout value. This implies that if an MIH timeout value that is
equal to the MIH default for the device class is explicitly requested, IOS will not override the
device-established primary MIH timeout value. To override the device-established primary
MIH timeout value, you must explicitly set a timeout value that is not equal to the MIH default
for the device class.
Important: Overriding the device-supplied primary MIH timeout value might adversely
affect MIH recovery processing for the device or device class.
See the specific device's reference manuals to determine if the device supports
self-describing MIH timeout values.
The TS7700 Virtualization Engine must be defined as a new tape library with emulated 3490E
Tape Drives from the host system. See IBM TotalStorage 3494 Tape Library: A Practical
Guide to Tape Drives and Tape Automation, SG24-4632, for more information about defining
this configuration.
The software levels required to support a TS7700 Virtualization Engine are explained in 3.5.2,
“Software requirements” on page 156.
See “RMM and Copy Export” on page 320 for Copy Export functions. See 7.9, “DFSMSrmm
and other tape management systems” on page 437 for other new improvements in
DFSMSrmm functions.
To use the TS7700 Virtualization Engine, at least one Storage Group must be created to allow
the TS7700 Virtualization Engine logical tape library virtual drives to be allocated by the ACS
routines. Because all of the logical drives and volumes are associated with the Composite
Library, only the Composite Library can be defined in the Storage Group. The distributed
libraries must not be defined in the Storage Group.
See the following resources for information about host software implementation tasks for IBM
tape libraries:
z/OS DFSMS OAM Planning, Installation, and Storage Administration Guide for Tape
Libraries, SC35-0427
IBM TS3500 Tape Library with System z Attachment A Practical Guide to Enterprise Tape
Drives and TS3500 Tape Automation, SG24-6789
Remember: Library ID is the only field that applies for the Distributed Libraries. All
other fields can be blank or left as the default.
6. Create or update the Data Classes (DCs), Storage Classes (SCs), and Management
Classes (MCs) for the TS7700 Virtualization Engine. Make sure that these defined
construct names are the same as those you have defined at the TS7700 Virtualization
Engine management interface (MI), especially in a grid configuration, because outboard
policy management is being used for multi-cluster grid copy control.
7. Create the Storage Groups (SGs) for the TS7700 Virtualization Engine. Make sure that
these defined construct names are the same as those you have defined at the TS7700
Virtualization Engine MI.
The Composite Library must be defined in the Storage Group. Do not define the
distributed libraries in the Storage Group.
8. Create ACS routines to assign the constructs, and translate, test, and validate the ACS
routines.
9. Activate the new Source Control Data Set (SCDS).
For more detailed information about defining a tape subsystem in a DFSMS environment, see
IBM TS3500 Tape Library with System z Attachment A Practical Guide to Enterprise Tape
Drives and TS3500 Tape Automation, SG24-6789.
This section describes how to implement and execute Copy Export. For more details and
error messages that are related to the Copy Export function, see IBM Virtualization Engine
TS7700 Series Copy Export Function User’s Guide, which is available on the Techdocs
website at the following URL:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101092
Consideration: For the secondary pool used for Copy Export, it is advisable to keep
the same secondary pool as the reclaim pool. If not, disaster recovery capabilities can be
compromised because logical volumes can be removed from the pool used for Copy
Export.
In the response shown in Example 5-14, you can see the following items:
– Eight drives are defined. Their serial numbers (SN) are shown in the left column.
– All TS1120 drives are encryption-capable, as indicated by TYPE 3592E05E. TS1120
Tape Drives that are not encryption-capable will show as 3592E05. TS1130 Tape
Drives would be listed as 3592E06E because all TS1130 Tape Drives are
encryption-capable.
– All eight drives are available (AVAIL=Y).
– MODE indicates the format in which the drive is currently operating. The information is
only available when a tape is mounted on the drive.
– ROLE describes what the drive is doing at the moment, for example, RCLS is reclaim
source and RCLT is reclaim target. In this example, two reclamation and two
premigration operations are running:
• Logical volume Z09866 is being reclaimed from physical volume S70470 mounted
on a drive with SN-7878161 to physical volume S70421 mounted on a drive with
SN-1365176. Both stacked volumes reside in Pool 01.
• Logical volume XC4487 is being reclaimed from physical volume 310112 mounted
on drive SN-1365137 to physical volume S70479 mounted on drive SN-7878312.
Both stacked volumes reside in Pool 03.
• Logical volume JA8149 is being written to physical volume Z04381 mounted on
drive SN-7878175. Logical volume JA8145 is being written to physical volume
Z08629 mounted on drive SN-1365177.
– Six drives are in use and two are IDLE, meaning ready for use. Serial number 1365181
is IDLE, but a physical volume is still mounted.
Remember: The Copy Export operation requires a single drive to write the TS7700
Virtualization Engine database to the volumes being exported. Be sure to consider this
situation when analyzing workload and drive utilization. See Chapter 9, “Performance
and Monitoring” on page 635 for more information about workload and drive utilization.
Pool 00 is the Common Scratch Pool. Pool 9 is the one used for Copy Export.
Example 5-15 shows the command POOLCNT. The response that is listed per pool is as
follows:
– The media type used for each pool
– The number of empty physical volumes
– The number of physical volumes in the filling state
– The number of full volumes
– The number of physical volumes that have been reclaimed, but need to be erased
– The number of physical volumes in read-only recovery state
– The number of volumes unavailable or in a destroyed state (1 in Pool 1)
– The number of physical volumes in the copy exported state (45 in Pool 9)
You should determine when you usually want to start the Copy Export operation.
Thresholds could be the number of physical scratch volumes or other values that you
define. These thresholds could even be automated by creating a program that interprets
the output from the Library Request commands PDRIVE and POOLCNT, and acts based
on the required numbers.
For more information about the Library Request Command, see 8.5.3, “Host Console
Request function” on page 589.
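The two Host Console Request commands mentioned above can be issued against the distributed library as in the following sketch (DISTLIB1 is the distributed library name used in Example 5-5; substitute your own):
   LIBRARY REQUEST,DISTLIB1,PDRIVE
   LIBRARY REQUEST,DISTLIB1,POOLCNT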
4. Create an Export List volume that provides the TS7700 Virtualization Engine with
information about which data to export, and options to use during the operation.
If you use a multi-cluster grid, be sure to create the Export List volume only on the same
TS7700 Virtualization Engine that is used for Copy Export, but not on the same physical
volume pool as used for Copy Export. If more than one TS7700 Virtualization Engine in a
multi-cluster grid configuration contains the Export List volume, the Copy Export operation
will fail.
Figure 5-12 Management Class settings for the Export List volume
The information required in the Export List file is, as for BVIR, provided by writing a logical
volume that fulfills the following requirements:
That logical volume must have a standard label and contain three files:
– An Export List file, as created in STEP1 in Example 5-16 on page 316. You want to
export Pool 09. Option EJECT in record 2 tells the TS7700 Virtualization Engine to
eject the stacked volumes upon completion. With OPTIONS1,COPY, the physical
volumes will be placed in the export-hold category for later handling by an operator.
– A Reserved file, as created in STEP2 in Example 5-16 on page 316. This file is
reserved for future use.
– An Export Status file, as created in STEP3 in Example 5-16 on page 316. In this file,
the information is stored from the Copy Export operation. It is essential that you keep
this file because it contains information related to the result of the Export process.
All records must be 80 bytes in length.
The Export List file must be written without compression. Therefore, you must assign a
Data Class that specifies COMPRESSION=NO or you can overwrite the Data Class
specification by coding TRTCH=NOCOMP in the JCL.
Make sure that the files are assigned a Management Class that specifies that only the
local TS7700 Virtualization Engine has a copy of the logical volume. You can either have
the ACS routines assign this Management Class, or you can specify it in the JCL. These
files should have the same expiration dates as the longest of the logical volumes you
export because they must be kept for reference.
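A JCL sketch for writing the first of the three files follows. The job card, data set name, and the unit name VTAPE are placeholders, and the actual record contents must follow Example 5-16 and the Copy Export User’s Guide; MCELFVOL is the Management Class described later in this section for keeping the volume on the exporting cluster only:
   //COPYEXPL JOB (ACCT),'EXPORT LIST',CLASS=A,MSGCLASS=X
   //* File 1 of 3: the Export List file (80-byte records per Example 5-16)
   //STEP1    EXEC PGM=IEBGENER
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD DUMMY
   //SYSUT1   DD *
   (80-byte Export List records for pool 09, as shown in Example 5-16)
   /*
   //SYSUT2   DD DSN=HLQ.EXPORT.LIST,UNIT=VTAPE,DISP=(NEW,KEEP),
   //            LABEL=(1,SL),VOL=(,RETAIN),MGMTCLAS=MCELFVOL,
   //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=80,TRTCH=NOCOMP)
STEP2 and STEP3 would write the Reserved file and the Export Status file in the same way, specifying LABEL=(2,SL) and LABEL=(3,SL) respectively and VOL=REF=*.STEP1.SYSUT2, so that all three files reside on the same logical volume.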
1. You initiate the Copy Export operation from the host by issuing the LIBRARY EXPORT
command, specifying the volume serial number of the Export List volume.
2. The host sends a command to the Composite Library and from there it is routed to the
TS7700 Virtualization Engine where the Export List VOLSER resides.
3. The executing TS7700 Virtualization Engine validates the request, checking for required
resources, and if all is acceptable, the Copy Export continues.
4. Logical volumes related to Pool 09 that still reside only in cache can delay the process.
They will be copied to physical volumes in pool 9 as part of Copy Export execution.
See the IBM Virtualization Engine TS7700 Series Copy Export Function User's Guide, which
is available at the Techdocs Library website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101092
After a successful completion, all physical tapes related to Pool 09 (in the example) are
ejected. The operator can empty the I/O station and transport the tapes to another location.
Fast Path: To perform Copy Export Recovery for disaster recovery or disaster recovery
testing, see Chapter 10, “Failover and disaster recovery scenarios” on page 749.
An export operation can be canceled from a host or through the TS7700 Virtualization Engine
management interface.
A request to cancel an export operation can be initiated from any host attached to the TS7700
Virtualization Engine subsystem by using one of the following methods:
Use the host console command LIBRARY EXPORT,XXXXXX,CANCEL, where XXXXXX
is the volume serial number of the Export List File Volume.
Use the Program Interface of the Library Control System (LCS) external services
CBRXLCS.
If an export operation must be canceled and there is no host attached to the TS7700
Virtualization Engine that can issue the CANCEL command, you can cancel the operation
through the TS7700 Virtualization Engine management interface. After confirming the
selection, a cancel request is sent to the TS7700 Virtualization Engine processing the Copy
Export operation.
Regardless of whether the cancellation originates from a host or the management interface,
the TS7700 Virtualization Engine can process it as follows:
If the processing of a physical volume has reached the point where it has been mounted to
receive a database backup, the backup completes and the volume is placed in the
export-hold category before the operation ends.
Messages differ depending on what the TS7700 Virtualization Engine encountered during the
execution of the operation:
If no errors or exceptions were encountered during the operation, message CBR3855I is
generated. The message has the format shown in Example 5-19.
If message CBR3856I is generated, examine the Export Status file to determine what errors
or exceptions were encountered.
Either of the completion messages provides statistics on what was processed during the
operation. The statistics reported are as follows:
Requested-number: This is the number of logical volumes associated with the secondary
volume pool specified in the export list file. Logical volumes associated with the specified
secondary volume pool that were previously exported are not considered part of this
count.
Exportable-number: This is the number of logical volumes that are considered exportable.
A logical volume is exportable if it is associated with the secondary volume pool specified
in the export list file and it has a valid copy resident on the TS7700 Virtualization Engine
performing the export. Logical volumes associated with the specified secondary volume
pool that were previously exported are not considered to be resident in the TS7700
Virtualization Engine.
Exported-number: This is the number of logical volumes that were successfully exported.
Stacked-number: This is the number of physical volumes that were successfully exported.
Clarification: The number of megabytes (MB) exported is the sum of the MB integer
values of the data stored on each Exported Stacked Volume. The MB integer value for
each Exported Stacked Volume is the full count by bytes divided by 1,048,576 bytes. If
the result is less than 1, the MB integer becomes 1, and if greater than 1 MB, the result
is truncated to the integer value (rounded down).
MBytes Moved: For Copy Export at code release level R1.3, this field reports the same
number as the MBytes Exported field. For Copy Export at code release level R1.4 and
later, this value is 0.
For more details and error messages related to the Copy Export function, see the white paper
IBM Virtualization Engine TS7700 Series Copy Export Function User’s Guide, which is
available at the Techdocs Library website. Search for TS7700 at the following address:
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
To have DFSMSrmm policy management manage the retention and movement for volumes
created by Copy Export processing, you must define one or more volume Vital Record
Specifications (VRS). For example, assuming all copy exports are targeted to a range of
volumes STE000-STE999, you could define a VRS as shown in Example 5-21.
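A sketch of such a definition follows (the retention count and the location name VAULT1 are assumptions chosen for illustration; Example 5-21 shows the definition used in this book):
   RMM ADDVRS VOLUME(STE*) COUNT(99999) LOCATION(VAULT1)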
As a result of this, all matching stacked volumes that are set in AUTOMOVE will have their
destination set to the required location and your existing movement procedures can be used
to move and track them.
In addition to the support listed, a copy exported stacked volume can become eligible for
reclamation based on the reclaim policies defined for its secondary physical volume pool or
through the Host Console Request function (LIBRARY REQUEST). When it becomes eligible
for reclamation, the exported stacked volume no longer contains active data and can be
returned from its off-site location for reuse.
For users that use DFSMSrmm, when you have stacked volume support enabled,
DFSMSrmm automatically handles and tracks the stacked volumes created by Copy Export.
However, there is no way to track which logical volume copies are on the stacked volume. You
should retain the updated export list file created by you and updated by the library so that you
have a record of what logical volumes were exported and on what exported stacked volume
they reside.
You must set up procedures for when exported physical volumes can return to the library.
Normally, the physical volumes must return when all logical volumes are expired. To
determine when exported volumes have expired, the simplest way is to use the Bulk Volume
Information Retrieval (BVIR) facility. Use BVIR with Physical Volume Status Pool xx as input.
With BVIR, you can retrieve information for all physical volumes in a pool. See Chapter 9,
“Performance and Monitoring” on page 635 for more information about how to use BVIR. The
copy exported physical volumes continue to be managed by the source TS7700 in terms of
space reclamation. You can allow the TS7700 to determine when to reclaim a Copy Export
physical volume or you can manually cause a volume to be reclaimed.
Another reason for bringing back a physical volume applies to a stand-alone grid where the
original physical volume has been destroyed, meaning the exported copy would be the only
copy available.
When you insert the cartridges through the I/O station, these volumes are directly added to
the insert category or the private category (if they contain active logical volumes) in the
TS7740 Virtualization Engine. The TS7740 Virtualization Engine recognizes the volumes it
owns and returns them to the pool in which they were previously. The state of the volumes is
changed to read-write and, if they are empty and without active logical volumes, they will have
an empty status also. When insertion of the volumes is finished, the entire Copy Export
operation for a single physical volume life cycle is complete. For more information about
managing unassigned volumes, see “Unassigned volumes in the TS3500 Tape Library” on
page 209.
Tape Encryption is supported for stacked volumes that are used for Copy Export, as
described in 4.2.4, “Defining the Encryption Method for the new logical library” on page 202.
The TS1130 and TS1120 drives must be encryption-enabled, the tape library must be at a
Licensed Internal Code level that supports tape encryption, and the Encryption Key Manager
must be available at the disaster recovery site to be able to process encrypted stacked
volumes that have been exported.
For example, assume that the TS7700 that is to perform the Copy Export operation is
Cluster 1. The pool on that cluster to export is pool 8. You would need to set up a
management class for the data that is to be exported such that it will have a copy on Cluster 1
and a secondary copy in pool 8. To ensure the data is on that cluster and is consistent with
the close of the logical volume, you would want to have a copy policy of Rewind/Unload
(RUN). You would define the following information:
Define a management class, for example, of MCCEDATA, on Cluster 1 as follows:
Secondary Pool 8
Cluster 0 Copy Policy RUN
Cluster 1 Copy Policy RUN
Define this same management class on Cluster 0 without specifying a secondary pool.
To ensure that the export list file volume gets written to Cluster 1 and only exists there,
define a management class, for example, of MCELFVOL, on Cluster 1 as follows:
Cluster 0 Copy Policy No Copy
Cluster 1 Copy Policy RUN
Define this management class on Cluster 0 as follows:
Cluster 0 Copy Policy No Copy
Cluster 1 Copy Policy RUN
A Copy Export operation can be initiated through any virtual tape drive in the TS7700 grid
configuration. It does not have to be initiated on a virtual drive address in the TS7700 that is
to perform the Copy Export operation. The operation will be internally routed to the TS7700
that has the valid copy of the specified export list file volume. Operational and completion
status will be broadcast to all hosts attached to all of the TS7700s in the grid configuration.
Only the logical volumes resident on the TS7700 performing the operation, at the time it is
initiated, are exported. If a logical volume has not been copied yet or completed its copy
operation when the export operation is initiated, it is not considered for export during the
operation. It is assumed that Copy Export is performed on a regular basis and logical
volumes, whose copies were not complete when a Copy Export was initiated, will be exported
the next time Copy Export is initiated. You can check the copy status of the logical volumes on
the TS7700 that is to perform the Copy Export operation before initiating the operation by
using the Volume Status function of the BVIR facility. You can then be sure that all critical
volumes will be exported during the operation.
Normally, you would return the empty physical volumes to the library I/O station that is
associated with the source TS7700 and re-insert them. They would then be reused by that
TS7700. If you want to move them to another TS7700, whether in the same grid configuration
or another, consider two important points:
Ensure that the VOLSER ranges you have defined for that TS7700 match the VOLSERs
of the physical volumes you want to move.
Use the host console request function, Library Request, to have the original TS7700 stop
managing them:
LIBRARY REQUEST,libname,COPYEXP,volser,DELETE
The primary intended use of SDAC is to prevent one host LPAR/sysplex with an independent
tape management configuration from inadvertently modifying or removing data owned by
another host.
A use case for the feature could be a customer that uses a service provider. The service
provider uses one Composite Library to deliver the services needed for all the sysplexes:
Each sysplex uses its own scratch categories based on logical partitioning. See 5.1.2,
“Partitioning the TS7700 Virtualization Engine between multiple hosts” on page 285.
Each sysplex uses its own unique volume ranges and an independent tape management
system (for example, RMM or CA-1).
The service provider owns the host and therefore has exclusive access to the IODF settings
for each sysplex.
Access to the MI is the responsibility of the service provider, and access to the MI
should be restricted.
The IODF is defined within the System z host, by the service provider, to determine which device
ranges and associated Library Port IDs are configured for a particular sysplex. You
assign a range of volumes that is mutually exclusive to that set of Library Port IDs.
IBM RACF® security is used to protect the IODF data set.
Tape volumes and data sets on tape volumes are RACF protected. To accomplish this
and for more detailed information about RACF and tape, see z/OS V1R11.0 DFSMSrmm
Implementation and Customization Guide, SC26-7405.
The function can be active on new volume ranges and the existing ranges. An example with
three hosts using SDAC is shown in Figure 5-13.
Figure 5-13 illustrates the following example configuration:
Group1 = LIBPORT-IDs 01-08
Group2 = LIBPORT-IDs 09-0E
Group3 = LIBPORT-IDs 0F-10
The virtual tape device addresses and volume ranges are assigned as follows: VOLA00-VOLA99 to Group2,
VOLB00-VOLB99 to Group1, and VOLC00-VOLC99 and VOLD00-VOLD99 to Group3.
Each Access Group includes one or more ranges of Library Port IDs. Access Groups are grid
scope, so all clusters see the same Access Groups. Figure 5-14 shows how you define and
connect an Access Group to LIBPORT-IDs.
When you are out of Logical Volumes and need to insert more, you will see a warning
message in the MI if the volumes are not fully covered by one or more existing ranges. This
allows you to correct the range definition before the insert.
In response to the SETTING request, the composite library or the cluster associated with the
distributed library in the request will modify its settings based on the additional keywords
specified. If no additional keywords are specified, the request will just return the current
settings.
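For example, the following command displays the current settings of the composite library without changing anything (COMPLIB is the composite library name used in Example 5-4; substitute your own):
   LIBRARY REQUEST,COMPLIB,SETTING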
With the SETTING function, you can modify the internal behavior of the TS7700. The TS7700
Management Interface (MI) cannot be used for viewing or altering the parameters controlled
by the SETTING function of the Host Console Request (LIBRARY REQUEST) command.
Tip: There is no equivalent in the TS7700 Management Interface (MI) for the LIBRARY
REQUEST SETTING function.
All settings are persistent across machine restarts, service actions, and code updates. The
settings are not carried forward as part of Disaster Recovery from Copy Exported tapes or the
recovery of a system.
The Library Request command for Host Console Request is supported in z/OS V1R6 and
later. See OAM APAR OA20065 and device services APARs OA20066, OA20067, and
OA20313. A detailed description of the Host Console Request functions and responses is
available in IBM Virtualization Engine TS7700 Series z/OS Host Command Line Request
User’s Guide, which is available at the Techdocs website (search for the term “TS7700”):
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
Further detailed information can be found in 8.5.3, “Host Console Request function” on
page 589. Information about performance aspects of the various parameters can be found in
Chapter 9, “Performance and Monitoring” on page 635.
Table 5-8 Supported tape solutions for non-z/OS platforms in System z environments
Platform / Tape System   IBM System Storage TS3500 Tape Library   TS7700 Virtualization Engine   3592 drives
Even if z/VM and z/VSE can use the TS7700 Virtualization Engine, there are restrictions you
should consider. For information about support for TPF, see 5.7, “Software implementation in
Transaction Processing Facility” on page 336.
However, one possibility is to use dedicated physical pools in a TS7700 Virtualization Engine
environment. After insert processing of virtual volumes completes, you can define a default
construct to the volume range as described in 4.5, “Implementing Outboard Policy
Management for non-z/OS hosts” on page 280.
Figure 5-18 shows the z/VM native support for the TS7700 Virtualization Engine.
Figure 5-18 TS7700 Virtualization Engine in a native z/VM environment using DFSMS/VM
When you use the TS7700 Virtualization Engine in a VM environment, consider that many VM
applications or system utilities use specific mounts for scratch volumes, so every time a
mount request is issued from the host, the TS7700 Virtualization Engine has to recall the
requested logical volume from the stacked cartridge if it is not already in the tape volume
cache. This can lead to performance degradation when writing data in a VM environment. In
addition, VM backups usually require off-site movement, so the TS7700 Virtualization Engine
is not the best candidate for this data.
DFSMS/VM
After you have defined the new TS7700 Virtualization Engine Tape Library through HCD, you
must define the TS7700 Virtualization Engine to DFSMS/VM if the VM system is to use the
TS7700 Virtualization Engine directly. You define the TS7700 Virtualization Engine Tape
Library through the DFSMS/VM DGTVCNTL DATA control file. Also, you define the available
tape drives though the RMCONFIG DATA configuration file. See z/VM V6R1 DFSMS/VM
Removable Media Services, SC24-6185 for more information.
You have access to RMS as a component of DFSMS/VM. To allow RMS to perform automatic
insert bulk processing, you must create the RMBnnnnn DATA file in the VMSYS:DFSMS CONTROL
directory, where nnnnn is the five-character tape library sequence number that is assigned to
the TS7700 Virtualization Engine during hardware installation.
For details about implementing DFSMS/VM and RMS, see DFSMS/VM Function Level 221
Removable Media Services User's Guide and Reference, SC35-0141. If the TS7700
Virtualization Engine is shared by your VM system and other systems, additional
considerations apply. See Guide to Sharing and Partitioning IBM Tape Library Data,
SG24-4409 for further information.
Restriction: The outboard policy management functions are currently not supported with
z/VSE.
z/VSE supports the TS3500 Tape Library/3953 natively through its Tape Library Support
(TLS). In addition to the old Tape Library support, a function has been added to allow the
Tape Library to be supported through the S/390 channel command interface commands, thus
eliminating any XPCC/APPC communication protocol required by the old interface. The
external interface (LIBSERV JCL and LIBSERV macro) remains unchanged.
For native support under z/VSE, where the TS7700 is used only by z/VSE, select TLS. At least one
tape drive must be permanently assigned to VSE.
LIBSERV
The communication from the host to the TS7700 Virtualization Engine goes through the
LIBSERV JCL or macro interface. Example 5-24 shows a sample job using LIBSERV to
mount volume 123456 for write on device address 480 and, in a second step, to release the
drive again.
For additional information, see z/VSE System Administration Guide, SC33-8224 and z/VSE
System Macros Reference, SC33-8230.
z/OS guests
The environments described in 5.3.1, “z/OS and DFSMS/MVS system-managed tape” on
page 306 can operate when z/OS is running as a guest of z/VM Release 5.4 or later.
The STDEVOPT statement specifies the optional storage device management functions
available to a virtual machine. The LIBRARY operand with CTL tells the control program that
the virtual machine is authorized to issue Tape Library commands to an IBM Automated Tape
Library Dataserver. If the CTL parameter is not explicitly coded, the default of NOCTL is used.
NOCTL specifies that the virtual machine is not authorized to issue commands to a Tape
Library, and this results in an I/O error (command reject) when MVS tries to issue a command
to the library. For further information about the STDEVOPT statement, go to the following
address:
http://www.vm.ibm.com/zvm540/
z/VSE guests
Some VSE tape management systems require VSE Guest Server (VGS) support and also
DFSMS/VM RMS for communication with the TS7700 Virtualization Engine.
If the VGS is required, define the LIBCONFIG file and FSMRMVGC EXEC configuration file
on the VGS service machine's A disk. This file simply cross-references the z/VSE guest's
tape library names with the names that DFSMS/VM uses. To enable z/VSE guest exploitation
of inventory support functions through the LIBSERV-VGS interface, the LIBRCMS part must
be installed on the VM system.
If VGS is to service inventory requests for multiple z/VSE guests, you must edit the LIBRCMS
SRVNAMES cross-reference file. This file enables the inventory support server to access
Librarian files on the correct VSE guest machine. For further information, see the following
sections:
5.6.5, “z/VSE as a z/VM guest using a VSE Guest Server (VGS)” on page 334
7.6, “VSE Guest Server Considerations” in Guide to Sharing and Partitioning IBM Tape
Library Data, SG24-4409.
Figure 5-19 shows the flow and connections of a TS7700 Virtualization Engine in a z/VSE
environment under VM.
Figure 5-19 shows the z/VSE guest communicating through the library control API and APPC/VM with the VSE Guest Server (VGS) and the standard RMS interface (RMSMASTR) under VM, with tape I/O flowing over FICON channels to the TS7700 virtual drives. Figure 5-20 shows the equivalent flow without a VGS: the mount requester in z/VSE works with RMSMASTR, and tape I/O flows over FICON channels to the TS7700 virtual drives.
Figure 5-20 TS7700 Virtualization Engine in a z/VSE environment as a VM guest (no VGS)
VSE uses OEM tape management products that support scratch mounts, so if you are using
VSE under VM, you have the benefit of using the fast-ready attribute for the VSE library’s
scratch category.
For more information about z/VSE, see z/VSE V4R1.0 Administration, SC33-8304.
Because TPF does not have a tape management system or a tape catalog system, z/OS
manages this function. In a TPF environment, most tape data is passed between the
systems. In general, 90 percent of the tapes are created on TPF and read on z/OS, and the
remaining 10 percent are created on z/OS and read on TPF.
Be sure to use the normal z/OS and TS7700 Virtualization Engine installation process.
After a volume is loaded into a TPF drive, you have an automated solution in place that
passes the volume serial number (VOLSER), the tape data set name, and the expiration date
over to z/OS to have it processed automatically.
On z/OS, you must update the tape management system’s catalog and the TCDB so that
z/OS can process virtual volumes that have been created by TPF. After the TPF-written
volumes have been added to the z/OS tape management system catalog and the TCDB,
normal expiration processing applies. When the data on a virtual volume has expired and the
volume is returned to scratch, the TS7700 Virtualization Engine internal database is updated
to reflect the volume information maintained by z/OS.
Specifics for TPF and z/OS with a shared TS7700 Virtualization Engine
From the virtual drive side, TPF must be allocated certain drive addresses. This information
depends on which tape functions are needed on TPF, and can vary with your setup. Therefore, the
TS7700 Virtualization Engine will have tape addresses allocated to multiple TPF and z/OS
systems, and can be shared by dedicating device addresses to the individual systems.
Consider the following information when you implement a TS7700 Virtualization Engine in a
TPF environment:
Reserving a tape category does not prevent another host from using that category. You
are responsible for monitoring the use of reserved categories.
Automatic insert processing is not provided in TPF.
Currently, no IBM tape management system is available for TPF.
Advanced Policy Management is supported in TPF through a user exit. The exit is called any
time a volume is loaded into a drive. At that time, the user can specify, through the TPF user
exit, whether the volume should inherit the attributes of an existing (clone) volume.
For the two levels of TPF, two separate APARs are required for this support:
For TPF V4.1, the APAR number is PJ31643.
For z/TPF V1.1, the APAR number is PJ31394.
Library interface
TPF’s only operator interface to the TS7700 Virtualization Engine is a TPF functional
message, ZTPLF. The various ZTPLF functions allow the operator to manipulate the tapes in
the library as operational procedures require. These functions include Reserve, Release,
Move, Query, Load, Unload, and Fill. For more information, see IBM TotalStorage 3494 Tape
Library: A Practical Guide to Tape Drives and Tape Automation, SG24-4632.
SIM and MIM are represented in TPF by EREOP reports and the following messages:
CEFR0354
CEFR0355W
CEFR0356W
CEFR0357E
CEFR0347W
CDFR0348W
CDFR0349E
The issue with TPF arises when the period that clusters wait before recognizing that another
cluster in the grid has failed exceeds the timeout values on TPF. This issue also means that
during this recovery period, TPF is unable to perform any ZTPLF commands that change the
status of a volume, including the loading of tapes or changing the category of a volume
through a ZTPLF command or through the tape category user exit in segment CORU. The
recovery period when a response is still required from a failing cluster can be as long as six
minutes. Attempting to issue a tape library command to any device in the grid during this
period can render that device inoperable until the recovery period has elapsed even if the
device is on a cluster that is not failing.
To protect against timeouts during a cluster failure, TPF systems must be configured to avoid
issuing tape library commands to devices in a TS7700 grid along critical code paths within
TPF. This can be accomplished through the tape category change user exit in segment CORU.
To further protect TPF against periods in which a cluster is failing, TPF must keep enough
volumes loaded on drives that have been varied on to TPF so that the TPF system can
operate without the need to load an additional volume on any drive in the grid until the cluster
failure has been recognized. TPF must have enough volumes loaded so that it can survive the
six-minute period where a failing cluster prevents other devices in that grid from loading any
new volumes.
Important: Read and write operations to devices in a grid do not require communication
between all clusters in the grid. Eliminating the tape library commands from the critical
paths in TPF helps TPF tolerate the recovery times of the TS7700 and read or write data
without problems if a failure of one cluster occurs within the grid.
Guidelines
When TPF applications use a TS7700 multi-cluster grid represented by the Composite
Library, the following usage and configuration guidelines can help you meet the TPF response
time expectations on the storage subsystems:
The best configuration is to have the active and standby TPF devices and volumes on
separate Composite Libraries (either single- or multi-cluster grid). This configuration
prevents a single event on a Composite Library from affecting both the primary and
secondary devices.
If the active and standby TPF devices/volumes will be configured on the same composite
library in a grid configuration, be sure to use the following guidelines:
– Change the category on a mounted volume only when it is first mounted through
ZTPLF LOAD command or as the result of a previous ZTPLF FILL command.
This change can be accomplished through the tape category change user exit in
segment CORU. To isolate TPF from timing issues, the category for a volume must
never be changed if the exit has been called for a tape switch. Be sure that the exit
changes the category when a volume is first loaded by TPF, and then not changed
again.
– TPF must keep enough volumes loaded on drives that have been varied on to TPF so
that the TPF system can operate without the need to load additional volumes on any
drive in the grid until a cluster failure has been recognized and the cluster isolated. TPF
must have enough volumes loaded so that it can survive the six-minute period when a
failing cluster prevents other devices in that grid from loading any new volumes.
– TPF systems must always be configured so that any scratch category is made up of
volumes that are owned throughout all the various clusters in the grid. This approach
ensures that during cluster failures, volumes on other clusters are available for use
without requiring ownership transfers.
Restriction: The feature must be installed on all clusters in the grid before the function
becomes enabled.
Remember: The number of logical volumes supported in a grid is set by the cluster with
the smallest number of FC5270 increments installed.
When joining a cluster to an existing grid, the joining cluster must meet or exceed the
currently supported number of logical volumes of the existing grid.
Table 6-1 Upgrade configurations for four-drawer TS7720 Cache (containing 1 TB drives)
Existing TS7720 Cache Configuration (using 1 TB drives)   Additional TS7720 Cache Expansion Units (3956-XS7) (2 TB drives)   Total TS7720 Cache Units   Usable capacity
Storage expansion frame cache upgrade for existing seven-drawer TS7720 Cache
(containing 1 TB and 2 TB drives)
You can use the FC7323, TS7720 Storage expansion frame and the FC9323, Expansion
frame attachment as an MES to add a storage expansion frame to a fully configured base
frame TS7720 Cache subsystem. The TS7720 Storage Expansion Frame contains two
additional cache controllers, each controlling up to five additional expansion drawers.
Table 6-2 Upgrade configurations for seven-drawer TS7720 Cache (containing 1 TB and 2 TB drives)
The columns of this table are: Existing TS7720 Cache Configuration; Additional TS7720 Storage Expansion Frame Cache Controllers (3956-CS8; 2 TB drives; see note a); Additional TS7720 Storage Expansion Frame Cache Expansion Units (3956-XS7; 2 TB drives; see note a); Total TS7720 Cache Units (including the TS7720 Base Frame; see note b); and Usable capacity. The rows that follow show the additional expansion units, the resulting total cache units, and the usable capacity, in two groups corresponding to different existing cache configurations:
Additional Expansion Units   Total Cache Units   Usable capacity
5    14    227.49 TB (206.90 TiB)
6    15    251.33 TB (228.58 TiB)
7    16    275.17 TB (250.26 TiB)
8    17    299.00 TB (271.94 TiB)
9    18    322.84 TB (293.62 TiB)
10   19    346.68 TB (315.30 TiB)
6    15    280.95 TB (255.52 TiB)
7    16    304.79 TB (277.20 TiB)
8    17    328.62 TB (298.88 TiB)
9    18    352.46 TB (320.56 TiB)
10   19    376.30 TB (342.24 TiB)
a. The lower controller must have at most one more module than the upper controller.
b. “Total cache units” refers to the combination of cache controllers and cache expansion units.
Base Frame cache upgrade for existing TS7720 Cache (containing only 2TB drives)
In the base 3952-F05 Frame, you can use FC5647, Field install 3956-XS7, as an MES to
add up to a total of 7 TS7720 Cache Drawers to an existing TS7720 Cache subsystem
(containing the 2TB drives).
Table 6-3 Upgrade configurations for TS7720 Cache (containing only 2TB drives)
Existing TS7720 Cache Configuration (using 2 TB drives) | Additional TS7720 Cache Expansion Units (3956-XS7) (2 TB drives) | Total TS7720 Cache Unitsa | Usable capacity
Storage expansion frame cache upgrade for existing TS7720 Cache (containing only 2TB
drives)
You can use FC7323, TS7720 Storage expansion frame and FC9323, Expansion frame
attachment as an MES to add a storage expansion frame to a fully configured TS7720
Cache subsystem. The TS7720 Storage Expansion Frame contains two additional cache
controllers, each controlling up to five additional expansion drawers.
Table 6-4 Upgrade configurations for seven-drawer TS7720 Cache (containing only 2 TB drives)
Existing TS7720 Cache Configuration (using 2 TB drives) | Additional TS7720 Storage Expansion Frame Cache Controllers (3956-CS8)a (2 TB drives) | Additional TS7720 Storage Expansion Frame Cache Expansion Units (3956-XS7)a (2 TB drives) | Total TS7720 Cache Units (including TS7720 Base Frame)b | Usable capacity
4   13   297.90 TB (270.94 TiB)
5   14   321.74 TB (292.62 TiB)
6   15   345.57 TB (314.30 TiB)
7   16   369.41 TB (335.98 TiB)
8   17   393.25 TB (357.66 TiB)
9   18   417.08 TB (379.34 TiB)
10  19   440.92 TB (401.02 TiB)
a. The lower controller must have at most one more module than the upper controller.
b. “Total cache units” refers to the combination of cache controllers and cache expansion units.
Incremental features
Incremental features help tailor storage costs and solutions to your specific data
requirements.
Subsets of total cache and peak data throughput capacity are available through incremental
features FC5267, 1 TB cache enablement and FC5268, 100 MBps increment. These features
enable a wide range of factory-installed configurations and permit you to enhance and update
an existing system. They can help you meet specific data storage requirements by increasing
cache and peak data throughput capability to the limits of your installed hardware. Increments
of cache and peak data throughput can be ordered and installed concurrently on an existing
system through the TS7740 Virtualization Engine Management Interface.
Do not remove any installed peak data throughput features because removal can affect host
jobs.
Any excess installed capacity remains unused. Additional cache can be installed up to the
maximum capacity of the installed hardware. The following tables display the maximum
physical capacity of the TS7740 Cache configurations and the instances of FC5267, 1 TB
cache enablement required to achieve each maximum capacity. Perform the installation of
cache increments through the TS7740 Virtualization Engine Management Interface.
Table 6-5 shows the maximum physical capacity of the 3957-V06 Cache configurations using
the 3956-CC6 cache controller.
Table 6-5 Supported 3957-V06 Cache configurations using the 3956-CC6 cache controller
Configuration | Physical capacity | Quantity of FC5267
Table 6-6 shows the maximum physical capacity of the TS7740 Cache configurations using
the 3956-CC7 cache controller.
Table 6-6 Supported TS7740 Cache configurations using the 3956-CC7 cache controller
Configurationa | Physical capacity | Quantity of FC5267b
Table 6-7 Supported TS7740 Cache configurations using the 3956-CC8 cache controller
Configuration | Physical capacity | Quantity of FC5267a
TS7740 Cache Controller 3956-CC6 | TS7740 Cache Controller 3956-CC7 | February 2009
TS7740 Cache Controller 3956-CC7 | TS7740 Cache Controller 3956-CC8 | June 2010
TS7720 Cache Controller 3956-CS7 | TS7720 Cache Controller 3956-CS8 | June 2010
TS7740 Server 3957-V06 | FC0202, 9-micron LC/SC 31-meter | FC0201, 9-micron LC/LC 31-meter | December 2009
TS7720 Server 3957-VEA | FC0202, 9-micron LC/SC 31-meter | FC0201, 9-micron LC/LC 31-meter | December 2009
When updating code in a cluster of a grid configuration, it is important to plan the upgrade so
that the time during which the grid operates at different code levels is minimized.
Before starting a code upgrade, all devices in the cluster must be quiesced and varied offline.
If the devices are in a grid configuration, the cluster must be put into Service Mode.
The management interface in the cluster being updated is not accessible during
installation. You can use a web browser to access the remaining clusters if necessary.
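As an illustration only, the quiesce can be performed with standard MVS commands similar to the following sketch. The device range 1A00-1A7F and the library name DISTLIB1 are hypothetical values and must be replaced with your own device numbers and distributed library name:
V 1A00-1A7F,OFFLINE
D U,,,1A00,16
LI DISPDRV,DISTLIB1
The VARY command takes the virtual drives of the cluster offline on each attached host, the DISPLAY UNIT command confirms the device status, and LIBRARY DISPDRV shows the drive status as seen by OAM.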
Apply the required software support before performing the Licensed Internal Code upgrade.
Software support for Release 2.0 includes, for example, support for the new Scratch
Allocation Assistance function and full support of five- and six-cluster grid configurations.
Support is provided in the following z/OS APARs (a sample job for checking them follows the list):
OA32957
OA32958
OA32959
OA32960
OA33459
OA33570
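As a hedged illustration, an SMP/E job similar to the following sketch can be used to check whether the service for the APARs listed above is present; the global CSI data set name is hypothetical, and you might prefer to list your target zone instead:
//SMPLIST  JOB (ACCT),'LIST APARS',CLASS=A,MSGCLASS=X
//LIST     EXEC PGM=GIMSMP
//SMPCSI   DD DISP=SHR,DSN=SMPE.GLOBAL.CSI
//SMPCNTL  DD *
  SET BOUNDARY(GLOBAL) .
  LIST SYSMODS(OA32957 OA32958 OA32959 OA32960 OA33459 OA33570) .
/*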
Important: Make sure to check the D/T3957 PSP bucket for any recommended
maintenance prior to performing the Licensed Internal Code upgrade.
Preventive Service Planning (PSP) buckets can be found at the following address. Search
for D/T3957:
http://www14.software.ibm.com/webapp/set2/psearch/search?domain=psp
TCP/IP
The TCP/IP infrastructure connecting a TS7700 Virtualization Engine multi-cluster grid with
two, three, four, five or six clusters is known as the Grid Network. The term grid refers to the
code and functionality that provides replication and management of logical volumes and their
attributes in cluster configurations. A multi-cluster grid provides remote logical volume
replication and can be used to provide disaster recovery and high availability solutions. A
disaster recovery solution is achieved when multiple clusters are geographically distant from
one another.
Migrations to a TS7700 Virtualization Engine multi-cluster grid configuration require using the
TCP/IP network. Be sure you have the network prepared at the time the migration starts. The
TS7700 Virtualization Engine provides two or four independent 1 Gbps copper (RJ-45) or
shortwave fiber Ethernet links (single- or dual-ported) for grid network connectivity.
Alternatively, on a 3957-V07 or 3957-VEB server, two 10 Gbps longwave fiber Ethernet links
can be provided. Be sure to connect each link through an independent WAN interconnection.
Tips: You can add a new TS7700 cluster at R2.0 to an existing R1.7 grid consisting of up
to five clusters. This option allows you to quickly integrate a new cluster into an existing grid
without having to update all other clusters first. However, this grid is limited to the
functionality of R1.7 until all of its clusters have been upgraded to R2.0.
Two TS7740 Virtualization Engines connected to a single TS3500 Tape Library must not
be combined to form a TS7740 grid. Each TS7740 cluster in a TS7740 grid configuration
must possess its own TS3500 Tape Library.
For any of these upgrades, FC4015, Grid Enablement, must be installed on all TS7700
Virtualization Engines that are part of a grid.
If FC1035, Grid optical longwave adapter, is installed, a 10 Gb network infrastructure
must be considered, unless it is a two-cluster grid in which the clusters are directly
connected.
6.4.2 Merge two TS7700 stand-alone clusters into a TS7700 two-cluster grid
This section describes how to add a TS7720 Virtualization Engine or TS7740 Virtualization
Engine to another existing or new cluster forming a two-cluster grid for the first time. This
process is also called a merge when clusters already containing data are joined into a grid.
It is important to consider the host configuration changes that are needed before you attempt
to use the newly joined cluster. Before making those host configuration changes, you might
still need to access the clusters, for example, to remove duplicate volumes that would prevent
a successful grid join. For this reason, do not perform the host configuration changes until
you are absolutely sure that you no longer need to access both clusters as stand-alone
libraries.
This migration scenario applies to TS7720 Virtualization Engine and TS7740 Virtualization
Engine and is disruptive to both TS7700 Virtualization Engines that are being merged.
TS7720 and TS7740 can be joined in a hybrid grid.
Figure 6-1 Merge of two existing stand-alone cluster grids into one two-cluster grid
Preparation
Before starting the merge process, make sure that all preparatory activities have been
completed. Review with your IBM System Service Representative the prerequisites that must
be covered before attempting to merge two existing TS7700 Virtualization Engine stand-alone
clusters. The prerequisites might include Licensed Internal Code levels, planning activities,
and gathering information from the current environment. A TS7700 Virtualization Engine
default number of logical volumes supported is 1,000,000. With Release 2.0 (or using an
RPQ with R1.7 code) you can add support for additional logical volumes in 200,000 volume
increments using FC5270, up to a total of 2,000,000 logical volumes. The number of logical
volumes supported in a grid is set by the cluster with the smallest number of FC5270
increments installed. If the current combined amount of logical volumes in the clusters to be
joined exceed the maximum number of logical volumes supported, some logical volumes
must be moved to another library or deleted to reach the allowed grid capacity.
Determine if any conflicting information exists between the TCDB, TMS (Tape Management
System) and the TS7700. An example of the job is shown in Example F-14 on page 902. You
need to check for duplicate logical volumes and, on TS7740 clusters, for duplicate physical
volumes. Logical volume ranges in a TS7700 Virtualization Engine must be unique. New
logical volumes cannot be inserted from the time you start checking for duplicate logical
volumes until the join is complete. If duplicate volumes are found during the join process, the
join must be stopped so that the duplicate volumes can be removed.
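If DFSMSrmm is your tape management system, a minimal verification sketch along the following lines can be used; the RMM control data set name is hypothetical, and the complete sample is in Example F-14 on page 902:
//VERIFY   JOB (ACCT),'RMM VERIFY',CLASS=A,MSGCLASS=X
//UTIL     EXEC PGM=EDGUTIL,PARM='VERIFY(VOLCAT)'
//SYSPRINT DD SYSOUT=*
//MASTER   DD DISP=SHR,DSN=RMM.CONTROL.DSET
The VERIFY(VOLCAT) option compares the RMM control data set with the volume entries in the TCDB and reports mismatches without changing anything.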
Tip: The IBM SSR performs steps 5 - 8 as part of the merge. They are listed for
informational purposes only.
Update RMM and TCDB to match the respective storage groups to be able to have
continued access to the original volumes created when the systems were configured as
stand-alone clusters. You can get a list of all volumes in the TCDB by running a job, as
shown in Example F-16 on page 902.
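As a hedged alternative sketch to the documented sample (Example F-16 on page 902), IDCAMS can list the tape volume entries in the TCDB; the volume serial prefix A is an assumption and must match your own volume ranges:
//LISTTCDB JOB (ACCT),'LIST TCDB',CLASS=A,MSGCLASS=X
//LIST     EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  LISTCAT VOLUMEENTRIES(VA*)
/*
Volume entries in the tape volume catalog are named with the prefix V followed by the volume serial, so VA* selects all volumes whose serial begins with A.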
Restriction: If the new SCDS is activated before the new library is ready, the host
cannot communicate with the new library yet. Expect message CBR3006I to be issued:
CBR3006I Library library-name with Library ID library-ID unknown in I/O
configuration.
Important: The existing logical volumes on both clusters will not be replicated as part of
the merge process.
13.If you want part or all of the existing logical volumes to be replicated to the second cluster,
this can be done in various ways. IBM has tools, such as PRESTAGE, to support these
actions. The logical volumes must be read or referenced so that they pick up the new
management policies that you define. The tools are available at the following URL:
ftp://ftp.software.ibm.com/storage/tapetool/
To configure a three-cluster grid for disaster recovery, you must plan for the following items:
Access from your local site’s hosts to the FICON channels on the TS7700 Virtualization
Engine Cluster located at the disaster recovery site (or sites). This might involve
connections using Dense Wavelength-Division Multiplexing (DWDM) or channel extension
equipment, depending on the distance separating the sites. If the local TS7700
Virtualization Engine Cluster becomes unavailable, you would use this remote access to
continue your operations using a remote TS7700 Virtualization Engine Cluster.
Because the virtual devices on a remote TS7700 Virtualization Engine Cluster are
connected to the host through channel extensions, there might be a difference in read or
write performance compared to the virtual devices on the local TS7700 Virtualization
Engine Cluster. If performance differences are a concern, consider using only the virtual
device addresses in a remote TS7700 Virtualization Engine Cluster when the local
TS7700 Virtualization Engine is unavailable. If these differences are an important
consideration, in addition to the ownership takeover procedure, you would need to provide
operator procedures to vary the virtual devices in a remote TS7700 Virtualization Engine
from online to offline.
You might want to maintain separate copy consistency policies for disaster recovery data
and data that requires high availability.
The following procedures show the required steps for joining a TS7700 Virtualization Engine
and a two-cluster grid configuration to form a three-cluster grid configuration. You must first
verify the items discussed in the following sections.
The assumption is that two or three TS7700 Virtualization Engine Clusters will reside in
separate locations, separated by a distance dictated by your company’s requirements for
disaster recovery. In a three-cluster grid configuration, disaster recovery and high availability
can also be achieved simultaneously by ensuring that two local, high availability clusters
possess RUN volume copies and have shared access to the host, while the third and remote
cluster possesses deferred volume copies for disaster recovery. During a stand-alone cluster
outage, the three-cluster grid solution maintains no single points of failure, which would
prevent you from accessing your data, assuming that copies exist on other clusters as defined
in the copy consistency point. See 4.4.3, “Defining grid copy mode control” on page 258 for
more details.
In the configuration shown in Figure 6-4, two clusters are in the same campus location or in
the same city. The clusters can have one of these Copy Consistency points specified:
Rewind/Unload (RUN) Copy Consistency Point
If a data consistency point of RUN is specified, the data created on one TS7700
Virtualization Engine Cluster is copied to the other TS7700 Virtualization Engine Cluster
as part of successful Rewind/Unload command processing, meaning that for completed
jobs, a copy of the volume will exist on both TS7700 Virtualization Engine Clusters.
Access to data written by completed jobs (successful Rewind/Unload) before the failure is
maintained by the other TS7700 Virtualization Engine Cluster. Access to data of incomplete
jobs that were in process at the time of the failure is not provided. This example assumes
that the two existing clusters are using RUN Copy Consistency points specified in the
Management Class storage construct when the volumes are written.
The new cluster that will join the two-cluster grid to form the three-cluster grid must already be
installed. Every cluster in the system requires two network connections to a WAN for site to
site operations, and the WAN connections between the three clusters in the three-cluster grid
must be completed. The grid network on the new cluster must be configured, containing the
IP addresses of the three clusters.
Remember: You can join a new TS7700 cluster at R2.0 to an existing grid running at R1.7
code level.
7. IBM SSR: Configure the local grid network from the TS7700 Virtualization Engine System
Management Interface Tool (SMIT) window.
8. IBM SSR: Join Cluster 2 to Cluster 1 using SMIT. Cluster 0 can stay online and be used
when Cluster 2 is joined to Cluster 1.
a. Merge Vital Product Data (VPD).
b. Merge the TS7700 Virtualization Engine databases’ data.
9. IBM SSR: Exit Service Mode for Cluster 1.
10.Vary devices from Cluster 1 online for all connected hosts.
11.After a new cluster is joined to a cluster in an existing grid, all clusters in the existing grid
are automatically joined. The previous example joined the new Cluster 2 to Cluster 1 of an
existing grid. This is enough to make the existing two-cluster grid into a three-cluster grid.
Cluster 0 in the example could be operational and available for operation during that time.
12.Now you are ready to validate the new three-cluster grid:
a. Vary Cluster 2 defined FICON channels online, making sure that Cluster 2 can be
accessed through the channel paths defined in HCD.
b. Vary logical devices for Cluster 2 online.
c. Vary Cluster 2 online to the hosts.
d. Using the D SMS,LIB(libraryname),DETAIL commands, validate that the relationship
between the Composite and Distributed Libraries is correct as shown in Example 6-1.
e. Vary the logical devices for Cluster 2 offline again so that they will be ready to test if the
original two-cluster grid still works.
13.Modify Copy Policies defined in the Management Class.
The copy consistency points on all three clusters (Table 6-10) must be modified to support
a RUN copy between Cluster 0 and Cluster 1, and also a deferred copy from Cluster 0 and
Cluster 1 to Cluster 2. The values must be updated in the Management Classes using the
TS7700 Virtualization Engine management interface. Make sure that the definitions will
work when logical units are allocated from Cluster 2. See 4.4.3, “Defining grid copy mode
control” on page 258 for more information. Table 6-10 describes the settings needed for
the scenario shown in Figure 6-4 on page 365.
Table 6-10 Copy Consistency point on Management Class: three-cluster grid configuration 1
Management Class | Cluster 0 | Cluster 1 | Cluster 2
14.Check all constructs on the management interface of all clusters and make sure they are
set properly for the three-cluster grid configuration. You can set up Scratch Allocation
Assistance as outlined in 4.4.4, “Defining scratch mount candidates” on page 264.
15.Run test jobs writing to and reading from the two original clusters in the two-cluster grid.
16.Vary logical devices for Cluster 0 and Cluster 1 offline to be ready to validate the use of
Cluster 2 as though there were a disaster, and set up copy consistency points that support
your requirements, such as a deferred copy mode.
17.Test write and read with Cluster 2 and validate the result.
18.IBM SSR: The migration is done. Return to normal production mode.
19.If you want part or all of the existing logical volumes to have a logical copy on Cluster 2,
this can be done in various ways. IBM has tools, such as PRESTAGE, to support these
actions. The logical volumes must be read or referenced so that they pick up the new
management policies that you define. The tools are available at
ftp://ftp.software.ibm.com/storage/tapetool/
The new cluster that will join the two-cluster grid to form the three-cluster grid must be
installed in advance. Every cluster in the system requires two network connections to a WAN
for site to site operations, and the WAN connections between the three clusters in the
three-cluster grid must be completed. The grid network on the new cluster must be configured
and contain the IP addresses of the three clusters.
1. Stop host activity on Cluster 1, which must go into Service Prep mode. Cluster 0 stays
operational.
2. Vary the unit addresses for Cluster 1 offline. Complete or cancel all jobs for Cluster 1.
3. IBM SSR: Set Cluster 1 to Service Mode.
4. Make changes to SMS.
With a three-cluster grid, you need one Composite Library and three Distributed Libraries.
You must now define the third Distributed Library in SMS. Make sure to enter the correct
Library-ID for the new Distributed Library (a short activation sketch follows the tip below).
Tip: If the new SCDS is activated before the new library is ready, the host cannot
communicate with the new library yet. Expect message CBR3006I to be issued:
CBR3006I Library library-name with Library ID library-ID unknown in I/O
configuration.
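After the SMS definitions are complete and the new library is ready, activation and a quick verification can be sketched with standard commands such as the following; the SCDS data set name and the library name DISTLIB3 are hypothetical:
SETSMS SCDS(SYS1.SMS.SCDS)
D SMS,LIB(ALL),DETAIL
V SMS,LIBRARY(DISTLIB3),ONLINE
The SETSMS command activates the updated configuration, the display confirms that the new Distributed Library is defined, and the VARY command brings it online once the hardware is available.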
8. IBM SSR: Configure the local grid network using the TS7700 Virtualization Engine System
Management Interface Tool (SMIT) window.
9. IBM SSR: Join Cluster 2 to Cluster 1 using SMIT:
a. Merge Vital Product Data (VPD).
b. Merge the TS7700 Virtualization Engine databases’ data.
10.IBM SSR: Take Cluster 1 out of Service Mode.
11.Vary devices from Cluster 1 online to all connected hosts.
12.Run test jobs to read and write from the original Cluster 0.
13.Run test jobs to read and write from the original Cluster 1.
16.Check all constructs on the management interface of all clusters and make sure they are
set properly for the three-cluster grid configuration. You can set up Scratch Allocation
Assistance as outlined in 4.4.4, “Defining scratch mount candidates” on page 264.
17.Vary logical devices for Cluster 0 and Cluster 1 offline to be ready to validate the use of
Cluster 2 as though there were a disaster, and set up Copy Consistency points that
support your requirements, such as a deferred copy mode.
18.Test write and read with Cluster 2 and validate the result.
19.IBM SSR: Return the grid to normal production mode.
20.If you want part or all of the existing logical volumes to have a logical copy on Cluster 2,
this can be done in various ways. IBM has tools, such as PRESTAGE, that will help you
with this task. The logical volumes must be read or referenced so that they pick up the new
management policies that you define. The tools are available at the following address:
ftp://ftp.software.ibm.com/storage/tapetool/
From a host perspective, the implementation steps are the same for a hybrid grid and for a
homogeneous grid configuration consisting only of TS7740 or TS7720 Virtualization Engines.
The same restrictions and recommendations apply. However, for a hybrid grid, be sure to
understand that, after a logical volume has been migrated out of the cache of a TS7720, it can
be recalled only through the TS7740. Therefore, be especially careful when you plan for Copy
Consistency Points and Cache Management Policies. To prevent unnecessary copies from
being created, be sure that you define the Retain Copy Mode setting in the Management
Class definition.
(Figure: two TS7720 production clusters and one TS7740 cluster interconnected through the customer network)
In this case, the customer has two separate production sites, A and B, and one remote
disaster recovery site. You can implement this scenario using the process discussed in “Two
independent and one remote cluster scenario” on page 369.
Important: The existing logical volumes on Grid A will not be replicated to Cluster 2 as
part of the merge process. Also the existing logical volumes on Cluster 2 will not be
replicated to Cluster 0 and Cluster 1 (Grid A) as part of the merge process.
The IBM SSR performs the actual merge. See 6.4.2, “Merge two TS7700 stand-alone
clusters into a TS7700 two-cluster grid” on page 359 for more details about merging
clusters.
From an implementation perspective, whether you are implementing a hybrid grid
configuration is not particularly important, because you simply perform the steps that apply
to each TS7740 or TS7720 cluster. However, how you define the parameters that influence
cache management and data movement in the grid is important:
– Copy Consistency Points define where and when a copy of a logical volume is
created.
– Retain Copy Policy influences whether unnecessary copies are made.
– Cluster Families can help to avoid unnecessary traffic on the grid links.
– Using the software enhancements for workload balancing and Device Allocation
Assistance influences overall grid performance.
– Using the enhanced cache removal policies for TS7720 can improve the cache hit ratio
and the overall logical volume mount times.
A TS7720 Virtualization Engine is used at the production sites. For the remote disaster
recovery center, two TS7740 Virtualization Engines are used. The four data centers are
interconnected through both FICON directors and high-speed WAN.
With this configuration, you can achieve high availability and high performance because the
TS7720 Virtualization Engines provide for high cache hit rates.
Long-term retention data that has been created in the production clusters is migrated to the
TS7740 Virtualization Engine clusters in the disaster recovery sites and is ultimately written
to physical tape in a physical library at both backup sites. This migration is performed
automatically as defined in the enhanced removal policies.
The following section describes the steps that are required to install the four-cluster hybrid
grid shown in Figure 6-7. Assume an existing stand-alone grid, and Cluster 0 is the TS7740
Virtualization Engine in Backup Site X.
Plan for copy consistency points and how to set them during the migration steps.
Remember that you can only test the grid functionality if you change the default definition
of No Copy (N) to either Deferred Copy (D) or RUN Copy (R). Table 6-12 shows example
settings for the final configuration.
Table 6-12 Sample Copy Consistency Point definitions 1
Management Class | Cluster 0 (Site X) | Cluster 1 (Site A) | Cluster 2 (Site Y) | Cluster 3 (Site B)
Host in Site A | D | R | D | D
Host in Site B | D | D | D | R
Host in Site X | R | D | D | D
Host in Site Y | D | D | R | D
These settings result in each host writing an immediate copy (R) to its local cluster and a
deferred copy to each of the remote clusters. You must make these updates of the
Management Class on the TS7700 for each cluster after it has been added to the grid.
Plan for the enhanced cache management settings on the TS7720. In the Storage Class
construct, define the following items:
– Volume Copy Retention Group
– Volume Copy Retention Time
If you merge a TS7720 with data into a grid, these volumes are automatically assigned to
Volume Copy Retention Group Prefer Keep. To change this setting, use the Host Console
Request function. See 8.5.3, “Host Console Request function” on page 589.
During the process of adding clusters to the grid, only one cluster is required to be offline.
After you have added Cluster 1, you can use Cluster 0 for normal operation. However, no
copies are written to any of the other clusters. In the sample configuration, only one copy
to Cluster 1 is initiated and made subsequently when Cluster 1 comes back online, after
Cluster 2 has been added. Because no CCPs have been defined for the remaining
Tip: If a new IOCDS is activated before the new library is ready, the host cannot
communicate with the new library yet. Expect message CBR3006I to be issued:
CBR3006I Library library-name with Library ID library-ID unknown in I/O
configuration.
5. IBM SSR: Configure the local grid network from TS7700 Virtualization Engine System
Management Interface Tool (SMIT) window.
6. IBM SSR: Join Cluster 1 (the TS7720 at Site A) to Cluster 0 using SMIT windows.
Vital Product Data (VPD) will be merged, and databases will be exchanged between those
two clusters in the process.
7. IBM SSR: Bring both clusters in the two-cluster grid online using the SMIT window.
8. Proceed to validate the new two-cluster grid:
a. On Cluster 1, restore Constructs and Fast Ready Categories obtained from Cluster 0
using the management interface and make the adjustments necessary. Or define the
constructs you want to use on this cluster.
b. On Cluster 0 and 1, define Copy Consistency Points in the Management Classes
intended for testing. Define TS7720 Cache removal Policies in the Storage Classes
intended for testing.
c. Vary Cluster 0 attached FICON channels online to all connected hosts.
d. Vary logical devices online to the hosts. Vary Cluster 0 online to the hosts.
Consider only varying online the number of drives you need for testing.
e. Write to and read from Cluster 0, making sure all hosts and partitions have proper access
to it. Use D SMS,LIB(libraryname),DETAIL commands and check for the proper
relationship between the Composite and Distributed Libraries, as described in Example 6-1
on page 367 (a brief command sketch also follows these steps).
f. Cluster 0 has now been validated and can be returned to normal operations.
g. Vary the newly defined Cluster 1 channels online to all hosts, making sure that all paths
are validated from all hosts and partitions (these are new hardware and HCD definitions).
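The library relationship checks in step e can be sketched with display commands like the following; COMPLIB, DISTLIB0, and DISTLIB1 are hypothetical library names, and the exact output format is shown in Example 6-1 on page 367:
D SMS,LIB(COMPLIB),DETAIL
D SMS,LIB(DISTLIB0),DETAIL
D SMS,LIB(DISTLIB1),DETAIL
The composite library display should list both distributed libraries, and each distributed library display should refer back to the composite library.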
Similarly, you can group TS7740 Virtualization Engines Clusters 0 and 2 (Backup Sites X and
Y) as another family, for example, a DR family.
See 4.3.12, “Inserting logical volumes” on page 254 for more details about defining cluster
families.
(Figure: example copy modes RNRNDN and NRNRND for host writes and reads in a hybrid grid with TS7720 and TS7740 clusters)
A hybrid TS7700 grid can be configured using between two and six TS7700 clusters.
Five- or six-cluster hybrid
Five- and six-cluster hybrid grid configurations are supported by RPQ only. Possible
configurations of five- and six-cluster grids include:
– Two production sites each with two clusters, plus a one- or two-cluster disaster
recovery site
Each production site contains one disk-only cluster combined with one cluster attached
to a physical library. When part of a five-cluster grid, the disaster recovery site contains
one cluster attached to a physical library. When part of a six-cluster grid, the disaster
recovery site also contains a disk-only cluster.
Each production site writes and reads only to its local clusters and all data written
within either production site is replicated to the adjacent cluster and all disaster
recovery clusters.
– One production site with three clusters, plus one disaster recovery site with three
clusters
Each site contains two disk-only clusters and one cluster attached to a physical library.
This configuration is available only with a six-cluster grid.
Data written to a production cluster is replicated to one adjacent cluster, and to at least
two disaster recovery clusters.
Four-cluster hybrid
There are two common configurations of a four-cluster grid:
– Two disk-only production clusters, plus two remote clusters attached to physical
libraries
Disk-only clusters are used for production data, whereas clusters attached to physical
libraries are used for archival or disaster recovery purposes. Optimally, the clusters
attached to physical libraries are at metro distances to separate long-term data from a
single point of loss.
All clusters are configured for high availability. However, the production clusters are
also configured for high cache hit rates. Long-term retention data migrates to the
clusters attached to physical libraries. The copy export function can be used on these
clusters to produce a second or third offsite copy of long-term retention data.
Important: Copy policies must be set up correctly across the hybrid grid. Optimally, all
volumes created on a disk-only cluster have a copy policy that creates a copy on one or
more clusters attached to a physical library. Otherwise, the automatic removal cannot
remove volumes from the disk-only cache, leading to the disk-only cache reaching its
maximum capacity. If this happens, a manual intervention is required to reduce the volume
use of the disk-only cluster.
TS7700 clusters that attach to a physical library (TS7740) have a much higher total storage
capacity than disk-only TS7700 clusters (TS7720). In a hybrid grid, a copy export from a
TS7740 can export all the contents of the library including any hybrid-migrated data residing
only on the TS7700 clusters attached to a physical library. Therefore, when a complete failure
occurs to a copy export parent site, it is possible that the copy exported volumes are the only
source for the hybrid-migrated data. A recovery in this scenario can occur by initiating a
disaster recovery process into a new TS7740 cluster using the copy exported volumes. This
allows the data to be merged back into the composite library while any pre-existing TS7700
clusters remain present with valid content.
Guidance is provided to help you achieve the migration scenario that best fits your needs. For
this reason, methods, tools, and software products that can help make the migration easier
are highlighted.
TS7700 Virtualization Engine updates to new models introduced with TS7700 Release 2.0
are also described in this section.
Migrations to a TS7720 Virtualization Engine can only be done using the host. The TS7720
Virtualization Engine does not have any attached back-end tape drives. Therefore, data
needs to be copied into the TS7720 Virtualization Engine using host programs.
Migration of VTS Model B10 or B20 hardware to a TS7740 Virtualization Engine, also called
outboard VTS migration, is possible depending on the target configuration. It provides an
upgrade path for existing B10 or B20 VTS models to a TS7740 Virtualization Engine as long
as the VTS system contains only 3592-formatted data. The outboard migration is offered as
an IBM Data Migration Services for Tape Systems. Outboard migration provides the following
functions:
Planning for migration, including considerations related to hardware and z/OS
Project management for this portion of the project
Assistance with the integration into a complete change plan, if required
The actual migration of data from a 3494 VTS B10 or B20 to the TS7740
Work with your IBM representative for more details about IBM Migration Services for Tape
Systems. These services are available from an IBM migration team and can assist you in the
preparation phase of the migration. The migration team performs the migration on the
hardware.
When migrating data from a VTS to a new TS7740 installed in the same Tape Library, the
process is called data migrate without tape move. If a source VTS is attached to one Tape
Library and the new target TS7740 is attached to another tape library, the process is called
data migration with tape move. Finally, when a source VTS is migrated to an existing
TS7740, or two VTSs are migrated to the same target TS7740, the process is called a merge.
Migration from VTS with 3590 Tape Drives, or native tape drives to the TS7740 Virtualization
Engine, always requires host involvement to copy the data into the TS7700 Virtualization
Engine. For more information about the methods you can use, see 7.7, “Moving data in and
out of the TS7700 Virtualization Engine” on page 419.
Tip: The Tape Management System can be RMM or a product from another vendor.
Information is provided on the TS7700 family replacement procedures available with the new
hardware platform and the TS7700 Virtualization Engine R2.0 Licensed Internal Code level.
Upgrading tape drive models in an existing TS7740 Virtualization Engine, to get more capacity
from the existing media or to provide encryption support, is further addressed in this section.
It details the hardware upgrade procedure and the cartridge migration aspects.
Replacing the hardware is only one of the items considered here. Consider the following
aspects:
Planning costs and benefits
Determine the best migration method for you. The various methods have temporary
infrastructure implications. You can maintain the existing channel path configuration or
duplicate the channel paths during the migration to help minimize system outage during
the migration. Additional channel paths means the need for additional FICON director
ports, additional channel ports on the host system, the need for more addresses in the unit
control block (UCB), and more FICON cables.
Planning software
Verify any software prerequisites necessary for the migration; prepare a new dynamic
HCD generation and definitions, and new DFSMS definitions (SMS, TCDB, RMM); and
determine what else is necessary, depending on the scenario considered.
Planning environment
One consideration is whether to use additional channel paths for disaster recovery or to
minimize the system outages. Then, new cables can be laid out in the environment.
Consider new matrix configurations for the FICON Directors you might plan to update or
reconfigure.
Planning activities
Plan to perform the hardware migration activities at a time when you minimize the impact
on the production activities. Consider that the hardware migrations from B10/B20 VTS to
TS7740 Virtualization Engine are disruptive, and therefore, with the assistance of the IBM
System Service Representative (SSR), plan all the aspects of the activities for minimizing
a system outage.
Important: After you start writing data into the TS7740 Virtualization Engine after the
migration, the TS7740 Virtualization Engine starts normal background operations, such as
reclamation and Secure Data Erase. As soon as changes are made to the logical or
physical volumes, you cannot fall back to B10/B20 VTS.
Restriction: IBM 3494 Tape Library is no longer supported with TS7740 R2.0.
The TS7740 Virtualization Engine is attached to a separate library than the VTS that is
being replaced. All physical tapes that are associated with the VTS partition are also
moved to the new library.
All the migration procedures are disruptive, including those that involve existing PTP VTSs.
Plan for a downtime of 8 - 12.5 hours, depending upon the migration type. During this time,
you do not have access to the data in the VTS that is being migrated. Plan a systems outage
to perform the necessary steps because the outboard migration is not concurrent with the
system production activities.
If the VTS shares an IBM 3953 or 3494 Library Manager with another subsystem, or with
native drives, those subsystems are also affected because the Library Manager must be
“re-taught.” Those other subsystems are not affected for the entire outage time, but only for
the amount of time needed to vary the Library Manager offline, perform the re-teaching, and
return it again to the online status.
If you have more than one VTS installed, reduce the workload and, if possible, shift new
workloads away from the VTS that will be migrated next, a day before the migration. This
approach gives the VTS time to copy all of the logical volumes in its tape volume cache to
physical tape and might reduce the outage time needed for the migration.
To minimize the impact of the migration, consider temporarily redirecting your tape workload
to another library and VTS, TS7700 Virtualization Engine, or native drives. This can easily be
done by changing the ACS routines and preserves the ability to write tape data. However,
logical volumes residing in the VTS being migrated remain inaccessible until the end of
the procedure.
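A minimal storage group ACS routine sketch of such a temporary redirection follows; the storage class name SCTAPE and the storage group names SGTS7700 and SGVTSOLD are assumptions for illustration only:
PROC STORGRP
  /* Temporarily route tape allocations away from the VTS being migrated */
  IF &STORCLAS = 'SCTAPE' THEN
    DO
      SET &STORGRP = 'SGTS7700'    /* instead of the usual SGVTSOLD */
      EXIT
    END
END
After the migration is complete, the original storage group assignment can be restored and the ACS routines reactivated.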
With TS7740 Virtualization Engine R2.0, an existing 3494-B10 or B20 attached to a 3494
Tape Library can only be migrated to a TS7740 Virtualization Engine attached to a new
TS3500 Tape library.
Restriction: TS7740 Virtualization Engine R2.0 does not support IBM 3494 Tape Library
attachment any longer. IBM TS3500 Tape Library is the only supported library beginning
with TS7740 Virtualization Engine R2.0
If you are planning to migrate a B20 or B10 VTS with 3590 Tape Drives to a TS7700
Virtualization Engine, consider copying the VTS data under host control into a TS7700
configuration as described in 7.7, “Moving data in and out of the TS7700 Virtualization
Engine” on page 419.
These are the currently tested levels for the Data Migration procedures. Work with your IBM SSR
to check and, if necessary, upgrade the microcode level before the Data Migration.
Also, after it is engaged, the IBM Data Migration Services for Tape Systems team will assist
you in the planning and preparation steps, including checking the required level of Licensed
Internal Code.
Using the same library name for the Composite Library might save you from having to change
the library name associated with all of the logical volumes in the TCDB and the tape
management system catalog. It might also eliminate the need to change the storage group
definitions. Remember, though, that if you choose to keep the same Composite Library name,
only one of the two configurations can be active at a time; names must be unique across the
entire system.
Assuming that you are moving an existing stand-alone 3494 VTS into a new stand-alone
TS7740, this is your Cluster 0. If you are moving an existing 3494 PTP VTS into a new
two-cluster grid, you have Cluster 0 and Cluster 1 in the new two-cluster TS7740 grid,
corresponding to both members of your old 3494 PTP VTS.
You might need additional FICON director ports unless the B10/B20 ports can be reused.
Because the TS7740 Virtualization Engine supports only FICON attachment, reuse of
existing paths is not possible for ESCON-attached B10/B20 VTSs.
See 5.2, “Hardware configuration definition” on page 289 for details about this subject.
The whole source VTS tape set (physical cartridges) is moved into the target TS7740’s tape
library by the data migration procedure. The logical volumes residing on those cartridges can
be accessed by the TS7740 normally. VTS-formatted tapes and TS7740-formatted tapes can
coexist in the same or separate pools.
Also, the reclamation process can expedite the conversion from the VTS format to the TS7740
format. Whenever a VTS-formatted tape is eligible for reclamation, the reclamation process
identifies the VTS format on it and takes the appropriate measures to accomplish the format
conversion. Unlike a standard reclamation (which is essentially a tape-to-tape copy), the
logical volumes are recalled into the tape volume cache, converted, and written to a new tape
in the new format. After the volumes have been migrated, they are removed from the cache.
The reclamation target pool is determined by the storage group assigned to those logical
volumes. Other than being ready for the export functions, there is no urgency to convert the
data format on those VTS cartridges because the data on them is completely accessible in
the old format.
Figure 7-1 illustrates this concept.
Consider adding extra physical scratch volumes before the migration to be sure that there will
be enough scratch media on the new TS7740 Virtualization Engine configuration.
Also, consider disabling reclamation for the hours following the data migration. The data
migration outage is likely to be followed by a heavy workload period, and disabling
reclamation leaves all available tape drives free to service recalls and premigration during
this period.
The backup process does the following tasks (see Figure 7-2 on page 392):
Migrates all data from B10/B20 VTS cache to tape
Reconciles the B10/B20 VTS database
This section is not meant as a literal migration instruction guide. It provides a high-level
description of the process and the related concepts.
Tip: The outboard migration is now offered as an IBM Data Migration Services for Tape
Systems. Planning for the migration is within the scope of the Migration services.
One or more stand-alone VTS B10 or B20 | One TS7740 Virtualization Engine stand-alone cluster
One Peer-to-Peer VTS B10 or B20 | One TS7740 Virtualization Engine two-cluster grid
Restriction: TS7740 attachment to the IBM 3494 Tape Library is not supported with R2.0.
Where the TS7740 Virtualization Engine connects to the same tape library as the VTS
(see “Existing tape library” on page 393)
Where the TS7740 Virtualization Engine connects to a separate library (see “New target
tape library” on page 397)
Check the following items before the migration and correct them if necessary:
1. Logical volume ranges in a TS7740 Virtualization Engine must be unique. Verify that the
logical volume ranges for the two VTSs are unique before you start the merge procedure.
2. Validate that no conflicting information exists between the TCDB, RMM, and the Library
Manager. This step is important because in this migration scenario, the library names of
volumes belonging to one of the two existing library names must be changed. Only one of
the existing library names can continue to exist after you merge two VTSs to one TS7740
Virtualization Engine.
Consideration: This migration does not change the tape drive configuration. Therefore, in
a 3494 Tape Library, the VTS to be replaced must already have 3592 Tape Drives installed,
and all data must have already been migrated to 3592 cartridges.
TS7740 attachment to the IBM 3494 Tape Library is not supported with R2.0.
Step completed by SSR: Your IBM Service Representative (SSR) will perform steps 1,
and 6 - 10 of the migration steps listed. They are included for informational purposes only.
1. IBM SSR: Start the installation of the TS7740 Virtualization Engine hardware a few days
before the outage window.
2. Validate whether any conflicting information exists in the TCDB, the tape management
system catalog, and the Library Manager database. This step is needed only if you plan to
change the library name, as described in step 4. Appendix F, “Sample JCL” on page 881
offers a sample job for using the RMM utility EDGUTIL for verification.
3. Stop all host activity:
a. If another VTS or TS7740 Virtualization Engine system is available to the host during
the migration, you can change the ACS routines to direct the allocations to that other
system.
b. Complete or cancel all host jobs for the VTS.
c. Vary offline all device addresses associated with the VTS for all attached hosts.
d. Vary the VTS to be migrated offline to all hosts.
e. Vary the channel paths to the VTS offline.
4. Prepare the software changes. All of the following steps can be done concurrently with the
hardware, starting with step 6 on page 396.
a. Define the library names to SMS.
When you define the stand-alone cluster TS7740 Virtualization Engine, you must
define a Composite Library and one Distributed Library in SMS. Reuse the existing
library name of the VTS as the Composite Library name, as summarized in the top half
of Figure 7-5 on page 395. Reusing the library name has the advantage that you do not
have to update the library name in the volume records in the TCDB and the TMS catalog.
(Figure 7-5: in the top half, both the VTS and the TS7700 use Library Name MyLib, with Library-IDs 12345 and 98765; in the bottom half, the VTS keeps Library Name MyLib with Library-ID 12345 while the TS7700 uses Library Name TS7700 with Library-ID 98765.)
If you do not keep the existing library names, as shown in the bottom half of
Figure 7-5, you must change the library names in all volume entries in the TCDB and
in your tape management system catalog to the new names before you can use the
volumes (a minimal IDCAMS sketch of this update follows these steps). Delete the old
library definition MyLib and define the two new libraries TS7740 and Distlib in SMS
through ISMF. Note that it can take hours to complete this update. Remember to enter
the new Library-IDs in the definitions. If a new Composite Library name is used, update
the existing Storage Group definitions to relate to that new library name. When you
delete a library in the SCDS through ISMF, the change must be reflected in the TCDB.
b. Make changes to HCD channel definitions.
Plan appropriately if you want to reuse existing host channels and FICON adapter
connections, plan to define new channels, or even switch from ESCON to FICON as part
of this process. If you define the devices as offline in HCD and use any product for
device sharing, you must also define the new addresses to that product.
c. Make changes to LCU definitions.
Changes are needed because you change the Library-ID. If the VTS is a model with 64
or 128 logical units, you must also define more LCUs to enable the use of 256 logical
units supported on a stand-alone cluster.
d. Set missing-interrupt handler (MIH) values.
If you are defining specific address ranges in the IECIOSxx member in
SYS1.PARMLIB, make sure that MIH values for the new devices are set. Proper values
are described in 5.2.6, “Set values for the Missing Interrupt Handler” on page 304.
e. Complete the software definitions.
If you have redirected workload to other systems during the migration, you might need
to change the ACS routines back. If you are planning to use new SMS constructs and
implement functions provided through outboard policy management, define them on
the host now.
Tip: If the new SCDS is activated ahead of the new library being ready, the host
cannot communicate with the new library yet. Message CBR3006I is issued:
CBR3006I Library library-name with Library ID library-ID unknown in I/O
configuration.
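Referring back to step 4a: if the library name is not reused, each tape volume entry in the TCDB must be altered to point to the new Composite Library name. A minimal IDCAMS sketch follows; the volume serial A00001 and the library name TS7740 are hypothetical:
//ALTTCDB  JOB (ACCT),'ALTER TCDB',CLASS=A,MSGCLASS=X
//ALTER    EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  ALTER VA00001 VOLUMEENTRY LIBRARYNAME(TS7740)
/*
One ALTER statement is needed per volume, so in practice these statements are generated by a tool or procedure, and the tape management system catalog must be updated correspondingly.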
Figure 7-6 Stand-alone VTS to TS7740 Virtualization Engine in another tape library
In one option, the TS7740 Virtualization Engine will be connected to the same TS3500 Tape
Library where the VTS is attached but in a separate logical library. For this reason, it is also
considered within this section.
If the migration scenario includes the following characteristics, no physical stacked cartridges
that belong to the B10/B20 VTS are moved:
The migration occurs within the same physical TS3500 Tape Library
The B10/B20 VTS is attached to one logical library with its own Library Manager
The TS7740 Virtualization Engine is attached to another logical library
Instead, the logical associations of the physical volumes from the VTS to the TS7740
Virtualization Engine will be transferred. You must reassign the physical volume serial
numbers that are dedicated to the B10/B20 VTS logical library to the TS7740 Virtualization
Engine logical library by using the TS3500 Tape Library Specialist. For details, see IBM
TS3500 Tape Library with System z Attachment A Practical Guide to Enterprise Tape Drives
and TS3500 Tape Automation, SG24-6789.
The time that is required for the physical move of the cartridges from one library to another
might vary depending on the number of cartridges and the distance they are transported. Be
sure to plan extra outage time for this activity.
If you are installing new drives together with the TS7740 Virtualization Engine in the new
library, the TS7740 Virtualization Engine can be connected and tested in advance to minimize
the system outage window needed for the migration. If you are reusing the existing
VTS-attached drives in the new library, plan for an extra outage to remove the drives from the
existing library and install them in the new library. This scenario assumes that you are not
moving the existing VTS drives.
Clarification: The IBM Service Representative (SSR) will perform steps 1, 6, 7, 8, 10, 11,
13, and 14 as part of the installation. They are listed for informational purposes only.
1. IBM SSR: Back up the Library Manager Administrative Data of the library constructs from
the source library. You can perform this activity before the outage window.
2. Determine whether there is any conflicting information in the TCDB, the tape management
system catalog, and the Library Manager database. This step is needed only if you plan to
change the library name, as described in step 4a on page 394.
3. Stop all host activity.
a. If another VTS or TS7740 Virtualization Engine system is available to the host during
the migration, you can change the ACS constructs to direct allocation to that system.
b. Complete or cancel all host jobs for the VTS.
c. Vary off all device addresses associated with the library for all attached hosts.
d. Vary the existing VTS offline to all hosts.
e. Vary the existing channels offline.
4. Prepare the software changes. All of the following steps can be done concurrently with the
hardware changes that follow:
a. Define the library names to SMS.
When you define the stand-alone cluster TS7740 Virtualization Engine, you must
define a Composite Library and one Distributed Library in SMS. Generally, reuse the
existing library name of the VTS as the Composite Library name, as shown in
Figure 7-5 on page 395.
b. Make changes to HCD channel definitions.
If you plan to reuse existing host channels and FICON adapter connections, or plan to
define new channels, or even switch from ESCON to FICON with this process,
complete these planning considerations at this time. You could define new FICON
channels to ease the process, for example.
c. Make changes to HCD LCU definitions.
Changes are needed because you change the Library-ID. If the VTS is a model with 64
or 128 logical units, you also need to define more LCUs, to enable the use of 256
logical units supported on a stand-alone cluster. See Chapter 5, “Software
implementation” on page 283 for more information.
If you define the devices as offline in HCD and use any product for device sharing, you
need to provide the new addresses to that product.
d. Set the missing interrupt handler (MIH) values.
If you are defining specific address ranges in the IECIOSxx member of
SYS1.PARMLIB, make sure that MIH values for the new devices are set (a minimal
sketch follows these steps). Proper values are described in 5.2.6, “Set values for the
Missing Interrupt Handler” on page 304.
e. Complete the software definitions.
If you have redirected workload to other systems during the migration, you might need
to change the ACS routines back. If you are planning to use new SMS constructs and
implement functions provided through outboard policy management, define them on
the host now.
Tip: If the new SCDS is activated before the new library is ready, the host cannot
communicate with the new library yet. Expect message CBR3006I to be issued:
CBR3006I Library library-name with Library ID library-ID unknown in I/O
configuration.
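For step d, a minimal IECIOSxx sketch is shown below. The device range 1A00-1AFF is hypothetical, and the 45-minute value is only an assumption to illustrate the format; use the values recommended in 5.2.6, “Set values for the Missing Interrupt Handler” on page 304:
MIH TIME=45:00,DEV=(1A00-1AFF)
SETIOS MIH,TIME=45:00,DEV=(1A00-1AFF)
The first line belongs in the IECIOSxx member of SYS1.PARMLIB; the SETIOS command applies the same value dynamically without an IPL.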
Tip: TS7740 attachment to the IBM 3494 Tape Library is not supported with R2.0.
The TS7740 Virtualization Engine can be attached to another TS3500 Tape Library (see
“New target tape library” on page 403).
If two stand-alone VTS systems are migrated to a stand-alone cluster TS7740 Virtualization
Engine, the assumption is that the physical volumes for one VTS system will remain in the
current tape library to be used by the TS7740 Virtualization Engine, and that physical
volumes for the other VTS system will reside in a separate tape library. The physical volumes
belonging to the other VTS will be moved to the library where the TS7740 Virtualization
Engine is attached. If both VTS systems share the same Library Manager, physical transfer of
physical volumes does not occur. Instead, all the associations of the partitions to which the
physical volumes belong are transferred.
Figure 7-7 Two stand-alone VTS to a stand-alone cluster TS7740 Virtualization Engine in the same
tape library
Before the migration, check the following items and correct them if necessary:
1. Logical volume ranges in a TS7740 Virtualization Engine must be unique. Verify that the
logical volume ranges for the two VTSs are unique before you start the merge procedure.
2. Validate that no conflicting information exists between the TCDB, RMM, and the Library
Manager. This step is important because in this migration scenario, the library names of
volumes belonging to one of the two existing library names need to be changed. Only one
of the existing library names can continue to exist after you merge two VTSs to one
TS7740 Virtualization Engine.
Clarification: Your IBM Service Representative (SSR) performs steps 1, and 7 - 13 as part
of the installation. They are listed for informational purposes only.
1. IBM SSR: Start the installation of the TS7740 Virtualization Engine hardware a few days
before the outage window. Installation cannot be completed if the TS7740 Virtualization
Engine does not have at least four physical tape drives. In advance, consider whether or
not the TS7740 Virtualization Engine will use the same Logical Library definition that is in
use by the 3953 Library Manager (and VTSs).
2. Verify that the logical volume ranges for the two VTSs are unique before you start the
merge procedure. Logical volume ranges in a TS7740 Virtualization Engine must be
unique.
3. Validate that no conflicting information exists between the TCDB, RMM, and the Library
Manager. This step is important because in this migration scenario, the library names of
volumes belonging to one of the two existing library names need to be changed. Only one
of the existing library names can continue to exist after you merge two VTSs to one
TS7740 Virtualization Engine.
4. Stop all host activity:
a. If there is another VTS or TS7740 Virtualization Engine system available to the host
during the migration, you can change the ACS constructs to direct allocation to that
system.
b. Complete or cancel all host jobs for the two VTSs.
c. Vary off all device addresses associated with the library for all attached hosts.
d. Vary the existing VTSs offline to all hosts.
e. Vary the existing channels offline.
5. Prepare the software changes. Make sure that you apply the changes to all connected
hosts. All of the following steps can be done concurrently with the hardware changes that
follow:
a. Changes to SMS
• When you define a stand-alone cluster, you need to define a Composite Library and
one Distributed Library. Reuse the existing library name of one of the two existing
VTSs as Composite Library name for the new TS7740 Virtualization Engine. For the
VTS that is removed, you need to change all volume entries in the TCDB and in
RMM to the remaining Composite Library name before you can use the volumes. If
the Composite Library name is not reused, you must change the information for all
volumes from both VTSs.
• Delete both of the old libraries and define the new library in SMS through ISMF.
Remember to write the new Library-IDs in the definitions. You must also remember
to relate all existing Storage Group definitions from the old library names to the new
library name.
Remember: If you are reusing an existing library name, you only need to update its
Library-ID in the library definition and do not need to delete it.
Tip: If the new SCDS is activated before the new library is ready, the host cannot
communicate with the new library yet. Expect message CBR3006I to be issued:
CBR3006I Library library-name with Library ID library-ID unknown in I/O
configuration.
Clarification: This procedure also applies to merging two or more VTSs into a TS7740
Virtualization Engine stand-alone cluster. The limiting factors are the number of virtual
devices and volumes that can be managed by the TS7740 Virtualization Engine. The
TS7740 Virtualization Engine can reuse the library name of only one of the VTSs, so
you must change the volume entries in the TCDB for the other VTSs. It is also possible to
merge one VTS at a time.
Figure 7-8 Two stand-alone VTSs to a stand-alone cluster with different tape libraries: cartridge move
The time required for moving tapes from one old library to the new one might vary, depending
on the number of cartridges to be moved and the distance between the sites.
Install the new TS7740 Virtualization Engine using new tape drives in the new Tape Library
before the migration. This can shorten the outage period for the migration.
Clarification: Your IBM Service Representative (SSR) performs steps 1, 6, 7, 8, 10, and
12 as part of the installation. They are listed for informational purposes only.
1. IBM SSR: Back up the Library Manager administrative data, including the library
constructs from the source library. You can perform this activity before the outage window.
2. Determine whether any conflicting information exists between the TCDB, RMM, and the
Library Manager. This step is important because in this migration scenario, volumes
belonging to one of the two existing library names must be handled. Only one of the
existing library names can continue to exist after you merge two VTSs into one TS7740
Virtualization Engine.
3. Stop host activity:
a. If another VTS or TS7740 Virtualization Engine system is available to the host during
the migration, you can change the ACS constructs to direct allocation to that system.
b. Complete or cancel all host jobs for the two VTSs.
c. Vary off all device addresses associated with the library for all attached hosts.
d. Vary the existing VTSs offline to all hosts.
e. Vary the existing channels offline.
4. Prepare the software changes. All the following steps could be done concurrently with all
the hardware changes that follow:
a. Changes to SMS
When you define a stand-alone cluster, you must define a Composite Library and one
Distributed Library. Reuse the existing library name of one of the two existing VTSs as
the Composite Library name for the new TS7740 Virtualization Engine. For the VTS
that is removed, you must change all volume entries in the TCDB and in RMM to the
remaining Composite Library name before you can use the volumes. If the Composite
Library name is not reused, you must change information for all volumes from both
VTSs.
Delete both old libraries and define the new library in SMS through ISMF. Remember to
write the new Library-IDs in the definitions. You must relate all existing Storage Group
definitions with the old library name (not used anymore) to the new (or remaining)
library name. When you delete a library in the SCDS through ISMF, the change needs
to be reflected in the TCDB.
Tip: If you are reusing one of the existing library names, you only need to update its
Library-ID in the library definition and do not need to delete it.
Tip: If the new SCDS is activated before the new library is ready, the host cannot
communicate with the new library yet. Expect message CBR3006I to be issued:
CBR3006I Library library-name with Library ID library-ID unknown in I/O
configuration.
Figure 7-9 illustrates the data migration process from one VTS to the corresponding, empty
TS7740 Virtualization Engine.
Figure 7-9 Data migration PTP VTS to a TS7740 Virtualization Engine two-cluster grid
Important: Both installation teams must be on site at the same time and be synchronized
with regard to the installation instruction steps.
Figure 7-10 Migration VTS PTP to TS7740 Virtualization Engine two-cluster grid with the same tape
libraries
Clarification: The IBM Service Representative (SSR) performs steps 1 and 6 - 13 as part
of the installation. They are listed for informational purposes only.
1. IBM SSR: Prepare the TS7740 Virtualization Engines a few days before the outage
window.
2. Determine whether conflicting information exists between the TCDB, RMM, and the
Library Manager. This step is required only if you plan to change the library name, as
described in Step 4 on page 394.
3. Stop host activity:
a. If another VTS or TS7740 Virtualization Engine system is available to the host during
the migration, you can change the ACS constructs to direct allocation to that system.
b. Complete or cancel all host jobs for the PTP VTS.
c. Vary off all device addresses associated with the library for all attached hosts.
d. Vary the existing VTSs offline to all hosts.
e. Vary the existing channels offline.
4. Prepare the software changes. The following steps could be done concurrently with all the
hardware changes that follow:
a. Changes to SMS
When you define a two-cluster grid, you need to define a Composite Library and two
Distributed Libraries, which is the same as for PTP VTS. Reuse the existing library
name as the Composite Library name so that you only need to write the new
Library-IDs in the existing library definitions. If you do not keep the existing library
name, you must delete the old names and add the new names, and then relate all
existing Storage Group definitions to that new name. After that task is done, you must
also update the volume entries in the TCDB to reflect the new library name.
Tip: If the new SCDS is activated before the new library is ready, the host cannot
communicate with the new library yet. Expect message CBR3006I to be issued:
CBR3006I Library library-name with Library ID library-ID unknown in I/O
configuration.
6. IBM SSR: Place the non-master PTP VTS in service preparation. The team at the other
site’s master VTS does not need to perform this function, but must wait for the non-master
VTS to be in service. When the non-master VTS is in service, both teams start Step 7 in
synchronization with each other.
7. IBM SSR: Drain the PTP VTSs tape volume cache. This is done by both teams on the PTP
sites.
8. IBM SSR: Extract the PTP VTSs database. See “Backing up VTS data” on page 391 for
details. This step is done by both teams on the PTP sites.
9. IBM SSR: Disconnect the VTSs from their Tape Library, either 3953/TS3500 Tape Library
or 3494 Tape Library, and TS1120 or 3592 Tape Drives. Both teams perform this step.
Tip: TS7740 attachment to the IBM 3494 Tape Library is not supported with R2.0.
The time required for physically moving cartridges from one library to another might vary,
depending on the number of cartridges and the distance that they are transported.
Figure 7-11 Migration of VTS PTP to TS7740 Virtualization Engine two-cluster grid with separate tape
libraries
Clarification: The IBM SSR performs steps 2, 6 - 9, 1, 13, and 14 as part of the
installation. They are listed for informational purposes only.
1. Back up the PTP VTS Library Manager administrative data, including the library constructs,
from the PTP VTS source libraries. You can perform this activity before the outage window.
2. Determine if any conflicting information exists between the TCDB, RMM, and Library
Manager. This step is needed only if you plan to change the library name, as described in
Step 4 on page 394.
3. Stop host activity:
a. If another VTS or TS7740 Virtualization Engine system is available to the host during
the migration, you can change the ACS constructs to direct allocation to that system.
b. Complete or cancel all host jobs for the PTP VTS.
c. Vary off all device addresses associated with the library for all attached hosts.
d. Vary the existing VTSs offline to all hosts.
e. Vary the existing channels offline.
4. Prepare the software changes. The following steps must be performed simultaneously
with the hardware changes listed in step 5 on page 411:
a. Change SMS
When you define a two-cluster grid, you must define a Composite Library and two
Distributed Libraries, which is the same as for PTP VTS. Reuse the existing library
name as the Composite Library name so you only have to write the new Library-IDs in
the existing library definitions.
If you do not keep the existing library names, you must delete the old names, add the
new names, and then relate all existing Storage Group definitions to those new names.
Tip: If the new SCDS is activated before the new library is ready, the host cannot
communicate with the new library yet. Expect message CBR3006I to be issued:
CBR3006I Library library-name with Library ID library-ID unknown in I/O
configuration.
10. Restore the PTP VTSs database into the TS7740 Virtualization Engines. See “Restoring
VTS data” on page 392 for details. Both installation teams perform this procedure on the
PTP sites.
11. Insert the physical cartridges from the source tape libraries into the new target libraries.
12. Inventory the new libraries and determine whether volumes are correctly assigned to the
proper TS7740 Virtualization Engine logical library. Cartridge Assignment Policies (CAPs)
should have been defined in advance. If the definitions are correct, you will see the volumes
assigned to the correct TS7740 Virtualization Engine logical partition. The TS7740
Virtualization Engine requests an inventory upload and sorts it into its own database. This
step must be performed at both sites.
13. Vary the TS7740 Virtualization Engines online.
14. Now you are ready to test the new two-cluster grid:
a. Vary the defined channels online.
b. Vary the logical devices online.
c. Vary the Distributed and Composite Libraries online to the hosts.
d. Run test jobs to read and write from the two-cluster grid TS7740 Virtualization Engine.
e. Further validate the cluster by issuing the Library Request Host Console command.
See 8.5.3, “Host Console Request function” on page 589 for command usage.
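As an illustration of step 14, the following console commands show the general idea. The
device range, the Distributed and Composite Library names, and the Library Request keywords
are examples only; see 8.5.3, “Host Console Request function” on page 589 for the supported
keywords:
V 1000-103F,ONLINE
VARY SMS,LIBRARY(DISTLIB1),ONLINE
VARY SMS,LIBRARY(DISTLIB2),ONLINE
VARY SMS,LIBRARY(COMPLIB1),ONLINE
LI REQ,COMPLIB1,STATUS,GRID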
This section highlights two new procedures that have similarities with the VTS to TS7740 data
migration described previously in this chapter.
With the new capabilities introduced by the hardware platform delivered with the R2.0 level,
you might want to upgrade your existing TS7700 Virtualization Engine equipped with the
previous 3957-V06/VEA server to the newer 3957-V07/VEB server, which is based on IBM
POWER7 technology.
You might need another type of upgrade, such as replacing the existing TS7740 with a new
TS7740 frame from manufacturing. Such a replacement might be required for a number of
reasons, for example, a larger or faster cache controller or a new server engine is wanted, or
even a non-technical reason such as an expiring lease. This process is now supported
by IBM Data Migration Services. Contact your IBM representative for more details.
In this procedure, it is assumed that the original library remains the same and the logical
library is unchanged. The source V06/VEA must be at Licensed Internal Code R1.7 or higher.
Stand-alone clusters and grid members are supported by this replacement procedure. If the
cluster is part of a grid, the procedure supports replacing only one cluster at a time. This is
meant to keep the risks at a minimum while the grid operates normally. Changing the cluster ID
between the old source server and the new replacement is also not allowed by the procedure.
Keep in mind that the new 3957-V07 server runs at the R2.0 Licensed Internal Code (LIC) level.
As stated before, R2.0 does not support 3494 Tape Library attachment.
Plan ahead to minimize host jobs to the cluster undergoing the engine replacement. Minimize
write jobs as well to speed up the service preparation and cache migration times. Removing
volumes that are not often used from the cache (migrating them to tape) ahead of the procedure
might help reduce the cache migration time further.
The new target 3957-V07 server must be configured with the appropriate Feature Codes
corresponding to those present in the old 3957-V06 server. Feature Codes are not
transferable because they are valid only for a specific machine serial number.
Tip: Keep the old 3957-V06/VEA safely stored until the entire procedure has been
completed successfully and the new 3957-V07/VEB is online and fully operational. If
a problem occurs, the old 3957-V06/VEA can be used as a backout plan.
In this procedure, it is assumed that the original library remains the same and the logical
library is unchanged. The source V06 must be at Licensed Internal Code R1.7 or later.
It is assumed that the target TS7740 is completely clean and has TS7740
Virtualization Engine Licensed Internal Code R2.0 installed. If you are planning to use a
previously used TS7740 in this procedure, you must order FC4017 (Cluster Cleanup) for that
TS7740 before the replacement. The target TS7740 must also be upgraded to the TS7740
Virtualization Engine R2.0 level of the Licensed Internal Code.
The Frame Replacement procedure supports both stand-alone clusters and clusters that are
part of a grid. However, if the cluster is part of a grid, this procedure supports replacing only
one cluster at a time. This is designed to keep risks to a minimum and allow the grid to
continue operating normally during the change.
Cache cleanup for the old TS7740 frame is a separate option. Work with your IBM
representative to order the Cluster Cleanup feature code (FC 4017) if needed.
Cluster ID change for this cluster and IP addresses changes in other members of the grid are
not covered by this procedure.
You might plan to pre-stage volumes back into the new cache using existing host tools like
BVIR, VESYNC, or PRESTAGE after the replacement is complete, if desired.
Keep in mind that the new 3957-V07 server runs at the R2.0 Licensed Internal Code (LIC)
level. As stated before, R2.0 does not support 3494 Tape Library attachment.
Plan ahead to minimize host jobs to the cluster undergoing the replacement operation.
Minimize write jobs as well to speed up the service preparation and cache migration times.
If possible, migrate volume data that is not used often to tape before the procedure. This
reduces cache migration times further.
Your IBM System Service Representative (SSR) will prepare the new TS7740 frame
installation ahead of the replacement date, positioning it near the final location. Work with
your IBM SSR to determine where to place it.
Important: Do not tamper with the old TS7740 until the entire procedure has been
completed successfully, and new TS7740 is fully operational. If a problem occurs, the
old TS7740 Virtualization Engine frame can be used as a backout plan.
Restriction: Be aware that drive model changes can only be made in upward direction
(from an older to a newer model). Fall-back to the older models is not supported.
Clarification: This section gives you high-level information about the subject. Do
not use it as a literal step-by-step guide. Work with your IBM SSR when preparing for an
upgrade.
The only change is the number of available drives because new drives are functionally
equivalent to the old ones.
Remember: 3592-E05 and 3592-E06 are mutually exclusive. They cannot be intermixed in
the same TS7740 (as in the previous example with J1A and E05).
Your TS7740 will emerge from the activity fully equipped with TS1120-E05 or TS1130-E06 for
all drives. There is no intermix in this case.
1. Stop all host activity in this cluster. If this cluster is part of a grid, move your host activity to
the other cluster (vary the device range for the other cluster online, and then vary the
device range on this cluster offline).
2. The IBM SSR places this cluster in service mode, and then offline.
3. The IBM SSR unassigns all 3592-J1A tape drives that belong to this TS7740 cluster
(the drives must be unloaded for this operation) in the Drive Assignment window of the
TS3500 Web Interface. See Figure 4-9 on page 201 for reference.
4. The IBM SSR replaces the 3592-J1A drives with the new ones. A drive check should be
performed at that time using the TS3500 Operator window.
5. The IBM SSR verifies that the drives are in the proper operation mode: TS1120-E05 drives
should be set to NATIVE E05 mode and encryption-enabled. TS1130 drives should be
encryption-enabled. See Figure 4-13 on page 204 for the procedure.
6. The IBM SSR assigns all new drives back to this TS7740 logical library in the tape library
using the TS3500 Web Interface. The SSR makes sure that four drives are defined as
control paths and that the new drives are in encryption-enabled E06 or native E05 mode.
7. If the fiber switches are being replaced by the new 8 Gb switches, the IBM SSR should do
so at this point.
8. The IBM SSR cables the new tape drives using the same cabling already in place.
9. The IBM SSR checks that the connections for the new drives are operational, using switch
and drive lights, the TS3500 Web Interface, or both.
10. The IBM SSR starts a reconfiguration procedure for the tape drives within the TS7740.
Remember: In the previous scenario, all cartridges in the filling state are closed, and the
scratch tapes that are open for writing new data are automatically reinitialized to the new
drive format.
You can use the same description to understand what happens when changing the tape
emulation mode in the TS7740 from 3592-J1A emulation to TS1120-E05 Native mode. All
steps apply except those related to physically changing drives or assignments within the
TS3500. The drives are the same; the only change is in the drive emulation mode. Drive
emulation is changed in the TS3500 Web Interface (see Figure 4-12 on page 203 for a
reference) and through a specific command in the TS7740, which is run by the IBM SSR.
Media format is handled as described above.
If you want to change your 3592 JA Data Cartridges to 3592 JB Extended Data Cartridges
for a capacity upgrade, perform the following steps:
1. Create a new range of physical volumes in the TS7740 for the new JB media. See
Figure 4-23 on page 218 for guidance.
2. Create a new Cartridge Assignment Policy (CAP) for the new JB range and assign it to the
proper TS7740 Logical Partition. See “Defining Cartridge Assignment Policies” on
page 205 for reference.
3. Insert the new JB Cartridges in the TS3500 Tape Library. See 4.2.6, “Inserting TS7740
Virtualization Engine physical volumes” on page 206.
4. Assign an existing pool or pools of physical volumes in the TS7740 to the new media type.
See 4.3.4, “Defining physical volume pools (TS7740 Virtualization Engine)” on page 219.
5. Modify the Storage Group in the TS7740 Constructs pointing to the new JB cartridge
pool(s). See 4.3.8, “Defining TS7700 constructs” on page 238.
6. Modify the Reclaim settings in the JA media pool using a JB pool(s) as target pool. See
4.3.4, “Defining physical volume pools (TS7740 Virtualization Engine)” on page 219 for
more details.
These settings cause the TS7740 to start using the new JB media for stacking newly created
logical volumes. Existing JA physical volumes are reclaimed onto the new JB media, thus
becoming empty.
You might prefer to keep your pool definitions unchanged throughout the media change
process. In this case, empty the common scratch pool of the previous media type and fill it up
with the new cartridges. Also, in the Pool Properties Table, set the new media type as the
first media for the pools.
You can temporarily change the pool setup from Borrow, Return to Borrow, Keep during the
transition, while both media types coexist in the Tape Library. See the topic under 4.3.4,
“Defining physical volume pools (TS7740 Virtualization Engine)” on page 219. In this way,
cartridges of the previous media type are not available for selection in the common
scratch pool. After the old-type cartridges are emptied, they can be ejected from the Tape
Library.
The pool(s) can be set back to Borrow, Return after old media-type cartridges have been
completely removed from the TS7740’s inventory or at your convenience.
Clarification: You might use the new Host Console Request RRCLSUN (ReCaLl SUNset)
to expedite the replacement of the older media with newer media. In this case, make sure
that the common scratch pool has the new media type available and the storage pools are
set to borrow from common scratch pool. Otherwise, logical volumes will be pre-migrated
again to the older media type.
This function invalidates the logical volume on the older physical volume just after recall,
regardless of whether the logical volume has been updated. As a result, any recalled volume
will be pre-migrated to another physical volume. The library request command is as follows:
LI REQ, lib_name,RRCLSUN ENABLE/DISABLE/STATUS
where:
Enable     Activates the force residency on recall function
Disable    Deactivates the force residency on recall function
Status     Displays the current setting
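For example, assuming a composite library named LIBGRID1 (a hypothetical name), the
function is enabled with the following command:
LI REQ,LIBGRID1,RRCLSUN ENABLE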
If you are changing existing drives to new drive models using the same media type, use the
Library Request command to accelerate the drive format conversion. This process allows you
to reclaim capacity from your existing media. In this scenario, you are not changing the
existing cartridges already in use. There are no changes needed regarding the existing
physical volume pools.
Examples of this type of configuration are native tape drives, VTS with 3590 Tape Drives, or
other vendor tape solutions. Although they can all be migrated to the TS7700 Virtualization
Engine, the process requires host involvement to copy the data into the TS7700 Virtualization
Engine.
Hints about how to move data out of the TS7700 Virtualization Engine are provided in 7.7.6,
“Moving data out of the TS7700 Virtualization Engine” on page 426. However, the TS7700
Virtualization Engine is a closed-storage method, so you must be careful about selecting data
to move into it. You do not want to store a large amount of data in the TS7700 Virtualization
Engine that will need to be moved back out.
You can select data based on data set name, by application, or by any other variable that you
can use in the ACS routines. You can also select data based on type, such as SMF data or
DASD DUMP data.
For data other than DFSMShsm and DFSMSdss, if you are using SMS tape, update the ACS
routines to include the data you want to move. You decide what you filter for and how you
write the ACS routines. You can also migrate based on the UNIT parameter in the JCL to
reflect the applicable unit for the TS7700 Virtualization Engine.
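As a sketch only, an ACS storage class routine fragment of this kind might look like the
following example. The filter patterns and the storage class name SCVTS are illustrative
assumptions, not recommended values:
PROC STORCLAS
  /* Direct selected tape data sets to the TS7700 Virtualization Engine */
  FILTLIST VTSDATA INCLUDE(SYS1.SMF.DUMP.**,**.DASD.DUMP.**)
  IF &DSN = &VTSDATA THEN
    SET &STORCLAS = 'SCVTS'
  ELSE
    SET &STORCLAS = ''
END
A corresponding storage group routine must then map the storage class to the storage group
that points to the TS7700 Virtualization Engine composite library.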
You can select data types that create large quantities of data, like SMF records or DASD
DUMPS, and you can also select data types that create many small data sets. By observing
how the TS7700 Virtualization Engine handles each type of data, you become familiar with
the TS7700 Virtualization Engine, its functions, and capabilities.
Be aware that certain applications have knowledge of the VOLSER where the data is stored.
There are special considerations for these applications. If you change the VOLSER that the
data is on, the application has no way of knowing where the data resides. For more
information about this topic, see 7.7.4, “Considerations for static VOLSERs” on page 425.
If you are using DFSMSrmm, you can easily acquire data from an RMM EXTRACT file, which is
normally created as part of the regular housekeeping. Then, using a REXX EXEC or
ICETOOL JCL program, you extract the information needed, such as data set name,
VOLSER, and file sequence of the input volumes.
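A minimal ICETOOL sketch of this kind follows. The extract data set name, the record-type
test, and the field positions are placeholders only; map them to the data set record layout of
the RMM extract file for your DFSMSrmm release, and remember that variable-length records
also carry a record descriptor word:
//RMMLIST  EXEC PGM=ICETOOL
//TOOLMSG  DD SYSOUT=*
//DFSMSG   DD SYSOUT=*
//EXTRACT  DD DISP=SHR,DSN=RMM.EXTRACT.FILE
//DSNLIST  DD SYSOUT=*
//TOOLIN   DD *
* Copy data set records, keeping data set name, VOLSER, and file sequence
  COPY FROM(EXTRACT) TO(DSNLIST) USING(SEL1)
/*
//SEL1CNTL DD *
* The record-type test and the output columns are placeholders
  INCLUDE COND=(1,1,CH,EQ,C'D')
  OUTREC FIELDS=(10,44,60,6,70,4)
/*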
When using SMS tape, the first step is to update the ACS routines to create all new data in
the TS7700 Virtualization Engine. With this minimal change, new tapes are created in the
TS7700 Virtualization Engine, so moving them again later is not necessary.
If you move DFSMShsm-owned data, use the DFSMShsm recycle process to move the data
to the TS7700 Virtualization Engine. Use a COPYDUMP job to move DFSMSdss data to the
TS7700 Virtualization Engine.
The utility to use depends on the data selected. In most cases, it is sequential data that can
be copied using the IEBGENER or DITTO/ESA utilities. If you have DFSORT or a similar
utility, ICEGENER and ICETOOL can probably give better performance.
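For example, a simple IEBGENER copy job might look like the following sketch. The data set
names, input VOLSER, and unit names are examples only; the output unit name (or the ACS
routines) must resolve to TS7700 Virtualization Engine virtual drives:
//COPYSEQ  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//* Input: the existing tape data set
//SYSUT1   DD DSN=PROD.SMF.WEEKLY,DISP=OLD,
//            UNIT=TAPE,VOL=SER=A00001,LABEL=(1,SL)
//* Output: a new data set on a TS7700 logical volume
//SYSUT2   DD DSN=PROD.SMF.WEEKLY.COPY,DISP=(NEW,CATLG),
//            UNIT=VTAPE,LABEL=(1,SL),DCB=*.SYSUT1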
You must use a specific utility when the input data is in a special format, for example,
DFSMSdss dump data. DFSMSdss uses a 64 KB blocksize and only the proper DSS utility,
such as COPYDUMP, can copy with that blocksize. Also, be careful when copying multifile
and multivolume chains. You might want to separate these files, because there is no penalty
when they are in a TS7700 Virtualization Engine.
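A COPYDUMP job of this kind might look like the following sketch; the data set names, the
input VOLSER, and the unit names are examples only:
//DSSCOPY  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//TAPEIN   DD DSN=PROD.DSS.DUMP1,DISP=OLD,
//            UNIT=TAPE,VOL=SER=D00001,LABEL=(1,SL)
//TAPEOUT  DD DSN=PROD.DSS.DUMP1.COPY,DISP=(NEW,CATLG),
//            UNIT=VTAPE,LABEL=(1,SL)
//SYSIN    DD *
  COPYDUMP INDDNAME(TAPEIN) OUTDDNAME(TAPEOUT)
/*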
In general, all other data (except data owned by an application, such as DFSMShsm) belongs
to batch and backup workloads. Use EXPDT and RETPD from the DFSMSrmm Extract file to
find what tapes have a distant expiration date, begin moving these tapes, and leave the short
time retention tapes to the last phase of data movement (they likely will be moved by the
everyday process).
To avoid this time-consuming process, use a tape copy tool, because such tools are designed
to make all the necessary changes in the tape management system.
Using this quick-method sequence, you can copy every kind of tape data, including GDGs,
without modifying the generation number.
In an RMM environment, you can use REXX CLIST and RMM commands, listing data from
the input volumes and then using the RMM REXX variables with the CD command to update
the output. Then call IDCAMS to update the ICF catalog. When the operation completes and
all errors have been corrected, use the RMM DELETEVOLUME command to release the
input volumes. See z/OS DFSMSrmm Guide and Reference, SC26-7404 for more information
about RMM commands and REXX variables. If you are using a tape management system
other than RMM, see the appropriate product functions to obtain the same results.
Migrating data inside the TS7700 Virtualization Engine can be made easier by using products
such as DFSMShsm or IBM Tivoli Storage Manager. If you are planning to put DFSMShsm or
IBM Tivoli Storage Manager data in the TS7700 Virtualization Engine, see the following
sections:
7.8, “Migration of DFSMShsm-managed data” on page 429
7.10, “IBM Tivoli Storage Manager” on page 438
With DFSMShsm, you can change the ARCCMDxx tape device definitions to an esoteric
name with TS7700 Virtualization Engine virtual drives (in a BTLS environment) or change
SMS ACS routines to direct DFSMShsm data in the TS7700 Virtualization Engine. The
DFSMShsm RECYCLE command can help speed the movement of the data.
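The following lines sketch what such a setup might look like. The esoteric unit name VTAPE
and the started task name DFHSM are examples only, and the values shown are not
recommendations:
SETSYS TAPEMIGRATION(ML2TAPE(TAPE(VTAPE)))
SETSYS BACKUP(TAPE(VTAPE))
SETSYS RECYCLEOUTPUT(MIGRATION(VTAPE))
SETSYS RECYCLEOUTPUT(BACKUP(VTAPE))
F DFHSM,RECYCLE ML2 EXECUTE PERCENTVALID(25)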
A similar process can be used with IBM Tivoli Storage Manager, changing the device class
definitions for the selected data to put in the TS7700 Virtualization Engine and then invoking
the space reclamation process.
If you are moving DB2 data into the TS7700 Virtualization Engine, be sure that, when copying
the data, the DB2 catalog is also updated with the new volume information. You can use the
DB2 MERGECOPY utility to speed up processing, using TS7700 Virtualization Engine virtual
volumes as output.
In general, DB2 Imagecopies and Archlog are not retained for a long time. After all new write
activity goes to the TS7740 Virtualization Engine, you can expect that this data is moved by
the everyday process.
The DFSMSrmm Tape Copy Tool cannot be used when you have a Tape Management
System other than DFSMSrmm. You must choose another Tape Copy Tool from Table 7-3.
Consider the following factors when you evaluate a tape copy product:
Interaction with your tape management system
Degree of automation of the process
Speed and efficiency of the copy operation
Flexibility in using the product for other functions, such as duplicate tape creation
Ease of use
Ability to create a pull list for any manual tape mounts
Ability to handle multivolume data sets
Ability to handle volume size changes, whether from small to large, or large to small
Ability to review the list of data sets before submission
Audit trail of data sets already copied
Ability to handle failures during the copy operation, such as input volume media failures
Flexibility in filtering the data sets by wild cards or other criteria, such as expiration or
creation date
Table 7-3 lists several common tape copy products. You can choose one of these products or
perhaps use your own utility for tape copy. You do not need any of these products, but a tape
copy product can make your job easier if you have many tapes to move into the TS7700
Virtualization Engine.
Tape Copy Tool/DFSMSrmm (IBM): Contact your IBM Representative for more information
about this service offering.
In addition to using one of these products, consider using IBM Global Services Global
Technology Services (GTS) to assist you in planning and moving the data into the TS7700
Virtualization Engine. For more information about these services, see 3.9.2, “Implementation
services” on page 184.
For assistance with DFSMShsm tapes, see 7.7.2, “Quick method of moving data” on
page 421.
The preferred method for moving this type of data is to use instructions from the application
author. If, however, you must copy the data to a volume with the same VOLSER, consider the
following information:
The source and target media might not be the exact same size.
You cannot mount two volumes with the same VOLSER at the same time.
If the source tape is a system-managed tape, you cannot have two volumes with the same
VOLSER.
This method is not preferred for moving data to the TS7700 Virtualization Engine. This
method applies only if you have to maintain the dataset-to-VOLSER relationship. It has
limitations and weaknesses.
To move data to the TS7700 Virtualization Engine, perform the following steps:
1. Copy the non-TS7700 Virtualization Engine tape volumes to DASD or other tape volumes.
2. If the source volume is resident on a 3494 Tape Library, eject the cartridge from the 3494
Tape Library using the LIBRARY EJECT command or the ISMF EJECT line operator
command from the ISMF window.
3. Delete the ejected volume from the tape management system.
4. Define the VOLSER range, including the once-duplicated number, to the TS7700
Virtualization Engine.
7.7.5 Combining methods to move data into the TS7700 Virtualization Engine
You will most likely want to use a combination of the phased and quick methods for moving
data into the TS7700 Virtualization Engine. One approach is to classify your data as static or
dynamic.
Static data is information that will be around for a long time. This data can be moved into the
TS7700 Virtualization Engine only with the quick method. You must decide how much of this
data will be moved into the TS7700 Virtualization Engine. One way to make this decision is to
examine expiration dates. You can then set a future time when all volumes, or a subset, are
copied into the TS7700 Virtualization Engine. There might be no reason to copy volumes that
are going to expire in two months. By letting these volumes go to scratch status, you can save
yourself some work.
Dynamic data is of a temporary nature. Full volume backups and log tapes are examples.
These volumes typically have a short expiration period. You can move this type of data with
the phased method. There is no reason to copy these volumes if they are going to expire
soon.
With this method, the data is reprocessed by the host and copied to another medium. This
method is described in 7.7.1, “Phased method of moving data” on page 420. The only
difference is that you need to address the TS7700 Virtualization Engine as input and the
non-TS7700 Virtualization Engine drives as output.
Copy Export
You can use the Copy Export function to copy the data.
With this function, a copy of selected logical volumes that is written to the TS7700
Virtualization Engine can be removed and taken off site for disaster recovery purposes. The
benefits of volume stacking, which places many logical volumes on a physical volume, are
retained with this function. In addition, because the data being exported is a copy of the
logical volumes, the logical volumes data remains accessible by the production host systems.
The Copy Export function for stand-alone configurations was introduced with TS7700
Virtualization Engine code level 8.3.x.x and Library Manager code level 534.x. For grid
configurations, the Copy Export function was introduced with TS7700 Virtualization Engine
code level 8.4.x.x. Although no host software updates are required to support the Copy Export
function, other functions are supported in the TS7700 Virtualization Engine that
do require a later level of host software. One of those, the host console request function,
requires z/OS support, which is provided with z/OS V1R6 and later. See OAM APAR
OA20065 and device services APARs OA20066, OA20067, and OA20313.
Remember: Execute the Copy Export operation on a periodic basis, possibly even more
than once a day. Because the purpose is to get a copy of the data off site for disaster
recovery purposes, perform it soon after the data is created to minimize the time for the
recovery point objective.
DFSMShsm ABARS
The third way is to copy the data with the DFSMShsm ABARS function.
You can identify the data set names in a single selection data set, or you can divide the
names among as many as five selection data sets. You can specify six types of data set lists
in a selection data set. The type you specify determines which data sets are backed up and
how they are recovered.
An INCLUDE data set list is a list of data sets to be copied by aggregate backup to a tape
data file where they can be transported to the recovery site and recovered by aggregate
recovery. The list can contain fully qualified data set names or partially qualified names with
place holders. DFSMShsm expands the list to fully qualified data set names.
Using a selection data set with the names of the data sets you want to export from the
TS7700 Virtualization Engine, obtain a list of files on logical volumes that the ABARS function
copies to non-TS7700 Virtualization Engine drives.
Define the aggregate group and Management Class used for aggregate backup to DFSMS
through ISMF screens.
The aggregate group lists the selection data set names, instruction data set name, and
additional control information used by aggregate backup in determining which data sets are to
be backed up.
With the PROCESSONLY(USERTAPE) keyword, only tape data sets are processed. In this
way, you can be sure that only input data from TS7700 Virtualization Engine logical volumes
is used.
When you issue the ABACKUP command with the EXECUTE option, the following tape files
are created for later use as input for aggregate recovery:
Data file: Contains copies of the data sets that have been backed up.
Control file: Contains control information needed by aggregate recovery to verify or
recover the application's data sets.
Instruction/activity log file: Contains the instruction data set, which is optional.
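For example, assuming an aggregate group named PAYAGG (a hypothetical name), the
backup can be started from TSO as follows:
HSENDCMD ABACKUP PAYAGG EXECUTE PROCESSONLY(USERTAPE)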
Summary
At the end of this process, you obtain an exportable copy of the TS7700 Virtualization Engine
data, which can be used for disaster recovery and stored off site using other physical tapes.
Consider using the Copy Export function, which allows you to move a copy of the original
logical volumes to an off-site location without reading the tape data twice. The Copy Export
function operates on another Physical Volume Pool in the library and creates the copy in the
background without any process required on the host. However, Copy Export requires an
empty TS7700 Virtualization Engine at your disaster site.
Although DFSMShsm is an application that is capable of using the full cartridge capacity, for
various reasons you might want to consider using the TS7700 Virtualization Engine instead of
native physical drives for DFSMShsm data. For example, when writing ML2 data onto a
cartridge with an uncompressed capacity of 300 GB, chances are higher that a recall request
needs exactly the cartridge that is currently being written to by a space management task.
This situation is known as recall takeaway.
The effects of recall takeaway can be a real disadvantage when writing ML2 data onto native,
high capacity cartridges because the space management task must set aside its output tape
to make it available to the recall task. Although the partially-filled output tape remains eligible
for subsequent selection, the next time that space management runs, it is possible to
accumulate a number of partial tapes beyond DFSMShsm's needs if recall takeaway activity
occurs frequently. Excess partial tapes created by recall takeaway activity result in poor
utilization of native cartridges. In addition, because recall takeaway activity does not cause
the set-aside tape to be marked full, it is not automatically eligible for recycling, despite its
poor utilization.
High capacity cartridges are more likely to experience both frequent recall takeaway activity,
and also frequent piggy-back recall activity, in which recalls for multiple data sets on a single
tape are received while the tape is mounted. Although piggy-back recalls have a positive
effect by reducing the number of mounts required to perform a given number of recalls, you
must also consider that multiple recalls from the same tape must be performed serially by the
same recall task. Were those same data sets to reside on separate tapes, the recalls could
potentially be performed in parallel, given a sufficient number of recall tasks. In addition, the
persistence of the virtual tape in the tape volume cache after it has been demounted allows
DFSMShsm to perform ML2 recalls from the disk cache for a period of time without requiring
that a physical tape be mounted.
Other reasons also exist for directing DFSMShsm data into a TS7700 Virtualization Engine.
The number of native drives limits the number of DFSMShsm tasks that can run concurrently.
With up to 256 virtual drives in a stand-alone cluster configuration or 512
virtual drives in a two-cluster grid configuration, you can dedicate a larger number of virtual
drives to each DFSMShsm function and allow for higher throughput during your limited
backup and space management windows.
When increasing the number of DFSMShsm tasks to take advantage of the large number of
virtual drives in a TS7700 Virtualization Engine, consider adding more DFSMShsm auxiliary
tasks (MASH), rather than simply increasing the number of functional tasks within the existing
started tasks. Each DFSMShsm started task can support up to 15 AUTOBACKUP tasks.
Other reasons for using the TS7700 Virtualization Engine with DFSMShsm are the greatly
reduced run times of DFSMShsm operations that process the entire volume, such as AUDIT
MEDIACONTROLS and TAPECOPY.
DFSMShsm data is well suited for the TS7700 Virtualization Engine, given the appropriate
tailoring of those parameters that can affect DFSMShsm performance. The subsequent
sections describe this tailoring in more detail.
For more details, see the DFSMShsm Storage Administration Guide, SC26-7402.
Table 7-4 lists the maximum data set sizes supported in z/OS environments.
Important: A single DFSMShsm user data set can span up to 40 tapes. This limit is for
migration, backup, and recycle.
After DFSMShsm writes a user data set to tape, it checks the volume count for the
DFSMShsm tape data set. If the volume count is greater than 215, the DFSMShsm tape data
set is closed, and the currently mounted tape is marked full and is de-allocated.
Let us assume that you have a very large data set of 300 GB. Such a data set does not fit on
40 volumes of 800 MB each, but it does fit on 40 virtual volumes of 6000 MB each, as the
following calculation shows:
6000 MB x 2.5 x 40 = 600 GB
Any single user data set larger than 600 GB is a candidate for native TS1120 or TS1130 Tape
Drives: even assuming a compression ratio of 2.5:1, such a data set might not fit onto the
supported number of 40 volumes. In this case, consider using native tape drives rather than
the TS7700 Virtualization Engine.
Important: DFSMShsm can have more than one address space on one LPAR (Multi
Address Space Support, MASH), or a separate setup on separate LPARs, defined in your
PARMLIB member ARCCMDxx and separated by ONLYIF statements. One DFSMShsm
address space can have a MIGUNIT(3590-1) and the other address space a
MIGUNIT(TS7700 Virtualization Engine). The same is true for BUUNIT. The DFSMShsm
instance that has the 3592 as migration or backup unit can run space management or auto
backup only for that Storage Group (SG) where all your large data sets, such as zFS data
sets, reside.
The other DFSMShsm instance would migrate and back up all the smaller data sets onto the
TS7700 Virtualization Engine. You can use a command such as F DFHSM2,BACKDS or F
DFHSM2,BACKVOL(SG2) to issue the command to the second address space of
DFSMShsm. The following SETSYS parameters specify the tape unit that each function uses:
SETSYS TAPEMIGRATION(ML2TAPE(TAPE(unittype)))
SETSYS RECYCLEOUTPUT(MIGRATION(unittype))
SETSYS BACKUP(TAPE(unittype))
SETSYS RECYCLEOUTPUT(BACKUP(unittype))
Each tape identified as being empty or partially filled must be marked full by using one of the
following DFSMShsm commands:
DELVOL volser MIGRATION(MARKFULL)
DELVOL volser BACKUP(MARKFULL)
A key point, however, is that whether or not the data set spans, DFSMShsm uses FEOV
processing to get the next volume mounted. Therefore, the system believes that the volume is
part of a multi-volume set regardless of whether DFSMShsm identifies it as a connected set.
Because of the EOV processing, the newly mounted DFSMShsm volume will use the same
Data Class and other SMS constructs as the previous volume.
With the DFSMShsm SETSYS PARTIALTAPE MARKFULL option, DFSMShsm marks the last
output tape full, even though it has not reached its physical capacity. By marking the last
volume full, the next time processing starts, DFSMShsm will use a new volume, starting a
new multi-volume set and allowing for the use of a new Data Class and other SMS constructs.
If the volume is not marked full, the existing multi-volume set will continue to grow and to use
the old constructs.
Use the SETSYS PARTIALTAPE MARKFULL option because it reduces the number of
occasions in which DFSMShsm will append to a partial tape, which results not only in the
need to mount a physical tape, but also in the invalidation of the existing virtual tape, which
will eventually need to be reclaimed by the TS7700 Virtualization Engine.
See Table 7-5 on page 435 when tailoring the ARCCMDxx SETSYS parameters. From
TS7700 Virtualization Engine V1.4 onwards, with the host support added with APAR
OA24969, HSM is aware of the large virtual volume capacity. Therefore, it is not necessary to
use high PERCENTFULL values any more to tune capacity of tapes from a DFSMShsm point
of view. The maximum PERCENTFULL value that can be defined is 110%.
In the case of OAM’s Object Tape Support, the TAPECAPACITY parameter in the SETOAM
statement of the CBROAMxx PARMLIB member was previously used to specify the larger
logical volume sizes. With this new functionality, defining TAPECAPACITY in the CBROAMxx
PARMLIB member is no longer necessary. For additional information about outboard policy
management, see the z/OS DFSMS Object Access Method Planning, Installation and
Storage Administration Guide for Tape Libraries, SC35-0427.
Multisystem considerations
If multiple TS7700 Virtualization Engines are eligible for a request, also consider that the
same logical volume size is used for the request across all libraries. When displaying the
volumes through your tape management system, the tape management system might
continue to display the volume capacity based on the default volume size for the media type,
with the volume usage (or a similar parameter) showing how much data has actually been
written to the volume, reflecting its larger capacity.
Scratch volumes
The default volume size is overridden at the library through the Data Class policy
specification, and is assigned or reassigned when the volume is mounted for a scratch mount.
Using a global scratch pool, you benefit from a fast mount time by using the fast-ready
attribute for the scratch category, as explained in 4.3.5, “Defining Fast Ready categories” on
page 233. Consider using the following definitions to benefit from the fast scratch mount
times:
SETSYS SELECTVOLUME(SCRATCH): Requests DFSMShsm to use volumes from the common
scratch pool
SETSYS TAPEDELETION(SCRATCHTAPE): Defines that DFSMShsm returns tapes to the
common scratch pool
SETSYS PARTIALTAPE(MARKFULL): Defines that a DFSMShsm task will mark the last tape it
used in a cycle as full, thus avoiding a specific mount during the next cycle
The MARKFULL parameter does not mean a waste of space using TS7700 Virtualization
Engine because the stacked volume contains only the written data of each logical volume
copied and the same applies to the tape volume cache.
Tape spanning
You can use the optional TAPESPANSIZE parameter of the SETSYS command to reduce the
spanning of data sets across migration or backup tape volumes, for example:
SETSYS TAPESPANSIZE(4000)
The value in parentheses represents the maximum number of megabytes of tape (ML2 or
backup) that DFSMShsm might leave unused while it tries to eliminate spanning of data sets.
To state this in another way, this value is the minimum size of a data set that is allowed to
span tape volumes. Data sets whose size is less than the value do not normally span volumes.
This parameter offers a trade-off: It reduces the occurrences of a user data set spanning
tapes in exchange for writing less data to a given tape volume than its capacity would
otherwise allow. The amount of unused media can vary from 0 to nnnn physical megabytes,
but roughly averages 50% of the median data set size. For example, if you specify 4000 MB
and your median data set size is 2 MB, on average only 1 MB of media is unused per
cartridge.
Installations that currently experience an excessive number of spanning data sets might want
to consider specifying a larger value in the SETSYS TAPESPANSIZE command. Using a high
value reduces tape spanning. In a TS7700 Virtualization Engine, this value reduces the
number of virtual volumes that need to be recalled to satisfy DFSMShsm recall or recover
requests. You can be generous with the value because no space is wasted. For example, a
TAPESPANSIZE of 4000 means that any data set with less than 4000 MB that does not fit on
the remaining space of a virtual volume will be started on a fresh new virtual volume.
Volume dumps
When using TS7700 Virtualization Engine as output for the DFSMShsm AUTODUMP
function, do not specify the following parameters:
DEFINE DUMPCLASS(dclass STACK(nn))
BACKVOL SG(sgname)|VOLUMES(volser) DUMP(dclass STACK(10))
These parameters were introduced to force DFSMShsm to use the capacity of native physical
cartridges. If used with TS7700 Virtualization Engine, they cause unnecessary multivolume
files and reduce the level of parallelism possible when the dump copies are restored. Use the
default value, which is NOSTACK.
For example, if your installation often has more than 10 tape recall tasks at one time, you
probably need twelve TS1120 back-end drives to satisfy this throughput request because all
migrated data sets might already have been removed from the tape volume cache and need
to be recalled from tape.
TAPECOPY
The DFSMShsm TAPECOPY function requires that original and target tape volumes are of
the same media type and use the same recording technology. Using a TS7700 Virtualization
Engine as the target for the TAPECOPY operation from an original volume that is not a TS7700
volume can cause problems in DFSMShsm because TS7700 Virtualization Engine virtual
volumes have different volume sizes, even though they are defined as CST or ECCST.
For example, if you are planning to put DFSMShsm alternate copies into a TS7700
Virtualization Engine, a tape capacity of 45% might not be enough for the input non-TS7700
Virtualization Engine ECCST cartridges. TAPECOPY fails if the (virtual) output cartridge
encounters EOV before the input volume has been copied completely.
However, using TS7700 Virtualization Engine logical volumes as the original and 3490E
native as the TAPECOPY target might cause EOV at the alternate volume because of the
higher IBMLZ1 compression seen on the virtual drive compared to the IDRC compression on
the native drive.
Guideline: In TS7700 Virtualization Engine V1.4 and later with the host support applied and
logical volume sizes of 1 GB and larger, you do not need to use a PERCENTFULL value
greater than 100%.
DUPLEX TAPE
For duplexed migration, both output tapes must be of the exact same size and unit type. A
good practice is to use a multi-cluster grid and let the hardware do the duplex rather than the
DFSMShsm software function. This method also allows you to more easily manage the
disaster side. You can use GDPS and switch to the remote DASD side and the tape VOLSER
itself does not need to be changed. No TAPEREPL or SETSYS DISASTERMODE commands
are needed.
When using the TS1120 or TS1130 Tape Drive for duplexed migration output, performance is
degraded because of the back-to-back SYNCDEV operations done for the original and the
alternate tapes. APAR OA09928 provides a patch allowing syncs on the alternate tape to be
disabled. The performance improvement varies with data set size, with the greatest
improvements seen for the smaller data sets. Performance improvements can be quite
substantial.
When HSM writes ML2 data to tape, it deletes the source data as it goes along, but before the
RUN is issued to the TS7700 Virtualization Engine. This means that, for a period of time until
the copy is made, only one copy of the ML2 data might exist. The reason is that the
TS7700 Virtualization Engine grid, even with a Copy Consistency Point (CCP) of RR, makes
the second copy only at RUN time.
By using the appropriate Management Class settings in SMS, you can make sure that a data
set is not migrated to ML2 before a valid backup copy of this data set exists. This way, there
are always two valid instances from which the data set can be retrieved: One backup and one
ML2 version. After the second copy is written at rewind-unload time, two copies of the ML2
data will exist in the grid.
Another way to ensure that two copies of the ML2 data exist is to use HSM duplexing. This
way creates two separate copies of the ML2 data before HSM deletes it. Ideally, with a
multi-cluster grid, you want one copy of the data in one cluster and the second copy in
another to avoid loss of data if one of the clusters experiences a disaster. You can use the
CCPs to ensure that each copy of the duplexed data is sent to separate clusters.
RECYCLE
The DFSMShsm RECYCLE function reduces the number of logical volumes inside the
TS7700 Virtualization Engine, but when started, it can cause bottlenecks in the TS7700
Virtualization Engine recall process. If you have a TS7700 Virtualization Engine with four
physical drives, use a maximum of two concurrent DFSMShsm RECYCLE tasks. If you have
a TS7700 Virtualization Engine with six physical drives, use no more than five concurrent
DFSMShsm RECYCLE tasks.
Use a RECYCLEPERCENT value that depends on the logical volume size, for example:
5 for 1000, 2000, 4000, or 6000 MB volumes
10 for 400 or 800 MB volumes
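For example, a corresponding ARCCMDxx fragment for large logical volumes and a TS7740
Virtualization Engine with four physical drives might look like the following sketch (the values
are illustrative only):
SETSYS MAXRECYCLETASKS(2)
SETSYS ML2RECYCLEPERCENT(5)
SETSYS RECYCLEPERCENT(5)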
Using the following commands for RECYCLE input can be helpful while selecting and
migrating data to and from a TS7700 Virtualization Engine:
RECYCLE SELECT (INCLUDE(RANGE(nnnnn:mmmmm)))
RECYCLE SELECT (EXCLUDE(RANGE(nnnnn:mmmmm)))
The immediate purpose is to enable you to set up volume ranges for various media types and
emulation types, such as TS7700 Virtualization Engine logical volumes and 3490-emulated
cartridges. There are no special data set names for RECYCLEOUTPUT, although you must
code your ACS routines to route RECYCLEOUTPUT to the library using the &UNIT variable.
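A storage group ACS routine fragment of this kind might look like the following sketch; the
esoteric unit name VTAPE and the storage group name SGVTS are examples only:
PROC STORGRP
  /* Route allocations that request the TS7700 esoteric, including */
  /* DFSMShsm RECYCLEOUTPUT, to the library storage group          */
  IF &UNIT = 'VTAPE' THEN
    SET &STORGRP = 'SGVTS'
END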
See the DFSMShsm Primer, SG24-5272 for more information about implementing
DFSMShsm.
DFSMSrmm accepts the logical volume capacity reported by the open/close/end-of-volume
(OCE) module. DFSMSrmm can now always list the actual reported capacity from the TS7740
Virtualization Engine.
When you direct allocations inside the TS7700 Virtualization Engine, the Vital Record
Specifications (VRSs) or vault rules indicates to the tape management system that the data
set will never be moved outside the library. During VRSEL processing, each data set and
volume is matched to one or more VRSs, and the required location for the volume is
determined based on priority. The volume’s required location is set. The volume is not moved
unless DSTORE is run for the location pair that includes the current volume location and its
required location. For logical volumes, this required location can be used to determine which
volume should be exported. For Copy Export, the required location is only used for stacked
volumes that have been copy exported.
Other tape management systems must modify their definitions in a similar way. For example,
CA-1 Tape Management must modify its “RDS” and “VPD” definitions in the CA-1 PARMLIB.
Control-M/Tape (Control-T) must modify its “rules” definitions in the Control-T PARMLIB.
The DFSMSrmm return-to-scratch process has been enhanced to allow more parallelism in
the return-to-scratch process. EDGSPLCS is a new option for the EDGHSKP SYSIN file
EXPROC command that can be used to return tapes to scratch in an asynchronous way. With
the most recent software support changes, EDGSPLCS can be used to run scratch
processing in parallel.
Stacked volumes cannot be used by the host; they are managed exclusively by the TS7700
Virtualization Engine. Do not allow any host to either implicitly or explicitly address these
stacked volumes. To indicate that the stacked VOLSER range is reserved and cannot be used
by any host system, define the VOLSERs of the stacked volumes to RMM.
Use the following PARMLIB parameter, assuming that VT is the prefix of your stacked TS7700
Virtualization Engine cartridges:
REJECT ANYUSE(VT*)
This parameter causes RMM to deny any attempt to read or write those volumes on native
drives. There are no similar REJECT parameters in other tape management systems.
You do not need to explicitly define the virtual volumes to RMM. During entry processing, the
active RMM automatically records information about each volume in its control data set. RMM
uses the defaults that you specified in ISMF for the library entry values if there is no existing
RMM entry for an inserted volume. Set the default entry status to SCRATCH.
When adding 1,000,000 virtual volumes, the size of the RMM CDS and the amount of
secondary space available must be checked. RMM uses 1 MB for every 1000 volumes
defined in its CDS. An additional 1,000,000 volumes would need 1000 MB of space. However,
do not add all the volumes initially. See 4.3.12, “Inserting logical volumes” on page 254 for
more information.
To increase the size of the RMM CDS, you must quiesce RMM activities, back up the CDS,
then reallocate a new CDS with a larger size and restore the CDS from the backup copy. To
calculate the correct size of the RMM CDS, see z/OS DFSMSrmm Guide and Reference,
SC26-7404. You might consider using VSAM extended format in your CDS. Extended format
and Multivolume would support almost any growth rate in the Configuration Data Set.
Other tape management systems, such as the BrightStor CA-1 Tape Management Copycat
Utility (BrightStor CA-1 Copycat) and the BrightStor CA-Dynam/TLMS Tape Management
Copycat Utility (BrightStor CA-Dynam/TLMS Copycat), must reformat their databases to add
more volumes. This means that they must be stopped while the additional cartridges are
defined.
Additionally, some tape management systems do not allow the specification of tape volumes
with alphanumeric characters or require user modifications to do so. See the proper product
documentation for this operation.
In both RMM and the other tape management systems, the virtual volumes do not have to be
initialized. The first time a VOLSER is used, TS7700 Virtualization Engine marks the virtual
volume with VOL1, HDR1, and a tape mark, as though it had been done by EDGINERS or
IEHINITT.
When using a multi-cluster grid, you can get two or more copies instantly without any use of
the ITSM COPYSTORAGEPOOL function and CPU cycles needed to do the function. You
can even move a copy of the logical volumes off site with the Copy Export function.
Restriction: Starting with Tivoli Storage Manager 6.1, which was released in 2009, there
is no Tivoli Storage Manager Server support for z/OS anymore. Tivoli Storage Manager
Client support is still available. For the most current information about IBM Tivoli Storage
Manager support, go to the following address:
http://www.ibm.com/support/docview.wss?uid=swg21243309
If you plan to store IBM Tivoli Storage Manager data into the TS7700 Virtualization Engine,
consider the following suggestions for placing data on your TS7700 Virtualization Engine:
Use TS7700 Virtualization Engine for IBM Tivoli Storage Manager archiving for archiving
and back up of large files or databases for which you do not have a high performance
requirement during backup and restore. TS7700 Virtualization Engine is ideal for IBM
Tivoli Storage Manager archive or long term storage because archive data is not
frequently retrieved. Archives and restores for large files can see less impact from the
staging. Small files, such as individual files on file servers, can see performance impacts
from the TS7700 Virtualization Engine staging. If a volume is not in cache, the entire
volume must be staged before any restore can be done.
Set IBM Tivoli Storage Manager reclamation off by setting the reclamation threshold to
100%. IBM Tivoli Storage Manager, like DFSMShsm, has a reclamation function that
consolidates valid data from tapes with a low percentage of valid data onto scratch tapes
so that tapes can be freed for reuse. IBM Tivoli Storage Manager reclamation with the
TS7700 Virtualization Engine can be slower because all affected volumes must be staged
into the cache. Periodically set reclamation on again, by lowering the threshold, to regain
the use of TS7700 Virtualization Engine volumes that hold only a small amount of valid
data that will not expire for a long time. Schedule IBM Tivoli Storage Manager reclamation
for off-peak hours; a hedged command sketch appears at the end of this section.
Use collocation to reduce the number of TS7700 Virtualization Engine volumes required
for a full restore. IBM Tivoli Storage Manager has a collocation function to group IBM Tivoli
Storage Manager client data onto a minimum set of tapes to provide a faster restore and to
provide separation of client data onto separate physical tapes. Collocation with TS7700
Virtualization Engine does not minimize the physical tapes used, but minimizes the
number of logical volumes used. Collocation with TS7700 Virtualization Engine can
improve restore time for large amounts of data. TS7700 Virtualization Engine does not
ensure physical tape separation when collocation is used because separate logical
volumes can reside on the same physical tape.
Use the TS7700 Virtualization Engine for IBM Tivoli Storage Manager database backups
that are to be used for recovery from local media, and use a TS7700 Virtualization Engine
at a recovery site, or native drives, for backups that are to be used for recovery from
off-site media. IBM Tivoli Storage Manager requires a separate tape for every backup of its
database, so producing a large number of logical volumes that each contain relatively little
data is not a drawback on a TS7700 Virtualization Engine.
For details about setting up Tivoli Storage Manager, see the IBM Tivoli Storage Manager
Administrator's Guide at the following address:
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp
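As a hedged illustration of the reclamation and collocation suggestions above, the following Tivoli
Storage Manager administrative commands show how such settings are typically adjusted. The storage
pool name VTSPOOL is an assumption; choose threshold values and collocation options to fit your
environment. The first command effectively turns reclamation off, the second re-enables it at a lower
threshold for an off-peak window, and the third requests collocation by client node.

update stgpool vtspool reclaim=100
update stgpool vtspool reclaim=60
update stgpool vtspool collocate=node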
7.11 DFSMSdss
This section describes the uses of DFSMSdss with the TS7700 Virtualization Engine.
With TS7700 Virtualization Engine, you fill the stacked cartridge completely without changing
JCL, using multiple virtual volumes. TS7700 Virtualization Engine then moves the virtual
volumes created onto a stacked volume.
The only problem you might experience when using the TS7700 Virtualization Engine for
DFSMSdss volume dumps is related to the size of the virtual volumes. If a single dump does
not fit onto five logical volumes, you can use an SMS DATACLAS specification (Volume Count
nn) to enable more than five volumes. A better method is to run a TS7700 Virtualization
Engine release that supports 4000 MB logical volumes and select that size through your SMS
DATACLAS. This method prevents unneeded multivolume files.
Using the COMPRESS keyword of the DUMP command, you obtain software compression of
the data at the host level. Because data is compressed by the TS7700 Virtualization Engine
before being written into the tape volume cache, host compression is not required unless
channel utilization is already high.
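As a hedged sketch, the following DFSMSdss job writes a full-volume dump to a single TS7700
logical volume. The esoteric unit name VTAPE and the data class name DCVTS4G (assumed here to
select a 4000 MB logical volume) are installation-specific examples; add the COMPRESS keyword to
the DUMP statement only if channel utilization is a concern.

//DSSDUMP  EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//DASD     DD UNIT=3390,VOL=SER=PROD01,DISP=SHR
//TAPE     DD DSN=BACKUP.PROD01.DUMP,DISP=(NEW,CATLG),
//            UNIT=VTAPE,DATACLAS=DCVTS4G,LABEL=(1,SL)
//SYSIN    DD *
  DUMP FULL INDDNAME(DASD) OUTDDNAME(TAPE) ALLDATA(*) ALLEXCP
/*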
Stand-Alone Services supports the 3494 Tape Library and the Virtual Tape Server, and you
can use it to restore from native and virtual tape volumes in a TS7700 Virtualization Engine.
With Stand-Alone Services, you specify the input volumes on the RESTORE command, and
Stand-Alone Services sends the necessary mount requests to the TS3500 Tape Library.
You can use an initial program load (IPL) of the Stand-Alone Services core image from a
virtual tape device and use it to restore dump data sets from virtual tape volumes.
To use Stand-Alone Services, create a stand-alone core image suitable for IPL by using the
BUILDSA command of DFSMSdss. Create a new virtual tape volume as non-labeled and then
put the stand-alone program on it.
Restriction: The BUILDSA command does not write over an existing label. A tape that the
TS7700 Virtualization Engine initially marked as labeled cannot be changed to unlabeled;
it is not possible to alter a logical volume from labeled to unlabeled. This restriction also
applies to all other stand-alone programs.
Use the following steps to use an IPL of the Stand-Alone Services program from a virtual
device and to restore a dump data set from virtual volumes (see “Modify Logical Volumes
window” on page 492 for information about how to use the TS7700 management interface to
set a device in stand-alone mode):
1. Ensure that the virtual devices you will be using are offline to other host systems. Tape
drives to be used for stand-alone operations must remain offline to other systems.
2. Set the virtual device from which you will load the Stand-Alone Services program in
stand-alone mode by selecting Virtual Drives on the TS7700 management interface of the
cluster where you want to mount the logical volume.
3. Select Stand-alone Mount Logical Volume, select a virtual device, and click Go
(Figure 7-12).
4. Load the Stand-Alone Services program from the device you just set in stand-alone mode.
As part of this process, select the operator console and specify the input device for
entering Stand-Alone Services commands.
5. When the IPL is complete, enter the Stand-Alone Services RESTORE command from the
specified input device. Example 7-1 shows a group of statements for using this command.
L00001 and L00002 are virtual volumes that contain the dump data set to be restored,
0A40 is the virtual device used for reading source volumes L00001 and L00002, and 0900
is the device address of the DASD target volume to be restored.
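As a hedged illustration of the statements in Example 7-1, the RESTORE command takes a form
similar to the following; verify the exact syntax in z/OS DFSMSdss Storage Administration
Reference, SC35-0424.

RESTORE FROMDEV(TAPE) FROMADDR(0A40) TOADDR(0900) -
        TAPEVOL((L00001),(L00002)) NOVERIFY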
Stand-Alone Services requests the TS7700 to mount the source volumes in the order in
which they are specified on the TAPEVOL parameter. It automatically unloads each
volume, then requests the TS7700 Virtualization Engine to demount it and to mount the
next volume.
6. When the restore is complete, unload and demount the IPL volume from the virtual device
by using the TS7700 MI’s Setup Stand-alone Device window.
7. In the Virtual Drives window (see Figure 7-12 on page 442), select Stand-alone Demount
Logical Volume to change the IPL device from stand-alone mode.
Stand-Alone Services issues the necessary mount and demount orders to the library. If you
are using another stand-alone restore program that does not support the mounting of library
resident volumes, you would have to set the source device in stand-alone mode and manually
instruct the TS7700 Virtualization Engine to mount the volumes using the Setup Stand-alone
Device window.
For details about how to use Stand-Alone Services, see the z/OS DFSMSdss Storage
Administration Reference, SC35-0424.
OAM stores objects on a TS7700 Virtualization Engine as in a normal TS3500 Tape Library,
with up to 256 virtual drives and many virtual volumes available.
Consider using the TAPEPERCENTFULL parameter with object tape data because the
retrieval time of an OAM object is important. The recall time for smaller logical volumes can
be reduced considerably.
The TAPECAPACITY parameter is no longer needed. From TS7700 Virtualization Engine V1.4
onwards, with APAR OA24966 installed, OAM knows the real capacity of each virtual tape.
Virtual volumes in a TS7700 Virtualization Engine can contain primary or backup copies of
OAM objects, addressing either OBJECT or OBJECT BACKUP Storage Groups. Address
TS7700 Virtualization Engine with the OBJECT Storage Group and other non-TS7700
Virtualization Engine tape devices with the OBJECT BACKUP Storage Group.
A virtual volume can contain multiple OAM objects, separated by a buffer space. To optimize
the use of TS7700 Virtualization Engine storing OAM object data, consider the following
suggestions:
Review the MOUNTWAITTIME parameter when using the TS7700 Virtualization Engine to
store OAM object tape data. The default (5 minutes) should probably be increased;
12 minutes is a better value in case you must recall a logical volume to read object data
while other recall requests are queued. The TS7700 Virtualization Engine might need to
stage the data back into cache, which accounts for the extra mount time.
Review the MAXTAPERETRIEVETASKS and MAXTAPESTORETASKS parameters when
using the TS7700 Virtualization Engine, because you have more virtual tape drives available.
Other parameters, such as DEMOUNTWAITTIME, TAPEPERCENTFULL, and
TAPEFULLTHRESHOLD, might also need to be reviewed when using the TS7700
Virtualization Engine to store OAM data; a hedged SETOAM sketch follows this list.
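As a hedged sketch only, such values might be coded on a SETOAM statement in the CBROAMxx
member of PARMLIB as follows. The numbers are illustrative, and the exact syntax and the level
(global, storage group, or library) at which each keyword, including TAPEPERCENTFULL and
TAPEFULLTHRESHOLD, applies are described in z/OS Object Access Method Planning, Installation
and Storage Administration Guide for Tape Libraries, SC35-0427.

SETOAM MOUNTWAITTIME(12) MAXTAPERETRIEVETASKS(4) MAXTAPESTORETASKS(4)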
Archive logs
DB2 keeps track of database changes in its active log. The active log uses up to 31 DASD
data sets (up to 62 with dual logging) in this way: When a data set becomes full, DB2 switches
to the next one and automatically offloads the full active log to an archive log.
Archive logs contain unique information necessary for DB2 data recovery. Therefore, to
ensure DB2 recovery, make backups of archive logs. You can use general backup facilities or
DB2’s dual archive logging function.
When creating dual copies of the archive log, usually one copy is kept locally and the other is
used for disaster recovery. The local copy can be written to DASD and then moved to tape
using Tape Mount Management (TMM). The other copy can be written directly to tape and
then moved to an off-site location.
With TS7700 Virtualization Engine, you can write the local archive log directly inside the
TS7700 Virtualization Engine. Avoiding the use of TMM saves DASD space, saves
DFSMShsm CPU cycles, and simplifies the process. The disaster recovery copy must be
created on non-TS7700 Virtualization Engine tape drives, so that it can be moved off site.
The size of an archive log data set varies from 150 MB to 1 GB. A virtual volume on a
TS7700 Virtualization Engine can hold up to 6000 MB of data (approximately 18,000 MB
assuming a 3:1 compression ratio), so be sure that each archive log fits on a single virtual
volume, and use a single volume when offloading an archive log to tape.
Tailoring the size and number of active log DASD data sets allows you to obtain an archive log
on tape whose size does not exceed the virtual volume size.
Limiting data set size might increase the frequency of offload operations and reduce the
amount of active log data on DASD. However, this should not be a problem because TS7700
Virtualization Engine does not require manual operation, and archive logs will stay in the tape
volume cache for some time and be available for fast recovery.
One form of DB2 recovery is backward recovery, typically done after a processing failure,
where DB2 backs out uncommitted changes to resources. When doing so, DB2 processes
log records in reverse order, from the latest back toward the oldest.
If the application being recovered has a large data set and makes only a few commit
operations, you probably need to read the old archive logs that are on tape. When archive
logs are on tape, DB2 uses read-backward channel commands to read the log records.
Read-backward is a slow operation on tape cartridges processed on real IBM 3480 (if IDRC is
enabled) and IBM 3490 tape drives. On a TS7700 Virtualization Engine, it is only about 20%
slower than a normal I/O because the data is retrieved from the tape volume cache, so the
tape drive characteristics are replaced by random access disk characteristics.
Another benefit that the TS7700 Virtualization Engine provides for DB2 operations is the
availability of up to 256 virtual drives (stand-alone cluster) or 1536 virtual drives (six-cluster
grid configuration), because DB2 often needs a large number of drives concurrently to
perform recovery or backup functions.
Image copies
Image copies are backup copies of table spaces in a DB2 database. DB2 can create both full
and incremental image copies. A full image copy contains an image of the whole table space
at the time the copy was taken. An incremental image copy contains only those pages of a
table space that have changed since the last full image copy was taken. Incremental image
copies are typically taken daily, whereas full image copies are typically taken weekly.
DB2 provides the option for multiple image copies. You can create up to four identical image
copies of a table space, one pair for local recovery use and one pair for off-site storage.
When a database is recovered from image copies, a full image copy and the subsequent
incremental image copies need to be allocated at the same time. This can potentially tie up
many tape drives and, in smaller installations, can prevent other work from being run. With
one TS7700 Virtualization Engine, with its 256 virtual drives, this is not an issue.
The large number of tape drives is important also for creating DB2 image copies. Having
more drives available allows you to run multiple copies concurrently and use the
MERGECOPY DB2 utility without impact. An advisable solution is to run a full image copy of
the DB2 databases once a week outside the TS7700 Virtualization Engine, and run the
incremental image copies daily using TS7700 Virtualization Engine. The smaller incremental
copy fits better with the TS7700 Virtualization Engine volume sizes.
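As a hedged sketch of that approach, the following job runs a daily incremental image copy whose
output goes to a TS7700 logical volume. The subsystem ID, database, table space, GDG data set
name, esoteric unit name, and data class are assumptions for illustration, and the GDG base is
assumed to already exist.

//DAILYIC  EXEC DSNUPROC,SYSTEM=DB2P,UID='ICDAILY'
//SYSCOPY  DD DSN=DB2P.IC.PAYTS(+1),DISP=(NEW,CATLG),
//            UNIT=VTAPE,DATACLAS=DCVTS
//SYSIN    DD *
  COPY TABLESPACE PAYDB.PAYTS FULL NO SHRLEVEL CHANGE COPYDDN(SYSCOPY)
/*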
CICS is only a data communication product, whereas IMS has both the data communication
and the database function (IMS-DL/1). CICS uses the same DL/1 database function to store
its data.
CICS and IMS logs are sequential data sets. When offloading these logs to tape, you must
request a scratch volume every time.
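For example, a non-specific (scratch) request is simply a tape DD statement with no VOL=SER coded,
as in this hedged snippet in which the GDG name, esoteric unit name, and data class are
placeholders:

//SLDSCOPY DD DSN=IMSP.SLDS.COPY(+1),DISP=(NEW,CATLG),
//            UNIT=VTAPE,DATACLAS=DCVTS,LABEL=(1,SL)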
The logs contain the information necessary to recover databases and usually those logs are
offloaded, as with DB2, in two copies, one local and one remote. You can write one local copy
and then create the second for disaster recovery purposes later, or you can create the two
copies in the same job stream.
With TS7700 Virtualization Engine, you can create the local copy directly on TS7700
Virtualization Engine virtual volumes, and then copy those volumes to non-TS7700
Virtualization Engine tape drives, or to a remote TS7700 Virtualization Engine.
Having a local copy of the logs written inside the TS7700 Virtualization Engine allows you
faster recovery because the data stays in the tape volume cache for some time.
When recovering a database, you can complete backout operations in significantly less time
with the TS7700 Virtualization Engine because when reading logs from tape, IMS uses the
slow read backward operation (100 KBps) on real tape drives. With the TS7700 Virtualization
Engine, the same operation is much faster because the data is read from tape volume cache.
Lab measurements show little difference between read forward and read backward in a
TS7700 Virtualization Engine; both perform much better than on physical drives. The reason
is not just that the data is in the tape volume cache: the TS7700 Virtualization Engine code
also fully buffers the records, in the reverse of the order in which they appear on the volume,
when operating in read-backward mode.
The IMS change accumulation utility is used to accumulate changes to a group of databases
from several IMS logs. This implies the use of many input logs that will be merged into an
output accumulation log. With the TS7700 Virtualization Engine, you can use more tape
drives for this function.
Image copies
Image copies are backup copies of the IMS databases. IMS can create only full image copies.
To create an image copy of a database, use a batch utility, copying one or more databases to
tape.
With the TS7700 Virtualization Engine, you do not have to stack multiple small image copies
to fill a tape cartridge. Using one virtual volume per database does not waste space because
the TS7700 Virtualization Engine then groups these copies into a stacked volume.
IMS, unlike DB2, has a batch function that works with databases through an IMS batch
region. If you do not use logs when running an IMS batch region, then to recover the
database, you must use an image copy taken before running the batch job. Otherwise, you
can use logs and checkpoints, which allows you to restart from a consistent database image
taken during the batch execution processing. Using TS7700 Virtualization Engine, you can
access these image copies and logs at a higher speed.
The TS7700 Virtualization Engine volume stacking function is the best solution for every
database backup because it is transparent to the application and does not require any JCL
procedure change.
The amount of data from these applications can be huge if your environment does not use
TMM or if you do not have DFSMShsm installed. All such data benefits from using the
TS7700 Virtualization Engine for output.
With TS7700 Virtualization Engine, the application can write one file per volume, using only
part of the volume capacity and TS7700 Virtualization Engine takes care of completely filling
the stacked cartridge for you, without JCL changes.
The only step you must remember is that if you need to move the data off site, you must
address a device outside the local TS7700 Virtualization Engine, or use other techniques to
copy TS7700 Virtualization Engine data onto other movable tapes, as described in 7.7.6,
“Moving data out of the TS7700 Virtualization Engine” on page 426.
Part 3 Operation
This part describes daily operations and the monitoring tasks that are related to the IBM
Virtualization Engine TS7700 with R2.0. It also provides you with planning considerations and
scenarios for disaster recovery, and for disaster recovery testing.
Chapter 8. Operation
This chapter describes operational considerations and usage guidelines unique to the IBM
Virtualization Engine TS7700. For general guidance about how to operate the IBM System
Storage TS3500 Tape Library, see the following publications:
IBM TS3500 Tape Library with System z Attachment A Practical Guide to Enterprise Tape
Drives and TS3500 Tape Automation, SG24-6789
z/OS Object Access Method Planning, Installation and Storage Administration Guide for
Tape Libraries, SC35-0427
This chapter provides information about how to operate the TS7700 Virtualization Engine by
covering the following main topics:
User interfaces
IBM Virtualization Engine TS7700 management interface
System-managed tape
Basic operations
Tape cartridge management
Managing logical volumes
Messages and displays
Recovery scenarios
IBM Virtualization Engine TS7720 considerations
The logical view is named the host view. From the host allocation point of view, there is only
one library, called the Composite Library. A Composite Library can have up to 1536 virtual
addresses for tape mounts. The logical view includes virtual volumes and tape drives.
The host is only aware of the existence of the physical libraries because they are defined
through ISMF in a z/OS environment. The term distributed library is used to denote the
physical libraries and TS7700 Virtualization Engine components that are part of one cluster of
the multi-cluster grid configuration. The physical view is the hardware view that deals with the
hardware components of a stand-alone cluster or a multi-cluster grid configuration, with
TS3500 Tape Libraries and 3592 J1A, TS1120, or TS1130 Tape Drives.
The operator interfaces for providing information about the TS7700 Virtualization Engine are:
OAM commands are available at the host operator console. These commands provide
information about the TS7700 Virtualization Engine in stand-alone and grid environments,
and this information represents the host view of the components within the TS7700
Virtualization Engine. Other z/OS commands can be used against the virtual addresses,
although these commands are not aware that the 3490E addresses are part of a TS7700
Virtualization Engine configuration. A hedged set of sample commands follows this list.
Web Specialist functions are available through web-based user interfaces:
– You can access the web interfaces with Microsoft Internet Explorer Version 6.0 or a
fully compatible alternative browser with JavaScript and Java enabled.
– There are two specialists available for tape library management:
• The TS3500 Tape Library Specialist allows for management (configuration and
status) of the TS3500 Library.
• The TS7700 Virtualization Engine management interface (MI) is used to perform all
TS7700 Virtualization Engine related configuration, setup, and monitoring actions.
Call Home Interface: This interface is activated on the TS3000 System Console (TSSC)
and allows for Electronic Customer Care by IBM System Support. Alerts can be sent out to
IBM RETAIN® systems and the IBM System Service Representative (SSR) can connect
through the TSSC to the TS7700 Virtualization Engine and the TS3500 Tape Library.
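For example, the following commands are commonly used from the z/OS console to display the host
view of a TS7700 configuration; COMPLIB1 is an assumed composite library name, A00123 an
assumed logical volume, and 0A40 an assumed device number. The first command shows composite
library status and scratch counts, the second shows the library state of a single logical volume,
and the third shows OAM's view of a virtual drive.

D SMS,LIB(COMPLIB1),DETAIL
D SMS,VOLUME(A00123)
LIBRARY DISPDRV,0A40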
For further information about using the operator windows, see the IBM TotalStorage Library
Operator Manual, GA32-0280 or the IBM System Storage TS3500 Tape Library Operator
Guide, GA32-0560.
This chapter focuses on the interfaces related to the operation of the TS7700 Virtualization
Engine. For detailed information about the related TS3500 Tape Library operational tasks,
see the respective operator’s guide or the IBM TS3500 Tape Library with System z
Attachment A Practical Guide to Enterprise Tape Drives and TS3500 Tape Automation,
SG24-6789.
Figure 8-1 shows the TS3500 Tape Library Specialist Welcome window with the System
Summary, where you can view the status of the complete library.
Figure 8-2 shows a flowchart of the functions that are available depending on the
configuration of your TS3500 Tape Library.
Figure 8-2 TS3500 Tape Library Specialist functions (cartridge, cleaning-cartridge, and I/O station
management; logical library and drive assignment; ALMS and virtual I/O slot configuration; web
security, operator panel security, SNMP, key manager, date and time, and network settings; VPD,
statistics, and error log displays; library and drive log downloads; firmware update; and license
key status)
The TS3500 windows are mainly used during the hardware installation phase of the TS7740
Virtualization Engine. The activities involved in installation are described in 4.2, “TS3500
Tape Library definitions (TS7740 Virtualization Engine)” on page 192.
The Call Home function generates a service alert automatically when a problem occurs with
one of the following components:
TS3500 Tape Library
3592 Model J70 and TS1120 Model C06 controller
TS7700 Virtualization Engine
Error information is transmitted to the IBM System Storage TS3000 System Console for
service, and then to the IBM Support Center for problem evaluation. The IBM Support Center
can dispatch an IBM System Services Representative (SSR) to the client installation. Call
Home can send the service alert to a pager service to notify multiple people, including the
operator. The SSR can deactivate the function through service menus, if required.
The TSSC console uses analog phone lines and a modem, or a broadband connection, to
connect to the IBM Remote Technical Assistance Information Network, better known as
RETAIN. The code running in RETAIN then decides what to do with the information. A
Problem Management Record (PMR) will be opened in the case of a problem. After the PMR
is created, RETAIN automatically consults with the IBM Knowledge Based Systems (RKBS)
to add information pertinent to the reported problem. Finally, in the case where RETAIN
detects that the call home is a data package, the data is forwarded to catchers that move the
data to an IBM Internal Server called DFS Cell. From there, it is pulled into the IBM Tape Call
Home Database.
Figure 8-3 on page 455 describes the Electronic Customer Care Call Home function. There
are three security zones:
The Red Zone is defined as your data center. This zone is where the IBM Storage
subsystem and the TSSC reside.
The Yellow Zone is defined as the open Internet. This is open to all outside
communication.
The Blue Zone is defined as IBM Support. This sits inside the IBM intranet and is only
accessible to IBM-authenticated users.
In the event of a data call home, the data is sent from the same TSSC connection to an
IBM-managed server located in the Yellow Zone known as Testcase. Dumpers monitor this
server for new information. When they see this new information, they move the data package
to DFS space where it gets pulled into the RMSS Call Home Database.
Problem Reporting communication is then sent to IEPD, which consults Technical Services
Knowledge Base System for the systems listed, and opens a PMR in RETAIN. All of these
details are for informational purposes only; after the initial setup, they are handled in the
background without user involvement.
Figure 8-3 Electronic Customer Care Call Home flow between the client TSSC, the IBM gateway and
Testcase servers, and the IBM support systems (IEPD, TSKBS, CELF, and RETAIN)
By default, all inbound connections by IBM service personnel are still through a dial-in modem
connection.
ECC adds another Ethernet connection to the TSSC, bringing the total number to three.
These connections are labeled:
The External Ethernet Connection, which is the ECC Interface
The Grid Ethernet Connection, which is used for the TS7740 Virtualization Engine
Autonomic Ownership Takeover Manager (AOTM)
The Internal Ethernet Connection, used for the local attached subsystems subnet
All of these connections are set up using the Console Configuration Utility User Interface
located on the TSSC.
Starting with TS7700 Virtualization Engine R2.0, Call Home events can be found in the
management interface under Health Monitoring - Events. The window shows whether an
event initiated a Call Home.
AOS uses the same network as broadband call home, and will work on either HTTP or
HTTPS. The AOS function is disabled by default. When enabled, the AOS can be configured
to run in either attended or unattended modes.
Attended mode requires that the AOS session be initiated at the TSSC console associated
with the target TS7700 Virtualization Engine. This will require physical access by the IBM
SSR to the TSSC.
Unattended mode, also called Lights Out mode, allows a remote support session to be
established without manual intervention at the TSSC console associated with the target
TS7700 Virtualization Engine.
All AOS connections are outbound. In unattended mode, the session is established by
periodically connecting to regional AOS relay servers to determine whether remote access is
needed. If access has been requested, AOS authenticates and establishes the connection,
allowing remote desktop access to the TSSC.
3. If you are using your own name server, where you can associate a name with the virtual IP
address, you can use the name instead of the hardcoded address for reaching the MI.
4. The login page for the management interface displays as shown in Figure 8-4. Enter the
default login name as admin and the default password as admin.
After entering your password, you see the first web page presented by the MI, the
Virtualization Engine Grid Summary, as shown in Figure 8-5 on page 458.
After security policies have been implemented, either locally at the TS7700 Virtualization
Engine cluster or through the use of centralized Role-Based Access Control (RBAC), a
unique user identifier and password can be assigned by the administrator.
Figure 8-5 shows a visual summary of the TS7700 Virtualization Engine Grid. It presents a
multi-cluster hybrid grid, the components, and health status. Composite Library is depicted as
a blue grid on a white background. Attached to the grid are individual TS7700 Virtualization
Engine Clusters, each connected to a host.
Each cluster is represented by a single blue square icon and is named using the cluster's
nickname, cluster ID, and distributed library sequence number. The cluster that you are
currently connected to is identified by a dark blue border.
The health of the system is checked and updated automatically at times determined by the
TS7700 Virtualization Engine. Data loaded on the page is not real time. The Last Refresh
field reports the date and time the displayed data was retrieved from the TS7700
Virtualization Engine. To populate the summary with an updated health status, click Refresh.
This operation can take some time to complete.
The health status of each cluster is indicated by a status sign affixed to its icon. The Legend
explains the meaning of each status sign. To obtain additional information about a specific
cluster, click that component's icon.
Network interfaces
The TS7700 Virtualization Engine provides two Ethernet connections to the network for
access to the TS7700 Virtualization Engine management interface (MI). Access to the
functions provided by the TS7700 Virtualization Engine MI is secured with a user ID and
password. Two security policy methodologies are available: the current method uses an
internal, locally administered security policy management function, and a centrally
managed, role-based access control method using LDAP is also supported in the TS7700.
Tip: To unlock an account, an administrator can modify the user account and change the
password from the Manage Users page.
In environments where the tape configuration is separated from the LAN-attached hosts or
web clients by a firewall, these are the only ports that must be opened on the firewall. All
others can be closed. See Table 8-1 for more information.
For a description of all ports needed for components within the grid, see 3.2.2, “TCP/IP
configuration considerations” on page 140. For more information about the network interface,
see the following publications:
IBM System Storage TS3500 Tape Library Operator Guide, GA32-0560
IBM System Storage TS3500 Tape Library Introduction and Planning Guide, GA32-0559
IBM System Storage TS1120 and TS1130 Tape Drives and TS1120 Controller Operator
Guide, GA32-0556
Standards compliance
The storage management interface (MI) for the TS7700 Virtualization Engine complies with
industry standards that make it easier to manage devices from different manufacturers.
For more information about the MI windows that enable you to control and monitor the
TS7700 Virtualization Engine, see 8.4.3, “Health & Monitoring” on page 462 through 8.4.10,
“Service and troubleshooting” on page 566.
For details about the Performance and Statistics selections, see Chapter 9, “Performance and
Monitoring” on page 635.
To watch a tutorial showing how to monitor system status, click the View tutorial link.
Following the current component selected graphic, the cluster summary includes these
cluster identifiers:
Library Sequence Number: The library sequence number for this cluster.
Management Interface IP Address: The IP address for the management interface, as
defined in the Cluster Network Settings window. Click the IP address to display the Cluster
Network Settings window where network settings for the cluster can be viewed or altered.
This window also contains a visual summary of the TS7700 Virtualization Engine Cluster
selected from the Virtualization Engine Grid Summary window. The summary will show the
cluster; its key components, such as the vNode, hNode, tape library and drives; and their
connectivity. If the cluster does not possess a physical library, this graphic will not display a
physical library, physical drives, or Fibre Channel Switch. The health of the system is checked
and updated automatically at times determined by the TS7700 Virtualization Engine.
Cluster components
The following components are displayed in the cluster summary (Figure 8-7 on page 463).
Components can be in a normal, failed, degraded, or offline state, as indicated by the status
sign attached to each component's icon. Additionally, in preparation for maintenance
activities, the vNode and hNode can also go into Service Prep or Service modes. A degraded
state indicates that a component is working but one of its redundant parts has stopped
functioning.
vNode: vNode on the cluster.
hNode: hNode located on the cluster.
Disk cache: Disk cache located on the cluster.
Fibre switch: Fibre Switch connecting the tape library and the cluster.
Tape Library: Tape library connected to the cluster.
Tape Drives: Physical drives located in the tape library.
Tip: The Fibre Channel Switch, Tape Library, and Tape Drives image will not be visible if
the cluster does not possess a physical library.
Table fields
The following information can be found at the bottom of the window (Figure 8-7 on page 463)
below the cluster diagram.
Cluster Health State: The health state of the cluster as a whole. Possible values are:
– Normal
– Degraded
– Failed
– Offline
– Service or Service Prep
– Unknown
Physical Tape Drives (Available/Total): Available and total physical drives for this cluster.
Tip: This image is not visible if the cluster does not possess a physical library.
Events, which used to be called Operator Interventions: Indicates whether Events are still
active and actions must be taken. Clicking this item will open the window shown in
Figure 8-9 on page 466.
The send future events to host function allows any interventions that occur to be sent to the
host during the course of operations. Click the Disable or Enable buttons to change the
current setting for this function.
Events table
The Events table displays the details of each operator intervention on the cluster.
You can use the Events table to view, sort, and filter the details of events. You can also clear
one or more events by removing them from this table.
Use the drop-down menu on the Event table to control the appearance of data in the table or
clear one or more entries.
To clear an event, check the check box in the row of the intervention you want to clear, then
select Clear from the drop-down menu and click Go. If you clear an event, you will be
presented with a confirmation window asking you to confirm the event to be cleared. Click
Yes to clear the entry or No to retain the entry on the table.
Use the Select Table Action section of the drop-down menu to select or deselect multiple
rows, or control the appearance of the table through sort and filter actions.
Restriction: If the cluster does not possess a physical library, the Physical Tape Drives
window will not be visible on the TS7700 Virtualization Engine management interface.
Figure 8-10 TS7700 Virtualization Engine management interface Physical Tape Drives
Tip: As stated in the middle of the window, “The health status of the physical tape drives
can only be retrieved every 20 minutes. The health status of the physical tape drives
cannot be retrieved when the library is paused”.
You can sort the information by a single column heading by clicking the column heading you
want to sort by. You can also sort using multiple column headings by using the Edit Sort table
action. To access the Edit Sort action, use the Pencil icon at the top of the Physical Drives
table, or select Edit Sort from the drop-down menu and click Go. The Edit Sort action allows
you to choose up to three columns of information by which to sort the Physical Drives table.
You can also select whether your sort criteria are displayed in ascending or descending order.
To clear an earlier sort, click the Eraser icon at the top of the Physical Drives table or select
Clear All Sorts from the drop-down menu and click Go.
Remember: If you are monitoring this field while changing the encryption status of a
drive, the new status will not display until you bring the TS7700 Virtualization Engine
Cluster offline and then return it to an online state.
Online: Whether the drive is currently online to the TS7740 Virtualization Engine.
Drive Health: The health of the physical drive. This value is obtained automatically at times
determined by the TS7700 Virtualization Engine. Possible values are:
– OK: The drive is fully functional.
– WARNING: The drive is functioning but reporting errors. Action should be taken to
correct the errors.
– DEGRADED: The drive is functioning but at lesser redundancy and performance.
– FAILURE: The drive is not functioning and immediate action should be taken to correct
the fault.
– OFFLINE/TIMEOUT: The drive is out of service or could not be reached within a
certain time frame.
In addition to the Serial Number, Drive Type, Drive Format, Encryption, Online, and Health
information shown in the Physical Drives table, the Physical Drives Details table also displays
the following status and identity information:
Encryption Enabled: Whether encryption is enabled on the drive.
Remember: If you are monitoring this field while changing the encryption status of a
drive, the new status will not display until you bring the TS7700 Virtualization Engine
Cluster offline and then return it to an online state.
The Virtual Tape Drives table displays status information for all drives accessible by the
cluster, whereas the Virtual Drives Details table displays additional information for a specific,
selected drive.
You can sort the information by a single column heading by clicking the column heading you
want to sort by. You can also sort using multiple column headings by using the Edit Sort table
action. To access the Edit Sort action, use the Pencil icon at the top of the Virtual Drives table,
or select Edit Sort from the drop-down menu and click Go. The Edit Sort action allows you to
choose up to three columns of information by which to sort the Virtual Drives table. You can
also select whether your sort criteria are displayed in ascending or descending order. To clear
an earlier sort, click the Eraser icon at the top of the Virtual Drives table or select Clear All
Sorts from the drop-down menu, and click Go.
To view additional information pertaining to a specific drive in a cluster, in the Virtual Drives
table, select the radio button from the Select column that appears in the same row as the
drive's path. From the drop-down menu, select Details and click Go to display the Virtual
Drives Details table, as shown in Figure 8-13.
Use the drop-down menu on the Virtual Drives table to control the appearance of data in the
table, mount a logical volume, or demount a logical volume. Use the Select Table Action
section of the drop-down menu to select or deselect multiple rows or control the appearance
of the table through sort and filter actions.
To mount a logical volume, select Stand-alone Mount Logical Volume from the drop-down
menu and click Go. To demount a logical volume, select the radio button in the row of the
volume you want to demount, select Stand-alone Demount Logical Volume from the
drop-down menu, and click Go.
Remember: If the cluster does not possess a physical library, the Physical Media Inventory
page will not be visible on the TS7700 Virtualization Engine management interface.
Figure 8-14 TS7700 Virtualization Engine management interface Physical Media Inventory
The following physical media counts are displayed for each media type in each storage pool:
Pool: This is the number that identifies the pool for the specific media type.
Media Type: The media type defined for the pool. A storage pool can have multiple media
types, and each media type will be displayed separately. Possible values are:
– JA (ETC): Enterprise Tape Cartridge (ETC)
– JB (EDETC): Enterprise Extended-Length Tape Cartridge (EDETC)
– JJ (EETC): Enterprise Economy Tape Cartridge (EETC)
Empty: The count of physical volumes that are empty for the pool.
Filling: The count of physical volumes filling for the pool. This field is blank for Pool 0.
Full: The count of physical volumes that are full for the pool. This field is blank for Pool 0.
Queued for Erase: The count of physical volumes that are reclaimed but need to be
erased before they can become empty. This field is blank for Pool 0.
ROR (Read Only Recovery): The count of physical volumes that are in the Read Only
Recovery state. This does not imply that there is an error on the physical stacked volume.
The ROR is activated for the reclamation process.
Unavailable: The count of physical volumes that are in the unavailable or destroyed state.
Clarification: Pool 0 is the common scratch pool and all of its cartridges are empty. These
cartridges can be used as required by pools 1-32.
Figure 8-15 TS7700 Virtualization Engine management interface Tape Volume Cache
– Preference Group 1 size: The size of preference group one. Volumes in this group are
preferred to be retained in cache over other volumes.
– Premigration size: The amount of data that needs to be copied to a physical volume.
– Copy size: The amount of data that needs to be copied to another cluster.
– Migration throttling time: Indicates whether migration throttling is active for the partition.
If throttling is active, a value in milliseconds is displayed. If throttling is not active, zero
is displayed.
– Copy throttling time: Indicates whether copy throttling is active for the partition. If
throttling is active, a value in milliseconds is displayed. If it is not active, zero is
displayed.
Remember: If the cluster does not possess a physical library, as in a TS7720 Virtualization
Engine, the Physical Library window will not be visible on the TS7700 Virtualization Engine
management interface.
The first table displays information for the currently running tasks on the TS7700
Virtualization Engine Cluster. The second table displays information about completed or failed
tasks. You can print the table data by clicking Print Report (next to the Select Action menu).
A comma separated value (.csv) file of the table data can be downloaded by clicking
Download spreadsheet.
Tip: A stand-alone mount or demount can fail and return an Error Recovery Action (ERA)
code and modifier. To determine what this ERA code means, select Help from the top left
of the window and a link to an ERA code table is displayed.
Historical Summary presents a graphical view of different aspects seen from a cluster in the
grid. The view has introduced ways to present 24 hours of performance data, all based on the
Historical Statistics that are gathered in the Hardware. A sample view is shown in
Figure 8-18.
The summary allows you to gather performance data in several ways. You can also save the
data as a .csv file for later reference.
Figure 8-19 TS7700 Virtualization Engine management interface Incoming Copy Queue
To obtain a report of each individual queued logical volume copy in a comma-separated file
format (CSV), click the Download button.
Excel removes leading zeros from downloaded .csv (comma-separated value) files, which
can lead to errors when data is displayed in a spreadsheet format. For example, if a VOLSER
in a downloaded list has a value of 012345, it will appear in Excel as 12345.
To open the downloaded file in Excel and retain any leading zeros, perform the following
steps:
1. Open Excel.
2. Click File → Open and browse to the .txt file you downloaded. Highlight the .txt file and
click Open. The Text Import Wizard displays.
3. In Step 1 of the Text Import Wizard, select Delimited as the original data type and click
Next.
4. In Step 2 of the Text Import Wizard, select Comma as the delimiter and click Next.
5. In Step 3 of the Text Import Wizard, select Text as the column data format and click
Finish. You will now be able to save the downloaded file with a .txt or .xls extension
while retaining any leading zeros.
The following information is available about the logical volume copy queue:
Copy Type: Possible values are:
– RUN: The copy will occur during rewind-unload and before the rewind-unload operation
completes at the host.
– Deferred: The copy will occur some time after the rewind-unload operation at the host.
Quantity: The total number of queued logical volume copies of the indicated copy type.
Size: The sum of the size of all volumes in the copy queue, displayed in gibibytes (GiB).
Remember: If the cluster does not possess a physical library, the Recall Queue page will
not be visible on the TS7700 Virtualization Engine management interface.
A recall of a logical volume retrieves the logical volume from physical tape and places it in the
cache. A queue is used to process these requests.
However, only the logical volumes in the Recall Queue table will permit you to modify the
Recall Queue by moving one or more logical volumes to the top of the queue. Move a logical
volume to the top of the queue by selecting it from the Logical volumes in Recall Queue table,
then click Action → Move To Head Of Queue, and click Go. You will be asked to confirm
your decision to move a logical volume to the top of the queue. Click Yes to move the selected
logical volume to the top of the queue. Click No to abandon the operation.
To obtain details about a logical volume, enter the logical volume's VOLSER in the available
text field, as shown in Figure 8-21, and then click Get Details. The VOLSER must be six
alphanumeric characters.
You can view details and the current status of a logical volume in a cluster by using the
Logical volume summary image or the Logical volume details table. You can also use the
Physical locations of requesting logical volume table to view information about requesting
logical volumes on each cluster.
Tip: If the cluster does not possess a physical library, physical resources are not shown in
the Logical volume summary, Logical volume details table, or the Physical locations of
requesting logical volume table.
The logical volume summary image is divided into virtual and physical sections.
The Logical volume details table shows details and the current status of a logical volume in a
cluster. These are logical volume properties that are consistent across the grid.
The virtual section of Logical Volume Details in Figure 8-22 on page 485 includes:
Volser: The VOLSER of the logical volume. This is a six-character number that uniquely
represents the logical volume in the virtual library.
Media Type: The media type of the logical volume. Possible values are:
– Cartridge System Tape (400 MiB).
– Enhanced Capacity Cartridge System Tape (800 MiB).
Compressed Data Size: The compressed file size of the logical volume in cache
expressed in bytes (B), kibibytes (KiB), mebibytes (MiB), or gibibytes (GiB).
Maximum Volume Capacity: The maximum size of the logical volume is expressed in bytes
(B), kibibytes (KiB), mebibytes (MiB), or gibibytes (GiB). This capacity is set by the
volume's Data Class.
Current owner: The name of the cluster that currently owns the latest version of the logical
volume.
Currently Mounted: Whether the logical volume is currently mounted in a virtual drive.
vNode: The name of the vNode on which the logical volume is mounted.
Virtual Drive: The ID of the virtual drive on which the logical volume is mounted.
Cache Copy Used for Mount: The cluster name of the cache chosen for I/O operations for
mount based on consistency policy, volume validity, residency, performance, and cluster
mode.
Mount State: The current mount state of the logical volume. Possible values are:
– Mounted: The volume is mounted.
– Mount Pending: A mount request has been received and is in progress.
– Recall Queued: A mount request has been received and requires a recall.
– Recalling: A mount request has been received and a recall from physical tape is in
progress.
Cache Management Preference Group: The Preference Group (PG) for the logical
volume. The PG determines how soon volumes are removed from cache following their
copy to tape. This information is only displayed if a physical library exists in the grid.
Possible values are:
– 0: Volumes in this group are preferred to be removed from cache over other volumes.
The preference is to remove volumes by size, largest first.
– 1: Volumes in this group are preferred to be retained in cache over other volumes. A
“least recently used” algorithm is used to select volumes for removal from cache if
there are no volumes to remove in preference group 0.
– Unknown: The preference group cannot be determined.
Last Accessed by a Host: The date and time the logical volume was last accessed by a
host. The time recorded reflects the time zone in which the user's browser is located.
Remember: The volume can be accessed for mount or category change before the
automatic deletion and therefore the deletion might not be completed.
– Pending Deletion with Hold: The volume is currently located within a fast ready
(scratch) category configured with hold and the earliest deletion time has not yet
passed. The volume is not accessible by any host operation until the volume has left
the hold state. After the earliest deletion time has passed, the volume will become a
candidate for deletion and move to the Pending Deletion state. While in this state, the
volume is accessible by all legal host operations.
– Deleted: The volume is either currently within a fast ready category or was previously
in a fast ready category where it became a candidate for automatic deletion, and the
deletion was carried out successfully. Any mount operation to this volume will be
treated as a fast ready (scratch) mount because no data is present.
Earliest Delete On: The date and time when the logical volume will be deleted. The time
recorded reflects the time zone in which the user's browser is located. If there is no
expiration date set, this value displays as “-”.
Logical WORM: Whether the logical volume is formatted as a write-once, read many
(WORM) volume. Possible values are Yes and No.
The cluster-specific Logical Volume Properties table displays information about the specified
logical volumes on each cluster. These are properties that are specific to each cluster. The
logical volume details and status displayed include:
Cluster: The cluster location of the logical volume copy.
In Cache: Whether the logical volume is in cache for this cluster.
The table shown at the top of Figure 8-23 on page 489 shows current information about the
number of logical volumes in the TS7700 Virtualization Engine:
Currently Inserted: The total number of logical volumes inserted into the composite library.
Maximum Allowed: The total maximum number of logical volumes that can be inserted
into the composite library.
Available Slots: The slots remaining for logical volumes to be inserted, obtained by
subtracting the Currently Inserted count from the Maximum Allowed.
To view the current list of logical volume ranges in the TS7700 Virtualization Engine Grid,
enter a logical volume range and click Show.
The following information is useful if you want to perform an Insert a new logical volume range
function:
Starting VOLSER: The first logical volume to be inserted. The range for inserting logical
volumes begins with this VOLSER number.
Quantity: Select this option to insert a set amount of logical volumes beginning with the
Starting VOLSER. Enter the quantity of logical volumes to be inserted in the adjacent text
field. You can insert up to 10,000 logical volumes at one time.
Ending VOLSER: Select this option to insert a range of logical volumes. Enter the ending
VOLSER number in the adjacent text field.
Initially owned by: The name of the cluster that will own the new logical volumes. Select a
cluster from the drop-down menu.
Media type: Media type of the logical volume (or volumes). Possible values are:
– Cartridge System Tape (400 MiB)
– Enhanced Capacity Cartridge System Tape (800 MiB)
Set Constructs: Select this check box to specify constructs for the new logical volume (or
volumes). Then use the drop-down menu below each construct to select a predefined
construct name. You can specify any or all of the following constructs:
To insert a range of logical volumes, complete the fields listed and click Insert. You are
prompted to confirm your decision to insert logical volumes. To continue with the insert
operation, click Yes. To abandon the insert operation without inserting any new logical
volumes, click No.
To delete unused logical volumes, select one of the options described below and click the
Delete Volumes button. A confirmation window will be displayed. Click Yes to delete or No to
cancel. To view the current list of unused logical volume ranges in the TS7700 Virtualization
Engine Grid, enter a logical volume range at the bottom of the window and click Show. A
logical volume range deletion can be cancelled while in progress at the Cluster Operation
History window.
Important: A volume can only be deleted from the insert category if the volume has never
been moved out of the insert category after initial insert and has never received write data
from a host.
To display a range of existing logical volumes, enter the starting and ending VOLSERs in the
fields at the top of the page and click Show.
To modify constructs for a range of logical volumes, identify a Volume Range, then use the
Constructs drop-down menus described as follows to select construct values and click
Modify:
Volume Range: The range of logical volumes to be modified.
– From: The first VOLSER in the range.
– To: The last VOLSER in the range.
Constructs: Use the following drop-down menus to change one or more constructs for the
identified Volume Range. From each drop-down menu, you can select a predefined
construct to apply to the Volume Range, No Change to retain the current construct value,
or dashes (--------) to restore the default construct value.
– Storage Groups: Changes the Storage Group for the identified Volume Range.
– Storage Classes: Changes the Storage Class for the identified Volume Range.
You are asked to confirm your decision to modify logical volume constructs. To continue with
the operation, click Yes. To abandon the operation without modifying any logical volume
constructs, click No.
If a move operation is already in progress, a warning message will be displayed. You can view
move operations already in progress from the Operation History window.
To cancel a move request, select the Cancel Move Requests link. The available options to
cancel a move request are:
Cancel All Moves: Cancels all move requests.
Cancel Priority Moves Only: Cancels only priority move requests.
If you want to move logical volumes, you must define a volume range or select an existing
range, select a target pool, and identify a move type:
Physical Volume Range: The range of physical volumes to move. You can use either this
option or Existing Ranges to define the range of volumes to move, but not both.
– From: VOLSER of the first physical volume in the range to move.
– To: VOLSER of the last physical volume in the range to move.
Existing Ranges: The list of existing physical volume ranges. You can use either this option
or Volume Range to define the range of volumes to move, but not both.
Media Type: The media type of the physical volumes in the range to move. If no available
physical stacked volume of the given media type is in the range specified, no logical
volume is moved.
Target Pool: The number (1-32) of the pool to which the logical volumes are moved.
Move Type: Used to determine when the move operation will occur. Possible values are:
– Deferred: Move operation will occur in the future as schedules permit.
– Priority: Move operation will occur as soon as possible.
– Honor Inhibit Reclaim Schedule: An option of the Priority Move Type, it specifies that
the move schedule will occur in conjunction with Inhibit Reclaim schedule. If this option
is selected, the move operation will not occur when Reclaim is inhibited.
After you define your move operation parameters and click the Move button, you will be asked
to confirm your request to move physical volumes. If you select No, you will return to the Move
Logical Volumes window.
You can view the results of a previous query, or create a new query to search for logical
volumes.
Tip: Only one search can be executed at a time. If a search is in progress, an information
message will occur at the top of the Logical volumes search window. You can cancel a
search in progress by clicking the Cancel Search button.
To view the results of a previous search query, select the Previous Searches hyperlink to see
a table containing a list of previous queries. Click a query name to display a list of logical
volumes that match the search criteria.
Up to 10 previously named search queries can be saved. To clear the list of saved queries,
check the check box adjacent to one or more queries to be removed, select Clear from the
drop-down menu, and click Go. This operation will not clear a search query that is already in
progress.
To create a new search query, enter a name for the new query. Enter a value for any of the
fields and select Search to initiate a new logical volume search. The query name, criteria,
start time, and end time are saved along with the search results.
To search for a specific Volser, enter your parameters in the New Search Name and Volser
fields and then click Search (Figure 8-28).
If you are looking for the results of earlier searches, click Previous Searches on the Logical
Volume Search window, shown in Figure 8-27 on page 495, and the window shown in
Figure 8-30 opens.
The meanings of the entry fields in Figure 8-27 on page 495, where you can enter your
Logical Volume search, are:
VOLSER: The volume's serial number. This field can be left blank. You can also use the
following wild card characters in this field (a brief translation sketch follows this list of search fields):
% (percent): Represents zero or more characters.
* (asterisk): Translated to % (percent). Represents zero or more characters.
. (period): Translated to _ (single underscore). Represents one character.
_ (single underscore): Represents one character.
? (question mark): Translated to _ (single underscore). Represents one character.
Category: The name of the category to which the logical volume belongs. This value is a
four-character hexadecimal string. This field can be left blank. See Table 8-2 for possible
values.
(For example, category FF00 is the Insert category.)
Media Type: The type of media on which the volume resides. Use the drop-down menu to
select from the available media types. This field can be left blank.
Expire Time: The amount of time in which logical volume data will expire. Enter a number.
This field is qualified by the values Equal to, Less than, or Greater than in the preceding
drop-down menu and defined by the succeeding drop-down menu under the heading Time
Units. This field may be left blank.
Residency: The residency state of the logical volume. Possible values are:
– Resident: Includes only logical volumes classified as resident.
– Ignore: Ignores any values in the Residency field. This is the default selection.
– Removed Before: Only logical volumes removed before a given date and time. If you
select this value, you must also complete the fields for Removed Date and Time.
– Removed After: Only logical volumes removed after a given date and time. If you select
this value, you must also complete the fields for Removed Date and Time.
– Retained: Only logical volumes classified as retained.
– Deferred: Only logical volumes classified as deferred.
Removed Date and Time: The date and time that informs the Removed Before or
Removed After search queries. These values reflect the time zone in which your browser
is located.
– Date: The calendar date including month (M), day (D), and year (Y), using the format
MM/DD/YYYY. This field is accompanied by a date chooser calendar icon. You can
enter the month, day, and year manually, or you can use the calendar chooser to select
a specific date. The default for this field is blank.
Remember: You can print or download the results of a search query using the Print
Report or Download Spreadsheet buttons on the Volumes found table at the end of the
Search Results window, as shown in Figure 8-31 on page 498.
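The wild card translations listed for the VOLSER field can also be applied when you filter a downloaded volume list outside the management interface. The following Python sketch only mirrors the rules listed above; the function name is illustrative and is not part of the TS7700 interface, which performs its own matching on the cluster.

import re

# Translate the VOLSER search wild cards into a regular expression:
# '%' and '*' match zero or more characters; '.', '_', and '?' match one.
def volser_pattern_to_regex(pattern):
    parts = []
    for ch in pattern:
        if ch in ('%', '*'):
            parts.append('.*')       # zero or more characters
        elif ch in ('.', '_', '?'):
            parts.append('.')        # exactly one character
        else:
            parts.append(re.escape(ch))
    return re.compile('^' + ''.join(parts) + '$', re.IGNORECASE)

# Example: match every VOLSER that starts with VT0 and has any three final characters.
matcher = volser_pattern_to_regex('VT0???')
print(matcher.match('VT0123') is not None)   # True
print(matcher.match('VT01234') is not None)  # False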
A category is a logical grouping of physical volumes for a predefined use. A Fast Ready
category groups logical volumes for quick, non-specific use. This grouping enables quick
mount times because the TS7700 Virtualization Engine can order category mounts without
recalling data from a stacked volume.
The Fast Ready Categories Table lists all Fast Ready categories and their attributes. If no
Fast Ready categories are defined, this table will not be visible.
Important: Fast Ready categories and Data Classes work at the system level and are
unique for all clusters in a grid. If you modify them on one cluster, the change applies to
all clusters in the grid.
You can use the Fast Ready Categories table to create a new Fast Ready category, or to
modify or delete an existing one.
Restriction: You may not use category name 0000 or “FFxx”, where xx equals 0–9 or
A-F. 0000 represents a null value, and “FFxx” is reserved for hardware.
Use the drop-down menu on the Fast Ready Categories Table to add a new category, or
modify or delete an existing category.
To add a new category, select Add from the drop-down menu. Complete the fields for the
information that will be displayed in the Fast Ready Categories Table, as defined previously.
Tip: You must choose either the No expiration or Set expiration radio button before the
new category can be created. If you select Set expiration, you must complete the Expire
Time and Time Units fields.
To modify an existing Fast Ready category, click the radio button from the Select column that
appears adjacent to the number of the category you want to modify. Select Modify from the
drop-down menu. You will be able to change any field displayed on the Fast Ready
Categories table.
To delete an existing Fast Ready category, click the radio button from the Select column that
appears adjacent to the number of the category you want to delete. Select Delete from the
drop-down menu.
To watch a tutorial showing how to modify pool encryption settings, click the View tutorial
link.
Tip: Pools 1-32 are preinstalled. Pool 1 functions as the default pool and will be used if no
other pool is selected. All other pools must be defined before they can be selected.
To modify pool properties, check the check box next to one or more pools listed in the
Physical Volume Pool Properties table and select Properties from the drop-down menu. The
Pool Properties table will be displayed. You will be able to modify the fields Media Class and
First Media, defined previously, and the following fields:
Second Media (Secondary): The second choice of media type that the pool can borrow
from. The options shown will exclude the media type chosen for First Media. Possible
values are:
– Any 3592: Any available 3592 cartridge can be used.
– JA: Enterprise Tape Cartridge (ETC).
– JB: Enterprise Extended-Length Tape Cartridge (ETCL).
– JJ: Enterprise Economy Tape Cartridge (EETC).
– None: The only option available if the Primary Media type is Any 3592.
Borrow Indicator: Defines how the pool is populated with scratch cartridges. Possible
values are:
– Borrow, Return: A cartridge is borrowed from the common scratch pool and returned
when emptied.
– Borrow, Keep: A cartridge is borrowed from the common scratch pool and retained,
even after being emptied.
– No Borrow, Return: A cartridge may not be borrowed from common scratch pool, but
an emptied cartridge will be placed in common scratch pool. This setting is used for an
empty pool.
– No Borrow, Keep: A cartridge may not be borrowed from common scratch pool, and an
emptied cartridge will be retained.
Reclaim Pool: The pool to which logical volumes are assigned when reclamation occurs
for the stacked volume on the selected pool.
Maximum Devices: The maximum number of physical tape drives that the pool can use for
premigration.
Export Pool: The type of export supported if the pool is defined as an Export Pool (the pool
from which physical volumes are exported). Possible values are:
– Not Defined: The pool is not defined as an Export pool.
– Copy Export: The pool is defined as a Copy Export pool.
Days Before Secure Data Erase: The number of days a physical volume that is a
candidate for Secure Data Erase can remain in the pool without access to a physical
stacked volume. Each stacked physical volume possesses a timer for this purpose, which
is reset when a logical volume on the stacked physical volume is accessed. Secure Data
Erase occurs at a later time, based on an internal schedule. Secure Data Erase renders all data on the physical volume unreadable.
Tip: The Use Encryption Key Manager default key check box appears before both the Key
Label 1 and Key Label 2 fields. You must select this check box for each label to be
defined using the default key.
Key Label 2: The pool's current encryption key Label 2. The label must consist of ASCII
characters and cannot exceed 64 characters. No leading or trailing blank is permitted,
although an internal space is allowed. Lower-case characters are internally converted to
upper case upon storage, so key labels are reported using upper-case characters.
If the encryption state is disabled, this field is blank.
Key Mode 2: Encryption Mode used with Key Label 2. When Key Label 2 is disabled, this
field is invalidated. Possible values for this field are:
– Clear Label: The data key is specified by the key label in clear text.
– Hash Label: The data key is referenced by a computed value corresponding to its
associated public key.
To obtain details about a physical stacked volume, enter the volume's VOLSER number in the
available text field and select the Get Details button. The VOLSER number must be six
characters in length.
The window shown in Figure 8-36 is displayed when details for a physical stacked volume are
retrieved. The example is for volume JA7705.
To obtain details about the specific logical volumes on the physical stacked volume, click the
Download List of Logical Volumes button below the table. If the window for the download
does not appear and the machine attempting the download is running Windows XP SP2, click
Important: Excel removes leading zeros from downloaded CSV files. To preserve
leading zeros in files that you download, see “Incoming Copy Queue window” on page 480.
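If you post-process a downloaded volume list programmatically instead of opening it in Excel, reading every field as text avoids the leading-zero problem. In the following Python sketch, the file name and the column heading are assumptions for illustration only.

import csv

# Read the downloaded list as text so VOLSERs such as 000123 keep their leading zeros.
with open('logical_volumes.csv', newline='') as f:
    reader = csv.DictReader(f)
    volsers = [row['Logical Volume'] for row in reader]

print(volsers[:5])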
The Select Move Action Menu provides options for moving physical volumes to a target pool.
The options available to move physical volumes to a target pool are:
Move Range of Physical Volumes: Moves physical volumes in the range specified to the
target pool.
Move Range of Scratch Only Volumes: Moves scratch volumes in the range specified to
the target pool.
Move Quantity of Scratch Only Volumes: Moves a specified quantity of physical volumes
from the source pool to the target pool.
Cancel Move Requests: Cancels any previous move request.
If you select Move Range of Physical Volumes from the Select Move Action menu, you will
be asked to define a volume range or select an existing range, select a target pool, and
identify a move type:
Volume Range: The range of physical volumes to move. You can use either this option or
Existing Ranges to define the range of volumes to move, but not both.
– From: VOLSER of the first physical volume in the range to move.
– To: VOLSER of the last physical volume in the range to move.
Existing Ranges: The list of existing physical volume ranges. You can use either this option
or Volume Range to define the range of volumes to move, but not both.
Target Pool: The number (0-32) of the target pool to which physical volumes are
moved.
If you select Move Range of Scratch Only Volumes from the Select Move Action menu, you
will be asked to define a volume range or select an existing range, and select a target pool, as
defined previously.
If you select Move Quantity of Scratch Only Volumes from the Select Move Action menu,
you will be asked to define the number of volumes to be moved, identify a source and target
pool, and select a media type:
Number of Volumes: The number of physical volumes to be moved.
Source Pool: The number (0-32) of the source pool from which physical volumes are
moved.
Target Pool: The number (0-32) of the target pool to which physical volumes are moved.
Media Type: Specifies the media type of the physical volumes in the range to be moved.
The physical volumes in the range specified to move must be of the media type
designated by this field, or else the move operation will fail.
After you define your move operation parameters and click the Move button, you will be asked
to confirm your request to move physical volumes. If you select No, you will return to the Move
Physical Volumes window. To cancel a previous move request, select Cancel Move
Requests from the Select Move Action menu. The available options to cancel a move request
are:
Cancel All Moves: Cancels all move requests.
Cancel Priority Moves Only: Cancels only priority move requests.
Cancel Deferred Moves Only: Cancels only deferred move requests.
Select a Pool: Cancels move requests from the designated source pool (0-32), or from all
source pools.
The Select Eject Action menu provides options for ejecting physical volumes. The options
available to eject physical volumes are:
Eject Range of Physical Volumes: Ejects physical volumes in the range specified.
Eject Range of Scratch Only Volumes: Ejects scratch volumes in the range specified.
Eject Quantity of Scratch Only Volumes: Ejects a specified quantity of physical volumes.
Eject Export Hold Volumes: Ejects a subset of the volumes in the Export/Hold Category.
Cancel Eject Requests: Cancels any previous eject request.
If you select Eject Range of Physical Volumes from the Select Eject Action menu, you will
be asked to define a volume range, or select an existing range and identify an eject type:
Volume Range: The range of physical volumes to eject. You can use either this option or
Existing Ranges to define the range of volumes to eject, but not both.
– From: VOLSER of the first physical volume in the range to eject.
– To: VOLSER of the last physical volume in the range to eject.
Existing Ranges: The list of existing physical volume ranges. You can use either this option
or Volume Range to define the range of volumes to eject, but not both.
Eject Type: Used to determine when the eject operation will occur. Possible values are:
– Deferred: The eject operation will occur in the future as schedules permit.
– Priority: The eject operation will occur as soon as possible.
If you select Eject Range of Scratch Only Volumes from the Select Eject Action menu, you
will be asked to define a volume range or select an existing range, as defined previously.
If you select Eject Quantity of Scratch Only Volumes from the Select Eject Action menu,
you will be asked to define the number of volumes to be ejected, identify a source pool, and
select a media type:
Number of Volumes: The number of physical volumes to be ejected.
Source Pool: The number (0-32) of the source pool from which physical volumes are
ejected.
Media Type: Specifies the media type of the physical volumes in the range to be ejected.
The physical volumes in the range specified to eject must be of the media type designated
by this field, or else the eject operation will fail.
If you select Eject Export Hold Volumes from the Select Eject Action menu, you will be
asked to select the VOLSER (or VOLSERs) of the volumes to be ejected. To select all
VOLSERs in the Export Hold category, select Select All from the drop-down menu.
After you define your eject operation parameters and click the Eject button, you will be asked
to confirm your request to eject physical volumes. If you select No, you will return to the Eject
Physical Volumes window.
To cancel a previous eject request, select Cancel Eject Requests from the Select Eject
Action menu. The available options to cancel an eject request are:
Cancel All Ejects: Cancels all eject requests.
Cancel Priority Ejects Only: Cancels only priority eject requests.
Cancel Deferred Ejects Only: Cancels only deferred eject requests.
Click the Inventory Upload button to synchronize the physical cartridge inventory from the
attached tape library with the TS7740 Virtualization Engine database.
The VOLSER Ranges Table displays the list of defined VOLSER ranges for a given
component.
You can use the VOLSER Ranges Table to create a new VOLSER range, or modify or delete
a predefined VOLSER range.
Tip: If a physical volume's VOLSER is defined within a TS3500 Tape Library Cartridge
Assignment Policy (CAP) range, upon being inserted, that volume can be inventoried
according to its VOLSER definition. If the physical volume's VOLSER occurs outside a
defined range, operator intervention will be required to dedicate the inserted physical
volume to TS7740 Virtualization Engine resources.
In Figure 8-39, status information displayed in the VOLSER Ranges Table includes:
From: The first VOLSER in a defined range.
To: The last VOLSER in a defined range.
Media Type: The media type for all volumes in a given VOLSER range. Possible values
are:
– JB-ETCL: Enterprise Extended-Length Tape Cartridge
Use the drop-down menu in the VOLSER Ranges Table to add a new VOLSER range or
modify or delete a predefined range.
To add a new VOLSER range, select Add from the drop-down menu. Complete the fields for
information that will be displayed in the VOLSER Ranges Table, as defined previously.
To modify a predefined VOLSER range, select the radio button from the Select column that
appears in the same row as the name of the VOLSER range you want to modify. Select
Modify from the drop-down menu and make your desired changes to the information that will
be displayed in the VOLSER Ranges Table, as defined previously.
To delete a predefined VOLSER range, select the radio button from the Select column that
appears in the same row as the name of the VOLSER range you want to delete. Select
Delete from the drop-down menu. A confirmation window will appear; select Yes to delete the
indicated VOLSER range, or select No to cancel the request.
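As an illustration of how a defined VOLSER range relates to individual volumes, the following Python sketch tests whether a VOLSER falls between the From and To values of a range. It assumes six-character VOLSERs that compare lexicographically, as in the examples in this chapter; the library's own Cartridge Assignment Policy processing remains authoritative.

# Minimal sketch: test whether a VOLSER falls inside a defined From/To range.
def in_volser_range(volser, range_from, range_to):
    volser = volser.upper()
    return len(volser) == 6 and range_from.upper() <= volser <= range_to.upper()

print(in_volser_range('JA7705', 'JA7000', 'JA7999'))  # True
print(in_volser_range('JB0001', 'JA7000', 'JA7999'))  # False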
Referring to Figure 8-39 on page 514, the Unassigned Volsers table displays the list of
unassigned physical volumes for a given component that are pending ejection. A VOLSER is
removed from this table when a new range that contains the VOLSER is added.
You can use the Unassigned Volsers table to eject one or more physical volumes from a
library attached to a TS7740 Virtualization Engine.
To eject a physical volume, check the check box adjacent to the VOLSER associated with the
volume and then select Eject from the drop-down menu on the Unassigned Volsers table. To
eject multiple volumes, check multiple check boxes before selecting Eject from the drop-down
menu. Buttons on the tool bar of the Unassigned Volsers table will select all or de-select all
VOLSERs in the table. A confirmation page will appear after Eject is selected. Select Yes to
eject the indicated volume (or volumes), or No to cancel the request.
You can view the results of a previous query, or create a new query to search for physical
volumes.
To create a new search query, enter a name for the new query. Enter a value for any of the
fields defined below and select Search to initiate a new physical volume search. The query
name, criteria, start time, and end time are saved along with the search results.
VOLSER: The volume's serial number. This field can be left blank. You can also use the
following wild card characters in this field:
% (percent): Represents zero or more characters.
* (asterisk): Translated to % (percent). Represents zero or more characters.
. (period): Represents one character.
_ (single underscore): Translated to . (period). Represents one character.
? (question mark): Translated to . (period). Represents one character.
Media Type: The type of media on which the volume resides. Use the drop-down menu to
select from available media types. This field can be left blank.
Home Pool: The pool number (0-32) the physical volume was assigned to when it was
inserted into the library, or the pool it was moved to through the management interface
Move/Eject Stacked Volumes function. This field can be left blank.
Current Pool: The number of the storage pool (0-32) in which the physical volume currently
resides. This field can be left blank.
Encryption Key: The encryption key label designated when the volume was encrypted.
This is a text field. Possible values include:
– A name identical to the first or second key label on a physical volume. Any physical
volume encrypted using the designated key label is included in the search.
– Search for the default key. Select this check box to search for all physical volumes
encrypted using the default key label.
Pending Eject: This option is used to decide whether to include physical volumes pending
an eject in the search query. Possible values include:
– All Ejects: All physical volumes pending eject are included in the search.
– Priority Ejects: Only physical volumes classified as priority eject are included in the
search.
– Deferred Ejects: Only physical volumes classified as deferred eject are included in the
search.
Pending Move to Pool: Whether to include physical volumes pending a move in the search
query. Possible values are:
– All Moves: All physical volumes pending a move are included in the search.
– Priority Moves: Only physical volumes classified as priority move are included in the
search.
– Deferred Moves: Only physical volumes classified as deferred move are included in the
search.
Figure 8-41 shows a sample Physical Volume Search query for a VOLSER named 320031.
Up to 10 previously named search queries can be saved. To clear the list of saved queries,
check the check box adjacent to one or more queries to be removed, select Clear from the
drop-down menu, and click Go. This operation will not clear a search query already in
progress. You will be asked to confirm your decision to clear the query list. Select Yes to clear
the list of saved queries, or No to retain the list of queries.
In Figure 8-42, click a query name to display a list of physical volumes that match the search
criteria.
The Physical Volume Search Results window displays a list of physical volumes that meet the
criteria of an executed search query.
The table at the top of the Physical Volume Search Results window displays the name of the
completed search query and its start and end times. The table following the heading Search
Criteria Set displays the criteria used to generate the search query.
The Search Results table displays a list of VOLSERs belonging to the physical volumes that
meet the search criteria. Each VOLSER listed under the Physical Volumes heading is linked
to its physical volume details window. Click the VOLSER to open the details page and view
additional information about a specific physical volume, as shown in Figure 8-44 on page 521.
The Search Results table displays up to 10 VOLSERs per page. To view the results contained
on another window, select the right- or left-pointing arrow at the bottom of the table, or enter a
page number in the adjacent field and click Go. You can print or download the results of a
search query using the Print Report or Download Spreadsheet buttons at the top of the
Search Results table.
8.4.7 Constructs
The topics in this section present information that is related to TS7700 Virtualization Engine
storage constructs.
The Storage Groups table displays all existing storage groups available for a given cluster.
You can use the Storage Groups table to create a new storage group, modify an existing
storage group, or delete a storage group.
Use the drop-down menu in the Storage Groups table to add a new storage group or modify
or delete an existing storage group.
To add a new storage group, select Add from the drop-down menu. Complete the fields for
information that will be displayed in the Storage Groups table, as defined previously.
To modify an existing storage group, select the radio button from the Select column that
appears adjacent to the name of the storage group you want to modify. Select Modify from
the drop-down menu. Complete the fields for information that will be displayed in the Storage
Groups table, as defined previously.
To delete an existing storage group, select the radio button from the Select column that
appears adjacent to the name of the storage group you want to delete. Select Delete from the
drop-down menu. You will be prompted to confirm your decision to delete a storage group. If
you select Yes, the storage group will be deleted. If you select No, your request to delete will
be cancelled.
The Current Copy Policy table displays the copy policy in force for each component of the
grid. If no Management Class is selected, this table will not be visible. You must select a
Management Class from the Management Classes table to view copy policy details.
The Management Classes Table (Figure 8-46) displays defined Management Class copy
policies that can be applied to a cluster.
Use the drop-down menu on the Management Classes table to add a new Management
Class, modify an existing Management Class, or delete one or more existing Management
Classes.
To add a new Management Class, select Add from the drop-down menu and click Go.
Complete the fields for information that will be displayed in the Management Classes Table,
as defined previously. You can create up to 256 Management Classes per TS7700
Virtualization Engine Grid.
Remember: If the cluster does not possess a physical library, the Secondary Pool field will
not be available in the Add option.
The Copy Action drop-down menu appears adjacent to each cluster in the TS7700
Virtualization Engine Grid. The Copy Action menu allows you to select, for each component,
the copy mode to be used in volume duplication.
To modify an existing Management Class, check the check box from the Select column that
appears in the same row as the name of the Management Class you want to modify. You can
modify only one Management Class at a time. Select Modify from the drop-down menu and
click Go. Of the fields listed previously in the Management Classes Table, or available from the Copy Action drop-down menu, you can change all of them except the Management Class name.
Remember: If the cluster does not possess a physical library, the Secondary Pool field will
not be available in the Modify option.
To delete one or more existing Management Classes, check the check box from the Select
column that appears in the same row as the name of the Management Class you want to
delete. Check multiple check boxes to delete multiple Management Classes. Select Delete
from the drop-down menu and click Go.
Restriction: You will not be permitted to delete the default Management Class.
The Storage Classes table displays the list of storage classes defined for each component of
the grid.
You can use the Storage Classes table to create a new storage class, or modify or delete an
existing storage class. The default storage class may be modified, but cannot be deleted. The
default Storage Class has dashes (--------) as the symbolic name.
Use the drop-down menu in the Storage Classes table to add a new storage class, or modify
or delete an existing storage class.
To add a new storage class, select Add from the drop-down menu. Complete the fields for the
information that will be displayed in the Storage Classes table as defined previously.
Tip: You can create up to 256 storage classes per TS7700 Virtualization Engine Grid.
To modify an existing storage class, select the radio button from the Select column that
appears in the same row as the name of the storage class you want to modify. Select Modify
from the drop-down menu. Of the fields listed in the Storage Classes table, you will be able to
change all of them except the storage class name.
To delete an existing storage class, select the radio button from the Select column that
appears in the same row as the name of the storage class you want to delete. Select Delete
from the drop-down menu. A window will appear to confirm the storage class deletion. Select
Yes to delete the storage class, or No to cancel the delete request.
The Storage Classes table for TS7720 displays the following status information:
Name: The name of the storage class. Valid characters for this field are A-Z, 0-9, $, @, *,
#, and %. The first character of this field may not be a number. The value in this field must
be between 1 and 8 characters in length.
Volume Copy Retention Group: Provides additional options to remove data from a
disk-only TS7700 Virtualization Engine as the active data approaches full capacity.
Possible values are:
– Prefer Remove: Removal candidates in this group are removed before removal
candidates in the Prefer Keep group.
– Prefer Keep: Removal candidates in this group are removed after removal candidates
in the Prefer Remove group.
– Pinned: Copies of volumes in this group are never removed from the accessing cluster.
Volume Copy Retention Time: The minimum amount of time, in hours, after a logical
volume copy was last accessed before the copy can be removed from cache.
Description: A description of the storage class definition. The value in this field must be
between 1 and 70 characters in length.
Important: Fast Ready Categories and Data Classes work at the system level and are
unique for all clusters in a grid. If you modify them on one cluster, the change applies to
all clusters in the grid.
The Data Classes table (Figure 8-49) displays the list of Data Classes defined for each cluster
of the grid.
You can use the Data Classes Table to create a new Data Class, or to modify or delete an existing
Data Class. The default Data Class can be modified, but cannot be deleted. The default Data
Class has dashes (--------) as the symbolic name.
Use the drop-down menu in the Data Classes Table to add a new Data Class, or modify or
delete an existing Data Class.
To add a new Data Class, select Add from the drop-down menu and click Go. Complete the
fields for information that will be displayed in the Data Classes Table.
Tip: You can create up to 256 Data Classes per TS7700 Virtualization Engine Grid.
To modify an existing Data Class, select the radio button from the Select column that appears
in the same row as the name of the Data Class you want to modify. Select Modify from the
drop-down menu and click Go. Of the fields listed in the Data Classes table, you will be able
to change all of them except the default Data Class name.
To delete an existing Data Class, select the radio button from the Select column that appears
in the same row as the name of the Data Class you want to delete. Select Delete from the
drop-down menu and click Go. A window will appear to confirm the Data Class deletion.
Select Yes to delete the Data Class or No to cancel the delete request.
8.4.8 Configuration
This section describes the functions of the TS7700 Virtualization Engine MI that are related to
configuring the TS7700 Virtualization Engine.
The following information, related to grid identification, is displayed. To change the grid
identification properties, edit the available fields and click Submit Changes. The available
fields are as follows:
Grid nickname: The grid nickname must be one to eight characters in length and
composed of alphanumeric characters with no spaces. The characters @, ., -, and + are
also allowed.
Grid description: A short description of the grid. You can use up to 63 characters.
The following information related to cluster identification is displayed. To change the cluster
identification properties, edit the available fields and click Submit Changes. The available
fields are:
Cluster nickname: The cluster nickname must be one to eight characters in length and
composed of alphanumeric characters. Blank spaces and the characters @, ., -, and + are
also allowed. Blank spaces cannot be used in the first or last character positions.
Cluster description: A short description of the cluster. You can use up to 63 characters.
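The nickname rules above can be expressed as simple patterns. The following Python sketch only mirrors the stated constraints; the management interface performs its own validation when you click Submit Changes.

import re

# Grid nickname: 1-8 characters, alphanumeric plus @ . - +, no spaces.
GRID_NICKNAME = re.compile(r'^[A-Za-z0-9@.+-]{1,8}$')

# Cluster nickname: 1-8 characters, blanks allowed but not first or last.
CLUSTER_NICKNAME = re.compile(r'^[A-Za-z0-9@.+-]'
                              r'(?:[A-Za-z0-9@.+ -]{0,6}'
                              r'[A-Za-z0-9@.+-])?$')

print(bool(GRID_NICKNAME.match('GRID01')))      # True
print(bool(CLUSTER_NICKNAME.match('Lab A')))    # True
print(bool(CLUSTER_NICKNAME.match(' LabA')))    # False (leading blank)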
Data transfer speeds between TS7700 Virtualization Engine Clusters sometimes vary. The
cluster family configuration groups clusters so that microcode can optimize grid connection
performance between the grouped clusters.
To view or modify cluster family settings, first verify that these permissions are granted to your
assigned user role. If your user role includes cluster family permissions, select the Modify
button to perform the following actions:
Add a family: Click the Add button to create a new cluster family. A new cluster family
placeholder is created to the right of any existing cluster families. Enter the name of the
new cluster family in the active Name text box. Cluster family names must be one to eight
characters in length and composed of Unicode characters. Each family name must be
unique. Clusters are added to the new cluster family by relocating a cluster from the
Unassigned Clusters area using the method described in the Move a cluster function,
described next.
Delete a family: You can delete an existing cluster family. Click the X in the top right corner
of the cluster family you want to delete. If the cluster family you attempt to delete contains
any clusters, a warning message is displayed. Click OK to delete the cluster family and
return its clusters to the Unassigned Clusters area. Click Cancel to abandon the delete
action and retain the selected cluster family.
Save changes: Click the Save button to save any changes made to the Cluster Families
window and return it to read-only mode.
Remember: Each cluster family should contain at least one cluster. If you attempt to
save changes and a cluster family does not contain any clusters, an error message
displays and the Cluster Families window remains in edit mode.
To watch a tutorial that shows how to work with clusters, click the View tutorial link.
Table fields
The following fields are available:
Nickname: Nickname of the node followed by the library sequence number.
ID: Node ID.
Type: Type of the node. Possible values are vNode or hNode.
Operational State: Possible values are Online, Going Online, Going Offline, and Offline.
Tip: If the node type is hNode, the Operational State will also display as Active.
Table actions
There are several actions available for cluster nodes accessed through the Select Action
drop-down menu:
Details & Health: View details about the selected node. Details shown include general
node information, node state and health information, and networking settings.
Modify Node Nickname: Modify the nickname of the selected node.
When Write Protect Mode is enabled on a cluster, host commands fail if they are issued to
logical devices in that cluster and attempt to modify a volume's data or attributes. Meanwhile,
host commands that are issued to logical devices in peer clusters are allowed to continue with
full read and write access to all volumes in the library. Write Protect Mode is used primarily for
disaster recovery testing. In this scenario, a recovery host that is connected to a
non-production cluster must access and validate production data without any risk of modifying
it.
A cluster can be placed into Write Protect Mode only if the cluster is online. After it is set, the
mode is retained through intentional and unintentional outages, and can only be disabled
through the same management interface window used to enable the function. When a cluster
within a grid configuration has Write Protect Mode enabled, standard grid functions such as
logical volume replication and logical volume ownership transfer are unaffected.
Logical volume categories can be excluded from Write Protect Mode. Up to 16 categories can
be identified, and set to include or exclude from Write Protect Mode using the Category Write
Protect Properties table. Additionally, volumes in any Fast Ready category can be written to if
the Ignore fast ready characteristics of write protected categories check box is selected.
Use the Category Write Protect Properties table to add, modify, or delete categories to be
selectively excluded from Write Protect Mode. When Write Protect Mode is enabled, any
categories added to this table must display a value of Yes in the Excluded from Write Protect
field before the volumes in that category can be modified by an accessing host.
The following category fields are listed in the Category Write Protect Properties table:
Category: The identifier for a defined category. This is an alphanumeric hexadecimal value
between 0x0001 and 0xFEFF (0x0000 and 0xFFxx cannot be used). Letters used in the
category value must be capitalized.
Excluded from Write Protect: Whether the category is excluded from Write Protect Mode.
Possible values are:
– Yes: The category is excluded from Write Protect Mode. When Write Protect is
enabled, volumes in this category can be modified when accessed by a host.
– No: The category is not excluded from Write Protect Mode. When Write Protect is
enabled, volumes in this category cannot be modified when accessed by a host.
Description: A descriptive definition of the category and its purpose.
Use the drop-down menu on the Category Write Protect Properties table to add a new
category or modify or delete an existing category. You must click the Submit Changes
button to save any changes made to Write Protect Mode settings.
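The category value rules can be summarized in a few lines of code. The following Python sketch only mirrors the constraints stated above (four hexadecimal digits, capital letters, 0001 through FEFF); it is an illustration and is not part of the management interface.

# Reject 0000 and the FFxx range, which are reserved.
def is_valid_write_protect_category(category):
    if len(category) != 4 or category != category.upper():
        return False
    try:
        value = int(category, 16)
    except ValueError:
        return False
    return 0x0001 <= value <= 0xFEFF

print(is_valid_write_protect_category('0002'))  # True
print(is_valid_write_protect_category('FF00'))  # False (reserved for hardware)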
The following settings allow a user to specify cluster overrides for certain I/O and copy
operations. These settings override the default TS7700 Virtualization Engine behavior and
can be different for every cluster in a grid.
Prefer local cache for fast ready mount requests: A fast ready mount will select a local
copy as long as a copy is available and a cluster copy consistency point is not specified as
No Copy in the Management Class for the mount. The cluster is not required to have a
valid copy of the data.
Prefer local cache for non-fast ready mount requests: This override will cause the local
cluster to satisfy the mount request as long as the cluster is available and the cluster has a
valid copy of the data, even if that data is only resident on physical tape. If the local cluster
does not have a valid copy of the data, then default cluster selection criteria applies.
Force volumes mounted on this cluster to be copied to the local cache: For a non-fast
ready mount, this override causes a copy to be performed on to the local cluster as part of
the mount processing. For a fast ready mount, this setting has the effect of overriding the
specified Management Class with a copy consistency point of Rewind/Unload for the
cluster. This does not change the definition of the Management Class, but serves to
influence the replication policy.
Allow fewer RUN consistent copies before reporting RUN command complete: If selected,
the value entered at Number of required RUN consistent copies, including the source
copy, will be used to determine the maximum number of RUN copies, including the source,
that must be consistent before the rewind-unload operation is reported as complete.
If the grid possesses a physical library, but the selected cluster does not, this page is visible
but disabled on the TS7700 Virtualization Engine management interface. If the grid does not
possess a physical library, this page is not visible on the TS7700 Virtualization Engine
management interface.
Reclamation can improve tape utilization by consolidating data on physical volumes, but it
consumes system resources and can affect host access performance. The Inhibit Reclaim
Schedules function can be used to disable reclamation in anticipation of increased host
access to physical volumes.
The Schedules table displays the list of Inhibit Reclaim schedules defined for each partition of
the grid. It displays the day, time, and duration of any scheduled reclamation interruption. All
Inhibit Reclaim dates and times are displayed first in Coordinated Universal Time (UTC) and
then in the local time of the browser. The conversion is automatic.
Use the drop-down menu on the Schedules table to add a new Reclaim Inhibit schedule, or
modify or delete an existing schedule.
To add a new schedule, select Add from the Select Action drop-down menu and click Go.
Complete the fields for information that will be displayed in the Schedules table. The Start
Time field is accompanied by a time chooser clock icon. You can enter hours and minutes
manually using 24 hour time designations, or you can use the time chooser to select a start
time based on a 12 hour (AM/PM) clock.
To modify an existing schedule, select the radio button from the Select column that appears in
the same row as the schedule you want to modify. Select Modify from the Select Action
drop-down menu and click Go. You can modify any field listed in the Schedules table.
To delete an existing schedule, select the radio button from the Select column that appears in
the same row as the schedule you want to delete. Select Delete from the Select Action
drop-down menu and click Go. You are prompted to confirm the schedule deletion. Click Yes
to delete the Inhibit Reclaim schedule, or click No to cancel the delete request.
Remember: All times are entered in reference to the local time of the browser and
automatically converted to UTC for use by the TS7700 Virtualization Engine.
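The following Python sketch shows the equivalent local-to-UTC conversion for a schedule start time. The US Eastern time zone used here is an assumption for illustration only; the management interface performs this conversion automatically.

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A start time entered at 22:30 local (US Eastern) on 7 November 2011
# corresponds to 03:30 UTC on the following day.
local_start = datetime(2011, 11, 7, 22, 30, tzinfo=ZoneInfo('America/New_York'))
utc_start = local_start.astimezone(timezone.utc)
print(utc_start.strftime('%Y-%m-%d %H:%M UTC'))  # 2011-11-08 03:30 UTC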
To watch a tutorial showing the properties of the Encryption Key Server, click the View
tutorial link.
If the cluster does not possess a physical library, the Encryption Key Server addresses page
will not be visible on the TS7700 Virtualization Engine management interface.
The following settings are used to configure the TS7700 Virtualization Engine connection to
an Encryption Key Server:
Primary key manager address: The key manager server name or IP address that is
primarily used.
Primary key manager port: The port number of the primary key manager. The default
value is 3801. This field is only required if a primary key address is used.
Secondary key manager address: The key manager server name or IP address that is
used when the primary key manager is unavailable.
Secondary key manager port: The port number of the secondary key manager. The
default value is 3801. This field is only required if a secondary key address is used.
Preferred DNS server: The Domain Name Server (DNS) that is primarily used. DNS
addresses are only needed if you specify a symbolic domain name for one of the key
manager addresses rather than a numeric IP address. If you need to specify a DNS, be
sure to specify both a primary and an alternate so you do not lose access to your
Encryption Key Manager because of one of the DNS servers being down or inaccessible.
This address can be in IPv4 format.
Alternate DNS server: The Domain Name Server that is used in case the preferred DNS
server is unavailable. If a Preferred DNS server is specified, be sure to also specify an
alternate DNS. This address can be in IPv4 format.
Using the Ping Test: Use the Ping Test buttons to check component network connection
to a key manager after changing a component's address or port. If you change a key manager address or port, use the Ping Test to confirm that the key manager can still be reached.
Click the Submit button to save changes to any of the settings. To discard changes and
return the field settings to their current values, click the Reset button.
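Outside the management interface, a comparable reachability check can be made with a plain TCP connection attempt to the key manager port (3801 by default). The host name in the following Python sketch is an assumption for illustration; the Ping Test button remains the supported way to verify the connection.

import socket

# Attempt a TCP connection to the key manager address and port.
def key_manager_reachable(address, port=3801, timeout=5):
    try:
        with socket.create_connection((address, port), timeout=timeout):
            return True
    except OSError:
        return False

print(key_manager_reachable('ekm1.example.com'))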
Important: Changing this setting can result in loss of connection to the management
interface.
Primary router address: The IP address of the primary router contained in the TS7700
Virtualization Engine.
Alternate router address: The IP address of the secondary router contained in the TS7700
Virtualization Engine.
Subnet mask: The subnet mask of the network on which the cluster resides.
Default gateway: The gateway IP address of the network on which the cluster resides.
Use the drop-down menu on the Currently activated feature licenses table to activate or
remove a feature license. You can also use this drop-down menu to sort and filter feature
license details.
In the Feature Licenses window (Figure 8-59 on page 542), perform the following steps:
1. Select Activate New Feature License from the Select Action drop-down menu and click
Go.
2. The Activate New Feature License window opens. Enter the 32-character feature license
in the available text field and click Activate to continue, or Cancel to cancel the activation.
3. The Confirm Feature Activation window opens and shows the details of the feature
license. To activate the feature license, click Yes, and to cancel activation, click No.
This function is closely tied with IODF definitions and Logical Partitioning where one
Composite Library is used by several hosts. For more details, see 5.4, “Implementing
Selective Device Access Control” on page 323.
Use this window (Figure 8-60) to view, define, or update Access Groups and connection to
volume ranges.
In the Library Port Access Groups window, you have the following fields:
Access Groups
– Name: The Access Group name, which is tied to one or more Library Port IDs.
Access Groups are defined per cluster in the grid. By default there are no restrictions.
Eight Access Groups can be defined (nine including the default group).
– Library Port IDs: The number of Library Port IDs connected to an Access Group.
– Description: Use this field to record important characteristics of the Access Group.
Use the drop-down menu to Add, Modify, or Delete an Access Group. In Figure 8-60,
the Access Group named Prod has been defined to use only one range of devices (16
logical units) through Library Port ID 0x10.
SNMP
This function enables SNMP traps based on a MIB file delivered by IBM. With SNMP, you can
get audit trails of actions within the cluster. Use the window in Figure 8-61 to implement the
SNMP settings. For more details about the window and its settings, see 4.3.11, “Defining
SNMP” on page 252.
Two methods of managing user authentication policy are available. They provide the option to
use a locally administered authentication policy or the centralized Role-Based Access Control
(RBAC) method using an externally managed Storage Authentication Service policy.
Local Authentication Policy is managed and applied within the cluster or clusters participating
in a grid. In a multi-cluster grid configuration, user IDs and their associated roles are defined
through the management interface on one of the clusters. The user IDs and roles are then
propagated through the grid to all participating clusters.
Storage Authentication Service policy allows you to centrally manage user IDs and roles.
Storage Authentication Service policies store user and group data on a separate server and
map relationships between users, groups, and authorization roles when a user signs into a
cluster. Network connectivity to an external System Storage Productivity Center (SSPC)
server is required. Each cluster in a grid can operate its own Storage Authentication Service
policy.
You can access the following options through the User Management link:
Security Settings: Use this window to view security settings for a TS7700 Virtualization
Engine Grid. From this page, you can also access windows to add, modify, assign, test,
and delete security settings.
Roles and Permissions: Use this window to set and control user roles and permissions for
a TS7700 Virtualization Engine Grid.
Change Password: Use this window to change your password for a TS7700 Virtualization
Engine Grid.
SSL Certificates: Use this window to view, import, or delete SSL certificates to support
connection to a Storage Authentication Service server from a TS7700 Virtualization
Engine Cluster.
InfoCenter Settings: Use this page to upload a new TS7700 Virtualization Engine
Information Center to the cluster's management interface.
Figure 8-62 shows the Security Settings summary window, which is the entry point to
enabling security policies.
Use the Session Timeout policy to specify the number of hours and minutes that the
management interface can be idle before the current session expires and the user is
redirected to the login page. This setting is propagated to all participating clusters in the grid.
To modify the maximum idle time, select values from the Hours and Minutes drop-down
menus and click Submit Changes. The parameters for Hours and Minutes are:
Hours: The number of hours the management interface can be idle before the current
session expires. Possible values for this field are 00 through 23.
Minutes: The number of minutes the management interface can be idle before the current
session expires. Possible values for this field are 00 through 55, selected in five-minute
increments.
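The stated limits can be captured in a small validation sketch. The following Python function only mirrors the rules described above; the management interface enforces them through its drop-down menus.

# Hours 00-23; minutes 00-55 in five-minute increments.
def is_valid_session_timeout(hours, minutes):
    return 0 <= hours <= 23 and 0 <= minutes <= 55 and minutes % 5 == 0

print(is_valid_session_timeout(0, 30))  # True
print(is_valid_session_timeout(1, 7))   # False (not a five-minute increment)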
The Authentication Policies table shown in Figure 8-62 lists the following information:
Policy Name: The name of the policy that defines the authentication settings. The policy
name is a unique value composed of one to 50 Unicode characters. Leading and trailing
blank spaces are trimmed, although internal blank spaces are permitted. After a new
authentication policy has been created, its policy name cannot be modified.
Type: The policy type, which can be one of the following values:
– Local: A policy that replicates authorization based on user accounts and assigned
roles. It is the default authentication policy. When enabled, it is enforced for all clusters
in the grid. If Storage Authentication Service is enabled, the Local policy is disabled.
To add a user to the Local Authentication Policy for a TS7700 Virtualization Engine Grid,
perform the following steps:
1. On the TS7700 Virtualization Engine management interface, select User Management
Security Settings from the left navigation window.
2. Click the Select button next to the Local policy name on the Authentication Policies
table.
3. Select Modify from the Select Action drop-down menu and click Go.
4. On the Local Accounts table, select Add from the Select Action drop-down menu and
click Go.
5. In the Add User window, enter values for the following required fields:
– Username: The new user's login name. This value must be 1 - 128 characters in length
and composed of Unicode characters. Spaces and tabs are not allowed.
– Role: The role assigned to the user account. The role can be a predefined role or a
user-defined role. Possible values are:
• Operator: The operator has access to monitoring information, but is restricted from
changing settings for performance, network configuration, feature licenses, user
accounts, and custom roles. The operator is also restricted from inserting and
deleting logical volumes.
• Lead Operator: The lead operator has access to monitoring information and can
perform actions for a volume operation. The lead operator has nearly identical
permissions to the administrator, but may not change network configuration, feature
licenses, user accounts, or custom roles.
• Administrator: The administrator has the highest level of authority, and may view all
pages and perform any action, including addition and removal of user accounts.
The administrator has access to all service functions and TS7700 Virtualization
Engine resources.
• Manager: The manager has access to monitoring information, and performance
data and functions, and may perform actions for users, including adding, modifying,
and deleting user accounts. The manager is restricted from changing most other
settings, including those for logical volume management, network configuration,
feature licenses, and custom roles.
• Custom roles: The administrator can name and define two custom roles by
selecting the individual tasks permitted to each custom role. Tasks can be assigned to a custom role in the Roles and Assigned Permissions window.
Figure 8-63 shows the first window for creating a new user. It is used for managing users with
the Local Authentication Policy method.
Tip: Passwords for users are also changed from this window.
To modify a user account belonging to the Local Authentication Policy, perform these steps:
1. On the TS7700 Virtualization Engine management interface select User Management
Security Settings from the left navigation pane.
2. Click the Select button next to the Local policy name on the Authentication Policies table.
3. Select Modify from the Select Action drop-down menu and click Go.
4. On the Local Accounts table, click the Select radio button next to the username of the
policy you want to modify.
5. Select Modify from the Select Action drop-down menu and click Go.
6. Modify the values for any of the following fields:
– Role: The role assigned to the user account. Possible values are as follows:
• Operator: The operator has access to monitoring information, but is restricted from
changing settings for performance, network configuration, feature licenses, user
accounts, and custom roles. The operator is also restricted from inserting and
deleting logical volumes.
• Lead Operator: The lead operator has access to monitoring information and can
perform actions for volume operation. The lead operator has nearly identical
permissions to the administrator, but may not change network configuration, feature
licenses, user accounts, and custom roles.
• Administrator: The administrator has the highest level of authority, and may view all
pages and perform any action, including addition and removal of user accounts.
The administrator has access to all service functions and TS7700 Virtualization
Engine resources.
Restriction: You cannot modify the User Name or Group Name. Only the role and the
clusters to which it is applied can be modified.
To add a new Storage Authentication Service Policy for a TS7700 Virtualization Engine Grid,
perform the following steps:
1. On the TS7700 Virtualization Engine management interface select User Management
Security Settings from the left navigation window.
2. On the Authentication Policies table select Add from the Select Action drop-down
menu.
3. Click Go to open the Add Storage Authentication Service Policy window shown in
Figure 8-66 on page 553. The following fields are available for completion:
a. Policy Name: The name of the policy that defines the authentication settings. The
policy name is a unique value composed of one to 50 Unicode characters. Leading and trailing blank spaces are trimmed, although internal blank spaces are permitted.
Remember: If the Primary or Alternate Server URL uses the HTTPS protocol, a
certificate for that address must be defined on the SSL Certificates page.
– Server Authentication: Values in the following fields are required if IBM WebSphere®
Application Server security is enabled on the WebSphere Application Server that is
hosting the Authentication Service. If WebSphere Application Server security is
disabled, the following fields are optional:
• Client User Name: The user name used with HTTP basic authentication for
authenticating to the Storage Authentication Service.
• Client Password: The password used with HTTP basic authentication for
authenticating to the Storage Authentication Service.
4. To complete the operation click OK. To abandon the operation and return to the Security
Settings page, click Cancel.
Figure 8-66 shows an example of a completed Add Storage Authentication Service Policy
window.
5. In the Modify Storage Authentication Service Policy window in Figure 8-69, navigate to the
Storage Authentication Service Users/Groups table at the bottom.
6. Select Add User from the Select Action drop-down menu.
7. Click Go to open the Add Storage Authentication Service User window shown in
Figure 8-70.
To add a new user to a Storage Authentication Service Policy for a TS7700 Virtualization
Engine Grid, perform the following steps:
1. On the TS7700 Virtualization Engine management interface, select User Management
Security Settings from the left navigation window.
3. Click Go to open the Assign Authentication Policy window shown in Figure 8-73.
Figure 8-73 Cluster Assignment Selection for Storage Authentication Service Policy
4. To apply the authentication policy to a cluster, select the check box next to the cluster’s
name.
To assign an authentication policy to one or more clusters you must have authorization to
modify authentication privileges under the new policy. To verify that you have sufficient
privileges with the new policy, enter a user name and password recognized by the new
authentication policy. Enter values for the following fields:
– User Name: Your user name for the TS7700 Virtualization Engine management
interface.
– Password: Your password for the TS7700 Virtualization Engine management interface.
5. To complete the operation, click OK. To abandon the operation and return to the Security
Settings window, click Cancel.
To watch a tutorial showing how to set up user access, click the View tutorial link.
Figure 8-77 shows the list of user roles and a summary of each role:
Operator: The operator has access to monitoring information, but is restricted from
changing settings for performance, network configuration, feature licenses, user accounts,
and custom roles. The operator is also restricted from inserting and deleting logical
volumes.
Lead Operator: The lead operator has access to monitoring information and can perform
actions for volume operation. The lead operator has nearly identical permissions to the
administrator, but may not change network configuration, feature licenses, user accounts,
and custom roles.
Administrator: The administrator has the highest level of authority, and may view all
windows and perform any action, including addition and removal of user accounts. The
administrator has access to all service functions and TS7700 Virtualization Engine
resources.
Manager: The manager has access to monitoring information, and performance data and
functions, and may perform actions for users. The manager is restricted from changing
most settings, including those for logical volume management, network configuration,
feature licenses, user accounts, and custom roles.
Custom roles: The administrator can name and define two custom roles by selecting the
individual tasks permitted to each custom role. Tasks can be assigned to a custom role in
the Roles and Assigned Permissions window.
To view the Roles and Assigned Permissions table, perform the following steps:
1. Select the check box to the left of the role to be displayed. You can select more than one
role to display a comparison of permissions.
2. Select Properties from the Select Action menu.
3. Click Go.
The first column of the Roles and Assigned Permissions table lists all the tasks available to
users of the TS7700 Virtualization Engine. Subsequent columns show the assigned
permissions for the selected role (or roles). A check mark denotes permitted tasks for a user role.
A null dash (-) denotes prohibited tasks for a user role. Permissions for predefined user roles
cannot be modified. You can modify permissions for custom roles in the Roles and Assigned
Permissions table. You can modify only one custom role at a time.
Remember: You can apply the permissions of a predefined role to a custom role by
selecting a role from the Role Template drop-down menu and clicking Apply. You can
then customize the permissions by selecting or deselecting tasks.
Tip: This window is visible only when the current security policy is the Local policy.
Password rules
Adhere to the following rules when you set the password:
Passwords must be at least six alphanumeric characters in length, but no more than 16
alphanumeric characters in length.
Passwords must contain at least one number.
The first and last characters of a password cannot be numbers.
Your password cannot contain your user name.
The Certificates table displays the following identifying information for SSL certificates on the
cluster:
Alias: A unique name to identify the certificate on the machine.
Issued To: The distinguished name of the entity requesting the certificate.
Fingerprint: The Secure Hash Algorithm (SHA) hash of the certificate. This number can be used to verify the hash for the certificate at another
location, such as the client side of a connection.
Expiration: The expiration date of the signer certificate for validation purposes.
Figure 8-81 MI Service Mode window for a hybrid TS7700 Virtualization Engine configuration
A hybrid TS7700 Virtualization Engine Grid is defined as a grid that combines TS7700
Virtualization Engine Clusters that both do and do not attach to a physical library. Additional
functions are provided for cache management to reduce the performance impact of
scheduled maintenance events to the TS7700 Virtualization Engine Grid.
For a TS7720 Virtualization Engine cluster in a hybrid grid, you can use the Lower
Threshold button to lower the required threshold at which logical volumes are removed from
cache. When the threshold is lowered, additional logical volumes already copied to another
cluster are removed, creating additional cache space for host operations. This step is
necessary to ensure that logical volume copies can be made and validated before a Service
mode event. The default Removal Threshold is equal to 95% of the cache Used Size minus 2
TB (see the Used Size field in the Tape Volume Cache window). You can lower the threshold
to any value between 4 TB and the remainder of the Used Size minus 2 TB. More technical
details regarding hybrid grid and cache thresholds can be found in 4.4.7, “TS7720 Cache
thresholds and removal policies” on page 270.
In a TS7700 Virtualization Engine Grid, Service Prep can occur on only one cluster at any
one time. If Service Prep is attempted on a second cluster at the same time, the attempt fails.
After Service Prep has completed for one cluster and that cluster is in Service mode, another
cluster can be placed in Service Prep. A cluster in Service Prep automatically cancels
Service Prep if its peer in the grid experiences an unexpected outage while the Service Prep
process is still active.
The following items are available when viewing the current operational mode of a cluster.
Also, you can set the cache’s Removal Threshold in preparation for maintenance events in a
TS7700 Virtualization Engine hybrid grid configuration. You can use the Service mode
window to put the TS7700 Virtualization Engine into Service mode or back into Normal mode.
Lower Threshold: For a TS7720 Virtualization Engine cluster in a hybrid grid, use this
option to lower the required threshold at which logical volumes are removed from cache.
Remember: The Lower Threshold button is visible only in a hybrid grid (one that
contains both a TS7720 Virtualization Engine and TS7740 Virtualization Engine).
The Lower Threshold button is disabled if the selected cluster is the only TS7720
Virtualization Engine cluster (not connected to a physical library) in the grid.
Depending on what mode the cluster is in, a different action is presented by the button below
the Cluster State display. You can use this button to place the TS7700 Virtualization Engine
into Service mode or back into Normal mode:
Prepare for Service Mode: This option puts the cluster into Service Prep mode and allows
the cluster to finish all current operations. If allowed to finish Service Prep, the cluster
enters Service mode. This option is only available when the cluster is in Normal mode. To
cancel Service Prep mode, click the Return to Normal Mode button.
Return to Normal Mode: Returns the cluster to Normal mode. This option is available if the
cluster is in Service Prep or Service mode. A cluster in Service Prep mode or Service
mode returns to Normal mode if the Return to Normal Mode button is selected.
You are prompted to confirm your decision to change the Cluster State. Click OK to change
the Cluster State, or Cancel to abandon the change operation.
For a cluster that is in a failed state, enabling Ownership Takeover mode allows other clusters
in the grid to obtain ownership of logical volumes that are owned by the failed cluster.
Normally, ownership is transferred from one cluster to another through communication
between the clusters. When a cluster fails or the communication path between clusters fails,
the normal means of transferring ownership is not available. Enabling a read/write or
read-only takeover mode must not be done if only the communication path between clusters
has failed. A mode must only be enabled for a cluster that is no longer operational. The
integrity of logical volumes in the grid can be compromised if a takeover mode is enabled for a
cluster that was not actually in a failed state.
The following Ownership Takeover mode information is available for failed clusters:
Cluster: The failed cluster’s name.
Ownership Takeover Mode. Possible values are as follows:
– No Ownership Takeover: The failed cluster is not in any Ownership Takeover mode.
– Read/Write Ownership Takeover: Logical volumes owned by this cluster can be
obtained for read and write operations.
– Read-only Ownership Takeover: Logical volumes owned by this cluster are available
for read and mount operations.
Figure 8-83 shows the three options in the Select Action drop-down menu.
The message HYDME0560I at the top of Figure 8-84 tells you that there are no unavailable
clusters, as confirmed by the FAILED clusters in grid table in the middle of the window.
To print the table data, click Print Report next to the Select Action menu. To download a
comma-separated value (.csv) file of the table data, click Download spreadsheet.
The following information is presented in a table about the damaged logical volume and
should assist in selecting a cluster for a repair action if data is to be retained. The information
is presented for each cluster:
Cluster: The cluster name.
Last Modified: The date and time the logical volume was last modified.
Last Mounted: The date and time the logical volume was last mounted.
Data Exists: Possible values are as follows:
– Exists: The data exists, and is in the local cache or was migrated to physical tape.
– Does not exist: The data for this logical volume does not exist on this cluster.
Size/MiBs: The size of data on the logical volume in mebibytes (MiB).
Category: Category attribute of the logical volume. This is used to denote a grouping of
logical volumes.
Media Type: The media type of the logical volume.
Ownership Takeover Time: The date and time of the last ownership takeover for this
logical volume.
Data Level: Every time the logical volume is written to, this value increases. Inserted
logical volumes start with a data level of 100. This factor is secondary to Insert Level when
choosing a cluster for the repair policy.
Insert Level: When a group of logical volumes are inserted, they are assigned an insert
level. Later inserts are given a higher insert level. This factor is the most important when
choosing a cluster in the repair policy if data is to be retained. A higher value means
higher consistency for the logical volume's data.
Data Consistent: If the cluster’s logical volume copy's data level is considered the latest
data level, then data is consistent.
Figure 8-86 shows a list of cluster settings that are available for backup:
Fast Ready Categories: Check the check box adjacent to this setting to back up Fast
Ready Categories that are used to group logical volumes.
Physical Volume Pools: Check the check box adjacent to this setting to back up physical
volume pool definitions.
Restriction: If the cluster does not possess a physical library, physical volume pools
will not be available.
All constructs: To select all of the constructs for backup, select the check box adjacent to
this setting.
Storage Groups: Select the check box adjacent to this setting to back up defined storage
groups.
Management Classes: Check the check box adjacent to this setting to back up defined
Management Classes.
Storage Classes: Select the check box adjacent to this setting to back up defined storage
classes.
Data Classes: Select the check box adjacent to this setting to back up defined data
classes.
Tip: If the cluster does not possess a physical library, the option to Inhibit Reclaim
Schedules will not be available.
To back up cluster settings, select a check box adjacent to any of the settings and then click
the Backup button. The backup file ts7700_cluster<cluster ID>.xml is created. This file is
an XML Meta Interchange file. You are prompted to open the backup file or save it to a
directory. Save the file. Modify the file name before saving if you want to retain this backup file
for subsequent backup operations.
Important: Management Class settings are related to the number and order of clusters
in a grid. Take special care when restoring this setting. If a Management Class is
restored to a grid having more clusters than the grid had when the backup was
performed, the copy policy for the new cluster or clusters is set to No Copy. If a
Management Class is restored to a grid having fewer clusters than the grid had when
the backup was performed, the copy policy for the now-nonexistent clusters is
changed to No Copy. The copy policy for the first cluster will be changed to RUN to
ensure one copy exists in the cluster.
Storage Classes: Select the check box adjacent to this setting to restore defined storage
classes.
Data Classes: Select the check box adjacent to this setting to restore defined data
classes.
If this setting is selected and the cluster does not support logical WORM, the Logical
WORM setting is disabled for all data classes on the cluster.
Inhibit Reclaim Schedule: Select the check box adjacent to this setting to restore Inhibit
Reclaim Schedules used to postpone tape reclamation.
If the backup file was created by a cluster that did not possess a physical library, the
Inhibit Reclaim Schedules settings will be reset to their defaults.
Fast Ready Categories: Select the check box adjacent to this setting to restore the Fast
Ready Categories used to group logical volumes.
Physical Volume Pools: Select the check box adjacent to this setting to restore physical
volume pool definitions.
If the backup file was created by a cluster that did not possess a physical library, Physical
Volume Pool settings will be reset to their defaults.
Physical Volume Ranges: Select the check box adjacent to this setting to restore defined
physical volume ranges.
If the backup file was created by a cluster that did not possess a physical library, the
Physical Volume Range settings are reset to their defaults.
After clicking Show File, the name of the cluster from which the backup file was created is
displayed at the top of the window, along with the date and time the backup occurred. Select
the check box adjacent to each setting that you want to restore, and then click Restore.
A warning window opens and prompts you to confirm your decision to restore settings. Click
OK to restore the settings or Cancel to cancel the restore operation.
The restore cluster settings operation can take five minutes or longer and its progress can be
tracked in the Operation History window.
If a change in the cluster configuration is detected, the affected settings are as follows:
Inhibit Reclaim Schedule
Physical Volume Pools
Physical Volume Ranges
This window is visible from the TS7700 Virtualization Engine management interface whether
the TS7700 Virtualization Engine is online or offline.
You can shut down only the cluster to which you are logged in. To shut down another cluster,
you must log out of the current cluster and log into the cluster you want to shut down.
Before you shut down the TS7700 Virtualization Engine Cluster, you must decide whether
your circumstances provide adequate time to perform a clean shutdown. A clean shutdown is
not required, but it is good practice in a TS7700 Virtualization Engine Grid configuration.
A clean shutdown requires you to first place the cluster in service mode to ensure that no jobs
are being processed during a shutdown operation. If you cannot place the cluster in service
mode, you can use this window to force a shutdown of the cluster.
Attention: A forced shutdown can result in loss of access to data and in job failures.
A cluster shutdown operation initiated from the TS7700 Virtualization Engine management
interface also shuts down the cache. The cache must be restarted before any attempt is
made to restart the TS7700 Virtualization Engine.
If you select Shutdown from the left side menu for a cluster that is still online, as shown at the
top of the page in Figure 8-88, a message alerts you to first put the cluster in Service mode
before shutting down.
In Figure 8-88, the Cluster State field shows the operational status of the TS7700
Virtualization Engine and appears above the button used to force its shutdown. You have the
following options:
Cluster State. Possible values are as follows:
– Clicking the Shutdown button in Normal mode: If you click Shutdown while in Normal
mode, you receive a warning message recommending that you place the cluster in
Service mode before proceeding. To place the cluster in Service mode, select Modify
Service Mode. To continue with the force shutdown operation, select Close Message.
If you opt to continue with the force shutdown operation, you are prompted to provide
your password to proceed. Enter your password and select Yes to continue or select
No to abandon the shutdown operation.
– Clicking the Shutdown button in Service mode: If you select the Shutdown button
while in Service mode, you will be asked to confirm your decision. Click OK to
continue, or click Cancel to abandon the shutdown operation.
When a shutdown operation is in progress, the Shutdown button is disabled and the status of
the operation is displayed in an information message. The shutdown sequence is as follows:
1. Going offline
2. Shutting down
3. Powering off
4. Shutdown completes
Verify that power to the TS7700 Virtualization Engine and to the cache is shut down before
attempting to restart the system.
A cluster shutdown operation initiated from the TS7700 Virtualization Engine management
interface also shuts down the cache. The cache must be restarted first and allowed to power
up to an operational state before any attempt is made to restart the TS7700 Virtualization
Engine.
If the grid possesses a physical library, but the selected cluster does not, this page is visible
but disabled on the TS7700 Virtualization Engine management interface, and the following
message is displayed:
The cluster is not attached to a physical tape library.
If the grid does not possess a physical library, this window is not visible on the TS7700
Virtualization Engine management interface.
Copy Export permits the export of all logical volumes and the logical volume database to
physical volumes, which can then be ejected and saved as part of a data retention policy for
disaster recovery. You can also use this function to test system recovery.
Further Resources: Copy Export recovery is described in more detail in IBM Virtualization
Engine TS7700 Series Copy Export Function User's Guide. This white paper and the most
recently published white papers about this topic are available at the Techdocs website by
searching for “TS7700” at the following address:
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/Techdocs
During the Copy Export recovery, all current logical volumes and physical volumes will be
erased from the database and logical volumes are erased from the cache. Do not attempt this
operation on a cluster whose current data must be retained.
Important: Copy Export recovery can only target a stand-alone cluster configuration for
TS7700 Virtualization Engine Grid volumes. The recovered cluster can be merged into an
existing TS7700 Virtualization Engine Grid with all participating clusters.
If you create a new secondary copy, the original secondary copy is deleted because it
becomes inactive data. For example, if you modify constructs for logical volumes that have
already been exported and the logical volumes are remounted, a new secondary physical
volume is created. The original physical volume copy is deleted without overwriting the logical
volumes. When the copy export operation is rerun, the new, active version of the data is used.
In other words, the secondary copy, when recreated, expires the copy on the exported volume
because it is no longer active data. When rewritten, the copy on the PVOL is no longer active
and is no longer associated with the original physical volume.
The following fields and options are presented to the user to assist in testing a recovery or
performing a recovery:
VOLSER of physical stacked volume for Recovery Test: The physical volume from which
the Copy Export recovery will attempt to recover the database.
Disaster Recovery Test Mode: This option determines whether a Copy Export recovery
will be run as a test or to recover a machine that has suffered a disaster. If this check box
is selected (contains a check mark, which is the default status), the Copy Export recovery
runs as a test. If the box is clear, the recovery process runs in “normal” mode, as when
recovering from an actual disaster.
When the recovery is run as a test, the content of exported tapes remains unchanged.
Additionally, primary physical copies remain unrestored and reclaim processing is
disabled to halt any movement of data from the exported tapes. Any new volumes written
to the machine are written to newly added scratch tapes and will not exist on the
previously exported volumes. This ensures that the data on the Copy Export tapes
remains unchanged during the test.
In contrast to a test recovery, a recovery performed in “normal” mode rewrites logical
volumes to physical storage if the constructs change, so that the logical volume’s data can
be put in the correct pools. With this type of recovery, reclaim processing remains enabled
and primary physical copies are restored, requiring the addition of scratch physical
volumes. A recovery run in this mode allows the data on the Copy Export tapes to expire in
a normal manner and those physical volumes to be reclaimed.
Erase all existing logical volumes during recovery: This check box is visible if logical
volume or physical volume data is present in the database. A Copy Export Recovery
operation erases any existing data. No option exists to retain existing data while
performing the recovery. You must select this check box to proceed with the Copy Export
Recovery operation.
Tip: The Cluster Nodes Online/Offline window is visible only on a stand-alone cluster
configuration when the cluster is not online.
This window displays information about a cluster's online status. Information or action
messages displayed at the top of the page indicate the cluster's status.
A cluster can take a long time to come online, especially if a merge operation is pending. If a
pending merge operation is preventing the cluster from coming online, you have the option to
skip the merge step to reduce the time needed for the cluster to come online. Click Skip Step
to skip the merge operation. This button is available only if the cluster is in a blocked state,
waiting to share pending updates with one or more unavailable clusters.
Remember: If you click the Skip Step button, pending updates against the local cluster
might remain undetected until the unavailable cluster or clusters become available.
A cluster might be stuck in a pending online state because it is not able to communicate with
its peers. While in a pending online state, the cluster is trying to discover whether any
ownership takeover occurred while it was unavailable.
Important: ALMS is a requirement for IBM System z attachment. ALMS is always installed
and enabled in a TS7700 Virtualization Engine z/OS environment. Therefore, automatic
cleaning is enabled.
Tip: If virtual I/O slots are enabled, your library automatically imports cleaning cartridges.
Using the Tape Library Specialist Web interface to insert a cleaning cartridge
To use the Tape Library Specialist Web interface to insert a cleaning cartridge into the
TS3500 Tape Library, perform the following steps:
1. Open the door of the I/O station and insert the cartridge so that the bar code label faces
the interior of the library and the write-protect switch is on the right.
2. Close the door of the I/O station.
3. Type the Ethernet IP address on the URL line of the browser and press Enter. The System
Summary window opens.
To use the TS3500 Tape Library Specialist Web interface to remove a cleaning cartridge from
the tape library, perform the following steps:
1. Type the Ethernet IP address on the URL line of the browser and press Enter. The System
Summary window opens.
2. Select Cartridges → Cleaning Cartridges. The Cleaning Cartridges window opens, as
shown in Figure 8-91 on page 585.
3. Select a cleaning cartridge, then from the Select Action drop-down menu, select
Remove, and then click Go.
4. Look at the Activity pane in the operator window to determine whether the I/O station that
you want to use is locked or unlocked. If the station is locked, use your application
software to unlock it.
5. Open the door of the I/O station and remove the cleaning cartridge.
6. Close the door of the I/O station.
Information from the TS3500 Tape Library itself is contained in some of the outputs. However,
you cannot switch the operational mode of the TS3500 Tape Library with z/OS commands.
Restriction: DFSMS and MVS commands apply only to SMS-defined libraries. The library
name defined during the definition of a library in ISMF is required for “libname” in the
DFSMS commands.
Clarification: This command does not change the operational mode of the TS3500
Tape Library itself. It only applies to the SMS-defined logical libraries.
VARY SMS,LIBRARY(libname),ONLINE
This command is required to bring the SMS-defined library back to operation after it has
been offline.
The processing for the LIBRARY LMPOLICY command invokes the LCS external services
FUNC=CUA function. Any errors that the CUA interface returns can also be returned for the
LIBRARY LMPOLICY command. If the change use attribute installation exit (CBRUXCUA) is
enabled, the CUA function calls the installation exit. This can override the policy names that
you set using the LIBRARY LMPOLICY command.
The results of this command are specified in the text section of message CBR1086I. To verify
the policy name settings and to see whether the CBRUXCUA installation exit changed the
policy names you set, display the status of the volume.
The syntax of the LIBRARY LMPOLICY command to assign or change volume policy names
is shown in Example 8-2.
The values you specify for the SG, SC, MC, and DC policy names must meet the Storage
Management Subsystem (SMS) naming convention standards:
Alphanumeric and national characters only
Name must begin with an alphabetic or national character ($, *, @, #, or %)
No leading or embedded blanks
Eight characters or less
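As an illustration only (the VOLSER and the construct names shown here are placeholders, not values from this book), a command of the following general form assigns outboard policy names to a private volume; one or more of the SG, SC, MC, and DC keywords can be specified:
LIBRARY LMPOLICY,volser,SG=sgname,SC=scname,MC=mcname,DC=dcname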
The specified keywords are passed to the TS7700 identified by the library name to instruct it
on what type of information is being requested or which operation is to be performed. Based
on the operation requested through the command, the TS7700 then returns information to the
host that will be displayed as a multiline write to operator (WTO) message.
The Library Request command for Host Console Request is supported in z/OS V1R6 and
later. See OAM APAR OA20065 and device services APARs OA20066, OA20067, and
OA20313. A detailed description of the Host Console Request functions and responses is
available in IBM Virtualization Engine TS7700 Series z/OS Host Command Line Request
User’s Guide, which is available at the Techdocs website (search for the term “TS7700”):
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
The following parameters are optional. The optional parameters depend on the first keyword
specified. Based on the first keyword specified, zero or more of the additional keywords might
be appropriate.
keyword2 Specifies additional information in support of the operation
specified with the first keyword.
keyword3 Specifies additional information in support of the operation
specified with the first keyword.
keyword4 Specifies additional information in support of the operation
specified with the first keyword. Keyword4 is prepared for future
use.
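For reference, and using placeholder names rather than values from a real configuration, the general form of the Host Console Request command is shown below. library_name is the composite or distributed library name that applies to the request, only the keywords that apply to the first keyword are specified, and the command is commonly abbreviated as LI REQ:
LIBRARY REQUEST,library_name,keyword1,keyword2,keyword3,keyword4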
Clarification: Each specified keyword must be from one to eight characters in length and can
consist of alphanumeric characters (A-Z and 0-9) and the national character set ($*@#%). A keyword
cannot contain any blanks. The only checking performed by the host is to verify that the
specified keywords conform to the supported character set. The validity of the keywords
themselves and the keywords in combination with each other is verified by the TS7700
Virtualization Engine processing the command.
SETTING
The SETTING request provides information about many of the current workflow and
management settings of the cluster specified in the request and the ability to modify the
settings. It also allows alerts to be set for many of the resources managed by the cluster.
In response to the SETTING request, the cluster associated with the distributed library in the
request will modify its settings based on the additional keywords specified. If no additional
keywords are specified, the request will just return the current settings. See Example 8-13 on
page 612 for an example of the data returned after the rest of the keyword descriptions.
Remember: All settings are persistent across machine restarts, service actions, or code
updates. The settings are not carried forward as part of disaster recovery from Copy
Exported tapes or the recovery of a system.
If a second keyword of ALERT is specified, the cluster will set thresholds at which a message is
sent to all hosts attached to the cluster and, in a grid configuration, to all hosts attached to all
clusters. The third keyword specifies which alert threshold will be set and the fourth specifies
the threshold value. All messages refer to the distributed library and will result in the following
z/OS host console message:
CBR3750I Message from library distributed library name: Message Text
Thresholds can be set for many of the resources managed by the cluster. For each resource,
two settings are provided. One warns that the resource is approaching a value that might
result in an impact to the operations of the attached hosts. A second provides a warning that
the resource has exceeded a value that might result in an impact to the operations of the
attached hosts. When the second warning is reached, the warning message is repeated
every 15 minutes.
The message text includes a message identifier that can be used to automate the capture
and routing of these messages. To assist in routing messages to the appropriate individuals
for handling, the messages that indicate that a resource is approaching an impact value will
use message identifiers in a range of AL0000-AL4999. Message identifiers in a range of
AL5000-AL9999 will be used for messages that indicate that a resource has exceeded an
impact value.
Remember: For messages where a variable is included (the setting), the value returned is
left-justified without leading zeroes or right padding. For example:
AL5000 Uncopied data of 1450 GB above high warning limit of 1050 GB.
CACHE settings
If a second keyword of CACHE is specified, the cluster will modify how it controls the workflow
and content of the tape volume cache.
REMOVE ENABLE, DISABLE: Automatic removal starts when the cache usage size crosses the
removal threshold.
When the ENABLE keyword is specified, automatic removal is enabled on this disk-only cluster.
When the DISABLE keyword is specified, automatic removal is disabled on this disk-only cluster.
The default value is enabled.
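For example, assuming a distributed library named dlibname (a placeholder), a request of the following form disables automatic removal on a disk-only cluster:
LIBRARY REQUEST,dlibname,SETTING,CACHE,REMOVE,DISABLE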
THROTTLE settings
If a second keyword of THROTTLE is specified, the cluster will modify how it controls the data
flow rates into and out of the cluster.
DEVALLOC settings
If a second keyword of DEVALLOC is specified, the cluster will modify how it performs Scratch
Allocation Assistance (SAA) or Device Allocation Assistance (DAA) for private tapes.
SAA is a new function in R2.0. It is an extension of Device Allocation Assistance and works
together with your defined Management Class values, where candidate clusters for
scratch mounts are entered. SAA must be enabled in the host operating system. SAA directs
new workloads to particular clusters. An example is to direct DFSMShsm ML2 workload to the
TS7720 in a hybrid grid, as shown in Figure 8-93.
Figure 8-93 Hybrid grid example: archive workload directed to the TS7740 cluster (with drives and library) and primary workload directed to the TS7720 cluster
Device allocation assistance (DAA) is a function that allows the host to query the TS7700
Virtualization Engine to determine which clusters should be preferred for a private (specific)
mount request. When enabled, DAA returns to the host a ranked list of clusters (the preferred
cluster is listed first) that determines for a specific VOLSER which cluster, either a TS7740 or
a TS7720, is best to use for device allocation.
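As a sketch only, assuming that SCRATCH and PRIVATE are the applicable third keywords for the scratch and private allocation assistance functions (dlibname is a placeholder distributed library name), requests of the following form enable SAA and DAA:
LIBRARY REQUEST,dlibname,SETTING,DEVALLOC,SCRATCH,ENABLE
LIBRARY REQUEST,dlibname,SETTING,DEVALLOC,PRIVATE,ENABLE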
Reclaim settings
If a second keyword of RECLAIM is specified, the cluster will modify how the Reclaim
background tasks control the workflow and content of the tape volume cache.
Also note that if a valid RECLAIM request is received while reclaims are inhibited, that
request will take effect as soon as reclaims are no longer inhibited by the Inhibit Reclaim
schedule.
CPYCNT settings
If a second keyword of CPYCNT is specified, the domain will modify how many concurrent
threads are allowed to process either RUN or Deferred copies over the grid.
RUN value The number of concurrent copy threads for processing RUN copies
The allowed values for copy thread counts are from 5 to 128.
The default value is 20 for clusters with two 1 Gb Ethernet links, and 40
for clusters with four 1 Gb Ethernet links or two 10 Gb Ethernet links.
DEF value The number of concurrent copy threads for processing Deferred copies
The allowed values for copy thread counts are from 5 to 128.
The default value is 20 for clusters with two 1 Gb Ethernet links, and 40
for clusters with four 1 Gb Ethernet links or two 10 Gb Ethernet links.
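For example, with dlibname as a placeholder distributed library name, the following request sets the number of concurrent Deferred copy threads to 40:
LIBRARY REQUEST,dlibname,SETTING,CPYCNT,DEF,40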
CACHE command
Example 8-4 shows the CACHE command.
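As a sketch with a placeholder distributed library name, a cache query has the following form:
LIBRARY REQUEST,dlibname,CACHE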
GRIDCNTL command
Example 8-5 shows the GRIDCNTL DISABLE command.
The response from GRIDCNTL DISABLE shows that copies have been stopped.
The response from GRIDCNTL ENABLE shows that copies have been restarted.
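As a sketch with a placeholder library name, requests of the following form stop and later restart copy activity for a cluster:
LIBRARY REQUEST,library_name,GRIDCNTL,DISABLE
LIBRARY REQUEST,library_name,GRIDCNTL,ENABLE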
LVOL command
Example 8-7 shows the LVOL command.
The response from LVOL shows detailed information about the logical volume. The response
shows the following information:
Whether a logical volume is ECCST or CST, and the size of the volume
Number of copies and VOLSER of physical volumes where the logical volume resides
Copy policy used
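As a sketch, with the library name and VOLSER as placeholders, a logical volume query has the following form:
LIBRARY REQUEST,library_name,LVOL,volser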
PVOL command
Example 8-8 shows the PVOL command.
The response from PVOL shows detailed information about the physical volume. The response
shows the following information:
Media type, drive mode, format of the volume, and Volume State (read-write)
Capacity in MB, valid data in percent, and number of logical volumes
Shows whether the physical volume is exported and whether it is encrypted
POOLCNT command
Example 8-9 shows the POOLCNT command.
The response from POOLCNT shows detailed information about each physical volume pool.
The response shows the following information:
Details about media type, and volumes that are empty, filling, or full
Volumes eligible for erase
Volumes that are in the Read-only state, unavailable, or in Copy Export state
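As sketches with placeholder names, physical volume and pool queries directed to a distributed library have the following forms:
LIBRARY REQUEST,dlibname,PVOL,volser
LIBRARY REQUEST,dlibname,POOLCNT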
RECALLQ command
Example 8-10 shows the RECALLQ command.
The response from RECALLQ on distributed library shows detailed information about the
logical volume recall queue. The response shows the following information:
A recall is in progress for volumes L00121 and L99356.
Volumes Y30458 and L54019 have a recall scheduled, which means a
RECALLQ,volser,PROMOTE has been issued.
Volume L67304 is in position 1 for recall and has been in the recall queue for 135 seconds.
Volume T09365 spans from physical volume AD5901 to P00167.
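As a sketch with placeholder names, the following requests display the recall queue of a distributed library and promote a specific volume to the front of the queue:
LIBRARY REQUEST,dlibname,RECALLQ
LIBRARY REQUEST,dlibname,RECALLQ,volser,PROMOTE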
STATUS command
Example 8-11 shows the STATUS command.
The response from STATUS,GRID on Composite Library shows detailed information about
the multi-cluster grid. The response shows the following information:
The Composite Library shows Squint is in service mode and a queue of data needs to be
copied. Squint is in “service ownership takeover” mode. Therefore, the other two TS7700
Virtualization Engines must do recalls even though a logical volume resides in Squint.
As seen from the Distributed Library View, Squint is unknown, and Celeste has an
unavailable link.
SETTING command
Example 8-13 shows the SETTING command.
The response from SETTING on distributed library shows detailed information about the
ALERTS, CACHE, and THROTTLE controls.
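As sketches with placeholder names, the following requests produce the grid status report for a composite library and the current settings report for a distributed library:
LIBRARY REQUEST,clibname,STATUS,GRID
LIBRARY REQUEST,dlibname,SETTING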
Many of these commands can be automated with your own automation products. By periodically
issuing Library Request commands and reacting to the responses automatically, without
operator intervention, you can handle conditions proactively.
DS QT,devnum,MED,nnn
This command displays information about the device type, media type, and the cartridge
volume serial number. devnum is the device address in hexadecimal. nnn is the number of
devices to query.
Example 8-15 shows the sample output of a DS QT system command.
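For example, with 0940 as a placeholder device address, the following command queries the media information for 16 devices starting at that address:
DS QT,0940,MED,16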
VARY unit,ONLINE/OFFLINE
The VARY unit command itself is unchanged. However, new situations arise when the affected
unit is attached to a library.
When the library is offline, the tape units cannot be used. This is internally indicated in a
new status (offline for library reasons), which is separate from the normal unit offline
status. A unit can be offline for both library and single-unit reasons.
A unit that is offline for library reasons only cannot be taken online by running VARY
unit,ONLINE. Only VARY SMS,LIBRARY(...),ONLINE can do so.
You can bring a unit online that was individually varied offline and was offline for library
reasons by varying it online individually and varying its library online. The order of these
activities is not important, but both are necessary.
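As a sketch with placeholder device and library names, the following sequence brings such a unit back online:
VARY 0940,ONLINE
VARY SMS,LIBRARY(libname),ONLINE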
Currently, no display directly gives the reason why the unit is offline, and there is no display
that gives the name of the library to which this unit belongs.
DISPLAY U
The TS3500 Tape Library time can be set from specialist work items by selecting Library
Date and Time as shown in Figure 8-94.
During Pause mode, all recalls and physical mounts are held up and queued by the TS7700
Virtualization Engine for later processing when the library leaves the Pause mode.
Because both scratch mounts and private mounts with data in the cache are allowed to
execute, but not physical mounts, no more data can be moved out of the cache after the
currently mounted stacked volumes are completely filled. The cache is filling up with data that
has not been copied to stacked volumes. This results in significant throttling and finally in the
stopping of any mount activity in the library. For this reason, it is important to minimize the
amount of time spent with the library in Pause mode.
Tip: Before invoking service preparation at the TS7700 Virtualization Engine, all virtual
devices must be varied offline from the host. All logical volumes must be dismounted, all
devices associated with the cluster varied offline, and all jobs moved to other clusters in the
grid before service preparation is invoked. After service is complete, and when the TS7700
Virtualization Engine is ready for operation, you must vary the devices online at the host.
Here is the message posted to all hosts when the TS7700 Virtualization Engine Grid is in this
state:
CBR3788E Service preparation occurring in library library-name.
You can choose All Frames or a selected frame from the drop-down menu.
After clicking the Inventory/Audit tab, you will receive the message shown in Figure 8-96.
Important: As stated on the confirmation window (Figure 8-96), if you continue, all jobs in
the work queue might be delayed while the request is performed. The inventory will take up
to one minute per frame. The audit will take up to one hour per high density frame.
Click the Inventory Upload button to synchronize the physical cartridge inventory from the
attached tape library with the TS7740 Virtualization Engine database.
This section provides information about tape cartridges and labels, inserting and ejecting
stacked volumes, and exception conditions.
Remember: When 3592 J1A drives (or 3592 E05 Tape Drives in J1A emulation) are
replaced with 3592 E06 Tape Drives, the TS7740 Virtualization Engine marks the J1A
formatted tapes with active data FULL. By marking these tapes full, the TS7740
Virtualization Engine does not append more data because the 3592 E06 Tape Drive
cannot append data to a J1A formatted tape. As the active data on the J1A gets reclaimed
or expired, the tape goes back to the scratch pool, and then eventually gets reformatted to
the E06 data format.
If you have a tape that is written in E06 format, the capacity is 1 TB for JB/JX media, 640
GB for JA/JW media, and 128 GB for JJ/JR media.
The TS3500 Tape Library Cartridge Assignment Policy (CAP) defines which volumes are
assigned to which logical library partition. If the VOLSER is included in the TS7740
Virtualization Engine’s range, it will be assigned to the associated TS3500 Tape Library
logical library partition.
After the doors on the library are closed and the tape library has performed inventory, the
upload of the inventory to the TS7700 Virtualization Engine will be processed before the
TS3500 Tape Library reaches the READY state. The TS7700 Virtualization Engine updates
its database accordingly.
Tips:
The inventory is performed only on the frame where the door is opened and not on the
frames to either side. If you insert cartridges into a frame adjacent to the frame that you
opened, then you must perform a manual inventory of the adjacent frame using the
operator window on the TS3500 Tape Library itself.
For a TS7740 Virtualization Engine, it is important that the external cartridge bar
code label and the internal VOLID label match or, as is the case for a new cartridge, that the
internal VOLID label is blank. If the external label and the internal label do not meet the
aforementioned criteria, the cartridge will be rejected.
Under certain conditions, cartridges are not assigned to a logical library partition in the
TS3500 Tape Library. With TS7700 Virtualization Engine R1.5 and later, the TS3500 must
have a dedicated logical partition for the cluster. Therefore, in a library with more than one
partition, be sure that the Cartridge Assignment Policy is kept up to date with the cartridge
volume range (or ranges) in use. This minimizes conflicts by ensuring the cartridge is
accessible only by the intended partition.
Consideration: Unassigned cartridges can exist in the TS3500 Tape Library, but
“unassigned” can have different meanings and needs different actions. See IBM System
Storage TS3500 Tape Library with ALMS Operator Guide, GA32-0594 for more
information.
As part of normal processing, data is copied from cache to physical volumes in a primary pool
managed by the Virtualization Engine. A copy might also be made to a physical volume in a
secondary pool if the dual copy function is specified using Management Class. Empty
physical volumes are needed in a pool or, if a pool is enabled for borrowing, in the common
scratch pool, for operations to continue. If a pool runs out of empty physical volumes and
there are no volumes that can be borrowed, or borrowing is not enabled, operations that
might use that pool on the distributed library must be suspended. If one or more pools run out
of empty physical volumes, the distributed library enters the Out of Physical Scratch state.
The Out of Physical Scratch state is reported to all hosts attached to the cluster associated
with the distributed library and, if included in a grid configuration, to the other clusters in the
grid. The following MVS console message is generated to inform you of this condition:
CBR3789E VTS library library-name is out of empty stacked volumes.
Library-name is the name of the distributed library that is in this state. The CBR3789E message will
remain on the MVS console until empty physical volumes have been added to the library, or
the pool that is out has been enabled to borrow from the common scratch pool and there are
empty physical volumes to borrow. Intervention required conditions are also generated for the
out of empty stacked volume state and for the pool that is out of empty physical volumes.
If the option to send intervention conditions to attached hosts is set on the TS7700
Virtualization Engine that is associated with the distributed library, intervention messages are
also generated at the host console. The OP0138 message indicates the media type that is out in the common scratch pool. These
messages do not remain on the MVS console. The intervention conditions can be viewed
through the TS7700 Virtualization Engine management interface.
If the TS7740 Virtualization Engine is in a grid configuration, and if its associated distributed
library enters the out-of-empty-stacked-volume state, operations are affected in other ways:
All copy operations are immediately suspended in the cluster (regardless of which pool
has become empty).
If the cluster has a copy consistency point of RUN, the grid enters the Immediate Mode
Copy Operations Deferred state, and an MVS console message is generated:
CBR3787E One or more immediate mode copy operations deferred in library
library-name.
If another cluster attempts to copy a logical volume that is not resident in the cache, the
copy attempt fails.
In choosing a tape volume cache cluster, the grid prefers clusters that are not in the
out-of-empty-stacked-volume state, but could still select a remote tape volume cache
whose cluster is in that state. If the data needed is not in the remote cluster's tape volume
cache, the recall of the data will fail. If data is being written to the remote cluster's tape
volume cache, the writes will be allowed, but because there might not be any empty
physical volumes available to copy the data to, the cache might become full of data that
cannot be copied and all host I/O using that cluster's tape volume cache will become
throttled to prevent a cache overrun.
Monitor the number of empty stacked volumes in a library. If the library is close to running out
of a physical volume media type, action should be taken to either expedite the reclamation of
physical stacked volumes or add additional ones. You can use the Bulk Volume Information
Retrieval function to obtain the physical media counts for each library. The information
obtained includes the empty physical volume counts by media type for the common scratch
pool and each defined pool.
Because of the permanent nature of the EJECT, the TS7700 Virtualization Engine only allows
you to EJECT a logical volume that is in either the INSERT or SCRATCH (defined with
fast-ready attribute) categories. If a logical volume is in any other status, the EJECT fails. If
you eject a scratch volume, you will not be able to recover the data on that logical volume.
Tip: This fact has proven to be cumbersome for volumes that happen to be in the ERROR
category (000E). An easy way to eject such volumes is to use ISMF screens to set these
volumes to the PRIVATE status. The volume status is propagated to DFSMSrmm. You can
use DFSMSrmm to subsequently assign the volume to the Pending Release status, and
the next RMM HSKP run will return it to a SCRATCH status, allowing you to eject it.
Ejecting large numbers of logical volumes can have a performance impact on the host and
the library.
Tapes that are in INSERT status can be ejected by the resetting of the return code through
the CBRUXENT exit. This exit is usually provided by your tape management system vendor.
Another way to EJECT cartridges in the INSERT category is by using the MI. For more
information, see “Delete Logical Volumes window” on page 491.
After the tape is in SCRATCH status, follow the procedure for EJECT processing based on
whether your environment is system-managed tape or BTLS. You also must follow the
procedure that is specified by your tape management system vendor. For DFSMSrmm, issue
the RMM CHANGEVOLUME volser EJECT command.
The eject process fails if the tape is in another status or category. For libraries managed
under DFSMS system-managed tape, the system command LIBRARY EJECT,volser issued
to a logical volume in PRIVATE status fails with this message:
CBR3726I Function incompatible error code 6 from library <library-name> for volume
<volser>
Failed Eject Notification was added to OAM with APAR OW54054 and is currently in all
supported releases of DFSMS. Any tape management system supporting this notification
can use this function.
If your tape management system is DFSMSrmm, you can use the following commands to
clean up the RMM CDS for failed logical volume ejects and to resynchronize the TCDB and
RMM CDS:
RMM SEARCHVOLUME VOL(*) OWN(*) LIM(*) INTRANSIT(Y) LOCATION(vts) -
CLIST('RMM CHANGEVOLUME ',' LOC(vts)')
EXEC EXEC.RMM
The first RMM command asks for a list of volumes that RMM thinks it has ejected and writes a
record for each in a sequential data set called prefix.EXEC.RMM.CLIST. The CLIST then
checks that the volume is really still resident in the VTS library and, if so, it corrects the RMM
CDS.
Tip: Limiting the number of outstanding ejects to a couple of thousand total per system will
limit exposure to performance problems.
There are considerations to be aware of when ejecting large numbers of logical volumes.
OAM helps mitigate this situation by restricting the number of ejects sent to each library at a
given time and manages all the outstanding requests. This management requires storage on
the host, and a large number of ejects can force OAM to reserve large amounts of storage.
Additionally, there is a restriction on the number of eject requests on the device service’s
queue. All of these conditions can have an impact on the host’s performance.
Therefore, a good limit for the number of outstanding eject requests is no more than two
thousand per system. Additional ejects can be initiated when others complete. For additional
information, see APAR OW42068. The following commands can be used on the System z
hosts to list the outstanding and the active requests:
F OAM,QUERY,WAITING,SUM,ALL
F OAM,QUERY,ACTIVE,SUM,ALL
Although this is not specific to a TS7700 Virtualization Engine, the following critical,
action-related messages are now issued using the specified library console and routing
codes, providing maximum visibility:
CBR3759E Library x safety enclosure interlock open.
CBR3764E Library x all storage cells full.
CBR3765E No cleaner volumes available in library x.
CBR3753E All convenience output stations in library x are full.
CBR3754E High capacity output station in library x is full.
CBR3755E {Input|Output} door open in library x.
CBR3660A Enter {list of media inserts} scratch volumes into x.
Further Information: For the latest information about ALxxxx messages and all other
messages related to CBR3750I, see the IBM Virtualization Engine TS7700 Series
Operator Informational Messages White Paper available at the following address:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101689
If any TS7700 Virtualization Engine subsystems are defined to the system, the following
status line is displayed, reflecting the number of distributed libraries that are associated with
the composite libraries:
There are also numvdl-lib VTS distributed libraries defined.
----------------------------------------------------------------------------------
DISTRIBUTED LIBRARIES: HYDRAD
LIBRARY ID: 10001
OPERATIONAL STATE: AUTOMATED
ERROR CATEGORY SCRATCH COUNT: 0
CORRUPTED TOKEN VOLUME COUNT: 0
---------------------------------------------------------------------
Clarification: Library type VCL indicates the Composite Library, as opposed to VDL for the
Distributed Library.
Example 8-18 lists the complete message text for logical volume A01878.
There are two network links from each cluster participating in a GRID configuration. Every
five minutes, the Dynamic Link Load Balancing function evaluates the capabilities of each link
between the clusters. If performance in one of the links is 60% less than the other, a warning
message is displayed at the System Console in the following format:
CBR3796E Grid links degrade in library library_name
A detailed description of the Host Console Request functions and responses is available in
the IBM Virtualization Engine TS7700 Series z/OS Host Command Line Request User's
Guide white paper. The most recently published white papers are available at the Techdocs
website. Search for TS7700 Virtualization Engine at the following URL:
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
When the grid network performance issue is resolved and the links are balanced, a message
is presented at the System Console in the following format:
CBR3797E Grid links in library_name are no longer degraded
When the cache situation is resolved, the following messages are shown:
CBR3793I Library library-name has left the limited cache free space warning state
CBR3795I Library library-name has left the out of cache resources critical state
Operation of the TS7700 Virtualization Engine continues with a reduced number of drives
until the repair action on the drive is complete. To recover, the SSR repairs the failed tape
drive and makes it available for the TS7700 Virtualization Engine to use again.
Power failure
User data is protected in the event of a power failure, as it is stored on the TVC. Any host jobs
reading or writing to virtual tapes will fail as they would with a real IBM 3490E, and will need
to be restarted after the TS7700 Virtualization Engine is available again. When power is
restored and stable, the TS7700 Virtualization Engine must be powered up manually. The
TS7700 Virtualization Engine will recover access to the TVC using information available from
the TS7700 Virtualization Engine database and logs.
The TS7700 Virtualization Engine maintains a database of information about the location and
status of logical volumes on the stacked volumes it manages. When a stacked volume has
been filled with logical volumes, a backup of the entire database is placed at the end of the
filled stacked volume. The database contains a time and date stamp that identifies when the
backup was performed.
When the database copy operation is complete, a message is sent to the attached hosts:
CBR3750I MESSAGE FROM LIBRARY lib: VTS Database Backup written to Physical Tape
xxxxxx.
The disaster recovery process causes the TS7740 Virtualization Engine to load the stacked
volumes, locate the latest version of the database, and restore it. Any logical volumes written
after the last backup of the TS7740 Virtualization Engine database are lost. When the
restoration is complete, a message is displayed on the Library Manager console informing
you of the date and time when the TS7700 Virtualization Engine database was restored.
Important: Repairing a 3592 tape should only be done for data recovery. After the data
has been moved to a new volume, replace the repaired cartridge.
Broken tape
If a 3592 tape cartridge is physically damaged and unusable (the tape is crushed, or the
media is physically broken, for example), the TS7740 Virtualization Engine cannot recover the
contents. This is the same for any tape drive media cartridges. You can generate a list of
logical volumes that are on that stacked volume. Consult with your SSR to determine if IBM
services are available to attempt data recovery from a broken tape.
With the TS7700 Virtualization Engine subsystem, if an error condition is encountered during
the execution of the mount, instead of indicating that the mount was successful, the TS7700
Virtualization Engine returns completion and reason codes to the host indicating that a
problem was encountered.
Reason codes provide information about the condition that caused the mount to fail. The
reason codes that might be presented are as follows:
X'10' Internal Error Detected
X'11' Resend Special Case
X'20' Specific Volume In Use On Another Cluster
X'21' Scratch Volume Selected In Use On Another Cluster
X'22' Valid Volume Inaccessible
X'23' Local Cluster Path to Volume's Data No Longer Available
X'24' Remote Cluster Path To Volume's Data No Longer Available
X'25' Copy Required, but Cluster Copying Inhibited
X'30' Local Cluster Recall Failed, Stacked Volume Misplaced
X'31' Local Cluster Recall Failed, Stacked Volume Inaccessible
X'32' Local Cluster Recall Failed, Stacked Volume Unavailable
X'33' Local Cluster Recall Failed, Stacked Volume No Longer In Library
X'34' Local Cluster Recall Failed, Stacked Volume Load Failure
X'35' Local Cluster Recall Failed, Stacked Volume Access Error
X'38' Remote Cluster Recall Failed, Stacked Volume Misplaced
X'39' Remote Cluster Recall Failed, Stacked Volume Inaccessible
X'3A' Remote Cluster Recall Failed, Stacked Volume Unavailable
X'3B' Remote Cluster Recall Failed, Stacked Volume No Longer In Library
X'3C' Remote Cluster Recall Failed, Stacked Volume Load Failure
X'3D' Remote Cluster Recall Failed, Stacked Volume Access Error
When a mount is failed on a TS7700 Virtualization Engine, you can attempt to resolve the
underlying problem indicated by the reason code and then have the mount retried. For
example, if the failure was because a recall was required and the stacked volume was
unavailable because it was accidentally removed from the library, recovery involves returning
the volume to the library and then replying to the outstanding message with RETRY.
The host is notified that intervention-required conditions exist. Investigate the reason for the
mismatch. If possible, relabel the volume to use it again.
The TS7700 Virtualization Engine internal recovery procedures handle this situation and
restart the TS7700 Virtualization Engine. See Chapter 10, “Failover and disaster recovery
scenarios” on page 749 for more details.
Comparing Figure 8-99 to 8.4.3, “Health & Monitoring” on page 462, you can see that all
selections regarding Physical Tape are missing (Physical Tape Drives, Physical Media
Inventory, and Physical Library).
For all the remaining and possible options, see 8.4, “TS7700 Virtualization Engine
Management Interface” on page 457.
Figure 8-100 TS7720 Virtualization Engine MI Physical Volume Pools shown on TS7720
This selection brings up an error message because a TS7720 cluster is a disk-only product
without physical drives or physical volumes attached.
For all the remaining possible options, see 8.4, “TS7700 Virtualization Engine Management
Interface” on page 457.
Scenarios are described that show the effect of the various algorithms in z/OS and the TS7700
Virtualization Engine R2.0 on device allocation. They help you understand how the settings
and definitions impact device allocation.
TS7700 Virtualization Engine shared resources are also described so that you can
understand the impact that contention for these resources has on the performance of the
TS7700 Virtualization Engine.
The monitoring section can help you understand the performance related data recorded in the
TS7700 Virtualization Engine. It discusses the performance issues that might arise with the
TS7700 Virtualization Engine. This chapter can also help you recognize the symptoms that
indicate that the TS7700 Virtualization Engine configuration is at or near its maximum
performance capability. The information provided can help you evaluate the options available
to improve the throughput and performance of the TS7700 Virtualization Engine.
The capacity planning case study illustrates guidelines and techniques for the management
of virtual and stacked volumes associated with the TS7700 Virtualization Engine.
Based on initial modeling and measurements, and assuming a 2.66:1 compression ratio,
Figure 9-1 shows the evolution in the write performance with TS7700 family, which is also
described in more detail in the IBM Virtualization Engine TS7720 and TS7740 Releases 1.6,
1.7, and 2.0 Performance White Paper. This paper and the most recently published
performance white papers are available on the Techdocs website at the following address:
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
Figure 9-1 shows the evolution of performance in the TS7700 IBM Virtualization Engine
family compared with the previous member of IBM Tape Virtualization, the IBM Virtual Tape
Server (VTS). The numbers were obtained from runs with 128 concurrent jobs, each job
writing 800 MB (uncompressed) using 32 KB blocks. The number of buffers (BUFNO) used by
QSAM was twenty (QSAM BUFNO = 20).
The numbers shown in Figure 9-2 were obtained with 128 concurrent jobs in all runs, each
job reading 800 MB (uncompressed) using 32 KB blocks, QSAM BUFNO = 20.
Compared with the VTS, the TS7700 has introduced faster performing hardware components
(such as faster FICON channel adapters, the more powerful TS7700 Virtualization Engine
controller, and faster disk cache) along with the new TS7700 Virtualization Engine
architecture, providing for improved performance and throughput characteristics of the
TS7700 Virtualization Engine. From a performance aspect, important characteristics of the
architecture are as follows:
With the selection of DB2 as the central repository, the TS7700 Virtualization Engine
provides a standard SQL interface to the data, and all data is stored and managed in one
place. DB2 also allows for more control over back-end performance.
The cluster design with vNode and hNode provides increased configuration flexibility over
the monolithic design of the Virtual Tape Server.
The use of TCP/IP instead of FICON for site-to-site communication eliminates the
requirement to use channel extenders.
Read-hit data rates are typically higher than recall data rates.
Summary
The two read performance metrics, along with peak and sustained write performance, are
sometimes referred to as the four corners of virtual tape performance. Recall performance is
dependent on several factors that can vary greatly from installation to installation, such as
number of physical tape drives, spread of requested logical volumes over physical volumes,
location of the logical volumes on the physical volumes, and length of the physical media.
Grid considerations
Up to six TS7700 clusters can be linked together to form a grid configuration. The connection
between these clusters is provided by two or four 1-Gbps TCP/IP links per cluster or two
10 Gbps links. Data written to one TS7700 cluster can be optionally copied to the other
cluster (or clusters). Up to six TS7700 clusters can be host-driven, depending on individual
requirements.
Deferred copies are controlled during heavy host I/O with a default setting of 125 ms for Deferred Copy Throttling (DCT). More priority can be given to deferred copying by lowering the DCT value. DCT, and how to modify its default value, is described in detail in 9.7.3, “Adjusting the TS7700” on page 698.
This section discusses the effects on performance of the following shared resources:
TS7700 Virtualization Engine processor
Tape volume cache
TS7740 Virtualization Engine physical tape drives
TS7740 Virtualization Engine physical stacked volumes
Figure 9-3 TS7700 shared resources: host read/write through the HBA (compression and decompression), the tape volume cache, grid copies to and from other clusters, remote mounts, and the TS7740 back-end drive functions (premigrate, recall, reclaim, and Copy Export)
The new R2.0 server models V07/VEB have more CPU power, thus providing better overall performance. The additional CPU capacity allows more processing, such as premigration activity, to occur in parallel.
Clarification: Cross-cluster mounts to other clusters do not move data through local
cache. Also, Reclaim data does not move through the cache.
To make good use of the tunable parameters available to you, you need a good understanding of the types of throttling within the TS7700 and their underlying mechanisms. Throttling is the mechanism used to control and balance the many tasks that run at the same time within the TS7700, prioritizing certain tasks over others. These mechanisms come into play only when the system reaches high levels of utilization, components are running near their full capacity, and bottlenecks start to appear. The criteria balance user needs against what is vital to TS7700 functioning. Control is accomplished first by delaying the launch of new tasks, giving preference to the more important tasks. After tasks are dispatched and running, control over their execution is accomplished by slowing down a specific functional area through calculated amounts of delay in its operations. This relieves stress on an overloaded component, frees CPU cycles for another needed function, or simply allows a slower operation to finish.
This section explains the throttling algorithms and the control knobs that can be used to
customize the TS7700 behavior to your particular needs.
The subsystem has a series of self-regulating mechanisms that try to optimize the use of shared resources. Subsystem resources, such as CPU, cache bandwidth, cache size, host channel bandwidth, grid network bandwidth, physical drives, and so forth, are limited and must be shared by all tasks moving data throughout the subsystem.
The resources implicitly throttle by themselves when they reach their limits. The TS7700 introduces a variety of explicit throttling methods to give higher-priority tasks more of the shared resources. The following list shows the priority of normally running tasks that move data (a small illustrative sketch of this ordering follows the list):
1. Immediate copies
2. Recalls
3. Copy export
4. Host I/O
5. Reclaims
6. Premigration
7. Deferred copies
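The relative ordering in this list can be captured as a simple lookup. The following Python lines are only an illustration of the priorities stated here; the numeric values are not internal TS7700 values.

# Relative priority of the data-moving tasks listed above (1 = highest).
TASK_PRIORITY = {
    "immediate copies": 1,
    "recalls": 2,
    "copy export": 3,
    "host I/O": 4,
    "reclaims": 5,
    "premigration": 6,
    "deferred copies": 7,
}

def throttle_candidates(active_tasks):
    """Return active tasks lowest priority first, the order in which explicit
    throttling would reasonably slow them down."""
    return sorted(active_tasks, key=lambda t: TASK_PRIORITY[t], reverse=True)

print(throttle_candidates(["host I/O", "deferred copies", "recalls"]))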
In certain situations the TS7700 will grant higher priority to activities in order to solve a
problem state. Examples are as follows:
Panic reclamation: The TS7740 detects that the number of empty physical volumes has dropped below the minimum value, and reclaims must be run immediately to increase the count.
Cache fills with copy data: To protect against having uncopied volumes removed from cache, the TS7740 throttles data coming into the cache.
Cache overfills: If no more data can be placed into the cache before data is removed, other tasks trying to add to the cache are heavily throttled.
Throttling settings can be adjusted by using Host Console Request commands. However, be sure you have a thorough understanding of your production setup before changing those settings. The same applies to functions such as Selective Device Access Control (SDAC), Device Allocation Assistance (DAA), and Scratch Allocation Assistance (SAA). All these functions can be useful and might be needed in your installation. However, if not correctly implemented, they might result in an imbalanced configuration of the grid.
You can also adjust the thresholds at which alerts are raised for cache usage and other resources managed by the cluster. The messages that are presented to the hosts when the thresholds are reached should be evaluated and automated through your existing automation tools. Host Console Request commands are described in 8.5.3, “Host Console Request function” on page 589.
Figure 9-4 is a visual representation of the Host Write Throttle mechanism and where it
applies.
Figure 9-4 Host Write Throttle: compressed host writes and cross-cluster mounts for write into the disk cache, with premigrate, recall, and grid traffic
Host write throttle and copy throttle are triggered by the same factors, as follows:
Full cache: Cache is full of data that needs to be copied to another cluster.
– Amount of data to be copied to another cluster is greater than 95% of cache size and
the TS7700 has been up more than 24 hours.
– Full Cache is reported as Write Throttle and Copy Throttle in VEHSTATS.
Immediate copy: Immediate copies to other clusters, where this cluster is the source, are
taking too long or are predicted to take too long.
– The TS7700 evaluates the need for this throttle every two minutes.
– The TS7700 examines the depth of the immediate copy queue and the amount of time
that the copies have been in the queue to determine if the throttle should be applied, as
follows.
The algorithm looks at the age of the oldest immediate copy in the queue:
• If the oldest copy is 10 - 30 minutes old, the TS7700 sets the throttle in the range of 0.00166 seconds to two seconds, ramping linearly across the 10 - 30 minute range.
• The maximum throttle (2 seconds) is applied immediately if an immediate copy has been in the queue for 30 minutes or longer.
The algorithm also looks at the quantity of data to be copied and calculates how long the transfer will take:
• If the predicted time is greater than 35 minutes, the TS7700 sets the throttle to the maximum (2 seconds).
• If it is 5 - 35 minutes, the TS7700 sets the throttle from 0.01111 seconds to 2 seconds, ramping linearly across the 5 - 35 minute range.
– Immediate copy is reported as Write Throttle in VEHSTATS.
Example: The time required for a 6000 MB immediate copy is 7.5 times longer than
an 800 MB immediate copy.
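The two linear ramps just described can be expressed as a small calculation. The following Python sketch is only an illustration of the published ramp values; how the TS7700 actually combines the two criteria internally is not documented here, so taking the stronger of the two delays is an assumption.

def _ramp(value, low, high, min_delay, max_delay):
    """Linear ramp: below low -> no delay, above high -> max_delay."""
    if value < low:
        return 0.0
    if value >= high:
        return max_delay
    return min_delay + (max_delay - min_delay) * (value - low) / (high - low)

def immediate_copy_throttle(oldest_copy_age_min, predicted_transfer_min):
    """Approximate write/copy throttle delay (seconds) from the two criteria above."""
    by_age = _ramp(oldest_copy_age_min, 10, 30, 0.00166, 2.0)
    by_size = _ramp(predicted_transfer_min, 5, 35, 0.01111, 2.0)
    return max(by_age, by_size)   # assumption: the stronger trigger wins

print(immediate_copy_throttle(oldest_copy_age_min=20, predicted_transfer_min=12))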
Deferred Copy Throttle is triggered by CPU usage. CPU usage and the host compressed throughput are monitored and evaluated every 30 seconds. DCT is invoked whenever CPU usage rises above 85% and, by default, the compressed host throughput is greater than 100 MBps.
Remember: The 100 MBps threshold is the default and can be changed through the Host Console Request function.
Deferred Copy Throttle remains in effect for the subsequent 30 second interval, after which
the TS7700 will reevaluate the scenario.
This 125 ms of throttling, if applied, severely slows down deferred copy activity, translating to
125 ms being added between each block of 256 KB of data sent through the replication grid
for a volume copy.
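To see why the default 125 ms DCT value slows deferred copies so much, consider the arithmetic: inserting 125 ms between 256 KB blocks caps a single copy stream at roughly 2 MBps before any link time is added. A minimal sketch of that calculation:

# Effect of Deferred Copy Throttling on a single volume copy stream.
BLOCK_BYTES = 256 * 1024   # data sent between throttle delays
DCT_SECONDS = 0.125        # default DCT value

max_rate_mbps = BLOCK_BYTES / DCT_SECONDS / 1_000_000
print(f"Upper bound per deferred copy stream: about {max_rate_mbps:.1f} MBps")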
The DCT and the DCT Threshold can be set by using Host Console Request function. For
details about setting DCT, see IBM Virtualization Engine TS7700 Series z/OS Host Command
Line Request User's Guide which is available on Techdocs website. Use the SETTING,
THROTTLE, DCOPYT keywords for the DCT and the SETTING, THROTTLE, DCTAVGTD
keywords for the DCT Threshold.
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
Premigration tasks
The TS7740 uses a variety of criteria to manage the number of premigration tasks. The
TS7700 looks at these criteria every five seconds to determine if one more premigration task
should be added. Adding a premigration task is based on the following factors and others:
Host-compressed write rate
CPU activity
How much data needs to be premigrated per pool
How much data needs to be premigrated in total
The reasons why a volume moves to the immediate-deferred state are contained in the Error Recovery Action (ERA) 35 sense data. The codes are divided into unexpected and expected reasons. From a z/OS host view, the ERA is part of message IOS000I (Example 9-1).
New failure content is introduced into the CCW(RUN) ERA35 sense data:
Byte 14 FSM Error: If set to 0x1C (Immediate Copy Failure), the additional new fields are
populated.
Byte 18 Bits 0:3 – Copies Expected: Indicates how many RUN copies were expected for
this volume.
Byte 18 Bits 4:7 – Copies Completed: Indicates how many RUN copies were actually
verified as successful before surfacing SNS.
Byte 19 – Immediate Copy Reason Code:
– Unexpected – 0x00 to 0x7F: The reasons are based on unexpected failures:
• 0x01 – A valid source to copy was unavailable
• 0x02 – Cluster targeted for a RUN copy is not available (unexpected outage).
• 0x03 – Forty minutes has passed and one or more copies have timed out.
• 0x04 – Is downgraded to immediate-deferred because of health/state of RUN target
clusters.
• 0x05 – Reason is unknown.
– Expected – 0x80 to 0xFF: The reasons are based on configuration or a result of
planned outages:
The additional data contained within the CCW(RUN) ERA35 sense data can be used within a
z/OS custom user exit to act on a job moving to the immediate-deferred state. Because the
requesting application that results in the mount has already received successful status before
the issuing of the CCW(RUN), it cannot act on the failed status. However, future jobs can be
suspended or other custom operator actions can be taken using the information provided
within the sense data.
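The byte and bit positions described above can be extracted with simple masks. The following Python sketch shows what such a decode might look like in a monitoring or automation tool; the function name and the sample sense data are assumptions for illustration, and IBM bit numbering is used (bit 0 is the high-order bit), so bits 0:3 are the high nibble.

def decode_era35_immediate_deferred(sense: bytes):
    """Decode the CCW(RUN) ERA 35 fields described above; returns None
    unless byte 14 indicates an immediate copy failure (0x1C)."""
    if len(sense) < 20 or sense[14] != 0x1C:
        return None
    reason = sense[19]
    return {
        "copies_expected":  (sense[18] >> 4) & 0x0F,  # byte 18, bits 0:3
        "copies_completed": sense[18] & 0x0F,         # byte 18, bits 4:7
        "reason_code": reason,
        "expected_reason": reason >= 0x80,            # 0x80 - 0xFF: planned/configuration
    }

# Hypothetical sense data: 2 copies expected, 1 verified, reason 0x03 (copy timed out).
sample = bytearray(32)
sample[14], sample[18], sample[19] = 0x1C, 0x21, 0x03
print(decode_era35_immediate_deferred(bytes(sample)))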
The availability of TS7740 Virtualization Engine physical tape drives for certain functions can
significantly affect TS7740 Virtualization Engine performance. The TS7740 Virtualization
Engine manages the internal allocation of these drives as required for various functions, but it
always reserves at least one physical drive for recall and one drive for premigration.
Reading from and writing to physical tape consumes CPU power in the 3957-V07/VEB. Compared to the earlier 3957-V06, the new processors provide better overall performance for access to and from physical tape. In earlier lab measurements, the reclamation process caused up to 30% degradation in host performance, but that degradation is eliminated with TS7700 Virtualization Engine R2.0 on the V07/VEB models.
Tape volume cache management algorithms also influence the allocation of back-end
physical tape drives, as described in the following examples:
Cache freespace low: The TS7740 Virtualization Engine increases the number of drives
available to the premigration function and reduces the number of drives available for
recalls.
Premigration threshold crossed: The TS7740 Virtualization Engine reduces the number of
drives available for recall down to a minimum of one drive to make drives available for the
premigration function.
The number of drives available for recall or copy is also reduced during reclamation.
If the number of drives available for premigration is restricted, the number of virtual volumes in the cache that can be migrated is limited, which can then lead to free space or copy queue throttling being applied.
If the number of drives for recall is restricted, this can lead to elongated virtual mount times for
logical volumes being recalled.
This algorithm chains several volumes together on the same stacked volume for the same
pool. This can change recall performance, sometimes making it better, sometimes making it
worse. Other than variations in performance because of differences in distribution over the
stacked volumes, recall performance should be constant.
Reclaim policies must be set in the management interface (MI) for each volume pool.
Reclamation occupies drives and can affect performance. The Inhibit Reclaim Schedule is
also set from the MI, and can prevent reclamation from running during specified time frames
during the week. If Secure Data Erase is used, fewer physical tape drives might be available even during times when reclamation is inhibited. If you use Secure Data Erase, limit it to a specific group of data. Inhibit Reclaim specifications only partially apply to Secure Data Erase: Secure Data Erase does not honor your settings and therefore can run erasure operations as long as there are physical volumes to be erased.
The use of Copy Export and Selective Dual Copy also increases the use of physical tape
drives. Both are used to create two copies of a logical volume in a TS7740 Virtualization
Engine.
Figure 9-8 shows how the sustained write data rate can be affected by the back-end physical
tape drives. The chart shows sustained write data rates achieved in the laboratory for a
stand-alone TS7740 with various numbers of TS1130 backend physical tape drives. The data
for the chart was measured with no TS7740 activity other than the sustained writes and
premigration (host write balanced with premigration to tape).
Figure 9-8 TS7740 (3956 CC7/CC8) stand-alone sustained write versus number of online drives
All runs were made with 128 concurrent jobs. Each job wrote 800 MB (uncompressed) using 32 KB blocks, data compression 2.66:1, QSAM BUFNO = 20, and four 4-Gb FICON channels from a z10 LPAR.
Figure 9-9 shows the TS7740 premigration rates. The rates at which cache-resident data is
copied to physical tapes depends on the number of drives available for premigration.
Figure 9-9 TS7740 (3956 CC7/CX7) stand-alone premigration rate versus premigration drives
All runs were made with 128 concurrent jobs. Each job wrote 800 MB (uncompressed) using
32-KB blocks, data compression 2.66:1, QSAM BUFNO = 20, and four 4-Gb FICON channels
from a z10 LPAR.
This chapter uses a configuration with two three-cluster grids, named GRID1 and GRID2. Each of these grids has a TS7720 Virtualization Engine (Cluster 0) and a TS7740 Virtualization Engine (Cluster 1) at the primary Production Site, and a TS7740 Virtualization Engine (Cluster 2) at the Disaster Site. In the scenarios described below, the TS7720 Cluster 0 in the Production Site can be considered a deep cache for the TS7740 Cluster 1.
Figure 9-10 GRID1 and GRID2, each with a TS7720 (Cluster 0) and a TS7740 (Cluster 1) at the Production Site and a TS7740 (Cluster 2) at the Disaster Site, connected to the host through the FICON fabric and to each other through the LAN/WAN
In Figure 9-10, the host in the Production Site has direct access to the local clusters in the
Production Site, and has access over the extended FICON fabric to the remote clusters in the
Disaster Site. The extended FICON fabric can include DWDM connectivity, or can use FICON
tape acceleration technology over IP. Assume that connections to the remote clusters have a
limited capacity bandwidth.
Furthermore, there is an SMS Storage Group per GRID. Both groups are defined in the SMS
Storage Group routine as (GRID1,GRID2). SMS will equally manage storage groups: The
order in the definition statement does not influence the allocations.
The following scenarios are described in this section. Each scenario adds functions to the
previous scenario so you can better understand the effects of the added functions.
EQUAL Allocation
Describes the allocation characteristics of the default load balancing algorithm (EQUAL)
and its behavior across the sample TS7700 Virtualization Engine configuration with two
grids. See 9.4.1, “EQUAL Allocation” on page 655.
BYDEVICES Allocation
Adds the new BYDEVICES algorithm to the configuration. It explains how this algorithm
can be activated and the differences from the default EQUAL algorithm. See 9.4.2,
“BYDEVICES Allocation” on page 657.
Allocation and Copy Consistency Point Setting
Adds information about the effect of the CCP on the cache data placement. The various
TS7700 Virtualization override settings influence this data placement. See 9.4.3,
“Allocation and Copy Consistency Point setting” on page 659.
Remember: With EQUAL Allocation, the scratch allocations will randomize across the libraries and are not influenced by the number of online devices in the libraries.
In this first scenario both Device Allocation Assistance (DAA) and Scratch Allocation
Assistance (SAA) are assumed to be disabled. With the TS7700 Virtualization Engine you
can control both assistance functions with the LIBRARY REQUEST command. DAA is
ENABLED by default and can be DISABLED with the command. SAA is DISABLED by default
and can be ENABLED with the command. Furthermore none of the TS7700 Virtualization
Engine override settings are used.
Assuming that the Management Class for the logical volumes has a Copy Consistency Point (CCP) of [R,R,R] in all clusters and that the number of available virtual drives is the same in all clusters, the distribution of allocations across the two grids (composite libraries) will be evenly spread. The multi-cluster grids are running in BALANCED mode, so no cluster is preferred over another. However, the distribution across the six clusters might or might not be equally spread.
With the default algorithm EQUAL, the distribution of allocations across the clusters (in a
multiple cluster grid) depends on the order in which the library port IDs were initialized during
IPL (or IODF activate) and whether the library port IDs in the list (returned by the DEVSERV
QTAPE,composite-library-id command) randomly represent each of the clusters or if the
library port IDs in the list tend to favor the library port IDs in one cluster first, followed by the
next cluster, and so on. The order in which the library port IDs are initialized and appear in
this DEVSERV list can vary across IPLs or IODF activates, and can influence the
randomness of the allocations across the clusters.
So with the default algorithm EQUAL, there might be times when device randomization within the selected library (composite library) appears unbalanced across the clusters of a TS7700 Virtualization Engine that have online devices. As the number of eligible library port IDs increases, the likelihood of this imbalance occurring also increases. If this imbalance impacts your workload, consider the BYDEVICES algorithm described next.
Remember: Exceptions to this can also be caused by z/OS JCL backward referencing
specifications (UNIT=REF and UNIT=AFF).
With z/OS V1R11 and later, as well as z/OS V1R8 through V1R10 with APAR OA26414 installed, it is possible to change the selection algorithm to BYDEVICES. The algorithm
EQUAL, which is the default algorithm used by z/OS, can work well if the libraries (composite
libraries) under consideration have an equal number of online devices and the cluster
behavior above is understood.
Figure 9-11 EQUAL allocation: 50% of the scratch allocations go to GRID1 and 50% to GRID2 (ALLOC EQUAL, DAA disabled, SAA disabled, CCP [R,R,R])
For specific allocations (DAA DISABLED in this scenario) it is first determined which of the
Composite Libraries, GRID1 or GRID2, has the requested logical volume. That grid is
selected and the allocation can go to any of the clusters in the grid. If it is assumed that the
logical volumes were created with the EQUAL allocation setting (the default), it can be
expected that specific device allocation to these volumes will be distributed equally among
the two grids. However, how well the allocations are spread across the clusters depends on
the order in which the library port IDs were initialized (discussion above) and whether this
order was randomized across the clusters.
In a TS7740 Virtualization Engine multi-cluster grid configuration, only the original copy of the volume will stay in cache, normally in the mounting cluster's tape volume cache for a CCP setting of [R,R,R]. The copies of the logical volume in the other clusters will be managed as Preference Level 0 (PG0) and removed from cache after they have been placed on a stacked physical volume.
Note that there are a number of possibilities to influence the cache placement:
You can define a Storage Class for the volume with Preference level 0 (PG0). The logical
volume will not stay in the I/O tape volume cache cluster.
You can set the CACHE COPYFSC option, with a LIBRARY
REQUEST,GRID[1]/[2],SETTING,CACHE,COPYFSC,ENABLE command. When the
ENABLE keyword is specified, the logical volumes copied into the cache from a peer
TS7700 cluster are managed using the actions defined for the Storage Class construct
associated with the volume as defined at the TS7740 cluster receiving the copy.
Therefore, a copy of the logical volume will also stay in cache in each non-I/O tape volume
cache cluster where a Storage Class is defined as Preference Level 1 (PG1). However,
because the TS7720 is used as a deep cache, there are no obvious reasons to do so.
In the Hybrid multi-cluster grid configuration used in the example, there are two cache
allocation schemes, depending on the I/O tape volume cache cluster selected when creating
the logical volume. Assume a Storage Class setting of Preference Level 1 (PG1) in the
TS7740 Cluster 1 and Cluster 2.
If the mounting cluster for the non-specific request is the TS7720 Cluster 0, only the copy
in that cluster stays. The copies in the TS7740 Clusters 1 and Cluster 2 will be managed
as Preference level 0 (PG0) and will be removed from cache after placement of the logical
volume on a stacked physical volume. A later specific request for that volume creates a cross-cluster mount if the mount point is the vNode of Cluster 1 or Cluster 2.
If the mounting cluster for the non-specific request is the TS7740 Cluster 1 or Cluster 2,
not only the copy in that cluster stays, but also the copy in the TS7720 Cluster 0. Only the
copy in the other TS7740 cluster will be managed as Preference Level 0 (PG0) and will be
removed from cache after placement of the logical volume on a stacked physical volume.
Cache preferencing is not valid for the TS7720 cluster. A later specific request for that
logical volume creates only a cross-cluster mount if the mount point is the vNode of the
TS7740 cluster not used at data creation of that volume.
With the EQUAL allocation algorithm used for specific mount requests, there will always be cross-cluster mounts when the cluster where the device is allocated is not the cluster where the data resides. Cache placement can limit the number of cross-cluster mounts but cannot avoid them. Cross-cluster mounts over the extended fabric are likely not acceptable, so vary the devices of Cluster 2 offline.
Clarification: With BYDEVICES, the scratch allocation will randomize across all devices in
the libraries and will be influenced by the number of online devices.
Restriction: The SETALLOC operator command support is available only in z/OS V1R11
or later releases. In earlier z/OS releases, BYDEVICES must be enabled through the
ALLOCxx PARMLIB member.
Now assume that GRID1 has a total of 60 virtual devices online and GRID2 has 40 virtual
devices online. For each grid the distribution of online virtual drives is 50% for Cluster 0, 25%
for Cluster 1, and 25% for Cluster 2. The expected distribution of the scratch allocations will
be as shown in Figure 9-12.
Figure 9-12 BYDEVICES allocation: GRID1 receives 60% of the scratch allocations (30% Cluster 0, 15% Cluster 1, 15% Cluster 2) and GRID2 receives 40% (20% Cluster 0, 10% Cluster 1, 10% Cluster 2); ALLOC BYDEVICES, DAA disabled, SAA disabled, CCP [R,R,R]
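The percentages shown in Figure 9-12 follow directly from the online device counts. The following Python sketch illustrates the arithmetic, under the stated assumption that BYDEVICES spreads scratch allocations in proportion to the number of online devices:

# Online virtual devices per cluster (60 in GRID1, 40 in GRID2, split 50/25/25).
online_devices = {
    ("GRID1", "Cluster 0"): 30, ("GRID1", "Cluster 1"): 15, ("GRID1", "Cluster 2"): 15,
    ("GRID2", "Cluster 0"): 20, ("GRID2", "Cluster 1"): 10, ("GRID2", "Cluster 2"): 10,
}
total = sum(online_devices.values())
for (grid, cluster), n in online_devices.items():
    print(f"{grid} {cluster}: {100 * n / total:.0f}% of scratch allocations")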
As stated in 9.4.1, “EQUAL Allocation” on page 655, DAA is ENABLED by default and was
DISABLED by using the LIBRARY REQUEST command. Furthermore, none of the TS7700
Virtualization Engine override settings are activated.
The logical volume cache placement possibilities and the two allocation schemes, both
described in 9.4.1, “EQUAL Allocation” on page 655, are also applicable for the BYDEVICES
allocation.
With the BYDEVICES allocation algorithm used for specific mount requests, there will always
be cross-cluster mounts when the cluster where the device is allocated is not the cluster
where the data resides. Cache placement can limit the number of cross-cluster mounts but
cannot avoid them. Cross-cluster mounts over the extended fabric are likely not acceptable, so vary the devices of Cluster 2 offline.
For non-specific (scratch) allocations, the BYDEVICES algorithm will randomize across all
devices, resulting in allocations on all three clusters of each grid. I/O tape volume cache
selection will subsequently assign the tape volume cache of Cluster 0 as the I/O tape volume
cache due to the CCP setting. There are many factors that might influence this selection, as
explained in “I/O tape volume cache selection” on page 96, but normally the cluster with a
Copy Consistency Point of R(un) will get preference over other clusters. As a consequence, the tape volume cache of Cluster 0 is selected as the I/O tape volume cache, and cross-cluster mounts will be issued from both Cluster 1 and Cluster 2.
By activating the override setting Prefer Local Cache for Fast Ready Mount Requests in the Cluster 2 of each grid at the Disaster Site, cross-cluster mounts are avoided, but the copy to Cluster 0 is still made before the job ends because of the R(un) CCP setting for that cluster. By further defining a family for the Production Site clusters, Cluster 1 will retrieve its copy from Cluster 0 in the Production Site location, thereby avoiding the use of the remote links between the locations.
The most commonly implemented method to prevent device allocations at the Disaster Site is simply to vary all the remote virtual devices offline. The disadvantage is that, if a cluster in the Production Site is lost, an operator must manually vary online the virtual devices of Cluster 2 in the grid with the failing cluster.
With the TS7700 Virtualization Engine R2.0, an alternative solution is to exploit Scratch Allocation Assistance (SAA), which is described in 9.4.5, “Allocation and Scratch Allocation Assistance” on page 664.
Cluster 0 is likely to have a valid copy of the logical volume in the cache because of the CCP setting of [R,D,D]. If the vNode of Cluster 1 or Cluster 2 is selected as the mount point, the result is a cross-cluster mount. It might happen that this volume has been removed by a removal policy in place for the TS7720 Cluster 0, in which case the mount point's tape volume cache becomes the I/O tape volume cache.
Activating the TS7700 Virtualization Engine override Force Local TVC to have a copy of the data will first result in a recall of the virtual volume from a stacked volume. If there is no valid copy in the cluster, or if the recall fails, a copy is retrieved from one of the other clusters before the mount completes. Activating the override setting Prefer Local Cache for non-Fast Ready Mount Requests recalls a logical volume from tape instead of using the grid links to retrieve the data of the logical volume from Cluster 0. This might result in longer mount times.
With the TS7700 Virtualization Engine, an alternative solution is to exploit Device Allocation Assistance (DAA), which is described in 9.4.4, “Allocation and Device Allocation Assistance” on page 661. DAA is enabled by default.
Figure 9-13 Allocations with CCP [R,D,D] and no mount points in Cluster 2: GRID1 Cluster 0 45%, GRID1 Cluster 1 15%, GRID2 Cluster 0 30%, GRID2 Cluster 1 10% (ALLOC BYDEVICES, DAA disabled, SAA disabled)
The selection algorithm orders the clusters first by those having the volume already in cache,
then by those having a valid copy on tape, and then by those without a valid copy.
Subsequently, host processing will attempt to allocate a device from the first cluster returned
in the list. If an online device is not available within that cluster, it will move to the next cluster
in the list and try again until a device is chosen. This allows the host to direct the mount
request to the cluster that would result in the fastest mount, typically the cluster that has the
logical volume resident in cache. If the mount is directed to a cluster without a valid copy in its cache, the mount results in a recall or a cross-cluster mount.
If the default allocation algorithm EQUAL is used, it supports an ordered list for the first seven
library port IDs returned in the list. After that, if an eligible device is not found, all of the
remaining library port IDs are considered equal. The alternate allocation algorithm
BYDEVICES removes the ordered library port ID limitation. With the TS7700 Virtualization
Engine, additional APAR OA30718 should also be installed before enabling the new
BYDEVICES algorithm. Without this APAR, the ordered library port ID list might not be
honored properly, causing specific allocations to appear randomized.
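The ordered-list behavior just described can be summarized in a few lines. This Python sketch is only an illustration of the selection order (volume in cache first, then a valid copy on tape, then no valid copy, skipping clusters without online devices); it does not represent internal z/OS or TS7700 code, and the cluster data structure is hypothetical.

def daa_order(clusters):
    """Order candidate clusters: cache copy first, then tape copy, then no copy."""
    rank = {"in_cache": 0, "on_tape": 1, "no_copy": 2}
    return sorted(clusters, key=lambda c: rank[c["copy_state"]])

def pick_cluster(clusters):
    """Walk the ordered list and take the first cluster with an online device."""
    for c in daa_order(clusters):
        if c["online_devices"] > 0:
            return c["name"]
    return None   # no eligible device anywhere

grid = [
    {"name": "Cluster 0", "copy_state": "in_cache", "online_devices": 0},
    {"name": "Cluster 1", "copy_state": "on_tape",  "online_devices": 8},
    {"name": "Cluster 2", "copy_state": "no_copy",  "online_devices": 8},
]
print(pick_cluster(grid))   # Cluster 0 has no online devices, so Cluster 1 is chosen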
In the scenario described in the previous section, 9.4.3, “Allocation and Copy Consistency Point setting” on page 659, enabling DAA (the default) by issuing the command LIBRARY REQUEST,GRID[1]/[2],SETTING,DEVALLOC,PRIVATE,ENABLE influences
the specific requests as follows. The Copy Consistency Point is defined as [R,D,D]. It is
assumed that there are no mount points in Cluster 2. It is further assumed that the data is not
in the cache of the TS7740 Virtualization Engine Cluster 1 anymore because this data is
managed as Tape Volume Cache Preference Level 0 (PG0) by default. It is first determined
which of the Composite Libraries, GRID1 or GRID2, have the requested logical volume. That
grid is selected and the allocation over the clusters is subsequently determined by DAA. The
result is that all allocations will select the TS7720 Cluster 0 as the preferred cluster.
Remember that you can influence the placement in cache by setting the CACHE COPYFSC
option with the LIBRARY REQUEST,GRID[1]/[2],SETTING,CACHE,COPYFSC,ENABLE
command. When the ENABLE keyword is specified, the logical volumes copied into the cache
from a peer TS7700 cluster are managed using the actions defined for the Storage Class
construct associated with the volume as defined at the TS7740 cluster receiving the copy.
Therefore a copy of the logical volume will also stay in cache in each non-I/O tape volume
cache cluster where a Storage Class is defined as Preference Level 1 (PG1). However,
because the TS7720 is used as a deep cache, there are no obvious reasons to do so.
There are two major reasons why Cluster 0 might not be selected:
No online devices are available in Cluster 0, but are in Cluster 1.
The defined removal policies in the TS7720 caused Cluster 0 to not have a valid copy of
the logical volume anymore.
In both situations DAA will select the TS7740 Virtualization Engine Cluster 1 as the preferred
cluster.
When the TS7740 Cluster 1 is selected due to lack of online virtual devices on Cluster 0,
cross-cluster mounts might happen unless the TS7700 Virtualization Engine overrides
settings, as described in 9.4.3, “Allocation and Copy Consistency Point setting” on
page 659, are preventing this.
When the TS7740 Cluster 1 is selected because the logical volume is not in the TS7720
Cluster 0 cache anymore, its cache is selected for the I/O TVS and, because the CCP
setting is [R,D,D], a copy to the TS7720 Cluster 0 will be made as part of successful
Rewind/Unload processing.
Even when Device Allocation Assistance is enabled, there might be specific mounts for which the device affinity call is not made. For example, when DFSMShsm appends to a volume, it goes to allocation requesting that a scratch volume be mounted. Then, when a device is allocated and a volume is to be mounted, DFSMShsm selects from the list of HSM-owned volumes. In this case, because the allocation started as a scratch request, the device affinity call is not made for this specific mount. The MARKFULL option can be specified in DFSMShsm to mark migration and backup tapes that are partially filled during tape output processing as full. This forces a scratch tape to be selected the next time the same function begins.
Figure 9-14 Allocations with DAA enabled: GRID1 receives 60% of the allocations and GRID2 receives 40%, split between Cluster 0 and Cluster 1 of each grid (ALLOC BYDEVICES, SAA disabled, CCP [R,D,D], no mount points in Cluster 2)
With DAA, you can vary the devices in the Disaster Site Cluster 2 online without changing the allocation preference for the TS7720 cache, as long as the logical volumes exist in this cluster and as long as this cluster is available. If these conditions are not met, DAA manages the local Cluster 1 and the remote Cluster 2 as equals, and cross-cluster mounts over the extended fabric will be issued in Cluster 2. A new copy of the logical volume will be created because of the Management Class setting of [R] for Cluster 0. This is likely not an acceptable scenario, so even with DAA enabled, you should vary the devices of Cluster 2 offline.
If you plan to have an alternate Management Class setup for the Disaster Site (perhaps for the disaster test LPARs), carefully plan the Management Class settings, the device ranges that should be online, and whether DAA will be enabled. You will probably read production data and create test data using a separate category code. If you do not want the grid links overloaded with test data, vary the devices of Cluster 0 and Cluster 1 offline and activate the TS7700 Virtualization Engine override setting Force Local TVC to have a copy of the data. A specific volume request then forces a mount in Cluster 2 even if there is a copy in the deep cache of the TS7720 Cluster 0.
When a composite library supports/enables the SAA function, the host will issue a SAA
handshake to all SAA enabled composite libraries and provide the Management Class that
will be used for the upcoming scratch mount. A cluster is designated as a candidate for
scratch mounts using the Scratch Mount Candidate option on the Management Class
construct, accessible from the TS7700 Management Interface, as shown in Figure 9-15. By
default all clusters are considered candidates.
The targeted composite library will use the provided Management Class definition and the
availability of the clusters within the same composite library to filter down to a single list of
candidate clusters. Clusters that are unavailable or in service are excluded from the list. If the
resulting list has zero clusters present, the function will then view all clusters as candidates. In
addition, if the filtered list returns clusters that have no devices configured within z/OS, all
clusters in the grid become candidates. The candidate list is not ordered, meaning that all
candidate clusters are viewed as equals and all clusters excluded from the list are not
candidates.
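The filtering rules just described amount to a short piece of logic. The following Python sketch restates them for illustration only; the cluster attributes are hypothetical stand-ins for the Management Class candidacy flag, availability, and device configuration described above.

def saa_candidates(clusters):
    """Build the SAA candidate list as described above."""
    filtered = [c for c in clusters
                if c["scratch_mount_candidate"] and c["available"] and not c["in_service"]]
    # If nothing survives, or none of the survivors has devices configured to
    # z/OS, all clusters in the grid become candidates again.
    if not filtered or not any(c["devices_configured"] for c in filtered):
        return [c["name"] for c in clusters]
    return [c["name"] for c in filtered]   # unordered: all candidates are equals

clusters = [
    {"name": "Cluster 0", "scratch_mount_candidate": True,  "available": True,
     "in_service": False, "devices_configured": True},
    {"name": "Cluster 1", "scratch_mount_candidate": False, "available": True,
     "in_service": False, "devices_configured": True},
    {"name": "Cluster 2", "scratch_mount_candidate": True,  "available": False,
     "in_service": False, "devices_configured": True},
]
print(saa_candidates(clusters))   # only Cluster 0 survives the filtering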
Because this function introduces overhead into the z/OS scratch mount path, a new LIBRARY
REQUEST option is introduced to globally enable or disable the function across the entire
multi-cluster grid. SAA is disabled by default. When this option is enabled, the z/OS JES2
software will obtain the candidate list of mount clusters from a given composite library. Use
the LIBRARY REQUEST,GRID[1]/[2],SETTING,DEVALLOC,SCRATCH,ENABLE command
to enable SAA. All clusters in the multi-cluster grid must be at release R2.0 level before SAA
will be operational. A supporting z/OS APAR OA32957 is required to use Scratch Allocation
Assistance in a JES2 environment of z/OS. Any z/OS environment with earlier code can exist,
but will continue to function in the traditional way with respect to scratch allocations.
Assume that there are two main workloads. The Application workload consists of logical volumes that are created and subsequently retrieved on a regular daily, weekly, or monthly basis. This workload can best be placed in the TS7720 deep cache. The Backup workload, by contrast, consists of volumes that are typically written once and seldom retrieved, so it is better placed on the TS7740.
With the above definitions, the scratch allocations for the Application workload are directed to the TS7720 Cluster 0, and the scratch allocations for the Backup workload are directed to the TS7740 Cluster 1. The devices of the remote clusters in the Disaster Site are not online.
Allocation BYDEVICES is used. GRID1 has a total of 60 devices online and GRID2 has 40 devices online. Within each grid, the distribution of allocations is now determined not by the number of online devices, as in the BYDEVICES scenario, but by the SAA setting of the Management Class.
Figure 9-16 Allocations with SAA enabled: Application scratch mounts go to the TS7720 Cluster 0 (60% GRID1, 40% GRID2) and Backup scratch mounts go to the TS7740 Cluster 1 (60% GRID1, 40% GRID2); ALLOC BYDEVICES, DAA enabled, CCP Application [R,D,D], CCP Backup [D,R,D], no mount points in Cluster 2
Clusters not included in the list are NEVER used for scratch mounts unless those clusters are
the only clusters known to be available and configured to the host. If all candidate clusters
have either all their devices varied offline to the host or have too few devices varied online,
z/OS will not revert to devices within non-candidate clusters. Instead, the host will go into
allocation recovery. In allocation recovery, the existing z/OS allocation options for device
allocation recovery (WTOR | WAITHOLD | WAITNOH | CANCEL) are used.
Any time a service outage of candidate clusters is expected, the SAA function should be
disabled during the entire outage using the LIBRARY REQUEST command. If left enabled,
the devices that are varied offline can result in zero candidate devices, causing z/OS to enter
the allocation recovery mode. After the cluster or clusters are again available and their
devices are varied back online to the host, SAA can be re-enabled.
If you vary offline too many devices within the candidate cluster list, z/OS might have too few
devices to contain all concurrent scratch allocations. When many devices are taken offline,
first disable SAA using the LIBRARY REQUEST command and then re-enable SAA after they
have been varied back on.
If you plan to have an alternate Management Class setup for the Disaster Site (perhaps for the disaster test LPARs), carefully plan the Management Class settings, the device ranges that should be online, and whether SAA will be used. Read production data and create test data using a separate category code. If you use the same Management Class as in the production LPARs, and if you define the Management Class in Cluster 2 with Scratch Allocation Assistance for Cluster 2 and not for Cluster 0 or 1 (as determined by the type of workload), it might happen that Cluster 2 is selected for allocations in the production LPARs, because SAA builds its candidate list only from the Management Class definition and cluster availability.
Furthermore, the CCP for the Management Classes in the Disaster Site can be defined as [D,D,R], or even [N,N,R] if it is for test only. If it is kept equal to the setting in the Production Site, with an [R] for Cluster 0 or Cluster 1, cross-cluster mounts might occur. If you do not want the grid links overloaded with test data, update the CCP setting or use the TS7700 Virtualization Engine override setting Prefer Local Cache for Fast Ready Mount Requests in Cluster 2 in the Disaster Site. Cross-cluster mounts are then avoided, but the copy to Cluster 0 or 1 is still made before the job ends because of the production R(un) CCP setting for these clusters. By further defining a family for the Production Site clusters, these clusters will source their copies from the other cluster in the Production Site location, thereby optimizing the usage of the remote links between the locations.
Clarification: Reclaim data in a TS7740 is transferred from one tape drive to another, not passed through the cache.
The TS7720 data flow represents the basic TS7700 configuration in terms of cache data flow.
For both the TS7720 and TS7740, the following data flows through the subsystem:
Uncompressed host data is compressed by the Host Bus Adapters (HBAs) and the
compressed data is written to cache.
Compressed data is read from the cache and decompressed by the HBA, as shown in
Figure 9-17.
Figure 9-17 TS7720 data flow: host data is compressed and decompressed in the HBA; compressed host writes go to the disk cache and read cache hits are read back from it
Figure 9-18 TS7740 data flow: in addition to the TS7720 flows, compressed data is premigrated from the disk cache to tape, and a read cache miss causes a recall from tape into the cache
Figure 9-19 TS7720 two-cluster grid data flow: writes with and without a copy, read cache hits, remote reads and writes around the cache, and copies to and from Cluster 1 over the grid
For both the TS7720 and TS7740, the following data is moved through the subsystem:
Local write with no remote copy, that is, copy consistency point (CCP) of run-none (RN)
includes writing the compressed host data to cache.
Local write with remote copy (CCP of DD or RR) includes writing the compressed host
data to cache and to the grid:
– For a CCP of RR, the copy is immediate and must complete before Rewind/Unload
(RUN) is complete. Copies are placed in the immediate copy queue.
– For a CCP of DD, the copy is deferred where the completion of the RUN is not tied to
the completion of the copy operation. Copies are placed in the Deferred Copy Queue.
Remote write with no local copy (CCP of NR) includes writing compressed host data to the
grid. This is not shown in Figure 9-19.
Local read with a local cache hit. Here, the compressed host data is read from the local
cache.
Local read with a remote cache hit. Here, the compressed host data is read from the
remote cache through the grid link.
Immediate and deferred copies from the remote cluster. Here compressed host data is
received on the grid link and copied into the local cache.
The TS7740 has the back-end tape drives for recalls and premigrates.
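The copy behavior of the CCP values used above (R, D, and N) can be summarized as a small mapping. This Python sketch uses the notation of this chapter for illustration only; it covers which queue a copy is placed on, not I/O tape volume cache selection.

# Which copy queue a peer cluster's copy is placed on, by its CCP value.
# R = immediate copy (must finish before Rewind/Unload completes),
# D = deferred copy, N = no copy.
def copy_queue(ccp: str):
    return {"R": "immediate copy queue",
            "D": "deferred copy queue",
            "N": None}[ccp]

# For a volume mounted on Cluster 0 with CCP [R, R, D], the peer clusters'
# CCP values determine the queue their copies land on:
peer_ccps = {"Cluster 1": "R", "Cluster 2": "D"}
for name, ccp in peer_ccps.items():
    print(f"{name}: CCP {ccp} -> {copy_queue(ccp)}")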
Figure 9-20 TS7740 two-cluster grid data flow: adds premigrate and recall to the grid copy, remote read/write, and cache-hit/cache-miss paths
The grid can be used in a preferred mode. Preferred mode means that only one cluster has its logical drives varied online. Host allocation selects a virtual device only from the cluster whose virtual devices are varied online. Data movement through the cache in this mode is a subset of the Balanced Mode model.
Figure 9-21 TS7720 multi-cluster grid data flow: writes with copies to Cluster 1 and Cluster 2, local and remote read cache hits, remote reads from Cluster 1 or 2, and copies to and from the other clusters
The TS7740 adds back-end tape drives for recalls and premigrates:
A write with no copy and a write with copy also include the premigrate process.
A read with local cache miss has one of the following results:
– A cross-cluster mount without recall
– A recall into local cache from local stacked volume
– A cross-cluster mount requiring recall from remote stacked volume
Host data will be written from cache to the physical stacked volumes in a premigrate
process.
Figure 9-22 TS7740 multi-cluster grid data flow: writes with copies to Cluster 1 and Cluster 2, read cache hits and misses (recall), remote reads, and premigrate on the local back-end drives
Figure 9-23 Four-cluster grid data flow: writes with copies to Clusters 1, 2, and 3, copies from Clusters 1 and 2, remote reads, read cache hits and misses (recall), and premigrate
One possible configuration for a four-cluster grid is two local clusters (Cluster 0 and 1) and
two remote clusters (Cluster 2 and 3). Configure the Copy Consistency Points (CCPs) in a
way that Cluster 0 is replicated to Cluster 2, and data written to Cluster 1 is replicated to
Cluster 3. In this way, you will have two two-cluster grids within the four-cluster grid. With this
configuration, two copies of data are in the grid. All data is accessible by all clusters either
within the cluster or through the grid, which means that all data is available when one of the
clusters is not available. See Figure 9-24 for the four-cluster grid example.
Remember: The same configuration considerations apply for five-cluster grid and
six-cluster grid configurations. These configurations are available with an RPQ.
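One hedged way to express the idea of two two-cluster grids within the four-cluster grid is with two Management Classes whose CCP arrays pair Cluster 0 with Cluster 2 and Cluster 1 with Cluster 3. The names and exact arrays in this sketch are illustrative assumptions, not a prescription:

# Illustrative Management Class CCP arrays [Cluster 0, 1, 2, 3] for the
# four-cluster layout described above: each production cluster replicates
# only to its disaster-site partner.
MANAGEMENT_CLASSES = {
    "MCPROD0": ["R", "N", "D", "N"],  # written on Cluster 0, deferred copy to Cluster 2
    "MCPROD1": ["N", "R", "N", "D"],  # written on Cluster 1, deferred copy to Cluster 3
}
for mc, ccp in MANAGEMENT_CLASSES.items():
    print(mc, ccp)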
Figure 9-25 Hybrid grid (three TS7720 production clusters, one TS7740 DR cluster)
Figure 9-26 illustrates how cooperative replication occurs with cluster families. Cooperative
replication is used for deferred copies only. When a cluster needs to pull a copy of a volume, it
will prefer a cluster within its family. The example uses CCPs of RRDD. With cooperative
replication one of the family B clusters at the DR site pulls a copy from one of the clusters in
production family A. The second cluster in family B waits for the other cluster in family B to
finish getting its copy, then pulls it from its family member. This way the volume travels only
once across the long grid distance.
Retain Copy Mode is an optional setting where a volume’s existing Copy Consistency Points
(CCPs) are honored instead of applying the CCPs defined at the mounting cluster. This
setting applies to private volume mounts for reads or write appends. It is used to prevent more
copies of a volume in the grid than desired. The example in Figure 9-27 is a four-cluster grid
where Cluster 0 replicates to Cluster 2, and Cluster 1 replicates to Cluster 3. The desired
result is that only two copies of data remain in the grid after the volume is accessed. Later, the
host wants to mount the volume written to Cluster 0. On a JES2 system Device Allocation
Assistance is used to determine which cluster is the best to request the mount from. Device
Allocation Assistance asks the grid which cluster to allocate a virtual drive from. The host will
then attempt to allocate a device from the best cluster, in this case, Cluster 0.
JES3 does not support Device Allocation Assist, so 50% of the time the host allocates to the
cluster that does not have a copy in its cache. Without Retain Copy Mode, three or four copies
of a volume will exist in the grid after the dismount instead of the desired two. In the case
where host allocation picks the cluster that does not have the volume in cache, one or two
additional copies are created on clusters 1 and 3 because the CCPs indicate the copies
should be made to clusters 1 and 3.
Figure 9-28 Four-cluster grid without Device Allocation Assist, Retain Copy Mode Disabled
With the Retain Copy Mode option set, the original CCPs of a volume are honored instead of
applying the CCPs of the mounting cluster. A mount of a volume to the cluster that does not
have a copy in its cache results in a cross-cluster (remote) mount instead. The cross-cluster
mount uses the cache of the cluster that contains the volume. The CCPs of the original mount
are used. In this case, the result is that Cluster 0 and 2 have the copies and Cluster 1 and 3
do not. This is illustrated in Figure 9-29.
Figure 9-29 Four-cluster grid without Device Allocation Assist, Retain Copy Mode enabled
Figure 9-30 Four-cluster grid, one production cluster down, Retain Copy disabled
The example in Figure 9-31 shows that the Retain Copy Mode is enabled and one of the
production clusters is down. In the scenario where the cluster containing the volume to be
mounted is down, the host will allocate to a device on the other cluster, in this case Cluster 1.
A cross-cluster mount using Cluster 2's cache occurs, and the original two copies remain. If the volume is appended to, it is changed on Cluster 2 only. Cluster 0 will get a copy of the altered volume when it rejoins the grid.
Figure 9-31 Four-cluster grid, one production cluster down, Retain Copy enabled
For more information, see the IBM Virtualization Engine TS7700 Series Best Practices -
TS7700 Hybrid Grid Usage white paper at the Techdocs web site:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101656
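The copy-count effect of Retain Copy Mode described above can be illustrated with a small sketch. The CCP arrays are illustrative assumptions for a volume originally written on Cluster 0 (copies on Clusters 0 and 2) that is later mounted on Cluster 1:

# Copy placement after a private mount lands on a cluster without the volume,
# with and without Retain Copy Mode (CCP arrays are [Cluster 0, 1, 2, 3]).
def copies_after_mount(existing, mounting_cluster_ccp, retain_copy_mode):
    """Return the set of clusters holding a copy after the dismount."""
    ccp = existing["ccp"] if retain_copy_mode else mounting_cluster_ccp
    return existing["copies"] | {i for i, v in enumerate(ccp) if v in ("R", "D")}

volume = {"copies": {0, 2}, "ccp": ["R", "N", "D", "N"]}  # as written on Cluster 0
ccp_at_cluster1 = ["N", "R", "N", "D"]                    # MC as defined at Cluster 1

print(copies_after_mount(volume, ccp_at_cluster1, retain_copy_mode=False))  # {0, 1, 2, 3}
print(copies_after_mount(volume, ccp_at_cluster1, retain_copy_mode=True))   # {0, 2}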
You can use the following interfaces, tools, and methods to monitor the TS7700 Virtualization
Engine:
IBM System Storage TS3500 Tape Library Specialist (TS7740 only)
TS7700 Virtualization Engine management interface
Bulk Volume Information Retrieval function (BVIR)
VEHSTATS and VEHGRXCL
The specialist and management interface are web-based. With the BVIR function, various
types of monitoring and performance-related information can be requested through a host
logical volume in the TS7700 Virtualization Engine. Finally, the VEHSTATS tools can be used
to format the BVIR responses, which are in a binary format, to create usable statistical
reports. All interfaces, tools, and methods to monitor the TS7700 Virtualization Engine are
explained in detail in the following sections. An overview of these interfaces, tools, and
methods are shown in Figure 9-32.
Figure 9-32 Interfaces, tools, and methods to monitor the TS7700 Virtualization Engine
Figure 9-33 shows the TS3500 Tape Library System Summary window.
Figure 9-34 TS3500 Tape Library Specialist Manage Access: Web Security window
Some information provided by the TS3500 Tape Library Specialist is in a display-only format
and there is no option to download data. Other windows provide a link for data that is
available only when downloaded to a workstation. The data, in comma-separated value (CSV) format, can be downloaded directly to a computer and then used as input for snapshot analysis for the TS3500. This information refers to the TS3500 and its physical drive usage statistics from a TS3500 standpoint only.
For further information, including how to request and use this data, see IBM TS3500 Tape
Library with System z Attachment A Practical Guide to Enterprise Tape Drives and TS3500
Tape Automation, SG24-6789.
Restriction: This statistic does not provide information from the host to the TS7700
Virtualization Engine or host to controller.
The IBM Tape System Reporter application has a Windows-based GUI server and client
application, and a Java-based server application. The Java-based server application allows
you to monitor multiple TS3500 libraries. The Windows-based GUI software application
allows you to monitor and generate custom reports for multiple TS3500 tape cartridges, tape
drives, and tape libraries. The IBM Tape System Reporter application operates by collecting
information from the TS3500 tape libraries, aggregating the data in a database, and providing
you the ability to generate a General SQL Query or custom report on the utilization and
performance of tape cartridges, tape drives, and the tape library. The IBM Tape System
Reporter application is bundled with ALMS, which is installed on your TS3500 library.
The TS7700 Virtualization Engine MI is based on a web server that is installed in each
TS7700 Virtualization Engine. You can access this interface with any standard web browser.
A link is available to the TS3500 Tape Library Specialist from the TS7700 Virtualization
Engine management interface, as shown at the lower left corner of Figure 9-36 on page 685.
This link might not be available if not configured during TS7740 installation or for a TS7720
Virtualization Engine.
This section introduces the Performance and Statistics windows of the TS7700 Virtualization
Engine management interface.
The navigation pane is available on the left side of the MI, as shown in the Grid Summary window in Figure 9-35.
Pending Updates
Use this window (Figure 9-40) to view the pending updates for the IBM Virtualization Engine
TS7700 Grid. The existence of pending updates indicates that updates occurred while a
cluster was offline, in Service Prep mode, or in Service mode. Before any existing pending
updates can take effect, all clusters must be online.
Remember: Pending immediate-deferred copies can also result from grid network
problems.
Figure 9-41 TS7700 Virtualization Engine management interface Logical Mounts window
The “Number of logical mounts during last 15 minutes” table has the following information:
Cluster The cluster name
Fast-Ready Number of logical mounts completed using the Fast-Ready method
Cache Hits Number of logical mounts completed from cache
Cache Misses Number of mount requests that were not fulfilled from cache
Total Total number of logical mounts
The “Average mount times (ms) during last 15 minutes” table has the following information:
Cluster The cluster name
Fast-Ready Average mount time for Fast-Ready scratch logical mounts
Cache Hits Average mount time for logical mounts completed from cache
Cache Misses Average mount time for requests that are not fulfilled from cache
Figure 9-42 TS7740 Virtualization Engine management interface Physical Mounts window
Use this window to view statistics for each cluster, vNode, host adapter, and host adapter port
in the grid. At the top of the window is a collapsible tree where you view statistics for a specific
level of the grid and cluster. Click the grid to view information for each cluster, and click a cluster to view information for its vNodes, host adapters, and host adapter ports.
The example in Figure 9-43 is from a TS7700 Virtualization Engine Cluster that is part of a
multi-cluster grid configuration (four-cluster grid).
Figure 9-43 TS7700 Virtualization Engine management interface Host Throughput window
The host throughput data is displayed in two bar graphs and one table. The bar graphs are for
raw data coming from the host to the host bus adapter (HBA), and for compressed data going
from the HBA to the virtual drive on the vNode.
The letter next to the table heading corresponds with the letter in the diagram above the table.
Data is available for a cluster, vNode, host adapter, and host adapter port. The table cells
include the following items:
Cluster The cluster or cluster component for which data is being
displayed (vNode, Host Adapter, Host Adapter Port)
Compressed Read (A) Amount of data read between the virtual drive and HBA
Raw Read (B) Amount of data read between the HBA and host
Read Compression Ratio Ratio of compressed read data to raw read data
Figure 9-44 TS7700 Virtualization Engine management interface Cache Throttling window
Cache throttling is a time interval applied to TS7700 Virtualization Engine internal functions to
improve throughput performance to the host. The cache throttling statistics for copy and write on each cluster are displayed both as a bar graph and in a table. The table
shows the following items:
Cluster The cluster name
Copy The amount of time inserted between internal copy operations
Write The amount of time inserted between internal write operations
Figure 9-45 TS7740 Virtualization Engine management interface Cache Utilization window
The cache utilization statistics can be selected for each individual cluster. Various aspects of
cache performance are displayed for each cluster; select them from the Select cache utilization statistics drop-down menu. The data is displayed in both bar graph and table form, and is further broken down by preference groups 0 and 1.
Restriction: The Grid Network Throughput option is not available in a stand-alone cluster.
This window presents information about cross-cluster data transfer rates. This selection will
be present only in a multi-cluster grid configuration. If the TS7700 Virtualization Engine Grid has only one cluster, there is no cross-cluster data transfer through the Ethernet adapters.
Figure 9-46 TS7700 Virtualization Engine management interface Grid Network Throughput in a four-cluster grid
The table displays data for cross-cluster data transfer performance (MBps) during the last 15
minutes. The table cells show the following items:
Cluster The cluster name
Outbound Access Data transfer rate for host operations that move data from the specified
cluster into one or more remote clusters
Inbound Access Data transfer rate for host operations that move data into the specified
cluster from one or more remote clusters
Copy Outbound Data transfer rate for copy operations that pull data out of the specified
cluster into one or more remote clusters
Copy Inbound Data transfer rate for copy operations that pull data into the specified
cluster from one or more remote clusters
For an example, see Figure 9-47. This is cache throughput plotted from VEHSTATS.
The cache throughput graph stacks the data rates of data (MBps) that is moved through
cache to show the total amount of data moved through cache. Remember, all data passed
through cache is compressed data. The graph view helps you see the ebb and flow of various
tasks in the TS7700 including host IO, housekeeping tasks (copy to other clusters,
premigrate), and recall of data from tape into cache.
For consistency, be sure you use the same coloring scheme in the graphs so that they are
more easily recognizable and familiar. It is best to plot 24 hours or less in a single chart so the
data can be read easily. All data is available in VEHSTATS, in both the HOURFLAT file and in the formatted reports. The descriptions that follow cover the columns used in both the HOURFLAT file and the formatted reports.
The graph in Figure 9-48 on page 697 shows the cache throughput for Cluster 0, which is part
of a three-cluster grid. Clusters 0 and 1 are both attached to production hosts and make
deferred copies to each other. Cluster 2 receives copies of data from both clusters 0 and 1.
This is an HA and DR three-cluster grid. The rise and decline of data flow of the host IO can
be seen in both the uncompressed Host IO rate (red line) and the compressed host IO rate
(bright green bar). When Cluster 0 turns on the DCT, to slow the copies to other clusters, both
of the cyan/light blue bars are smaller. When Cluster 1 turns on the DCT, the light purple bar
becomes smaller in size. These cyan/light blue and light purple bars increase in size when the
DCT is turned off or is only turned on sporadically during an interval. The premigration activity
is shown with the yellow bars.
False-end-of-batch phenomenon
The false-end-of-batch phenomenon (circled at the right in Figure 9-48) is triggered when the
compressed host I/O rate drops off and the Deferred Copy Throttle (DCT) is turned off. In the
graph the host I/O drops off during the 20:45 interval, after a high host I/O rate during the
20:15 and 20:30 intervals. The lower host I/O rate triggers the TS7700 to turn off the DCT,
unleashing the deferred copies. The cyan bars account for over 250 MBps of cache
throughput at 20:45 through 21:15. When the host wants to push more I/O, the TS7700 is
slow to react to reapply the DCT. This is a chicken and the egg scenario: The host wants to
send more I/O but the TS7700 sees low activity so it concentrates on housekeeping.
Eventually the Host I/O increases enough to allow the thresholds and controls to moderate
the housekeeping tasks. You can see the host I/O rate ramp up starting about 21:45 and the
deferred copies diminish. Again, at 22:45 the host I/O falls off and the DCT is turned off. This
has been labeled the false-end-of-batch indication.
You can adjust the DCT threshold (default of 100 MBps). The algorithm that decides when to
turn off the DCT is enhanced so that it waits for a sustained period of lower Host IO activity
before turning off the DCT. This causes the DCT to remain in effect during brief dips in host IO
activity.
In addition, the DCT threshold adjustment allows the threshold to be lowered, keeping the
DCT applied longer. This should help to allow the false-end-of-batch to occur, but keep the
bandwidth open for host IO longer. The Host Console Request can be used to set the
Deferred Copy Throttling (DCT) Threshold using the SETTING, THROTTLE, DCTAVGTD
keywords. Be careful when adjusting the DCT threshold because setting the threshold lower
can cause a larger build-up in deferred copies. You might want to adjust the Preferred
Pre-Migration threshold and Pre-Migration Throttling threshold to a higher value.
As the DCT value is lowered towards 30 ms, host throughput is affected somewhat and deferred copy performance improves somewhat. Around 30 ms, both host throughput and deferred copy performance are affected more significantly. If the DCT needs to be
adjusted from the default value, the initial recommended DCT plus network latency value is 70 - 85 ms. Favor 70 ms if you are more concerned about deferred copy performance, or 85 ms if you are more concerned about sacrificing host throughput. The DCT value should take the grid one-way latency into account. For example, with a one-way latency of 5 ms and a desired combined value of 70 ms, set the DCT value to 65 ms. After adjusting the DCT, monitor the host throughput and the Deferred Copy Queue to see whether the desired balance of host throughput and deferred copy performance has been achieved. Lowering the DCT can improve deferred copy performance at the expense of host throughput.
The DCT value can be set using the Host Console Request command. The setting of this
throttle is discussed in detail in the IBM Virtualization Engine TS7700 Series z/OS Host
Command Line Request User's Guide which is available from the Techdocs website using the
keywords SETTING, THROTTLE, DCOPYT.
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
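For example, commands similar to the following might be used to set the DCT value to 65 ms and to lower the DCT threshold to 75 MBps. This is a sketch only: distlib is a placeholder for the distributed library name, and the exact keyword order, values, and units should be verified in the Host Command Line Request User's Guide.
LIBRARY REQUEST,distlib,SETTING,THROTTLE,DCOPYT,65
LIBRARY REQUEST,distlib,SETTING,THROTTLE,DCTAVGTD,75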
The TS7700 historical statistics, which are available through BVIR and VEHSTATS, show the
amount of non premigrated data at the end of each reporting interval. This value is also
available on the TS7700 management interface as a point-in-time statistic. Two host warning
messages, low and high, can be configured for the TS7700 using the Host Console Request
function. See IBM Virtualization Engine TS7700 Series z/OS Host Command Line Request
User's Guide, which is available on the Techdocs website. Use the following keywords:
SETTING, ALERT, RESDHIGH
SETTING, ALERT, RESDLOW
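For example, the two alert thresholds might be set with commands similar to the following sketch, where distlib and the values (amounts of non-premigrated data) are placeholders; verify the exact syntax and units in the User's Guide.
LIBRARY REQUEST,distlib,SETTING,ALERT,RESDLOW,500
LIBRARY REQUEST,distlib,SETTING,ALERT,RESDHIGH,1000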
You might want to adjust this threshold lower to provide a larger gap between this threshold
and the Pre-Migration Throttling threshold. Do this if you want the gap to be larger but do not
want to raise the Premigration Throttling Threshold. This threshold can be raised, along with
the Pre-Migration Throttling Threshold, to defer premigration until after a peak period. This
can improve the host IO rate because the premigration tasks are not ramped up as soon with
lower threshold. This trades off an increased amount of un-premigrated data for a higher host
IO rate during heavy production periods. The Preferred Pre-Migration Threshold is set using
the Host Console Request function. The setting of this threshold is discussed in detail in the
IBM Virtualization Engine TS7700 Series z/OS Host Command Line Request User's Guide,
which is available on the Techdocs website. Use the keywords SETTING, CACHE,
PMPRIOR.
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
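For example, a command similar to the following sketch might be used to raise the Preferred Pre-Migration Threshold, where distlib and the value are placeholders; verify the exact syntax and units in the User's Guide.
LIBRARY REQUEST,distlib,SETTING,CACHE,PMPRIOR,1750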
You might decide to turn off Host Write Throttling because of immediate copies taking too
long (if having the immediate copies take longer is acceptable). However, avoid the 40-minute
limit where the immediate copies are changed to immediate-deferred. In grids where a large
portion of the copies are immediate, better overall performance has been seen when the Host
Write Throttle because of immediate copies is turned off. You are trading off host IO for length
of time required to complete an immediate copy. The enabling and disabling of the host write
throttle because of immediate copies is discussed in detail in the IBM Virtualization Engine
TS7700 Series z/OS Host Command Line Request User's Guide, which is available on
Techdocs. Use the keywords SETTING, THROTTLE, ICOPYT.
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
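For example, a command similar to the following sketch might be used to turn off the host write throttle because of immediate copies, where distlib is a placeholder and the DISABLE value is an assumption; verify the exact syntax in the User's Guide.
LIBRARY REQUEST,distlib,SETTING,THROTTLE,ICOPYT,DISABLE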
Back-End drives
Ensuring enough back-end drives are available is important. Below are general guidelines for
the number of back-end drives versus the number of performance increments installed in the
TS7740. If there are insufficient back-end drives, the performance of the TS7740 will suffer.
As a guideline, use the ranges listed in Table 9-1 of back-end drives based on the host
throughput configured for the TS7740. The lower number of drives in the ranges is for
scenarios that have few recalls, whereas the upper number is for scenarios that have
numerous recalls. Remember, these are guidelines, not rules.
Grid links
This section describes the grid link performance and the setting of the performance warning
threshold.
The TS7700 uses the TCP/IP protocol for moving data between each cluster. In addition to
the bandwidth, there are other key factors that impact the throughput which the TS7700 can
achieve. Factors that directly affect performance are as follows:
Latency between the TS7700s
Network efficiency (packet loss, packet sequencing, and bit error rates)
Network switch capabilities
Flow control to pace the data from the TS7700s
Inter-switch link (ISL) capabilities such as flow control, buffering, and performance
The TS7700s attempt to drive the network links at the full 1-Gb rate for the two or four 1-Gbps
links, or at the highest possible load at the two 10-Gbps links, which might be much higher
than the network infrastructure is able to handle. The TS7700 supports IP flow control frames so that the network can pace the rate at which the TS7700 attempts to drive it.
The best performance is achieved when the TS7700 is able to match the capabilities of the
underlying network, resulting in fewer dropped packets.
Important: When the system attempts to give the network more data than it can handle, the network begins to throw away the packets it cannot handle. This process causes TCP to stop, resynchronize, and resend data, resulting in a much less efficient use of the network.
To maximize network throughput, you must ensure the following items regarding the
underlying network:
The underlying network must have sufficient bandwidth to account for all network traffic
expected to be driven through the system - eliminate network contention.
The underlying network must be able to support flow control between the TS7700s and
the switches, allowing the switch to pace the TS7700 to the WAN's capability.
Flow control between the switches is also a potential factor, to ensure that they themselves
are able to pace with each other's rate.
Be sure that performance of the switch is capable of handling the data rates expected from
all of the network traffic.
The Grid Link Degraded threshold also includes two other values that can be set by the SSR:
Number of degraded iterations: The number of consecutive five-minute intervals in which link degradation was detected before an attention message is reported. The default value is 9.
Generate Call Home iterations: The number of consecutive five-minute intervals in which link degradation was detected before a Call Home is generated. The default value is 12.
Use the default values unless you are receiving intermittent warnings. If you receive intermittent warnings, the default values might not be appropriate; change the threshold and have the SSR change the iteration values. For example, for clusters in a two-cluster grid that are 2000 miles apart with a round-trip latency of approximately 45 ms, the normal variation seen is 20 - 40%. In that example, the threshold was set to 50% and the iterations to 12 and 15.
Also, if you have limited network bandwidth (for example, less than 100 Mbps), decreasing the number of concurrent copy tasks can reduce the overhead allocated and consumed simultaneously. This can help prevent copies from hitting the three-hour copy timeout and the deferred copy tasks from being retried indefinitely.
Values can be set for the number of concurrent RUN copy threads and the number of
Deferred copy threads. The allowed values for the copy thread count is from 5 to 128. The
default value is 20 for clusters with two 1 Gbps Ethernet links, and 40 for clusters with four
1 Gbps or two 10 Gbps Ethernet links. Use the following parameters with the LIBRARY
command:
SETTING, COPYCNT, RUN
SETTING, COPYCNT, DEF
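For example, commands similar to the following sketch might set both thread counts to 40, where distlib is a placeholder; verify the exact syntax in the z/OS Host Command Line Request User's Guide.
LIBRARY REQUEST,distlib,SETTING,COPYCNT,RUN,40
LIBRARY REQUEST,distlib,SETTING,COPYCNT,DEF,40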
Reclaim threshold
The reclaim threshold directly affects how much data is moved during each reclaim operation.
The default setting is 10% for each pool. Customers tend to raise this threshold too high
because they want to store more data on their stacked volumes. The result is that reclaim
operations must move larger amounts of data and consume back-end drive resources that
are needed for recalls and premigration. After a reclaim task is started, it does not free up its
back-end drives until the volume being reclaimed is empty. Table 9-2 shows the reclaim
threshold and the amount of data that must be moved, depending on the stacked tape
capacity and the reclaim percentage. When the threshold is reduced from 40% to 20% only
half of the data needs to be reclaimed, thus cutting the time and resources needed for reclaim
in half.
Table 9-2   Amount of data to be moved at each reclaim threshold, by stacked volume capacity

Stacked volume capacity     10%        20%        30%        40%
300 GB                      30 GB      60 GB      90 GB      120 GB
The limit on the number of premigration drives per pool is set in the TS7740 MI. For Copy Export pools, it is advisable to set the maximum number of premigration drives appropriately. If you are exporting a small amount of data each day (one or two cartridges worth of data), limit the premigration drives to two. If more data is being exported, set the maximum to four. This setting limits the number of partially filled export volumes. Look at the MB or GB written to a pool, compute the MBps, compute the maximum and average, and from that compute the number of premigration drives per pool, basing the number of drives on 50 - 70 MBps per drive. For example, a pool that peaks at about 200 MBps of premigration data would need three or four premigration drives.
Both point-in-time statistics and historical statistics are recorded. The PIT records present
data from the most recent interval, providing speedometer-like statistics. The historical
statistics provide data that can be used to observe historical trends.
These statistical records are available to a host through the BVIR facility. See 9.9, “Bulk
Volume Information Retrieval” on page 711 for more information about how to retrieve the
statistics records and 9.10, “IBM Tape Tools” on page 727 for more information about how to
format and analyze these records.
Each cluster in a grid has its own set of PIT and historical statistics for both the vNode and
hNode.
For a complete description of the records, see the IBM Virtualization Engine TS7700 Series Statistical Data Format White Paper Version 2.0, which is available on the Techdocs website
at the following URL:
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/Techdocs
Point-in-time statistics
The data provided by this type of statistics is a snapshot of the TS7700 Virtualization Engine
activity over the last 15-second interval. Data from each new 15-second interval overlays the prior interval’s data, but not all data is updated every 15 seconds (primarily hNode data). These statistics contain single-cluster and multi-cluster grid information.
A variable number of data records are returned depending on the number of vNodes and
hNodes (if in a multi-cluster grid configuration).
Each record has a common header that contains the following information:
Record length
Record version
Record type
Node and distributed library ID
Time stamp
Machine type, model, and hardware serial number
Code level
Remember:
hNode HSM does not establish a relationship to the Hierarchical Storage Manager
(DFSMShsm), the z/OS storage management software.
For TS7720 Virtualization Engine statistics, because physical tape drives are not used,
there is no information provided for functions related to back-end drive activity such as
migration, recall and reclamation.
Historical statistics
The data provided by this type of statistics is captured over a 15-minute interval in the TS7700
Virtualization Engine. Data from each new 15-minute interval does not overlay the prior interval’s data. However, not all data is updated every 15 minutes. These statistics contain single and
multi-cluster grid information. Up to 96 intervals per day can be acquired, and each day a file
is generated that contains the historical statistics for that day. The historical statistics are kept
in the TS7700 Virtualization Engine for 90 rolling days.
You can obtain the complete set or a subset of these historical statistics through the
appropriate BVIR request (for more details, see “Historical statistics” on page 708). The
request will specify the day or the days of needed historical statistics. For the current day,
records up to the last 15-minute interval are returned. The data returned is not in a human
readable format: It is primarily binary data. Therefore, use the following parameter:
DCB=(RECFM=U,BLKSIZE=24000)
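For example, the DD statement used to read the response volume might look similar to the following sketch, where the data set name is a placeholder:
//BVIRIN   DD DSN=HLQ.BVIR.HISTSTAT,DISP=OLD,
//            DCB=(RECFM=U,BLKSIZE=24000)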
The record length depends on the record type. For more information about the format of the
statistics records, see the white paper IBM Virtualization Engine TS7700 Series Statistical
Data Format White Paper Version 2.0 on the Techdocs website at the following address:
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/Techdocs
VEHSTATS is a tool that is available for formatting these records into reports. For more
information about using VEHSTATS, see 9.10, “IBM Tape Tools” on page 727.
The number of records returned varies depending on the number of vNodes and hNodes (if in
a multi-cluster grid configuration).
Each record has a common header that contains the following information:
Record length
Record type
Node and distributed library ID
Time stamp
Machine type, model, and hardware serial number
Code level
Both the state and usage of each adapter port are provided:
Port interface ID
Interface data transfer rate setting (capable and actual)
Data transferred before and after compression
Selective and system reset counts
You also see the state and status of the tape volume cache. The provided statistics contain
the following information:
Usable size in GB
Throttling values
State of each cache partition
– Partition size
– Number of fast-ready, cache hit, or cache miss mounts
– Average fast-ready, cache hit, or cache miss mount times
– Number of volumes in cache by preference group
– Space occupied by the volumes in cache by preference group
Remember:
The TS7720 Virtualization Engine does not use a physical tape library. Therefore,
hNode physical library information is not available from a TS7720 Virtualization
Engine.
With a TS7740 Virtualization Engine, the attached physical tape library might have
multiple subsystems sharing the TS3500 Tape Library through ALMS partitioning.
Only data related to the partition attached to this TS7740 Virtualization Engine is
provided in the hNode library report.
With BVIR, you are able to obtain information about all of the logical volumes managed by a
TS7700 Virtualization Engine. The following data is available from a TS7700 Virtualization
Engine:
Volume Status Information
Cache Contents Information
Physical Volume to Logical Volume Mapping Information
Point-in-Time Statistics
Historical Statistics
Physical Media Pools
Physical Volume Status
Copy Audit
For more information, see the IBM Virtualization Engine TS7700 Series Bulk Volume
Information Retrieval Function User's Guide at the following URL:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101094
Historical statistics for a multi-cluster grid can be obtained from any of the clusters.
2. The request volume is again mounted, this time as a specific mount. Seeing that the
volume was primed for a data request, the TS7700 Virtualization Engine appends the
requested information to the data set. The process of obtaining the information and
creating the records to append can take up to several minutes, depending on the request
and, from a host's viewpoint, is part of the mount processing time. After the TS7700
Virtualization Engine has completed appending to the data set, the host is notified that the
mount has completed. The requested data can then be accessed like any other tape data
set.
In a JES2 environment, the JCL to perform the two steps can be combined into a single
job. However, in a JES3 environment, they must be run in separate jobs. This is because
the volume will not be demounted and remounted between job steps in a JES3
environment.
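The following JCL is a sketch of such a combined JES2 job. The data set names, the VTAPE unit name for the TS7700 logical drives, and the Julian day in the request are placeholders; see Appendix F, “Sample JCL” on page 881 for the supported samples.
//BVIRJOB  JOB (ACCT),'BVIR',CLASS=A,MSGCLASS=X
//* Step 1: Write the two BVIR request records to a scratch logical volume
//WRITEREQ EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD *
VTS BULK VOLUME DATA REQUEST
HISTORICAL STATISTICS FOR 123
/*
//SYSUT2   DD DSN=HLQ.BVIR.REQUEST,DISP=(NEW,CATLG),
//            UNIT=VTAPE,LABEL=(1,SL),
//            DCB=(RECFM=FB,LRECL=80,BLKSIZE=80)
//* Step 2: The specific mount of the same volume appends the response
//* during mount processing; copy the response to disk for later use
//READRESP EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=HLQ.BVIR.REQUEST,DISP=OLD,
//            DCB=(RECFM=U,BLKSIZE=24000)
//SYSUT2   DD DSN=HLQ.BVIR.RESPONSE,DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(5,5)),
//            DCB=(RECFM=U,BLKSIZE=24000)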
After the response data set has been written to the request logical volume, that logical volume
functions identically to any other logical volume in the subsystem. Subsequent mount
requests and read accesses to the logical volume should have no effect on its contents. Write
accesses to the logical volume will overwrite its contents. It can be returned to scratch status
and reused by any application.
In summary, the BVIR flow is as follows: a logical scratch volume is mounted and the BVIR request data is written to it; a specific mount of the same volume follows, during which processing occurs and the requested data is appended to the BVIR volume; the host then reads the BVIR data from the logical volume and demounts it, and the BVIR volume is either kept or returned to scratch.
The building of the response information requires a small amount of resources from the
TS7700 Virtualization Engine. Do not use the BVIR function to “poll” for a specific set of information, and issue only one request at a time. Certain requests, for example the volume map, might take several minutes to complete; to prevent one request from “locking out” another during that time, the TS7700 Virtualization Engine is designed to handle two concurrent requests. If more than two concurrent requests are issued, they are processed as previous requests are completed.
Although the request data is always in a human readable format, the data returned from the TS7700 Virtualization Engine can be in human readable or binary form, depending on the request. See the response sections for the specifics of the returned data.
The general format for the request/response data set is shown in Example 9-3.
Example 9-3 BVIR output format
123456789012345678901234567890123456789012345
VTS BULK VOLUME DATA REQUEST
VOLUME MAP
11/20/2008 12:27:00 VERSION 02
S/N: 0F16F LIB ID: DA01A
Record 0 (the ruler line of digits) is identical for all requests and is not part of the output; it is provided just to improve readability. Records 1 through 5 have the same layout for all requests:
Records 1 and 2 contain the data request commands.
Record 3 contains the date and time when the report was created and the version of
BVIR.
Record 4 contains both the hardware serial number and the Distributed Library ID of the
TS7700 Virtualization Engine.
Record 5 contains all blanks.
Records 6 and above contain the requested data, which differs depending on the request. The information is described here in general terms. Detailed information about these records can be found in the IBM Virtualization Engine
TS7700 Series Bulk Volume Information Retrieval Function User's Guide at the following
URL:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101094
9.9.2 Prerequisites
Any logical volume defined to a TS7700 Virtualization Engine can be used as the
request/response volume. Logical volumes in a TS7700 Virtualization Engine are formatted
as IBM Standard Labeled volumes. Although a user can reformat a logical volume with an
ANSI Standard Label or as an unlabeled tape volume, those formats are not supported for
use as a request/response volume.
There are no restrictions regarding the prior use of a volume used as a request/response
volume and no restrictions regarding its subsequent use for any other application. Use normal
scratch allocation methods for each request (that is, use the DISP=(NEW,CATLG)
parameter). In this way, any of the available scratch logical volumes in the TS7700
Virtualization Engine can be used. Likewise, when the response volume's data is no longer
needed, the logical volume should be returned to scratch status through the normal methods
(typically by deletion of the data set on the volume and a return-to-scratch policy based on
data set deletion).
Although the request data format uses fixed records, not all response records are fixed. For
the point-in-time and historical statistics responses, the data records are of variable length
and the record format used to read them is the Undefined (U) format. See Appendix F,
“Sample JCL” on page 881 for more information.
In a multi-site TS7700 Virtualization Engine Grid configuration, the request volume must be
created on the cluster for which the data is being requested. The Management Class
assigned to the volume needs to specify the particular cluster that is to have the copy of the
request volume.
The format of the request data set records is described in the following sections.
Record 1
Record 1 must contain the command exactly as shown in Example 9-4.
The format for the request’s data set records is shown in Table 9-4.
Table 9-4 BVIR Request Record 1
Record 1: Request Identifier
Record 2
With Record 2, you can specify which data you want to obtain. The following options are
available:
VOLUME STATUS zzzzzz
CACHE CONTENTS
VOLUME MAP
POINT IN TIME STATISTICS
HISTORICAL STATISTICS FOR xxx
HISTORICAL STATISTICS FOR xxx-yyy
PHYSICAL MEDIA POOLS
PHYSICAL VOLUME STATUS VOLUME zzzzzz
PHYSICAL VOLUME STATUS POOL xx
COPY AUDIT COPYMODE INCLUDE/EXCLUDE libids
For the Volume Status and Physical Volume Status Volume requests, ‘zzzzzz’ specifies the
volume serial number mask to be used. By using the mask, one to thousands of volume
records can be retrieved for the request. The mask must be six characters in length, with the
underscore character ( _ ) representing a positional wildcard mask.
For example, assuming that volumes in the range of ABC000 through ABC999 have been
defined to the cluster, a request of VOLUME STATUS ABC1_0 would return database records
that exist for ABC100, ABC110, ABC120, ABC130, ABC140, ABC150, ABC160, ABC170,
ABC180, and ABC190.
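Following the conventions of the examples in this chapter, the complete request data set for that example contains just the two request records (the ruler line is shown only for readability):
123456789012345678901234567890123456789012345
VTS BULK VOLUME DATA REQUEST
VOLUME STATUS ABC1_0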
For the Historical Statistics request, ‘xxx’ specifies the Julian day being requested. Optionally,
‘-yyy’ can also be specified and indicates that historical statistics from xxx through yyy are
being requested. Valid days are 001 through 366 (to account for leap years). For leap years, February 29 is Julian day 060 and December 31 is Julian day 366; for other years, Julian day 060 is March 1 and December 31 is Julian day 365. If historical statistics do not exist for the
day or days requested, that will be indicated in the response record (this can occur if a
request is issued for a day before the day the system was installed, day or days the system
was powered off, or after the current day before a rolling year has been accumulated). If a
request spans the end of the year, for example a request that specified as HISTORICAL
STATISTICS FOR 364-002, responses are provided for days 364, 365, 366, 001 and 002,
regardless of whether the year was a leap year.
For Copy Audit, INCLUDE or EXCLUDE is specified to indicate which TS7700s clusters in a
grid configuration are to be included or excluded from the audit. COPYMODE is an option for
taking a volume’s copy mode for a cluster into consideration. If COPYMODE is specified, a
single space must separate it from INCLUDE or EXCLUDE. The libid parameter specifies the
library sequence numbers of the distributed libraries associated with each of the TS7700
clusters either to include or exclude in the audit. The parameters are separated by a comma.
At least one libid parameter must be specified.
For the Physical Volume Status Pool request, ‘xx’ specifies the pool for which the data is to be
returned. If there are no physical volumes currently assigned to the specified pool, that will be
indicated in the response record. Data might be requested for pools 0 through 32.
For point-in-time and historical statistics requests, any additional characters provided in the
request record past the request itself are retained in the response data, but otherwise
ignored. In a TS7700 grid configuration, the request volume must be valid only on the specific
cluster the data is to be obtained from. Use a specific Management Class that has a copy
policy defined to indicate that only the desired cluster is to have a copy of the data.
Human readable appended records vary between 80 bytes and 640 bytes in length, depending on the report requested. Binary data appended records are variable in length, up to 24000 bytes. The data set is now a response data set. The
appropriate block counts in the end of file (EOF) records will be updated to reflect the total
number of records written to the volume.
These records contain the specific response records based on the request. If the request
could not be understood or was invalid, that will be indicated. The record length of each
response data is listed in Table 9-6.
CACHE CONTENTS 80
VOLUME MAP 80
After appending the records and updating the EOF records, the host that requested the
mount is signaled that the mount is complete and can read the contents of the volume. If the contents of the request volume are not valid, depending on the problem encountered, either one or more error description records are appended to the data set or the data set is left unmodified before the host is signaled that the mount completed.
All human readable response records begin in the first character position of the record and
are padded with blank characters on the right to fill out the record. All binary records are
variable in length and are not padded.
The response data set contains the request records described in 9.9.3, “Request data format” on page 714, three explanatory records (Records 3 - 5), and, starting with Record 6, the actual response to the data request.
The detailed description of the record formats of the Response Record can be found in the
following white papers:
IBM Virtualization Engine TS7700 Series Bulk Volume Information Retrieval Function
User's Guide
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101094
IBM Virtualization Engine TS7700 Series Statistical Data Format White Paper.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100829
Clarification: When records are listed in this chapter, an initial record shows
“1234567890123...”. This record does not exist, but is provided to improve readability.
The volume status information returned represents the status of the volume on the cluster the
requested volume was written to. In a TS7700 Virtualization Engine Grid configuration,
separate requests must be issued to each cluster to obtain the volume status information for
the individual clusters. Using the volume serial number mask specified in the request, a
response record is written for each matching logical volume that exists in the cluster. A
response record consists of the database fields described in the white paper.
Fields are presented in the order defined in the table and are comma-separated. The overall
length of each record is 640 bytes with blank padding after the last field as needed. The first
few fields of the record returned for VOLSER ABC123 would be as shown in Example 9-5.
Clarification: Volumes that are assigned to PG0 can also be removed from the cache,
independent of the need for cache space, as a background task within the TS7700
Virtualization Engine.
Tip: The order of removal of a volume from cache might also be influenced by other
storage constructs settings for a volume, so the order that is presented in the response
data should not be relied on to be exact.
The contents of the cache that are associated with the specific cluster that the request
volume is written to are returned in the response records. In a TS7700 grid configuration,
separate requests must be issued to each cluster to obtain the cache contents.
Remember:
The generation of the response might take several minutes to complete depending on
the number of volumes in the cache and how busy the TS7700 cluster is at the time of
the request.
The contents of the cache typically are all private volumes. However, some might have
been returned to scratch status soon after being written. The TS7700 does not filter the
cache contents based on the private or scratch status of a volume.
Even with inconsistencies, the mapping data is useful if you want to design jobs that recall
data efficiently off of physical volumes. If the logical volumes reported on a physical volume
are recalled together, the efficiency of the recalls will be increased. If a logical volume with an
inconsistent mapping relationship is recalled, it will recall correctly, but an additional mount of
a separate physical volume might be required.
The physical volume to logical volume mapping associated with the physical volumes
managed by the specific cluster the request volume is written to are returned in the response
records. In a TS7700 grid configuration, separate requests must be issued to each cluster to
obtain the mapping for all physical volumes.
Tip: The generation of the response can take several minutes to complete depending on
the number of active logical volumes in the library and how busy the TS7700 cluster is at
the time of the request.
Other than an information header, Point In Time statistics are provided in a mixture of
character and binary format fields. The record sizes and format of the statistical records are
defined in the IBM Virtualization Engine TS7700 Series Statistical Data Format White Paper
Version 2.0 available at the following URL:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100829
The response records are written in binary undefined (U) format of maximum 24000 bytes.
Tips:
If a cluster or node is not available at the time the point in time statistics are recorded,
except for the headers, all the data fields for that cluster or node will be zeroes.
The request records are written in FB format. To read the response records, use the
Undefined (U) format with a maximum blocksize of 24000. The response records are
variable in length.
Historical statistics
A TS7700 Virtualization Engine is continually logging information regarding the activities
within it. The logged information is referred to as statistical information and is recorded in two
forms, point-in-time (PIT) and historical. Historical statistics indicate the operational aspects
of the TS7700 Virtualization Engine accumulated over a 15-minute interval of time. The data
from each 15-minute interval is maintained and logged within the TS7700 Virtualization
Engine. A request for historical statistics results in a response file that contains all of the data
logged up to that point for the requested Julian day.
Other than an information header, historical statistics are provided in character and binary
format fields. The sizes and format of the statistical records are defined in the IBM
Virtualization Engine TS7700 Series Statistical Data Format White Paper Version 2.0.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100829
The historical statistics for all clusters are returned in the response records. In a TS7700 grid
configuration, this means that the request volume can be written to any cluster to obtain the information for the entire configuration.
The response records are written in binary undefined (U) format of maximum 24000 bytes.
Tips:
If a cluster or node is not available at the time the historical statistics are recorded,
except for the headers, all the data fields for that cluster or node will be zeroes.
The TS7700 Virtualization Engine retains 90 days worth of historical statistics. If you
want to keep statistics for a longer period of time, be sure you retain the logical volumes
that are used to obtain the statistics.
The request records are written in fixed block (FB) format. To read the response
records, use the undefined (U) format with a maximum blocksize of 24000. The
response records are variable in length.
For pool 0 (the Common Scratch Pool), because it contains only empty volumes, only the empty count is returned. Volumes that have been borrowed from the common pool are not included. Information is returned for the common pool and all other pools that are defined and have physical volumes associated with them.
The physical media pool information managed by the specific cluster the request volume is
written to are returned in the response records. In a TS7700 grid configuration, separate
requests must be issued to each cluster to obtain the physical media pool information for all
clusters.
The response records are written in 80 byte fixed block format. Counts are provided for each
media type associated with the pool (up to a maximum of 8).
For the Physical Volume Status requests, the overall length of each record is 400 bytes with blank padding after the last field as needed. For example, the first few fields of the record returned for VOLSER A03599 would be as follows:
A03599,2,FULL,READ-WRITE,2007-05-05-06.40.08.030061,2007-05-04-13.45.15.918473,...
Tip: The generation of the response might take several minutes to complete depending on
the number of volumes requested and how busy the TS7700 cluster is at the time of the
request.
This request performs an audit of the databases on a set of specified TS7700 Virtualization
Engine distributed libraries to determine if there are any volumes that do not have a valid copy
on at least one of them. If the COPYMODE option is specified, whether or not the volume is
supposed to have a copy on the distributed library is taken into account when determining
whether that distributed library has a valid copy. If COPYMODE is specified and the copy
policy for a volume on a specific cluster is “R” or “D”, then that cluster is considered during the
audit. If COPYMODE is specified and the copy policy for a volume on a specific cluster is “N”,
then the volume’s validity state is ignored because that cluster does not need to have a valid
copy. The request then returns a list of any volumes that do not have a valid copy, subject to the INCLUDE or EXCLUDE and COPYMODE options specified in the request.
The specified clusters might not have a copy for several reasons:
The copy policy associated with the volume did not specify that any of the clusters
specified in the request were to have a copy and the COPYMODE option was not
specified. This might be because of a mistake in defining the copy policy or because it was
intended. For example, volumes used in a disaster recovery test only need to reside on the
disaster recovery TS7700 Virtualization Engine and not on the production TS7700
Virtualization Engines. If the request specified only the production TS7700 Virtualization
Engines, all of the volumes used in the test would be returned in the list.
The copies have not yet been made from a source TS7700 Virtualization Engine to one or
more of the specified clusters. This could be because the source TS7700 Virtualization
Engine or the links to it are unavailable, or because a copy policy of deferred was specified
and a copy had not been completed when the audit was performed. In addition, one or
more of the specified clusters might have completed their copy and then had their copy
automatically removed as part of the TS7700 Virtualization Engine hybrid automated
removal policy function. Automatic removal can only take place on TS7720 Virtualization
Engine clusters in a hybrid configuration.
Each of the specified clusters contained a valid copy at one time but has since removed
them as part of the TS7700 Virtualization Engine hybrid automated removal policy
function. Automatic removal can only take place on TS7720 Virtualization Engine clusters
in a hybrid configuration.
The Copy Audit request is intended to be used for the following situations:
A TS7700 Virtualization Engine is to be removed from a grid configuration. Before its removal, you want to ensure that the TS7700 Virtualization Engines that are to remain in the grid configuration have a copy of all the important volumes that were created on the TS7700 Virtualization Engine that is to be removed.
A condition has occurred (because of a site disaster or as part of a test procedure) where
one of the TS7700 Virtualization Engines in a grid configuration is no longer available and
you want to determine which, if any, volumes on the remaining TS7700 Virtualization
Engines do not have a valid copy.
In the Copy Audit request, you need to specify which TS7700 Virtualization Engine clusters
are to be audited. The clusters are specified by using their associated distributed library ID
(this is the unique 5 character library sequence number defined when the TS7700
Virtualization Engine Cluster was installed). If more than one distributed library ID is
specified, they are separated by a comma. The following rules determine which TS7700
Virtualization Engine clusters are to be included in the audit:
When the INCLUDE parameter is specified, all specified distributed library IDs are included in the audit. All clusters associated with these IDs must be available or the audit fails.
When the EXCLUDE parameter is specified, all specified distributed library IDs are excluded from the audit. All other clusters in the grid configuration must be available or the audit fails.
Distributed library IDs specified are checked for being valid in the grid configuration. If one
or more of the specified distributed library IDs are invalid, the Copy Audit is failed and the
response will indicate the IDs that are considered invalid.
Distributed library IDs must be specified or the Copy Audit fails.
Here are examples of valid requests (assume a three-cluster grid configuration with
distributed library IDs of DA01A, DA01B, and DA01C).
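The following sketches are consistent with the COPY AUDIT request format shown earlier; verify the exact syntax in the BVIR User's Guide:
COPY AUDIT INCLUDE DA01A,DA01B
COPY AUDIT EXCLUDE DA01C
COPY AUDIT COPYMODE INCLUDE DA01A,DA01C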
On completion of the audit, a response record is written for each logical volume that did not
have a valid copy on any of the specified clusters. Volumes that have never been used, have
had their associated data deleted, or have been returned to scratch, are not included in the
response records. The record includes the volume serial number and the copy policy
definition for the volume. The VOLSER and the copy policy definitions are comma separated,
as shown in Example 9-6.
Tips:
The output for Copy Audit includes copy consistency points for up to eight TS7700
Virtualization Engine clusters. This is to provide for future expansion of the number of
clusters supported in a TS7700 Virtualization Engine Grid to the architected maximum.
Copy Audit might take more than an hour to complete depending on the number of
logical volumes that have been defined, how many clusters are configured in the grid
configuration, and how busy the TS7700 Virtualization Engines are at the time of the
request.
If the request file contains the correct number of records and the first record is correct but the
second is not, the response file will indicate in Record 6 that the request is unknown, as
shown in Example 9-7.
If the request file contains the correct number of records, the first record is correct, and the second is recognized but includes a variable that is not within the range supported for the request, the response file will indicate in Record 6 that the request is invalid, as shown in Example 9-8.
Example 9-9 lists the content of the Readme.txt file that provides basic information about the
tape tools.
IMPORTANT IMPORTANT
Program enhancements will be made to handle data format changes when they occur.
If you try to run new data with old program versions, results will be
unpredictable. To avoid this situation, you need to be informed of these
enhancements so you can stay current. To be informed of major changes to any of
the tools distributed via this ftp site, send an email message to:
In the subject, specify NOTIFY. Nothing else is required in the body of the note.
This will add you to our change distribution list.
The UPDATES.TXT file will contain a chronological history of all changes made to
the tools. You should review that file on a regular basis, at least monthly,
perhaps weekly, so you can see if any changes apply to you.
If you feel that the JCL or report output needs more explanation, please send an
mail to the address above indicating the area needing attention.
BVIRHSTx Get historical stats Creates U, VB, SMF TS7700 Statistics file
from TS7700 format
BVIRPOOL Identify available Reports all pools at once BVIR file Physical media by pool
scratch by pool
BVIRPRPT Reclaim copy export Based on active GB, not BVIR file Detail report of data on
volumes % volumes
BVIRRPT Identify VTS virtual Determine which BVIR data & CA1, Logical volumes by
volumes by owner applications or users TLMS, RMM, ZARA, jobname or dsname,
have virtual volumes CTLT logical to physical rpts
BVPITRPT Point in Time stats as Immediately available TS7700 Point in Time stats as WTO
WTO
COPYVTS Copy lvols from old Recall lvols based on BVIR data & CA1, IEBGENER to recall lvols
VTS selected application(s) TLMS, RMM, ZARA, and copy to new VTS
CTLT
DIFFEXP Identify multi-file Prevent single file from CA1, TLMS, RMM, List of files not matching file
volumes with different not allowing volume to ZARA, CTLT 1 expiration date
expiration dates return to scratch
FSRMATCH Replace Allows TapeWise and FSR records plus SMF Updated SMF 14s plus all
*.HMIGTAPE.DATASE other tools using SMF 14,15,21,30,40,etc other SMF records as they
T in SMF 14 with 14/15 data to report were
actual recalled actual recalled dataset
dsname
GETVOLS Get volsers from list of Automate input to CA1, TLMS, RMM, Volsers for requested dsns
dsns PRESTAGE ZARA, CTLT
IOSTATS Report job elapsed Show runtime SMF 30 records Job step detail reporting
times improvements
MOUNTMON Monitor mount Determine accurate Samples tape UCBs detail, summary,
pending and volume mount times and distribution, hourly,
allocations concurrent drive TGROUP, system reporting
allocations
ORPHANS Identify orphan data Clean up tool CA1, TLMS, RMM, Listing file showing all multi
sets in Tape ZARA, CTLT occurrence GDGs that
Management Catalog have not been created in
the last nn days.
PRESTAGE Recall lvols to VTS Ordered and efficient BVIR VOLUME MAP Jobs submitted to recall
lvols
SMFILTER IFASMFDP exit or E15 Filters SMF records to SMF data Records for tape activity
exit keep just tape activity. plus optional TMM or
Generates “tape” optical activity.
records to simulate
optical activity
TAPECOMP Show current tape See how well data would Logrec MDR or EREP Shift and hourly reports
compression ratios compress in VTS history file showing current read and
write compression ratios.
TAPEWISE Identify tape usage Shows UNIT=AFF, early SMF 14,15,21,30,40 Detail, summary,
improvement close, UNIT=(TAPE,2), distributions, hourly,
opportunities multi-mount, TGROUP, system reporting
DISP=MOD, recalls
TMCREUSE Identify data sets with Get candidate list for CA1, TLMS, RMM, Filter list of potential PG0
create date equal to VTS PG0 ZARA, CTLTF candidates
last ref date
VEHGRXCL Graphing package Graphs TS7700 activity VEHSTATS flat files Many graphs of TS7700
activity
VEHSCAN Dump fields in Individual field dump BVIR stats file DTLRPT for selected
historical statistics file interval
VEHSTATS TS7700 historical Show activity on and BVIRHSTx file Reports showing mounts,
performance reporting performance of TS7700 data transfer, box usage
VEPSTATS TS7700 Point in Time Snapshot of last 15 BVIRPIT data file Reports showing current
statistics seconds of activity plus activity and status
current volume status
VESYNC Synchronize TS7700 Identify lvols that need BVIR data & CA1, List of all volsers to recall
after new cluster copies TLMS, RMM, ZARA, by application
added CTLT
VOLLIST Show all active volsers Used to get a picture of CA1, TLMS, RMM, Dsname, volser, create
from tape user data set naming ZARA, CTLT date, volseq. Group name,
management catalog. conventions. See how counts by media type.
Also get volume many volumes are
counts by group, size, allocated to different
and media. application
For most tools, a text file is available. In addition, each job to run a tool contains a detailed
description on the function of the tool and parameters that need to be specified.
Important: For the IBM Tape Tools, there are no warranties, expressed or implied,
including the warranties of merchantability and fitness for a particular purpose.
To obtain the tape tools, download the ibmtools.exe file to your computer or use FTP from
Time Sharing Option (TSO) on your z/OS system to directly upload the files contained in the
ibmtools.exe file.
The ibmtools.exe file is a self-extracting .zip file that expands into four separate files:
IBMJCL.XMI Contains the execution JCL for current tape analysis tools.
IBMCNTL.XMI Contains parameters needed for job execution, but that do not need to
be modified by the user.
IBMLOAD.XMI Contains the load library for executable load modules.
IBMPAT.XMI Contains the data pattern library, which is only needed if you will be
running the QSAMDRVR utility.
The ibmtools.txt file contains detailed information about how to download and install the
tools libraries.
The updates.txt file contains all fixes and enhancements made to the tools. Review this file
regularly to determine whether any of the programs you use have been modified.
To ensure that you are not working with outdated tools, the tools are controlled through an
EXPIRE member. Every three months, a new EXPIRE value will be issued that is good for the
next 12 months. When you download the latest tools package any time during the year, you
have at least nine months remaining on the EXPIRE value. New values are issued in the
middle of January, April, July, and October.
If your IBM tools jobs stop running because the expiration date has passed, download the
ibmtools.exe file again to get the latest IBMTOOLS.JCL(EXPIRE) member.
IOSTATS
The IOSTATS tool is part of the ibmtools.exe file, which is available at the following URL:
ftp://ftp.software.ibm.com/storage/tapetool/
You can use the IOSTATS tool to measure job execution times. For example, you might want to
compare the TS7700 Virtualization Engine performance before and after configuration
changes.
IOSTATS can be run for a subset of job names for a certain period of time before the
hardware installation. SMF type 30 records are required as input. The reports list the number
of disk and tape I/O operations that were done for each job step, and the elapsed job
execution time.
With the TS7700 Virtualization Engine running in a multi-cluster grid configuration, IOSTATS
can be used for the following purposes:
To evaluate the effect of the multi-cluster grid environment, compare job execution times
before implementation of the multi-cluster grid to those after migration, especially if you
are operating in immediate copy (RUN, RUN data consistency point) mode.
To evaluate the effect of hardware upgrades, compare job execution times before and after
upgrading components of the TS7700 Virtualization Engine. For example, you might want
to verify the performance impact of a larger tape volume cache capacity or the number of
TS1130/TS1120/3592 Tape Drives.
To evaluate the effect of changing the copy mode of operation on elapsed job execution
time.
MOUNTMON
As with IOSTATS, MOUNTMON is available from the IBM Tape Tools FTP site. MOUNTMON
runs as a started task or batch job and monitors all tape activity on the system. The program
must be APF-authorized and, if it runs continuously, it writes statistics for each tape volume
allocation to SMF or to a flat file.
Based on data that is gathered from MOUNTMON, the MOUNTRPT program can report on
the following information:
How many tape mounts are necessary
How many are scratch
How many are private
How many by host system
How many by device type
How much time is needed to mount a tape
How long are tapes allocated
How many drives are being used at any given time
What is the most accurate report of concurrent drive usage
Which jobs are allocating too many drives
To convert the binary response record from BVIR data to address your requirements, you can
use the IBM tool VEHSTATS when working with historical statistics. When working with
point-in-time statistics, you can use the IBM tool VEPSTATS. See 9.10.2, “Tools download
and installation” on page 729 for specifics about where to obtain these tools. Details about
using BVIR can also be found in the IBM Virtualization Engine TS7700 Series Bulk Volume
Information Retrieval Function User's Guide. The most recently published white papers are
available at the Techdocs website by searching for TS7700 Virtualization Engine at the
following address:
http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
With the record layout of the binary BVIR response data, you can decode the binary file or
you can use the record layout to program your own tool for creating statistical reports.
Both sets of statistics can be obtained through the BVIR functions (see 9.9, “Bulk Volume
Information Retrieval” on page 711).
Because both types of statistical data are delivered in binary format from the BVIR functions,
you must translate the content into a readable format. You can do this task either manually by
using the information provided in the following documents:
IBM Virtualization Engine TS7700 Series Statistical Data Format White Paper Version
2.0.r
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100829
IBM Virtualization Engine TS7700 Series VEHSTATS Decoder
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD105477
Or you can use an existing automation tool. IBM provides a historical statistics tool called
VEHSTATS. Like the other IBM Tape Tools, the program is provided as-is, without official
support, for the single purpose of showing how the data might be reported. There is no
guarantee of its accuracy, and there is no additional documentation available for this tool.
Guidance for interpretation of the reports is available in 9.11.3, “VEHSTATS reports” on
page 735.
You can use VEHSTATS to monitor TS7700 Virtualization Engine virtual and physical
back-end tape drives, and tape volume cache activity to do trend analysis reports, based on
BVIR binary response data. The tool summarizes TS7700 Virtualization Engine activity on a
specified time basis, up to 90 days in time sample intervals of 15 minutes or one hour,
depending upon the data reported.
Figure 9-50 on page 730 highlights three files that might be helpful in reading and interpreting
VEHSTATS reports:
The TS7700.VEHSTATS.Decoder.V10.pdf file contains a description of the fields listed in the
various VEHSTATS reports.
The VEHGRXCL.txt file contains the description for the graphical package contained in
VEHGRXCL.EXE.
The VEHGRXCL.EXE file contains VEHSTATS_Model.ppt and VEHSTATS_Model.xls. You
can use these files to create graphs of cluster activity based on the flat files created with
VEHSTATS. Follow the instructions in the VEHSTATS_Model.xls file to create these graphs.
In addition to the VEHSTATS tool, sample BVIR jobs are included in the IBMTOOLS libraries.
These jobs help you obtain the input data from the TS7700 Virtualization Engine. With these
jobs, you can control where the historical statistics are accumulated for long term retention.
The TS7700 Virtualization Engine still maintains historical statistics for the previous 90 days,
but you can have the pulled statistics recorded directly to the SMF log file or continue to use
the disk flat file method. The flat files can be recorded as either RECFM=U or RECFM=VB.
Three specific jobs in IBMTOOLS.JCL are designed to fit your particular needs:
BVIRHSTS To write statistics to the SMF log file
BVIRHSTU To write statistics to a RECFM=U disk file
BVIRHSTV To write statistics to a RECFM=VB disk file
The VEHSTATS reporting program accepts any or all of the various formats of BVIR input.
Define which input is to be used through a DD statement in the VEHSTATS job. The three
input DD statements are optional, but at least one of the statements shown in Example 9-11
must be specified.
The SMF input file can contain all SMF record types kept by the user. The SMFNUM
parameter defines which record number is processed when you specify the STATSMF
statement.
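As a hedged sketch only (Example 9-11 contains the authoritative statements), the input DD statements in the VEHSTATS job might look like the following example. STATSMF is the statement named in the text; the other two DD names and all data set names are assumptions that illustrate how each BVIR collection job feeds the report step:

//* Input written by BVIRHSTS (statistics in the SMF log)
//STATSMF  DD DISP=SHR,DSN=HLQ.BVIR.SMFDATA
//* Input written by BVIRHSTU (RECFM=U disk file)
//STATSU   DD DISP=SHR,DSN=HLQ.BVIR.RECFMU
//* Input written by BVIRHSTV (RECFM=VB disk file)
//STATSVB  DD DISP=SHR,DSN=HLQ.BVIR.RECFMVB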
The fields shown in the various reports depend on which ORDER member in IBMTOOLS.JCL
is being used. Use the following steps to ensure that the reports and the flat file contain the
complete information that you want to be in the reports:
1. Review which member is defined in the ORDER= parameter in the VEHSTATS job member.
2. Verify that none of the fields that you want to see have been deactivated by an asterisk in
the first column. Example 9-12 shows sample active and inactive definitions in the
ORDERV12 member of IBMTOOLS.JCL (see the illustrative sketch after this list). The sample statements define whether you want
the amount of data in cache to be displayed in MB or in GB.
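The following sketch only illustrates the ORDER member format; the field keywords CACHE_GB and CACHE_MB are invented placeholders, and the actual keywords are listed in the ORDERV12 member and in the Decoder document. A leading asterisk deactivates a field:

*CACHE_GB     deactivated: the GB version of the cache field is suppressed
 CACHE_MB     active: the MB version of the cache field appears in the reports and flat file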
VEHSTATS gives you a large amount of information. The following list shows the most
important reports available for the TS7700 Virtualization Engine, and the results and analysis
that can help you understand the reports better:
H20VIRT: Virtual Device Historical Records
H21ADP00: vNode Adapter Historical Activity
H21ADPXX: vNode Adapter Historical Activity combined (by adapter)
H21ADPSU: vNode Adapter Historical Activity combined (total)
H30TVC1: hNode HSM Historical Cache Partition
H31IMEX: hNode Export/Import Historical Activity
H32TDU12: hNode Library Historical Drive Activity
H32CSP: hNode Library Hist Scratch Pool Activity
H32GUPXX: General Use Pools 01/02 through General Use Pools 31/32
H33GRID: hNode Historical Peer-to-Peer Activity
AVGRDST: Hrs Interval Average Recall Mount Pending Distribution
DAYMRY: Daily Summary
MONMRY: Monthly Summary
COMPARE: Interval Cluster Comparison
HOURFLAT: Hourly
DAYHSMRY: Daily flat file
Clarification: The report is provided per cluster in the grid. The report title includes the
cluster number in the DIST_LIB_ID field.
The most important fields in this report are CHANNEL BLOCKS WRITTEN FOR
BLOCKSIZES. In general, the largest number of blocks is written at a blocksize of 32768 or
higher, but this is not a fixed rule. For example, DFSMShsm writes a 16384 blocksize and
DB2 writes a 4096 blocksize. A detailed analysis of the effect of blocksize on I/O performance
is outside the scope of this book.
C) IBM REPORT=H21ADP03(10210) VNODE ADAPTOR HISTORICAL ACTIVITY RUN ON 18AUG2010 @ 8:04:29 PAGE 30
GRID#=CC001 DIST_LIB_ID= 0 VNODE_ID= 0 NODE_SERIAL=CL0ABCDE VE_CODE_LEVEL=008.006.000.0110 UTC NOT CHG
ADAPTOR 3 FICON-2 (ONLINE ) R DRAWER SLOT# 6
19JUL10MO PORT 0 MiB is 1024 based, MB is 1000 based PORT 1
RECORD GBS MB/ ----CHANNEL-------------- ----------DEVICE--------- GBS MB/ ---------CHANNEL------- ----------DEVICE-----
TIME RTE sec RD_MB MB/s WR_MB MB/s RD_MB COMP WR_MB COMP RTE sec RD_MB MB/s WR_MB MB/s RD_MB COMP WR_MB COMP
01:00:00 4 20 25827 7 49676 13 7741 3.33 19634 2.53 0 0 0 0 0 0 0 0
02:00:00 4 7 9204 2 18030 5 2100 4.38 6480 2.78 0 0 0 0 0 0 0 0
03:00:00 4 1 2248 0 4550 1 699 3.21 1154 3.94 0 0 0 0 0 0 0 0
04:00:00 4 0 0 0 69 0 0 24 2.87 0 0 0 0 0 0 0 0
05:00:00 4 0 1696 0 1655 0 550 3.08 540 3.06 0 0 0 0 0 0 0 0
06:00:00 4 9 8645 2 24001 6 3653 2.36 13589 1.76 0 0 0 0 0 0 0 0
07:00:00 4 4 6371 1 10227 2 2283 2.79 3503 2.91 0 0 0 0 0 0 0 0
08:00:00 4 2 5128 1 4950 1 2048 2.50 1985 2.49 0 0 0 0 0 0 0 0
09:00:00 4 3 6270 1 7272 2 2530 2.47 3406 2.13 0 0 0 0 0 0 0 0
The host adapter activity is summarized per adapter and as a total of all adapters. This result
is also shown in the vNode Adaptor Throughput Distribution report shown in Example 9-15.
This report summarizes the overall host throughput and shows how many one hour intervals
have shown which throughput, for example, if you look at the second line of the report data:
The throughput was 51 - 100 MBps in 191 intervals
191 intervals are 25.8% of the entire measurement period
In 90.2% of the measurement intervals, the throughput was below 100 MBps
For the duration of the report you can identify, in 15-minute increments, the following items:
The number of logical volumes to be copied (valid only for a multi-cluster grid
configuration)
The amount of data to be copied (MB)
The average age of Copy Jobs on the deferred and immediate copy queue
The amount of data (in MB) to and from the tape volume cache driven by copy activity
The amount of data (MB) copied from other clusters (inbound data) to the cluster on which
the report was executed
Tip: Analyzing the report shown in Example 9-19, you see three active clusters with
write operations from a host. This might not be a common configuration, but it is an
example of a scenario to show the possibility of having three copies of a logical volume
in a multi-cluster grid.
Summary reports
In addition to daily and monthly summary reports per cluster, VEHSTATS also provides a
side-by-side comparison of all clusters for the entire measurement interval. Examine this
report for an overall view of the grid, and for significant or unexpected differences between the
clusters.
The following steps describe the general sequence of actions to take to produce the graphs
of your grid environment:
1. Run the BVIRHSTV program to collect the TS7700 BVIR History data for a selected
period (recommended 31 days). Run the VEHSTATS program for the period to be
analyzed (a maximum of 31 days is used).
2. Select one day during the analysis period to analyze in detail, and run the VEHSTATS
hourly report for that day. You can import the hourly data for all days and then select the
day later in the process. You also decide which cluster will be reported by importing the
hourly data of that cluster.
3. File transfer the two space-separated files from VEHSTATS (one daily and one hourly) to
your workstation.
4. Start MS-Excel and open the VEHSTATS_Model.xls workbook, which must be in the directory C:\VEHSTATS.
5. Import the VEHSTATS daily file into the "Daily data" sheet, using a special parsing option.
6. Import the VEHSTATS hourly file into the "Hourly data" sheet, using a special parsing
option. Copy 24 hours of data for your selected day and cluster and paste into the top
section of the "Hourly data" sheet.
7. Open the accompanying VEHSTATS_MODEL.PPT PowerPoint presentation and ensure
that automatic links are updated.
8. Save the presentation with a new name so as not to modify the original
VEHSTATS_MODEL.PPT.
9. Verify that the PowerPoint presentation is correct, or make any corrections necessary.
10. Break the links between the workbook and the presentation.
11. Edit or modify the saved presentation to remove blank or unneeded charts. Save the
presentation with the links broken.
The following examples of PowerPoint slides give an impression of the type of information that
is provided with the tool. You can easily update these slides and include them in your own
capacity management reports.
Agenda
• This presentation contains the following sections: In PowerPoint, right click
on the section name and then “Open Hyperlink” to go directly to the
beginning of that section.
– Overview
– Data transfer
– Virtual mounts
– Virtual mount times
– Virtual Drive and Physical Drive usage
– Physical mounts
– Physical mount times
– Data compression ratios
– Blocksizes
– Tape Volume Cache performance
– Throttling
– Multi cluster configuration (Grid)
– Import/Export Usage
– Capacities: Active Volumes and GB stored
– Capacities: Cartridges used
– Pools (Common Scratch Pool and up to 4 Storage Pools )
Figure 9-51 Sample VEHGRXCL: Agenda
Figure 9-53 Sample VEHGRXCL: Maximum and Average Throughput (Overview chart of the customer's grid for February, plotted by date)
Several of these commands (shown in bold) and their responses are listed in Example 9-20,
separated by a dashed line.
For more information, see Chapter 8, “Operation” on page 451 and z/OS DFSMS Object
Access Method Planning, Installation, and Storage Administration Guide for Tape Libraries,
SC35-0427.
LI DD,ATVIGA
RESPONSE=EGZB
CBR1220I Tape drive status: 338
DRIVE DEVICE LIBRARY ON OFFREASN LM ICL ICL MOUNT
NUM TYPE NAME LI OP PT AV CATEGRY LOAD VOLUME
5F00 3490 ATVIGA ..... N N Y N A NONE N
5F01 3490 ATVIGA ..... N N Y N A NONE N
5F02 3490 ATVIGA ..... N N Y N A NONE N
5F03 3490 ATVIGA ..... N N Y N A NONE N
5F04 3490 ATVIGA ..... N N Y N A NONE N
5F05 3490 ATVIGA ..... N N Y N A NONE N
5F06 3490 ATVIGA ..... N N Y N A NONE N
5F07 3490 ATVIGA ..... N N Y N A NONE N
5F08 3490 ATVIGA ..... N N Y N A NONE N
5F09 3490 ATVIGA ..... N N Y N A NONE N
5F0A 3490 ATVIGA ..... N N Y N A NONE N
5F0B 3490 ATVIGA ..... N N Y N A NONE N
5F0C 3490 ATVIGA ..... N N Y N A NONE N
5F0D 3490 ATVIGA ..... N N Y N A NONE N
5F0E 3490 ATVIGA ..... N N Y N A NONE N
5F0F 3490 ATVIGA ..... N N Y N A NONE N
5F10 3490 ATVIGA ..... N N Y N A NONE N
5F11 3490 ATVIGA ..... N N Y N A NONE N
5F12 3490 ATVIGA ..... N N Y N A NONE N
5F13 3490 ATVIGA ..... N N Y N A NONE N
5FFA 3490 ATVIGA ..... N N Y N A NONE N
5FFB 3490 ATVIGA ..... N N Y N A NONE N
5FFC 3490 ATVIGA ..... N N Y N A NONE N
5FFD 3490 ATVIGA ..... N N Y N A NONE N
5FFE 3490 ATVIGA ..... N N Y N A NONE N
5FFF 3490 ATVIGA ..... N N Y N A NONE N
For more information about the LIBRARY command, see Chapter 8, “Operation” on page 451
and z/OS DFSMS Object Access Method Planning, Installation, and Storage Administration
Guide for Tape Libraries, SC35-0427.
Most checks that you should make in each shift ensure that the TS7700 environment is
operating as expected. The checks that are made daily or weekly are intended for tuning and
longer-term trend analysis.
The information in this table is intended as a basis for monitoring. You can tailor the checks to
best fit your needs. The operator commands that are referenced in the table are shown after it.
All virtual drives online: LI DD,libname, displayed for each Composite Library and each system, each shift. Report or act on any missing drive.
Library online and operational: D SMS,LIB(ALL),DETAIL, displayed for each Composite Library and each system, each shift. Verify availability to systems.
Exits enabled: D SMS,OAM, displayed for each system, each shift. Report any disabled exits.
Virtual scratch volumes: D SMS,LIB(ALL),DETAIL, displayed for each Composite Library, each shift. Report each shift.
Physical scratch tapes: D SMS,LIB(ALL),DETAIL, displayed for each Composite Library, each shift. Report each shift.
Interventions: D SMS,LIB(ALL),DETAIL, displayed for each Composite Library, each shift. Report or act on any interventions.
Grid Link Status: LI REQ,libname,STATUS,GRIDLINK, displayed for each Composite Library, each shift. Report any errors or elevated Retransmit%.
Number of volumes on the deferred copy queue: TS7700 MI (Logical Volumes → Incoming Copy Queue), displayed for each cluster in the grid, each shift. Report and watch for gradual or sudden increases.
Copy queue depths: TS7700 MI, displayed for each system, each shift. Report if the queue depth is higher than usual.
Data distribution: BVIRPOOL job, weekly. Watch for a healthy distribution; use for reclaim tuning.
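For reference, the operator commands behind the shift checks take the following form, where libname is the library name that is defined to SMS:

D SMS,OAM
D SMS,LIB(ALL),DETAIL
LI DD,libname
LI REQ,libname,STATUS,GRIDLINK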
The volume removal policy for hybrid grid configurations is available in any grid
configuration that contains at least one TS7720 cluster. Because the TS7720 “Disk-Only”
solution has a maximum storage capacity that is the size of its tape volume cache, after the
cache fills, this policy allows logical volumes to be automatically removed from cache while
a copy is retained within one or more peer clusters in the grid. When the auto-removal
starts, all volumes in the fast-ready (scratch) category are removed first because these
volumes are intended to hold temporary data. This mechanism could remove old volumes
in a private category from the cache to meet a predefined cache usage threshold if a copy of
the volume is retained on one of the remaining clusters. A TS7740 cluster failure can affect
the availability of old volumes (no logical volumes are removed from a TS7740 cluster).
If a logical volume is written on one of the TS7700 Virtualization Engine Clusters in the
grid configuration and copied to the other TS7700 Virtualization Engine Cluster, the copy
can be accessed through the other TS7700 Virtualization Engine Cluster. This access is
subject to volume ownership.
At any time, a logical volume is “owned” by a cluster. The owning cluster has control over
access to the volume and changes to the attributes associated with the volume (such as
category or storage constructs). The cluster that has ownership of a logical volume can
surrender it dynamically to another cluster in the grid configuration that is requesting a
mount of the volume.
When a mount request is received on a virtual device address, the TS7700 Virtualization
Engine Cluster for that virtual device must have ownership of the volume to be mounted or
must obtain the ownership from the cluster that currently owns it. If the TS7700
Virtualization Engine Clusters in a grid configuration and the communication paths
between them are operational (Grid Network), the change of ownership and the
processing of logical volume related commands are transparent with regard to the
operation of the TS7700 Virtualization Engine Cluster.
Guidance: The links between the TSSCs must not be the same physical links that are
also used by cluster grid Gigabit links. AOTM should have a different network to be able
to detect that a missing cluster is actually down, and that the problem is not caused by
a failure in the grid Gigabit WAN links.
If AOTM has been enabled by the SSR and a TS7700 Virtualization Engine Cluster cannot obtain
ownership from the other TS7700 Virtualization Engine Cluster because it does not get a
response to an ownership request, a check is made through the TSSCs to determine whether
the owning TS7700 Virtualization Engine Cluster is inoperable or whether only the
communication paths to it are not functioning. If the TSSCs determine that the owning
TS7700 Virtualization Engine Cluster is inoperable, they enable either read or write ownership
takeover, depending on what was set by the CE.
AOTM enables an ownership takeover mode only after a grace period, and it can only be
configured by an IBM SSR. Therefore, jobs can fail in the interim, with an option to retry.
The scenarios described in the following sections are based on the IBM
Virtualization Engine TS7700 Series Grid Failover Scenarios white paper, which was written
to assist IBM specialists and customers in developing such testing plans. The
white paper is available at the following address:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100831
The white paper documents a series of TS7700 Virtualization Engine Grid failover test
scenarios for z/OS that were run in an IBM laboratory environment. Single failures of all major
components and communication links and some multiple failures are simulated.
Figure 10-2 Grid test configuration for a two-cluster grid (a z/OS host attached to TS7700-0 and TS7700-1, each with a TSSC)
For the Automatic Takeover scenarios, a TSSC attached to each of the TS7700 Virtualization
Engine Clusters is required, as is an Ethernet connection between the TSSCs. Although all the
components tested were local, the results of the tests should be similar, if not the same, for
remote configurations. All FICON connections were direct, but again, the results should be
valid for configurations utilizing FICON directors. Any supported level of z/OS software, and
current levels of TS7700 Virtualization Engine and TS3500 Tape Library microcode, should all
provide similar results. The test environment was MVS/JES2. Failover capabilities are the
same for all supported host platforms, although host messages differ and host recovery
capabilities might not be supported in all environments.
For the tests, all host jobs are routed to the virtual device addresses associated with TS7700
Virtualization Engine Cluster 0. The host connections to the virtual device addresses in
TS7700 Virtualization Engine Cluster 1 are used in testing recovery for a failure of TS7700
Virtualization Engine Cluster 0.
An IBM Support team should be involved in the planning and execution of any failover tests. In
certain scenarios, intervention by a service support representative (SSR) might be needed to
initiate failures or restore “failed” components to operational status.
Clarification: The following scenarios were tested using TS7740 Virtualization Engine
Clusters with attached TS3500 Tape Libraries. The scenarios also apply to the TS7720
Virtualization Engine as long as they are limited to virtual volume management and grid
communication.
Figure 10-4 Failure of both links between the TS7700 Virtualization Engine Clusters
Figure 10-5 Failure of a link between TS7700 Virtualization Engine Clusters with remote mounts
Figure 10-6 Failure of both links between TS7700 Virtualization Engine Clusters with remote mounts
Tip: Although the data resides on TS7700-1, if it was mounted on TS7700-0 when the
failure occurred, it is not accessible through the virtual device addresses on TS7700-1
because ownership transfer cannot occur.
Figure 10-7 Failure of the local TS7700 Virtualization Engine Cluster 0
Figure 10-8 Failure of both links between TS7700 Virtualization Engine Clusters with Automatic Takeover
Failures related to Cluster 0 and Cluster 1 are already described in the previous scenarios of
this chapter. This scenario considers what to do when both links to Cluster 2 have failed and
the only shared component from Cluster 0 and Cluster 1 to Cluster 2 is the network.
Virtual volumes are written on one cluster at the local site and copied to one cluster at the
remote site, so that a copy of a volume will exist both in Cluster 0 and Cluster 2, and in Cluster
1 and Cluster 3.
In the scenario, shown in Figure 10-10, the remote site fails. The grid WAN is operational.
With a stand-alone system, a single TS7700 Virtualization Engine Cluster is installed. If the
site at which that system is installed is destroyed, the data that is associated with the TS7700
Virtualization Engine might also have been destroyed. If a TS7700 Virtualization Engine is not
usable because of interruption of utility or communication services to the site, or significant
physical damage to the site or the TS7700 Virtualization Engine itself, access to the data that
is managed by the TS7700 Virtualization Engine is restored through automated processes
designed into the product.
The recovery process assumes that the only elements available for recovery are the stacked
volumes themselves. It further assumes that only a subset of the volumes is undamaged after
the event. If the physical cartridges have been destroyed or irreparably damaged, recovery is
not possible, as with any other cartridge types. It is important that you integrate the TS7700
Virtualization Engine recovery procedure into your current disaster recovery procedures.
Remember: The disaster recovery process is a joint exercise that requires your
involvement and your IBM SSR to make it as comprehensive as possible.
For many customers, the potential data loss or the recovery time required with a stand-alone
TS7700 Virtualization Engine is not acceptable. For those customers, the TS7700
Virtualization Engine Grid provides a near-zero data loss and expedited recovery time
solution. With a TS7700 Virtualization Engine multi-cluster grid configuration, two, three, or
four TS7700 Virtualization Engine Clusters are installed, typically at two or three sites, and
interconnected so that data is replicated between them. The way the two or three sites are
used then differs, depending on requirements.
In a two-cluster grid, the typical use will be that one of the sites is the local production center
and the other is a backup or disaster recovery center, separated by a distance dictated by
your company’s requirements for disaster recovery.
In a four-cluster grid, disaster recovery and high availability can be achieved by ensuring that
two local clusters keep RUN volume copies and both clusters are attached to the host. The
third and fourth remote clusters hold deferred volume copies for disaster recovery. This
configuration can also be crossed, meaning that you can run two production data centers,
each serving as a backup for the other.
The only connection between the production sites and the disaster recovery site is the grid
interconnection. There is normally no host connectivity between the production hosts and the
disaster recovery site’s TS7700 Virtualization Engine. When client data is created at the
production sites, it is replicated to the disaster recovery site as defined through outboard
policy management definitions and SMS settings.
Restriction: You can only execute a Copy Export Recovery process in a stand-alone
cluster. After recovery, you can create a multi-cluster grid by joining the recovered cluster with another
stand-alone cluster.
The following instructions for how to implement and execute Copy Export Recovery also
apply if you are running a DR test. If it is a test, it is specified in each step. For more details
and error messages related to the Copy Export function, see the IBM Virtualization Engine
TS7700 Series Copy Export Function User’s Guide white paper, which is available at the
following URL:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101092
Remember: Copy Export is not applicable to the TS7720 Virtualization Engine. In a hybrid
cluster, data can be moved from the TS7720 Virtualization Engine to the TS7740
Virtualization Engine Grid, and then you can run Copy Export on the data.
This allows data that might have existed only within a TS7740 in a hybrid configuration to be
restored while maintaining access to the still existing TS7720 clusters. This form of extended
recovery must be carried out by IBM support personnel.
For more details and error messages related to the Copy Export function, see the IBM
Virtualization Engine TS7700 Series Copy Export Function User’s Guide white paper
available at the following URL:
http://www-03.ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
You will only see the Copy Export Recovery menu item if you have been given access to
that function by the overall system administrator on the TS7700. It is not displayed if the
TS7700 is configured in a grid configuration. Contact your IBM SSR if you must recover a
TS7740 that is a member of a grid.
2. If the TS7740 determines that data or database entries exist in the cache, Copy Export
Recovery cannot be performed until the TS7740 is empty. Figure 10-13 shows the window
that opens, informing you that the TS7740 contains data that must be erased.
Figure 10-13 Copy Export Recovery window with erase volume option
3. Ensure that you are using the correct TS7740 (check to make sure you are logged in to
the correct TS7740) and if so, select the Erase all existing volumes before the recovery
check box and click Submit.
The TS7740 begins the process of erasing the data and all database records. As part of
this step, you are logged off from the management interface.
5. After waiting about one minute, log in to the management interface. Because the TS7740
is performing the erasure process, the only selection that is available through the Service
& Troubleshooting menu is the Copy Export Recovery State window. Select that window
to follow the progress of the erasure process.
6. To update the Copy Export Recovery State window, click Refresh. You can log off from the
management interface by clicking Logout. Logging off from the MI does not terminate the
erasure process. You can log back into the MI at a later time, and if the erasure process is
still in progress, the window provides the latest status. If the process had completed, the
Copy Export Recovery State menu item is not available.
The following tasks are listed in the task detail window as the erasure steps are being
performed:
– Taking the TS7700 offline.
– The existing data in the TS7700 database is being removed.
– The existing data in the TS7700 cache is being removed.
– Cleanup (removal) of existing data.
– Requesting the TS7700 go online.
– Copy Export Recovery database cleanup is complete.
After the erasure process has completed, the TS7740 returns to its online state and you
can continue with Copy Export Recovery.
7. Starting with an empty TS7740, you must perform several setup tasks by using the MI that
is associated with the recovery TS7740 (for many of these tasks, you might only have to
verify that the settings are correct because the settings are not deleted as part of the
erasure step):
a. Verify or define the VOLSER range or ranges for the physical volumes that are to be
used for and after the recovery. The recovery TS7740 must know what VOLSER
ranges it owns. This step is done through the MI that is associated with the recovery
TS7740.
b. If the copy exported physical volumes were encrypted, set up the recovery TS7740 for
encryption support and have it connected to an external key manager that has access
to the keys used to encrypt the physical volumes. If you will write data to the recovery
At this time, load the copy exported physical volumes into the library. Multiple sets of
physical volumes have likely been exported from the source TS7740 over time. All of the
exported stacked volumes from the source TS7740 must be loaded into the library. If
multiple pools were exported and you want to recover with the volumes from these pools,
load all sets of the volumes from these pools. However, be sure that the VOLSER you
provided is from the latest pool that was exported so that it has the latest overall database
backup copy.
Important:
Before continuing the recovery process, be sure that all the copy-exported physical
volumes have been added. Any volumes not known to the TS7740 when the
recovery process continues will not be included and can lead to errors or problems.
You can use the Physical Volume Search window from the MI to verify that all
inserted physical volumes are known to the TS7740.
Do not add any physical scratch cartridges at this time. You can do that after the
Copy Export recovery operation has completed and you are ready to bring the
recovery TS7740 online to the hosts.
11.The TS7740 begins the recovery process. As part of this step, you are logged off from the
management interface.
The following tasks are listed in the task detail window as the Copy Export Recovery steps
are performed:
– Taking the TS7700 offline.
– The requested recovery tape XXXXXX is being mounted on device YYY.
– The database backup is being retrieved from the specified recovery tape XXXXXX.
– The requested recovery tape is being demounted following retrieval of the database
backup.
– The database backup retrieved from tape is being restored on the TS7700.
– The restored database is being updated for this hardware.
– The restored database volumes are being filtered to contain the set of logical volumes
that were copy exported.
– Token ownership is being set to this cluster from the previous cluster.
– The restored database is being reconciled with the contents of cache, XX of YY
complete.
– Logical volumes are being restored on the Library Manager, XX of YY complete.
– Copy Export Recovery is complete.
– Copy Export Recovery from physical volume XXXXXX.
– Requesting the TS7700 go online.
– Loading recovered data into the active database.
– In progress.
If an error occurs, various possible error texts with detailed error descriptions can help you
solve the problem. For more details and error messages related to the Copy Export Recovery
function, see the IBM Virtualization Engine TS7700 Series Copy Export Function User’s
Guide white paper, which is available at the following URL:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101092
If everything is completed, you can vary the virtual devices online, and the tapes are ready to
read.
Tip: For more general considerations about DR testing, see 10.6, “Disaster recovery
testing considerations” on page 779.
You might also want to update the library nicknames that are defined through the
management interface for the grid and cluster to match the library names defined to DFSMS.
That way, the names shown on the management interface windows will match those used at
the host for the composite and distributed libraries. To set up the composite name used by the
host to be the grid name, select Configuration → Grid Identification Properties. In the
window that opens, enter the composite library name used by the host into the grid nickname
field. You can optionally provide a description. Likewise, to set up the distributed name, select
Configuration → Cluster Identification Properties. In the window that opens, enter the
distributed library name used by the host into the Cluster nickname field. You can optionally
provide a description. These names can be updated at any time.
The GDPS topology is a Parallel Sysplex cluster spread across two sites, with all critical data
mirrored between the sites. GDPS provides the capability to manage the remote copy
configuration and storage subsystems, automates Parallel Sysplex operational tasks, and
automates failure recovery from a single point of control, thereby improving application
availability.
Planned reconfigurations
GDPS planned reconfiguration support automates procedures performed by an operations
center. These include standard actions to:
Quiesce a system's workload and remove the system from the Parallel Sysplex cluster
(stop the system before a change window).
IPL a system (start the system after a change window).
Quiesce a system's workload, remove the system from the Parallel Sysplex cluster, and
re-IPL the system (recycle a system to pick up software maintenance). The standard
actions can be initiated against a single system or group of systems. Additionally,
user-defined actions are supported (for example, a planned site switch in which the
workload is switched from processors in site A to processors in site B).
If a z/OS system fails, the failed system will automatically be removed from the Parallel
Sysplex cluster, re-IPLed in the same location, and the workload restarted. If a processor
fails, the failed system(s) will be removed from the Parallel Sysplex cluster, re-IPLed on
another processor, and the workload restarted.
With PPRC, there will be limited or no data loss, based upon policy, because all critical data is
being synchronously mirrored from site A to site B in the event of a site failure. Limited data
loss occurs if the production systems continue to make updates to the primary copy of data
after remote copy processing is suspended (any updates after a freeze will not be reflected in
the secondary copy of data) and there is a subsequent disaster that destroys some or all of
the primary copy of data. No data loss occurs if the production systems do not make any
updates to the primary PPRC volumes after PPRC processing is suspended.
Depending on the type of application or recovery options selected by the enterprise, multiple
freeze options are supported by GDPS (the freeze is always performed to allow the restart of
the software subsystems).
The default behavior of the TS7740 in selecting which tape volume cache will be used for the
I/O is to follow the Management Class definitions and considerations to provide the best
overall job performance. It will, however, use a logical volume in a remote TS7740's tape
volume cache if required to perform a mount operation unless override settings on a cluster
are used.
Tip: The MI Copy Policy Override window is shown in Figure 10-1 on page 751.
A test can be conducted with either approach, but each one has trade-offs. The main
trade-offs for breaking the links are as follows:
On the positive side:
– You are sure that only the data that has been copied to the TS7700 Virtualization
Engine connected to the test system is being accessed.
The concern about losing data in the event of a disaster during a test is the major issue with
using the link break method. The TS7700 Virtualization Engine has several design features
that make valid testing possible without having to break the site-to-site links.
The second approach is the most practical in terms of cost. It would involve defining the
VOLSER range to be used, defining a separate set of categories for scratch volumes in the
DFSMS DEVSUP parmlib, and inserting the volume range into the test TS7700 Virtualization
Engine before the start of the test. It is important that the test volumes inserted using the
management interface are associated with the test system so that the TS7700 Virtualization
Engine at the test site will have ownership of the inserted volumes.
If the links are to be kept connected during the time when the volumes are inserted, an
important step is to make sure that the tape management system at the production site does
not accept use of the inserted volume range, and that the tape management system at the
test site does. Make the following changes (summarized in the sketch after this list):
Changes on production systems:
– Use the RMM parameter REJECT ANYUSE(TST*), which means do not use
VOLSERs named TST* here.
Changes on the DR test systems:
– Use the RMM parameter VLPOOL PREFIX(TST*) TYPE(S) to allow use of these
volumes for default scratch mount processing.
– Change DEVSUPxx to point to other categories. That would be the categories of the
TST* volumes.
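A minimal sketch of these definitions, using the TST* range from this example (the member names and the category value are the illustrative values used in this chapter):

Production system, EDGRMMxx:
  REJECT ANYUSE(TST*)

DR test system, EDGRMMxx:
  VLPOOL PREFIX(TST*) TYPE(S)

DR test system, DEVSUPxx (example category for MEDIA2):
  MEDIA2=0012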
Figure 10-19 Production host and DR test host sharing a two-cluster grid: TST* logical volumes are inserted for the DR test, production volumes (1*) are rejected for output on the DR host, and, with other tape management systems, MEDIA2=0012 is set in DEVSUPxx and CBRUXENT is disabled on all DR LPARs
After these settings are done, insert the new TST* logical volumes. Any new allocations that
are performed by the DR test system will use only the logical volumes defined for the test. At
the end of the test, the volumes can be returned to scratch status and left in the library, or
deleted if desired.
Figure 10-19 shows the DR host in a running state: the DR test itself has not started, but the
DR system must be running before the insertion can be done.
Important: Make sure that one logical unit has been or is online on the test system before
entering logical volumes. For more information, see 5.3.1, “z/OS and DFSMS/MVS
system-managed tape” on page 306.
If you require that the test host be able to write new data, you can use the selective write
protect for DR testing function that allows you to write to selective volumes during DR testing.
With Selective Write Protect, you can define a set of volume categories on the TS7700 that
are excluded from the Write Protect Mode, thus enabling the test host to write data onto a
separate set of logical volumes without jeopardizing normal production data, which remains
write-protected. This requires that the test host use a separate scratch category or categories
from the production environment. If test volumes also must be updated, the test host's private volume categories must be excluded from Write Protect Mode as well.
You must determine the production categories that are being used and then define separate,
not yet used categories on the test host using the DEVSUPxx member. Be sure you define a
minimum of four categories in the DEVSUPxx member: MEDIA1, MEDIA2, ERROR, and
PRIVATE.
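A hedged sketch of such DEVSUPxx entries on the DR host follows, using the example category values that appear later in this chapter; the values are illustrative, and you must choose categories that no other attached host uses:

MEDIA1=0031,MEDIA2=0032,
ERROR=003E,PRIVATE=003F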
In addition to the host specification, you must also define on the TS7700 those volume
categories that you are planning to use on the DR host and that need to be excluded from
Write-Protect mode.
In 10.7.1, “TS7700 two-cluster grid using Selective Write Protect” on page 788, instructions
are provided regarding the necessary definitions for DR testing with a TS7700 grid using
Selective Write Protect.
The Selective Write Protect function enables you to read production volumes and to write new
volumes from BOT while protecting production volumes from being modified by the DR host.
Therefore, you cannot modify or append to volumes in the production hosts’ PRIVATE
categories, and DISP=MOD or DISP=OLD processing of those volumes is not possible.
For example, with DFSMSrmm, you would insert these extra statements into the EDGRMMxx
parmlib member:
For production volumes in a range of A00000-A09999, add:
REJECT OUTPUT(A0*)
For production volumes in a range of ABC000-ABC999, add:
REJECT OUTPUT(ABC*)
With REJECT OUTPUT in effect, products and applications that append data to an existing
tape with DISP=MOD must be handled manually to function properly. If the product is
DFSMShsm, tapes that are filling (seen as not full) from the test system control data set must
be modified to full by issuing commands. If DFSMShsm then later needs to write data to tape,
it will require a scratch volume related to the test system’s logical volume range.
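For DFSMShsm, a hedged example of the type of command involved follows; volser and the DFSMShsm procedure name are placeholders, and you should verify the parameters for your environment before use:

F DFSMSHSM,DELVOL volser MIGRATION(MARKFULL)
F DFSMSHSM,DELVOL volser BACKUP(MARKFULL)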
Clarification: The term HSKP is used because this is typically the jobname used to
execute the RMM EDGHSKP utility which is used for daily tasks such as vital records
processing, expiration processing, and backup of the control and journal data sets. However, it
can also refer to the daily process that should be done with other Tape Management Systems.
This publication uses the term HSKP to mean the daily process on RMM or any other Tape
Management Systems.
This includes stopping any automatic short-on-scratch process, if enabled. For example,
RMM has one emergency short-on-scratch procedure.
To illustrate the implications that running the HSKP task in a DR test system has, see the
example in Table 10-1, which displays the status and definitions of one cartridge in a normal
situation.
Table 10-1 VOLSER AAAAAA before returned to scratch from the DR site
Environment DEVSUP TCDB RMM MI VOLSER
Table 10-2 VOLSER AAAAAA after returned to scratch from the DR site
Environment DEVSUP TCDB RMM MI VOLSER
Cart AAAAAA is now in scratch category 0012. This presents two issues:
If you need to access this volume from the production system, you need to change its status to
master (000F) in the MI before you can access it. Otherwise, you lose the data on the cartridge,
which can have serious consequences if you, for example, return 1000 volumes to scratch.
In the DR RMM, production cartridges are rejected for output activities. If this cartridge is
mounted in response to a scratch mount, RMM rejects it. Imagine having to mount 1000
scratch volumes because RMM rejects all of them before you get one that it validates.
If you are going to perform the test with the site-to-site links broken, then you can use the
Read Ownership Takeover mode to prevent the test system from modifying the production
site's volumes. For further information about ownership takeover, see 10.6.9, “Ownership
takeover” on page 786.
In addition to the protection options noted, you can also use the following RACF commands to
protect the production volumes:
RDEFINE TAPEVOL x* UACC(READ) OWNER(SYS1)
SETR GENERIC(TAPEVOL) REFRESH
In the command, x is the first character of the VOLSER of the volumes to protect.
After the test is finished, you have a set of tapes in the TS7700 Virtualization Engine that belong
to test activities. You need to decide what to do with these tapes. As a test ends, the RMM
database and VOLCAT will probably be destaged (with all the data used in the test), but in the
MI database, the tapes remain defined: one will be in master status and the others in scratch
status.
If the tapes will not be needed anymore, manually release the volumes and then run
EXPROC to return the volumes to scratch under RMM control. If the tapes will be used for
future test activities, you only have to manually release these volumes. The cartridges remain
in scratch status and are ready for use.
Important: Although cartridges in MI remain ready to use, you must ensure that the next
time you create the test environment that these cartridges are defined to RMM and
VOLCAT. Otherwise, you will not be able to use them.
To add flexibility, you can use the cache management functions Preference Level 0 (PG0) and
Preference Level 1 (PG1). In general, PG0 tapes are deleted first from cache. In lower activity
periods, the smallest PG0 volumes are removed, but if the TS7700 Virtualization Engine is
busy and immediately requires space, the largest PG0 volume is removed.
The default behavior for cache preference is done so that when a host is connected to both
TS7700 Virtualization Engines in a grid configuration, the effective cache size is the
combination of both TS7700 Virtualization Engine tape volume caches. This way, more mount
requests can be satisfied from the cache. These “cache hits” result in faster mount times
because no physical tape mount is required. It does have the disadvantage in that in the case
of a disaster, most of the recently copied logical volumes are not going to be resident in the
cache at the recovery site because they were copies and would have been removed from the
cache.
You can modify cache behavior by using the SETTING Host Console command in two ways (a command sketch follows this list):
COPYFSC enable/disable: When disabled, logical volumes copied into cache from a Peer
TS7700 Virtualization Engine are managed as PG0 (prefer to be removed from cache).
RECLPG0 enable/disable: When disabled, logical volumes that are recalled into cache are
managed using the actions defined for the Storage Class construct associated with the
volume as defined at the TS7700 Virtualization Engine.
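As a hedged illustration of the command form only (distlib is a placeholder for the distributed library name, and the exact keyword sequence should be confirmed in the Host Console Request documentation referenced in this book):

LI REQ,distlib,SETTING,CACHE,COPYFSC,DISABLE
LI REQ,distlib,SETTING,CACHE,RECLPG0,DISABLE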
The test host will not be able to modify the production site-owned volumes or change their
attributes. The volume appears to the test host as a write-protected volume. Because the
volumes that are going to be used by the test system for writing data were inserted through
the management interface associated with the TS7700 Virtualization Engine at the test site,
that TS7700 Virtualization Engine will already have ownership of those volumes and the test
host will have complete read and write control of them.
Important: Never enable Write Ownership Takeover mode for a test. Write Ownership
Takeover mode should only be enabled in the event of a loss or failure of the production
TS7700 Virtualization Engine.
If you are not going to break the links between the sites, then normal ownership transfer will
occur whenever the test system requests a mount of a production volume.
The best DR test is a “pseudo-real” DR test, which means stopping the production site and
starting real production at the DR site. However, stopping production is rarely realistic, so the
following scenarios assume that production must continue working during the DR test. The
negative aspect of this approach is that DR test procedures and real disaster procedures can
differ slightly.
Tips: In a DR test on a TS7700 grid, without using Selective Write Protect, with production
systems running concurrently, be sure that no return-to-scratch or emergency
short-on-scratch procedure is started in the test systems. Otherwise, you might return
production tapes to scratch, as discussed in 10.6.5, “Protecting production volumes with
DFSMSrmm” on page 782.
In a DR test on a TS7700 grid using Selective Write Protect, with production systems
running concurrently, you can use the Ignore fast ready characteristics of
write-protected categories option together with Selective Write Protect as described in
10.6.4, “Protecting production volumes with Selective Write Protect” on page 781.
The following sections describe procedures for four scenarios, depending on the TS7700
release level, grid configuration, and connection status during the test:
1. TS7700 two-cluster grid using Selective Write Protect
Describes the steps for performing a DR test by using the Selective Write Protect DR
testing enhancements. Whether the links between the clusters are broken or not is
irrelevant, as has been explained before. See 10.7.1, “TS7700 two-cluster grid using
Selective Write Protect” on page 788.
Figure 10-21 Sample DR testing scenario with TS7700 using Selective Write Protect
Clarification: You can also use the steps described in the following procedure when
performing DR testing on one cluster within a three- or four-cluster grid. To perform DR
testing on more than one host or cluster, repeat the steps in the procedure on each of the
DR hosts and clusters involved in the test.
3. Click Enable Write Protect Mode to set the cluster in Write Protect Mode.
Be sure to also leave the Ignore fast ready characteristics if write protected
categories selected. This setting ensures that volumes in Production Fast Ready
categories that are write-protected on the DR Cluster will be treated differently.
Normally, when a mount occurs to one of these volumes, the TS7700 assumes that the
host starts writing at BOT and creates a stub. Also, when Expire Hold is enabled, the
TS7700 will not allow any host access to these volumes until the hold period passes.
Therefore, if the production host returns a volume to scratch after time zero, the DR host
still believes within its catalog that the volume is private, and the host will want to validate
its contents. It cannot afford to allow the TS7700 to stub it or block access if the DR host
attempts to mount it.
The Ignore fast ready characteristics if write protected categories option informs the
DR Cluster that it should ignore these characteristics and treat the volume as a private
volume. It will then surface the data versus a stub and will not prevent access because of
expire hold states. However, it will still prevent write operations to these volumes.
Click Submit Changes to activate your selections.
4. Decide on which set of categories you want to use during DR testing on the DR hosts and
confirm that no host system is using this set of categories, for example X’0030’ to X’003F’.
You define those categories to the DR host in a later step.
On the DR Cluster TS7700 management interface, define two Fast-Ready categories as
described in 4.3.5, “Defining Fast Ready categories” on page 233. These two categories serve as the DR host's MEDIA1 and MEDIA2 scratch categories (X'0031' and X'0032' in this example).
Define the categories you have decided to use for DR testing, and make sure that
Excluded from Write Protect is set to Yes. In the example, you would define volume
categories X’0030’ through X’003F’, or, as a minimum X’0031’ (MEDIA1), X’0032’
(MEDIA2), X’003E’ (ERROR), and X’003F’ (PRIVATE).
6. On the DR Cluster, make sure that no copy is written to the Production Cluster that defines
the CCP of No Copy in the Management Class definitions that are used by the DR host.
7. On the DR host, restore your DR system.
8. Change the DEVSUPxx member on the DR host to use the newly defined DR categories.
DEVSUPxx controls installation-wide default tape device characteristics, for example:
– MEDIA1 = 0031
– MEDIA2 = 0032
– ERROR = 003E
– PRIVATE = 003F
Thus, the DR host is enabled to use these categories that have been excluded from
Write-Protect Mode in Step 5 on page 790.
Figure 10-24 shows the environment and the main tasks to perform in this DR situation.
Figure 10-24 Disaster recovery environment: two clusters and links not broken
Issues
Consider the following issues with TS7700 without using Selective Write Protect
environments:
You should not run the HSKP process at the production site unless you can run it without the
EXPROC parameter in RMM. In z/OS V1R10, the new RMM parmlib commands PRTITION
and OPENRULE provide for flexible and simple control of mixed system environments.
In z/OS V1R9 and later, you can specify additional EXPROC controls in the EDGHSKP
SYSIN file to limit the return-to-scratch processing to specific subsets of volumes.
Therefore, you could just use EXPROC on the DR system volumes on the DR system and
use the PROD volumes on the PROD system. You can still continue to run regular batch
processing and also run expiration on the DR system.
With other TMSs, you need to stop the return-to-scratch process, if possible. If not, stop
the whole daily process. To avoid problems with scratch shortage, you can add more
logical volumes.
If you run HSKP with the EXPROC parameter (or the daily process in other TMSs) at the
production site, you must not expire volumes that might be needed in the DR test. If you
do, the TS7700 Virtualization Engine sees these volumes as scratch and, with the Fast
Ready category on, presents the volume as a scratch volume, and you lose the data on
the cartridge.
Ensure that HSKP or short-on-scratch procedures are deactivated in the DR site.
What to do with these tapes depends on whether they are not needed anymore, or if the
tapes will be used for future DR test activities.
In the second case (tapes will be used in the future), run only step 1. The cartridges remain in
the scratch status and are ready for future use.
Important: Although cartridges in MI remain ready to use, you must ensure that the next
time you create the test environment that these cartridges are defined to RMM and
VOLCAT. Otherwise, you will not be able to use them.
Do not use logical drives in the DR site from the production site.
If you decide to “break” links during your DR test, you must review carefully your everyday
work. For example, if you have 3 TB of cache and you write 4 TB of new data every day, you
are a good candidate for a large amount of throttling, probably during your batch window. To
understand throttling, see 9.3.2, “Host Write Throttle” on page 644.
After the test ends, you might have many virtual volumes in pending copy status. When
TS7700 Virtualization Engine Grid links are restored, communication will be restarted, and
the first task that the TS7700 Virtualization Engine will do is make a copy of the volumes created
during your “links broken” window. This can affect TS7700 Virtualization Engine performance.
If your DR test runs over several days, you can minimize the performance degradation by
suspending copies using the GRIDCNTL Host Console Command. After your test is over, you
can enable the copy again during a low peak workload to avoid or minimize performance
degradation. See 8.5.3, “Host Console Request function” on page 589 for more information.
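A hedged sketch of the command form follows (complib is a placeholder for the composite library name; confirm the exact keywords in 8.5.3, “Host Console Request function” on page 589):

LI REQ,complib,GRIDCNTL,COPY,DISABLE    (suspend copies for the duration of the test)
LI REQ,complib,GRIDCNTL,COPY,ENABLE     (resume copies after the test)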
Figure 10-25 Disaster recovery environment: two clusters and links broken
Issues
Consider the following items:
You can run the whole HSKP process at the production site. Because communications are
broken, the return-to-scratch process cannot be completed in the DR TS7700
Virtualization Engine, so your production tapes never return to scratch in the DR site.
In this scenario, be sure that HSKP or short-on-scratch procedures are deactivated in the
DR site.
What to do with these tapes depends on whether they are not needed anymore, or if the
tapes will be used for future DR test activities.
If the tapes are not needed anymore, perform the following steps:
1. Stop the RMM address space and subsystem and, using ISMF 2.3 (at the DR site), return
to scratch all private cartridges.
2. After all of the cartridges are in the scratch status, use ISMF 2.3 again (at the DR site) to
eject all the cartridges. Keep in mind that MI can only accept 1000 eject commands at one
time. If you must eject a large number of cartridges, this process will be time consuming.
In the second case (tapes will be used in the future), run only step 1. The cartridges remain in
the scratch status and are ready for future use.
Important: Although cartridges in MI remain ready to use, you must ensure that the next
time you create the test environment that these cartridges are defined to RMM and
VOLCAT, otherwise you cannot use them.
Figure 10-26 Disaster recovery environment: three clusters and links not broken (the production and DR hosts use separate DEVSUP scratch categories, 0002 and 0012, and separate write VOLSER ranges, 1* and 2*; virtual addresses A00-AFF, B00-BFF, and C00-CFF are spread across Cluster 0, Cluster 1, and Cluster 2)
Issues
Be aware of the following issues:
You should not run the HSKP process at the production site, or you can run it without the
EXPROC parameter in RMM. In other TMSs, stop the return-to-scratch process, if
possible. If not, stop the whole daily process. To avoid problems with scratch shortage, you
can add more logical volumes.
If you run HSKP with the EXPROC parameter (or the daily process in other TMSs) at the
production site, you must not expire volumes that are needed in the DR test. If you do, the
TS7700 Virtualization Engine sees those volumes as scratch and, with the fast ready
category on, presents the volume as a scratch volume, and you lose the data on the
cartridge.
What to do with these tapes depends on whether they are no longer needed or whether they will be used for future DR test activities.
If the tapes are not needed anymore, perform the following steps:
1. Stop the RMM address space and subsystem and, using ISMF 2.3 (at the DR site), return
to scratch all private cartridges.
2. After all of the cartridges are in the scratch status, use ISMF 2.3 again (at the DR site) to
eject all the cartridges. Keep in mind that MI can only accept 1000 eject commands at one
time. If you must eject a large number of cartridges, this process will be time consuming.
In the second case (tapes will be used in the future), run only step 1. The cartridges remain in
the scratch status and are ready for future use.
Important: Although cartridges in MI remain ready to use, you must ensure that, the next time you create the test environment, these cartridges are defined to RMM and the VOLCAT. Otherwise, you cannot use them.
For a bank during its batch window, with no other alternative to bypass a 12-hour TS7700 Virtualization Engine outage, this can be a real disaster. However, if the bank has a three-cluster grid (two local and one remote), the same situation is less dire because the batch window can continue accessing the second local TS7700 Virtualization Engine.
Because no set of fixed answers exists for all situations, you must carefully and clearly define
which situations can be considered real disasters, and which actions to perform for all
possible situations.
As explained in 10.7, “Disaster recovery testing detailed procedures” on page 787, several
differences exist between a DR test situation and a real disaster situation. In a real disaster
situation, you do not have to do anything to be able to use the DR TS7700 Virtualization
Engine, which makes your task easier. However, this “easy-to-use” capability does not mean
that you have all the cartridge data copied to the DR TS7700 Virtualization Engine. If your
copy mode is RUN, you only need to consider “in-flight” tapes that are being created when the
disaster happens. You must rerun all of these jobs to re-create the tapes at the DR site. On the other hand, if your copy mode is DEFERRED, you have tapes that are not copied yet. To know which tapes are not copied, go to the MI on the DR TS7700 Virtualization Engine and find the cartridges that are still in the copy queue. After you have this information, you can use your Tape Management System to discover which data sets are missing, and rerun the jobs to re-create these data sets at the DR site.
After you are able to start z/OS partitions, from the TS7700 Virtualization Engine perspective,
you must be sure that your HCD configuration “sees” the DR TS7700 Virtualization Engine.
Otherwise, you will not be able to put the TS7700 Virtualization Engine online.
You must change ownership takeover also. To perform that task, go to the MI interface and
allow ownership takeover for read and write.
None of the other changes that you made for your DR test are needed now. Production tape ranges, scratch categories, SMS definitions, the RMM inventory, and so on, already reflect the real configuration because they are on DASD copied from the primary site.
Perform the following changes because of the special situation that a disaster merits:
Change your Management Class to obtain a dual copy of each tape created after the
disaster.
Depending on situation, consider using the Copy Export capability to move one of the
copies outside the DR site.
After you are in a stable situation at the DR site, you need to start the tasks required to
recover your primary site or create a new one. The old DR site is now the production site, so
you need to create a new DR site, which is beyond the scope of this book.
Part 4 Appendixes
This part offers management and operation information for your TS7700.
Exception: This appendix provides a general description of feature codes and where they apply. When planning for an upgrade, see the TS7700 Virtualization Engine
Information Center at:
http://www.ibm.com/support/docview.wss?rs=1166&uid=ssg1S7001552
Clarification: The symbol “†” means that specific feature has been withdrawn.
The following procedures are for a first-time TS7700 Virtualization Engine tape library
installation:
1. Analyze the TS7700 Virtualization Engine targeted workloads to determine the
appropriate TS7700 Virtualization Engine configuration for your environment. The
BatchMagic tool can be helpful in making this determination. Have your IBM System
Storage Specialist help you through this step.
Important: This is the crucial step for configuring your TS7700 Virtualization Engine
correctly, and should be done in conjunction with your IBM System Storage Specialist.
2. Check the latest TS7700 Virtualization Engine and IBM System Storage TS3500 Tape
Library Systems Assurance Product Review (SAPR) Guides. Again, your IBM System
Storage Specialist can be instrumental in this task.
3. Order stacked volume media or labels.
Important: Verify with the vendor of your tape management product that the level of software installed on your system supports the TS3500 and the TS7700 Virtualization Engine.
Important: Do not use the DEVSUPxx default categories. Avoiding the use of the
defaults will make future partitioning of the library with other systems easier and
more secure.
– COMMNDxx
If you want your library automatically brought online after each IPL, add the VARY SMS,LIBRARY command (see the sketch after this list). You can also bring your library online automatically through ISMF when you define your library: set the Initial Online Status to YES.
– GRSCNFxx
If the tape library will be shared among two or more systems in a DFSMS complex, a
global resource serialization ring can be created to include all sharing systems. This
step enables OAM to serialize the cartridge entry process.
– LOADxx (optional)
The default data set name for the TCDB is SYS1.VOLCAT.VGENERAL. If you want to use this name, no change is required to the LOADxx member. If you want to use a different HLQ from the default, update columns 64 - 71 of the SYSCAT statement with the preferred HLQ.
– COFVLFxx
Add the volume catalogs to the IGGCAS class definition, together with your other ICF catalogs (see the sketch after this list).
– ALLOCxx
Add tape automation policies.
– IECIOSxx
Set Missing Interrupt Handler values. This value should be 45 minutes for the TS7700
Virtualization Engine logical drives.
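For reference, the following is a minimal sketch of the kind of entries these members can contain, assuming a library named LIBVTS1 and a logical device range of 1000-10FF (both placeholders; substitute your own names and addresses):
COMMNDxx:  COM='VARY SMS,LIBRARY(LIBVTS1),ONLINE'
COFVLFxx:  CLASS NAME(IGGCAS)
                 EMAJ(SYS1.VOLCAT.VGENERAL)
IECIOSxx:  MIH TIME=45:00,DEV=(1000-10FF)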
10.If your IBM System Service Representative (SSR) has installed your TS7700 Virtualization
Engine, perform the following definitions at the management interface:
– Define stacked volume ranges (see 4.3.3, “Defining VOLSER ranges for physical
volumes” on page 217 for more information).
Remember: If you are sharing the tape library (non-hard partitioned), use IDCAMS to
IMPORT CONNECT the VOLCAT to the other sharing systems.
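As a sketch of that step (the catalog name, the catalog volume serial CATVOL, and the device type are placeholders; adjust them to your environment), an IMPORT CONNECT job on each sharing system can look similar to the following:
//CONNECT  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  IMPORT OBJECTS( -
         (SYS1.VOLCAT.VGENERAL -
          VOLUMES(CATVOL) -
          DEVICETYPES(3390))) -
         CONNECT
/*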
Resources: The following manuals are helpful in setting up your security environment:
z/OS DFSMSdfp Storage Administrator Reference, SC35-0422
z/OS DFSMSrmm Implementation and Customization Guide, SC26-7405
z/OS MVS Planning: Operation, SC22-7601
After OAM is started, the library is online, and the host has even a single path initialized for communication with the library, that host is capable of doing cartridge entry processing. The library drives do not have to be online for the path to be usable. If this library is being partitioned with other host systems, it is important that each system's cartridge entry exit (CBRUXENT), usually supplied by the tape management system, is coded and link-edited first to prevent the host from entering volumes that do not belong to it.
24.Insert logical volumes (see “Inserting logical volumes” on page 254 for more information).
25.The TS7700 Virtualization Engine is now ready for use.
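At this point, you can quickly verify the definitions from the host with standard operator commands, for example (LIBVTS1 is a placeholder library name):
D SMS,LIB(ALL),DETAIL
DS QL,LIST
V SMS,LIB(LIBVTS1),ONLINE
The first command displays the status of the composite and distributed libraries, the second lists the library IDs in the active configuration, and the third brings a library online if it is not online already.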
Tip: For help in navigating the ISMF screens, and detailed information pertaining to the
definition of the DFSMS constructs, see z/OS DFSMSdfp Storage Administrator
Reference, SC35-0422.
A complete set of checklists is available in the Planning section of the TS7700 Virtualization Engine Information Center at:
http://publib.boulder.ibm.com/infocenter/ts7700/cust/index.jsp
The following checklists are based on the installation instructions for the IBM Customer Engineer and provide a practical list of the items to be defined.
For instance, if you are installing a stand-alone TS7700 Virtualization Engine, make it Cluster 0 (you will only need the information regarding Cluster 0 in the checklist). If new clusters are added, they will be Cluster 1, 2, and so on.
The checklists show the maximum configuration supported at this time. You only need to fill in the information corresponding to your configuration.
Important: There might be additional Tape Control Units, Virtual Tape Subsystems, or
Open Tape drives sharing the same TS3500. Be careful to not disrupt a working
environment when checking or configuring the TS3500.
Multiple TS7740 clusters can be connected to the same TS3500, provided that each one is assigned to its own logical library in the TS3500.
Cluster 0: 3957-V06, 3584-Lxx
Cluster 1: 3957-V06, 3584-Lxx
Cluster 2: 3957-V06, 3584-Lxx
Cluster 3: 3957-V06, 3584-Lxx
Cluster 4: 3957-V06/V07, 3584-Lxx
Cluster 5: 3957-V06/V07, 3584-Lxx
TS3500 Tape Library Ethernet Network Configuration Method and TS3500 Tape Library Ethernet IP Address (used for TS3500 Tape Library web specialist access). The network configuration method is specified by your LAN administrator; it is either Fixed IP or Dynamic Host Configuration Protocol (DHCP). Fixed IP is recommended. The Ethernet ports are 10/100 Mb only. If the Network Configuration Method is DHCP, the fixed IP fields are not used.
Cluster 0 Lib SN: _________ DHCP [ ] or Fixed IP [___.___.___.___] Subnet [___.___.___.___] Gateway [___.___.___.___]
Cluster 1 Lib SN: _________ DHCP [ ] or Fixed IP [___.___.___.___] Subnet [___.___.___.___] Gateway [___.___.___.___]
Cluster 4 Lib SN: _________
Cluster 5 Lib SN: _________
TS3500 Tape Library Logical Library Name for each TS7740 cluster attachment. Note that these logical libraries probably belong to separate physical libraries.
Cluster 0: __________
Cluster 1: __________
Cluster 2: __________
Cluster 3: __________
Cluster 4: __________
Cluster 5: __________
Each IBM Virtualization Engine TS7740 must be connected to a single TS3500 Tape Library logical library. This logical library must have a name, which should have been assigned when the logical library was created. Record the logical library name (assign it if necessary). The logical library name will be needed when performing the following tasks:
Configuring the logical library.
Obtaining the Starting Element Address for the logical library.
Obtaining the physical position of tape drives within the logical library.
Obtaining the WWNNs of those tape drives.
Setting the Cartridge Assignment Policy.
Configuring ALMS.
Restriction: A TS7740 cluster must have a minimum of 4 and a maximum of 16 drives connected.
List the drives in the worksheet in ascending order by frame and row. This helps clarity and avoids possible mistakes during installation or troubleshooting.
To obtain the WWNN using the Operator window on the TS3500, press Menu > Vital Product Data > World Wide Node Names.
To obtain the WWNN using the TS3500 web interface, click DRIVES > Drive Assignment > World Wide Names.
Tape Drive Physical Positions (Fn, Rnn) in the TS3500 Tape Library for the drives assigned to this TS7700 Virtualization Engine. Mark the drives that will be control paths, and record the final two digits of the WWNN for each selected drive (watch out for duplicates). Read the notes at the end of this entry for guidance in filling in the values.
Cluster 0: [3584-Lxx S/N_____], where F = Frame, R = Row, CP = Control Path
Drives 1 - 16: F____, R____, CP [_], WWNN _____
Notes: List the drives for a stand-alone cluster in order, using the frame and row. The lowest numbered drive should be in the lowest numbered frame and row for its assigned drive slots. This is not a requirement, but it might help avoid confusion when identifying drives during future troubleshooting. Distribute the tape drives across TS3500 Tape Library frames to improve availability. Drives can reside in the same frame as the FC switches or within three frames of the frame containing the switches. Distribute the control paths among those frames. To obtain the WWNN from the Operator window, press Menu > Vital Product Data > World Wide Node Names.
Fill out the cluster numbers, the associated volume serial ranges, and the media type for each range. Also fill out the Distributed Library Sequence Number that is associated with each cluster in the proper field, and the planned Home Pool for each range if you want separate pools (if no requirement exists, the default Scratch Pool is 00). You can also mark whether encryption will be adopted for a specific pool (not valid for pool 00).
Important: Each TS7700 Virtualization Engine Cluster must have a single, unique
value for the Distributed Library Sequence Number. For the TS7700 Virtualization
Engine, a typical value is the last 5 digits of the 3952-F05 frame serial number.
Home Pool (also called Scratch Pool): You might have assigned a Home Pool value. If one
has not been set, the default value is 00.
Encrypted pool: Encryption can be controlled by pools. If you want, mark the pool enabled
for encryption.
Restriction: Certain requirements must be met before enabling encryption. See the TS7700 Virtualization Engine Information Center, Chapter 4, “Hardware implementation” on page 189, and Chapter 5, “Software implementation” on page 283 for more details about encryption.
Composite Library Sequence Number This five-character name must be the same on
all clusters (peers) within the same grid. This
identifier is specified in the TS7700
Virtualization Engine configuration. It is
required even if the machine is not in a grid
configuration.
The Composite Library Sequence Number
must differ from the Distributed Library
Sequence number specified.
Customer IP addresses (virtual, primary, and alternate), one entry per cluster (Cluster 0 through Cluster 5): ____.____.____.____
Customer Gateway (used with the virtual, primary, and alternate IP addresses):
Cluster 0: ____.____.____.____
Cluster 1:
____.____.____.____
Cluster 2:
____.____.____.____
Cluster 3:
____.____.____.____
Cluster 4:
____.____.____.____
Cluster 5:
____.____.____.____
Customer Subnet Mask (used with the virtual, primary, and alternate IP addresses):
Cluster 0: ____.____.____.____
Cluster 1:
____.____.____.____
Cluster 2:
____.____.____.____
Cluster 3:
____.____.____.____
Cluster 4:
____.____.____.____
Cluster 5:
____.____.____.____
NTP server IP address (if used): ____.____.____.____
Using the NTP server ensures that all components have consistent time settings. The TCP/IP address you obtain from the customer is either the NTP server at the customer site (if the customer maintains one locally) or an Internet server. Use of an Internet server assumes that the customer allows access to the Internet on the NTP services port (TCP/IP port 123).
Cluster 1:
J1A Emulation Mode [_]
E05 Native Mode [_]
E06/EU6 Mode [_]
Cluster 2:
J1A Emulation Mode [_]
E05 Native Mode [_]
E06/EU6 Mode [_]
Cluster 3:
J1A Emulation Mode [_]
E05 Native Mode [_]
E06/EU6 Mode [_]
Cluster 4:
J1A Emulation Mode [_]
E05 Native Mode [_]
E06/EU6 Mode [_]
Cluster 5:
J1A Emulation Mode []
E05 Native Mode [_]
E06/EU6 Mode [_]
The grid interfaces require connections using the following TCP/IP ports:
7 (Ping)
9 (Discard Service - for bandwidth measuring tools)
123 (Network Time Protocol)
350 (Distributed Library to Distributed Library File Transfer)
1415 (WebSphere message queues grid to grid)
1416 (WebSphere message queue HDM to HDM)
Cluster 4
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Cluster 5
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Cluster 4
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Cluster 5
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
For a 3957-V07 with dual-ported grid adapters and FC1034 (which enables the second port of the 1 Gb copper or fiber grid adapters), fill out the tables corresponding to the third and fourth grid links (Table C-7).
Cluster 4
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Cluster 5
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Fourth Grid Interface IP address. The Fourth Grid Interface is the 1 Gb Ethernet adapter located in Drawer 1, Slot 1, Port 1, and is present only on the 3957-V07 machine.
Cluster 0
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Cluster 1
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Cluster 2
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Cluster 3
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Cluster 4
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Cluster 5
I/P: ___.___.___.___
Subnet: ___.___.___.___
Gateway: ___.___.___.___
Clarification: Cascade Deferred means one or more clusters do not have host FICON
connections enabled in normal operation. There is no need to use AOTM on a cluster
that does not have host FICON connections enabled in normal operation.
See “The Autonomic Ownership Takeover Manager (AOTM)” of the Virtualization Engine
TS7700 Installation Roadmap for use with IBM Systems Storage TS3500, IBM 3584 Tape
Library, PN 23R7608 for more information about AOTM before you continue. Do not
attempt to configure AOTM, but use the information to make an informed decision about
whether to use AOTM.
If you do not want to use the AOTM, leave the table blank.
The TSSC Grid Interface is only used for the AOTM.
Each cluster can be configured to use AOTM to provide ownership takeover for one
cluster.
TSSC Grid Interface IP Address. The TSSC Grid Interface is used to allow the TSSC at one cluster to communicate with the TSSC at one other cluster. This is required if the AOTM will be used.
Cluster 0: ____.____.____.____
Cluster 1:
____.____.____.____
Cluster 2:
____.____.____.____
Cluster 3:
____.____.____.____
Cluster 4:
____.____.____.____
Cluster 5:
____.____.____.____
TSSC Grid Interface Subnet Mask. Specify the subnet mask to use with the grid interface IP address.
Cluster 0: ____.____.____.____
Cluster 1:
____.____.____.____
Cluster 3:
____.____.____.____
Cluster 4:
____.____.____.____
Cluster 5:
____.____.____.____
TSSC Grid Interface Gateway. Specify the gateway IP address to use with the grid interface IP addresses.
Cluster 0: ____.____.____.____
Cluster 1:
____.____.____.____
Cluster 2:
____.____.____.____
Cluster 3:
____.____.____.____
Cluster 4:
____.____.____.____
Cluster 5:
____.____.____.____
Takeover Mode. The Takeover Mode must be either Read Only Takeover (ROT) or Write Only Takeover (WOT).
Cluster 0:
ROT [_]
WOT [_]
Cluster 1:
ROT [_]
WOT [_]
Cluster 2:
ROT [_]
WOT [_]
Cluster 3:
ROT [_]
WOT [_]
Cluster 4:
ROT [_]
WOT [_]
Cluster 5:
ROT [_]
WOT [_]
Grace Period (Minutes). When a cluster detects that another cluster in the same grid has failed, it will wait for the number of minutes specified as the Grace Period before it attempts to take over the volumes of the failed cluster. A typical Grace Period is 25 minutes.
Cluster 0:
Cluster 1:
Cluster 2:
Cluster 4:
Cluster 5:
Retry Period (Minutes). The Retry Period is the number of minutes between attempts to take over ownership of the volumes associated with a failed cluster. A typical Retry Period is five minutes. In most cases, this will be the same for both clusters.
Cluster 0:
Cluster 1:
Cluster 2:
Cluster 3:
Cluster 4:
Cluster 5:
These examples provide all of the necessary information to install any possible configuration
in an IBM System Storage TS3500 Tape Library. For more basic information about the
products in these scenarios, see the following publications:
z/OS JES3 Initialization and Tuning Guide, SA22-7549
z/OS JES3 Initialization and Tuning Reference, SA22-7550
DFSMS has support that provides JES3 allocation with the appropriate information to select a
TS3500 Tape Library device by referencing device strings with a common name among
systems within a JES3 complex.
To set up a TS3500 Tape Library in a JES3 environment, perform the following steps:
1. Define library device groups (LDGs). Prepare the naming conventions in advance. Clarify
all the names for the library device groups that you need.
2. Include the esoteric names from step 1 in the hardware configuration definition (HCD) and
activate the new EDT.
3. Update the JES3 INISH deck:
a. Define all devices in the TS3500 Tape Library through DEVICE statements.
b. Set JES3 device names through the SETNAME statement.
c. Define which device names are subsets of other device names through the high water
mark setup name (HWSNAME) statement.
All TS3500 Tape Library units can be shared between processors in a JES3 complex. They
must also be shared among systems within the same SMSplex.
Restriction: Tape drives in the TS3500 Tape Library cannot be used by JES3 dynamic
support programs (DSPs).
Define all devices in the libraries through DEVICE statements. All TS3500 Tape Library tape
drives within a complex must be either JES3-managed or non-JES3-managed. Do not mix
managed and non-managed devices. Mixing might prevent non-managed devices from being used for new data set allocations and reduce device eligibility for existing data sets, resulting in allocation failures or delays during job setup.
Neither JES3 nor DFSMS verifies that a complete and accurate set of initialization statements is defined to the system. Incomplete or inaccurate TS3500 Tape Library definitions can result in jobs failing to be allocated.
The DFSMS JES3 support requires LDGs to be defined to JES3 for SETNAME groups and
HWSNAME names in the JES3 initialization statements. During converter/interpreter (C/I)
processing for a job, the LDG names are passed to JES3 by DFSMS for use by MDS in
selecting library tape drives for the job. Unlike a JES2 environment, a JES3 operating
environment requires the specification of esoteric unit names for the devices within a library.
These unit names are used in the required JES3 initialization statements.
The only reason to code the LDG definitions in HCD as esoteric names is to support the HWSNAME definitions in the JES3 INISH deck.
Each device within a library must have exactly four special esoteric names associated with it.
These names are:
The complex-wide name is always LDGW3495. It allows you to address every device and
device type in every library.
The library-specific name is an eight-character string composed of the prefix LDG followed by the five-digit library identification number. It allows you to address every device and device type in that specific library.
The complex-wide device type, shown in Table D-1, defines the various device types that
are used. It contains a prefix of LDG and a device type identifier. It allows you to address a
specific device type in every tape library.
Table D-1 Library device groups: Complex-wide device type specifications
Device type Complex-wide device type definition
3490E LDG3490E
3592-J1A LDG359J
3592-E05 LDG359K
The library-specific device type name combines a device type prefix (LD plus a device identifier, for example LDJ, LDK, or LDE) with the five-digit library identification number. It allows you to address a specific device type in a specific tape library. In a stand-alone grid and a Multi Cluster TS7700 Virtualization Engine Grid installed in two physical libraries, there is still only one library-specific device name; the LIBRARY-ID of the Composite Library is used.
The letters CA in the XTYPE definition tell us that this is a CARTRIDGE device.
DEVICE,XTYPE=(LB13592J,CA),XUNIT=(1100,*ALL,,OFF),numdev=4
DEVICE,XTYPE=(LB13592K,CA),XUNIT=(1104,*ALL,,OFF),numdev=4
DEVICE,XTYPE=(LB13592M,CA),XUNIT=(0200,*ALL,,OFF),numdev=4
Restriction: TS3500 Tape Library tape drives cannot be used as support units by JES3
dynamic support programs (DSPs). Therefore, do not specify DTYPE, JUNIT, and JNAME
parameters on the DEVICE statements. No check is made during initialization to prevent
TS3500 Tape Library drives from definition as support units, and no check is made to
prevent the drives from allocation to a DSP if they are defined. Any attempt to call a tape
DSP by requesting a TS3500 Tape Library fails, because the DSP is unable to allocate a
TS3500 Tape Library drive.
SETNAME statement
The SETNAME statement is used for proper allocation in a JES3 environment. For tape
devices, it tells JES3 which tape device belongs to which library. The SETNAME statement
specifies the relationships between the XTYPE values (coded in the DEVICE Statement) and
the LDG names (Figure D-2). A SETNAME statement must be defined for each unique
XTYPE in the device statements.
SETNAME,XTYPE=LB1359K,
NAMES=(LDGW3495,LDGF4001,LDG359K,LDKF4001)
In the NAMES list, the four entries are the complex-wide library name, the library-specific library name, the complex-wide device type name, and the library-specific device type name.
Tip: Do not specify esoteric and generic unit names, such as 3492, SYS3480R, and
SYS348XR. Also, never use esoteric names such as TAPE and CART.
(Figure: Example configuration. Library F4001 contains 4 x 3592-J1A (UADD 1100-1103) and 4 x 3592-E05 (UADD 1104-1107); library F4006 contains 4 x 3592-E05 (UADD 2000-2003) and 4 x 3592-E05 (UADD 2004-2007).)
Complex-wide device type: one definition for each installed device type:
LDG359J Represents the 3592-J1A devices
LDG359K Represents the 3592-E05 devices
Library-specific device type: one definition for each device type in each library:
LDJF4001 Represents the 3592-J1A in library F4001
LDKF4001 Represents the 3592-E05 in library F4001
LDKF4006 Represents the 3592-E05 in library F4006
DEVICE,XTYPE=(LB13592J,CA),XUNIT=(1000,*ALL,,OFF),numdev=4
DEVICE,XTYPE=(LB13592K,CA),XUNIT=(1104,*ALL,,OFF),numdev=4
DEVICE,XTYPE=(LB23592L,CA),XUNIT=(2000,*ALL,,OFF),numdev=8
SETNAME,XTYPE=(LB13592J,CA),NAMES=(LDGW3495,LDGF4001,LDG359J,LDJF4001)
SETNAME,XTYPE=(LB13592K,CA),NAMES=(LDGW3495,LDGF4001,LDG359K,LDKF4001)
SETNAME,XTYPE=(LB23592L,CA),NAMES=(LDGW3495,LDGF4006,LDG359K,LDKF4006)
HWSNAME,TYPE=(LDGW3495,LDGF4001,LDGF4006,LDG359J,LDG359K,LDJF4001,LDKF4001,LDKF4006)
HWSNAME,TYPE=(LDGF4001,LDJF4001,LDKF4001,LDG359J)
HWSNAME,TYPE=(LDGF4006,LDKF4006)
HWSNAME,TYPE=(LDJF4001,LDG359J)
HWSNAME,TYPE=(LDG359J,LDJF4001)
HWSNAME,TYPE=(LDG359K,LDKF4001,LDGF4006,LDKF4006)
Library 3 has a LIBRARY-ID of 22051 and only a TS7700 Virtualization Engine installed with a
Composite Library LIBRARY-ID of 13001.
(Figure: Library configuration for the second example. Library F4001 contains 8 x 3592-J1A (UADD 1100-1107) and 8 x 3592-E05 (UADD 1107-110F); library F4006 contains 8 x 3592-E05 (UADD 2000-2007) and 8 x 3592-E06 (UADD 4000-4007); a third library has LIBRARY-ID 22051. The TS7700 grids provide 256 x 3490E virtual drives (UADD 0100-01FF), with Composite Library IDs 47110 and 13001.)
Library-specific name: LDGF4001, LDGF4006, LDG13001, LDG47110. One definition for each library and for each Stand-Alone Cluster TS7700 Virtualization Engine Grid. For a Single or Multi Cluster TS7700 Virtualization Engine Grid, only the Composite Library LIBRARY-ID is specified.
Library-specific device type: one definition for each device type in each library, except for the Multi Cluster TS7700 Virtualization Engine Grid:
LDE13001 Represents the virtual drives in the Stand-Alone Cluster TS7700 Virtualization Engine Grid in library 22051
LDE47110 Represents the virtual drives in the Multi Cluster TS7700 Virtualization Engine Grid in libraries F4001 and F4006
LDJF4001 Represents the 3592-J1A in library F4001
LDKF4001 Represents the 3592-E05 in library F4001
LDLF4006 Represents the encryption-enabled 3592-E05 in library F4006
LDMF4006 Represents the 3592-E06 in library F4006
DEVICE,XTYPE=(LB13592J,CA),XUNIT=(1100,*ALL,,OFF),numdev=8
DEVICE,XTYPE=(LB13592K,CA),XUNIT=(1107,*ALL,,OFF),numdev=8
DEVICE,XTYPE=(LB2359M,CA),XUNIT=(4000,*ALL,,OFF),numdev=8
DEVICE,XTYPE=(LB2359L,CA),XUNIT=(2000,*ALL,,OFF),numdev=8
DEVICE,XTYPE=(LB3GRD1,CA),XUNIT=(3000,*ALL,,OFF),numdev=256
DEVICE,XTYPE=(LB12GRD,CA),XUNIT=(0110,*ALL,S3,OFF)
DEVICE,XTYPE=(LB12GRD,CA),XUNIT=(0120,*ALL,S3,OFF)
DEVICE,XTYPE=(LB12GRD,CA),XUNIT=(0130,*ALL,S3,OFF)
DEVICE,XTYPE=(LB12GRD,CA),XUNIT=(0140,*ALL,S3,OFF)
DEVICE,XTYPE=(LB12GRD,CA),XUNIT=(0111,*ALL,S3,OFF)
DEVICE,XTYPE=(LB12GRD,CA),XUNIT=(0121,*ALL,S3,OFF)
DEVICE,XTYPE=(LB12GRD,CA),XUNIT=(0131,*ALL,S3,OFF)
DEVICE,XTYPE=(LB12GRD,CA),XUNIT=(0141,*ALL,S3,OFF)
...
DEVICE,XTYPE=(LB12GRD,CA),XUNIT=(01FF,*ALL,S3,OFF)
The same restriction applies to the virtual devices of the clusters of a multi-cluster grid
configuration. If you want to balance the workload across the virtual devices of all clusters,
do not code the NUMDEV parameter.
SETNAME,XTYPE=LB1359J,NAMES=(LDGW3495,LDGF4001,LDG359J,LDJF4001)
SETNAME,XTYPE=LB1359K,NAMES=(LDGW3495,LDGF4001,LDG359K,LDKF4001)
SETNAME,XTYPE=LB2359L,NAMES=(LDGW3495,LDGF4006,LDG359L,LDKF4006)
SETNAME,XTYPE=LB2359M,NAMES=(LDGW3495,LDGF4006,LDG359M,LDMF4006)
SETNAME,XTYPE=LB3GRD1,NAMES=(LDGW3495,LDG13001,LDG22051,LDG3490E,LDE22051,LDE13001)
SETNAME,XTYPE=LB12GRD,NAMES=(LDGW3495,LDG47110,LDG3490E,LDE47110)
HWSNAME,TYPE=(LDGW3495,LDGF4001,LDGF4006,LDG13001,LDG47110,LDG3490E,
LDG359J,LDG359K,LDG359L,LDG359M,LDE13001,LDE47110,LDJF4001,
LDKF4001,LDLF4006,LDMF4006)
HWSNAME,TYPE=(LDGF4001,LDJF4001,LDKF4001)
HWSNAME,TYPE=(LDGF4006,LDLF4006,LDMF4006)
HWSNAME,TYPE=(LDG47110,LDE47110)
HWSNAME,TYPE=(LDG13001,LDE13001)
HWSNAME,TYPE=(LDG3490E,LDE47110,LDE13001)
HWSNAME,TYPE=(LDG359J,LDJF4001)
HWSNAME,TYPE=(LDG359K,LDKF4001)
HWSNAME,TYPE=(LDG359L,LDLF4006)
HWSNAME,TYPE=(LDG359M,LDMF4006)
Figure D-10 High watermark setup statements for the second example
Processing changes
Although no JCL changes are required, a few processing restrictions and limitations are
associated with using the TS3500 Tape Library in a JES3 environment:
JES3 spool access facility (SPAF) calls are not used.
Two calls, one from the prescan phase and the other from the locate processing phase, are made to the new DFSMS/MVS support module, as shown in Figure D-11 on page 858.
Figure D-11 shows the JES3 processing phases for C/I and MDS. The processing phases
include the support for system-managed direct access storage device (DASD) data sets.
The major differences between TS3500 Tape Library deferred mounting and tape mounts for
non-library drives are:
Mounts for non-library drives by JES3 are only for the first use of a drive. Mounts for the
same unit are issued by z/OS for the job. All mounts for TS3500 Tape Library drives are
issued by z/OS.
If all mounts within a job are deferred because there are no non-library tape mounts, that
job is not included in the setup depth parameter (SDEPTH).
MDS mount messages are suppressed for the TS3500 Tape Library.
DFSMS/MVS system-managed tape devices are not selected using the UNIT parameter in
the JCL. For each DD request requiring a TS3500 Tape Library unit, a list of device pool
names is passed, and from that list, an LDG name is assigned to the DD request. This results
in an LDG name passed to JES3 MDS for that request. Device pool names are never known
externally.
Selecting UNITNAMEs
For a DD request, the LDG selection is based on the following conditions:
When all devices in the complex are eligible to satisfy the request, the complex-wide
LDGW3495 name is used.
When the list of names contains names of all devices of one device type in the complex,
the corresponding complex-device type name (for example, LDG3490E) must be used.
When the list of names contains all subsystems in one TS3500 Tape Library, the
library-specific LDG name (in the examples, LDGF4001, LDGF4006, and so on) is used.
When the list contains only subsystems for a specific device type within one TS3500 Tape
Library, the LDG device type library name (in the example, LDKF4001, and so on) is used.
DFSMS-managed DISP=MOD data sets are assumed to be new during update locate processing. If a catalog locate determines that the data set is old based on the VOLSER specified, a new LDG name is determined based on the rules for old data sets.
DFSMS catalog services, a subsystem interface call to catalog locate processing, is used for
normal locate requests. DFSMS catalog services is invoked during locate processing. It
invokes SVC 26 for all existing data sets when DFSMS is active. Locates are required for all
existing data sets to determine whether they are DFSMS-managed, even if VOL=SER= is specified in the JCL.
Fetch messages
While TS3500 Tape Library cartridges are mounted and demounted by the library, fetch
messages to an operator are unnecessary and can be confusing. With this support, all fetch
messages (IAT5110) for TS3500 Tape Library requests are changed to be the non-action
informational USES form of the message. These messages are routed to the same console
destination as other USES fetch messages. The routing of the message is based on the
UNITNAME.
MDS processing also determines which processors are eligible to execute a job based on
resource availability and connectivity in the complex.
z/OS allocation interfaces with JES3 MDS during step allocation and dynamic allocation to
get the JES3 device allocation information and to inform MDS of resource deallocations. z/OS
allocation is enhanced by reducing the allocation path for mountable volumes. JES3 supplies
the device address for the TS3500 Tape Library allocation request through an SSI request to
JES3 during step initiation when the job is executing under the initiator. This support is not
changed from previous releases.
DFSMS/MVS and z/OS provide all of the TS3500 Tape Library support except for the
interfaces to JES3 for MDS allocation and processor selection.
JES3 MDS continues to select tape units for the TS3500 Tape Library. MDS no longer uses
the UNIT parameter for allocation of tape requests for TS3500 Tape Library requests.
Restriction: An LDG name specified as a UNITNAME in JCL can be used only to filter
requests within the ACS routine. Because DFSMS/MVS replaces the externally specified
UNITNAME, it cannot be used to direct allocation to a specific library or library device type.
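For example, a fragment of a storage class ACS routine that filters on LDG names can look like the following sketch (the storage class name SCTAPE1 and the filter list contents are placeholders):
PROC STORCLAS
  FILTLIST LIBF4001 INCLUDE('LDGF4001','LDKF4001')
  IF &UNIT = &LIBF4001 THEN
    SET &STORCLAS = 'SCTAPE1'
END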
All components within z/OS and DFSMS/MVS request tape mounting and demounting inside
a TS3500 Tape Library. They call a DFP service, library automation communication services
(LACS), instead of issuing a write to operator (WTO), which is done by z/OS allocation, so all
mounts are deferred until job execution. The LACS support is called at that time.
MDS allocates an available drive from the available unit addresses for LDGW3495. It passes
that device address to z/OS allocation through the JES3 allocation SSI. At data set OPEN
time, LACS are used to mount and verify a scratch tape. When the job finishes with the tape,
either CLOSE or deallocation issues a demount request through LACS, which removes the
tape from the drive. MDS does normal breakdown processing and does not need to
communicate with the TS7700.
Consider the following aspects, especially if you are using a multi-cluster grid with more than
two clusters and not all clusters contain copies of all logical volumes:
Retain Copy Mode setting
If you do not copy logical volumes to all of the clusters in the grid, JES3 might, for a
specific mount, select a drive that does not have a copy of the logical volume. If Retain
Copy Mode is not enabled on the mounting cluster, an unnecessary copy might be forced
according to the Copy Consistency Points that are defined for this cluster in the
Management Class.
Copy Consistency Point (CCP)
Copy Consistency Point has one of the largest influences on which cluster’s cache is used
for a mount. The CCP of Rewind/Unload (R) takes precedence over a CCP of Deferred
(D). For example, assuming each cluster has a consistent copy of the data, if a virtual
device on Cluster 0 is selected for a mount and the CCP is RD, then the CL0 cache will be
selected for the mount. However, if the CCP is DR, CL1’s cache will be selected.
For workload balancing, consider specifying “DD” rather than “RD”. This will more evenly
distribute the workload to both clusters in the grid.
You can find detailed information about these settings and other workload considerations in
Chapter 4, “Hardware implementation” on page 189 and Chapter 9, “Performance and
Monitoring” on page 635.
SDAC, also known as Hard Partitioning, can isolate and secure environments with various requirements and objectives, shielding them from unintended or malicious interference between hosts. In the TS7700 Virtualization Engine R2.0, partitioning is accomplished by granting access to defined ranges of logical volumes only to selected groups of devices, at logical control unit granularity (also referred to as LIBPORT-ID).
This section gives an example of a real implementation of this function, going through the steps necessary to keep the Production (named PROD) and Test (named TEST) environments isolated from each other even though they share the same TS7700 Virtualization Engine two-cluster grid.
Hardware must be used and managed as effectively as possible to protect your investment. One way to do so is to use the same hardware for more than one sysplex or host. This section describes points to consider, and common areas where technical competencies beyond the storage team might need to be involved within the IT organization for proper sizing and planning. It also gives practical guidelines for aspects of the project such as naming conventions and checklists. The solution is based on standard functions from z/OS, DFSMSrmm, and RACF, and on functions available in the TS7700 two-cluster grid. A similar implementation can be done in any single- or multi-cluster grid configuration.
The setup must be as complete as possible and established in a way that provides the best possible protection against unauthorized access to logical volumes dedicated to the other partition, that is, against access to PROD logical volumes from TEST and to TEST logical volumes from PROD.
The function Selective Device Access Control (SDAC), introduced with R2.0, is exploited in
this appendix. It can be ordered as Feature Code 5271.
Other requirements that must be agreed on before implementation include the following:
Acceptance of sharing the two-cluster grid
Specific security requirements
Bandwidth needed per host
Number of FICON channels per host
Acceptance of shared or dedicated FICON channels
Number of logical units needed per host
Tape security in RACF: Volume related, dataset related, or both
Establish defined naming conventions before making your definitions. This makes it easier to
logically relate all definitions and structures to one host when updates are needed.
Figure E-1 gives an overview of the setup. Updates are needed in many places. Adapt your current naming standards to your setup.
Figure E-1 Overview of the logical partitioning setup. Both hosts need HCD definitions, PARMLIB definitions, and other minor updates. z/OS PROD uses logical volume range A00000-A99999, device ranges 1000-10DF and 2000-20DF, and Fast Ready category 0012. z/OS TEST uses logical volume range B00000-B99999, device ranges 10E0-10FF and 20E0-20FF, and Fast Ready category 0022.
Definitions in HCD
In HCD you will define the devices needed for each of the hosts and connect them to the
LPARS. This case study defines 28 out of 32 control units for PROD (2*224 logical devices)
and 4 out of 32 control units (2*32 logical devices) for TEST. The devices for Cluster 0 have
addresses from 1000-10FF, and for Cluster 1 the values are 2000-20FF.
Normally you can activate the new definitions dynamically. Details regarding HCD definitions
are described in 5.2, “Hardware configuration definition” on page 289.
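After the new IODF is built, the definitions can usually be activated dynamically with the z/OS ACTIVATE command, for example (the IODF suffix 99 is a placeholder):
ACTIVATE IODF=99,TEST
ACTIVATE IODF=99
The TEST form validates the change before the actual activation is performed.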
PARMLIB definitions
Use PARMLIB to define all the essential parameters needed to run the z/OS system. Some
parameters apply to this case study and definitions can be made with same values on both
hosts (TEST and PROD). All the described parameters can be activated dynamically on the
current release of z/OS.
See the following resources for complete information regarding options in PARMLIB,
definitions, and commands to activate without IPL:
MVS System Commands, SA22-7627
MVS Initialization and Tuning Reference, SA22-7592
The updates are within the following members of PARMLIB, where the suffix and the exact name of the PARMLIB data set depend on your naming standards. It is important to make the changes according to your normal change rules. If the updates are not implemented correctly, severe problems can occur at the next planned IPL.
IEFSSNxx. These updates apply for TEST and PROD.
– If OAM is new to the installation, the definitions in Example E-1 are required.
Tip: V 1000,AS,ON makes the specified address available for AS support. Followed
by V 1000,ONLINE varies the device online. Both commands must be entered on all
hosts that want device 1000 online and auto switchable.
IECIOSxx. In this member you can define specific device ranges, and you must separate
TEST from PROD updates.
– TEST updates are shown in Example E-4, one line for each range of devices. The MOUNTMSG parameter ensures that the console receives a mount pending message (IOS070E) if a mount is not complete within 10 minutes. You can adjust this value; it depends on many factors, such as the read/write ratio on the connected host and the available capacity in the grid.
DEVSUPxx. In this member, you define the scratch (Fast Ready) categories. You must be specific and separate TEST from PROD updates.
– DEVSUPxx for TEST is shown in Example E-6, with the categories that apply to TEST.
COMMNDxx can be used to vary the range of devices online after each IPL.
– For TEST, apply the specific range of devices as shown in Example E-8.
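The referenced examples are not reproduced here, but the following is a minimal sketch of the kind of entries involved for the TEST host, using the TEST device ranges (10E0-10FF and 20E0-20FF) and Fast Ready category 0022 from this case study (the OAM subsystem name OAM1 is a common choice, not a requirement):
IEFSSNxx:  SUBSYS SUBNAME(OAM1) INITRTN(CBRINIT)
IECIOSxx:  MIH MOUNTMSG=YES,MNTS=10:00
           MIH TIME=45:00,DEV=(10E0-10FF,20E0-20FF)
DEVSUPxx:  MEDIA2=0022
COMMNDxx:  COM='VARY 10E0-10FF,ONLINE'
The DEVSUPxx line assigns the scratch category for the emulated MEDIA2 volumes; define any additional MEDIA parameters that your installation uses in the same way.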
DFSMSrmm definitions
In this case study you have DFSMSrmm as Tape Management System (TMS). Similar
definitions must be defined if you prefer to use another vendor’s TMS.
These definitions can be done using options in DFSMSrmm such as PRTITION, OPENRULE,
REJECT, and VLPOOL. This example uses REJECT and VLPOOL.
Table E-2 shows the definitions needed in this specific case study. You must define the
volume range connected to the host, but also reject use of volumes connected to the other
host.
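A minimal sketch of the corresponding EDGRMMxx entries for the TEST host, assuming that TEST owns the B* volume range and must reject the PROD A* range (the description text is a placeholder):
VLPOOL PREFIX(B*) TYPE(S) DESCRIPTION('TEST LOGICAL VOLUMES')
REJECT ANYUSE(A*)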
The naming convention in this case defines that all TEST definitions are prefixed with TS and all PROD definitions with PR.
ACS constructs and definitions for TEST are shown in Table E-3. Make sure that the construct names match the names you define on the Management Interface (MI).
RACF definitions
General rules for RACF definitions are set by your policies. Security in the areas of access to update tape information in DFSMSrmm and of protection of access to data sets on tape volumes has been improved since z/OS 1.8. However, many installations overlook the fact that access to read and write tape volumes and tape data sets is, by default, not restricted by RACF settings. You need to define your own rules to protect against unauthorized access.
The same applies to access to update the content of DFSMSrmm. Various solutions can be implemented.
For more information about options, security options, and access rules, see DFSMSrmm
Implementation and Customization Guide, SC26-7405.
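As an illustration only (whether you use TAPEVOL, TAPEDSN, or the DEVSUPxx tape authorization settings introduced with z/OS 1.8 depends on your policies), protecting the TEST volume range with a generic TAPEVOL profile can look like the following sketch, where TESTGRP is a placeholder group name:
SETROPTS GENERIC(TAPEVOL) CLASSACT(TAPEVOL)
RDEFINE TAPEVOL B* UACC(NONE)
PERMIT B* CLASS(TAPEVOL) ID(TESTGRP) ACCESS(UPDATE)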
Automation activities
If OAM and the TS7700 Virtualization Engine are new on the host, several concerns must be evaluated:
OAM must start after IPL
New messages are introduced
Hardware errors and operator interventions occur and must be handled
See the IBM Virtualization Engine TS7700 Series Operator Informational Messages White
Paper which is available at the Techdocs Library website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101689
Other definitions on the Fast Ready Categories window must adhere to your policies. In this case study, PROD volumes (category 0012) are defined with 3 days of expire hold, which means that private tapes returned to scratch remain available for recovery for three days after expiration, often for legal reasons. TEST volumes (category 0022) can be reused and overwritten after return-to-scratch processing whenever they are needed for the host. For more information, see “Fast Ready Categories window” on page 501.
Define a management class for TEST named TSMCCL0 with only one copy of the data and cache residency in Cluster 0, as shown in Figure E-7. Set the Copy Consistency Point (CCP) to RUN on Cluster 0 and NOCOPY on Cluster 1. The other management classes are defined in a similar way.
Remember: Without the use of a management class from z/OS, the default is a copy in both clusters.
Also define a data class for TEST named TSDC6GB(6000MB) as shown in Figure E-9.
Figure E-9 Data class for TEST with 6000 MB volume size
Using the defined naming convention, the access groups are named TEST and PROD.
Figure E-11 shows the window where you define TEST and relate TEST to LIBPORT-IDs. You
also define the logical volume range (B00000-B99999) and connect that range to the access
group named TEST.
Verification of changes
After setting the definitions, evaluate your setup against the one shown in Figure E-14. If you
try to read or write to a logical volume belonging to the other host, the job will fail and a
message will present the reason.
(Figure E-14 shows PROD with volumes A00000-A99999 and TEST with volumes B00000-B99999 in separate physical volume pools (pools 2 and 1), with cross-partition access blocked in both directions.)
Notes: You can find tailored JCL to run BVIR jobs and to analyze the data using
VEHSTATS in the IBMTOOLS libraries. To access the IBM Tape Tools, go to the following
URL:
ftp://ftp.software.ibm.com/storage/tapetool/
For the most current information about BVIR, see the IBM Virtualization Engine TS7700
Series Bulk Volume Information Retrieval Function User’s Guide, located at the following
address:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101094
These jobs are also available as members in userid.IBMTOOLS.JCL after you have installed
the IBMTOOLS.exe on your host. See 9.10, “IBM Tape Tools” on page 727.
After you have run one of these jobs, you can create a variety of reports using VEHSTATS.
See “VEHSTATS reports” on page 896.
BVIRHSTS
Example F-1 lists the JCL in userid.IBMTOOLS.JCL member BVIRHSTS.
BVIRHSTV
Example F-3 lists the JCL in userid.IBMTOOLS.JCL member BVIRHSTV.
Example: If this is an IBM Virtualization Engine TS7720 cluster, the following record is
returned:
'NOT SUPPORTED IN A DISK-ONLY TS7700 VIRTUALIZATION ENGINE'
Example F-4 shows the JCL to obtain the Volume Map report, which is also contained in
userid.IBMTOOLS.JCL member BVIRVTS.
You can use the same JCL as shown in Example F-4 on page 887 for the cache report by
replacing the last statement written in bold with the statement listed in Example F-5, which
creates a report for Cluster 0.
Change the following parameters to obtain this report from each of the clusters in the grid:
VTSID=
MC=
Clarification: Cache contents report refers to the specific cluster to which the request
volume was written. In a TS7700 Virtualization Engine Grid configuration, separate
requests must be issued to each cluster to obtain the cache contents of all of the clusters.
To obtain the Copy Audit report, use the same JCL shown in Example F-4 on page 887, but
replacing the last statement written in bold with the statement shown in Example F-6 and
updating the following parameters:
VTSID=
MC=
Example: If this is a TS7720 Virtualization Engine cluster, the following record is returned:
'NOT SUPPORTED IN A DISK-ONLY TS7700 VIRTUALIZATION ENGINE'
These three VEHSTATS jobs can also be found in userid.IBMTOOLS.JCL. Example F-12 lists
the sample JCL for VEHSTPO.
Example F-13 serves as input for a Copy Export function, which enables you to do off-site vaulting of physical volumes from a TS7700 Virtualization Engine.
Example F-14 Verify information in RMM CDS, Library Manager database, and TCDB
//EDGUTIL EXEC PGM=EDGUTIL,PARM='VERIFY(ALL,VOLCAT)'
//SYSPRINT DD SYSOUT=*
//MASTER DD DSN=your.rmm.database.name,DISP=SHR
//VCINOUT DD UNIT=3390,SPACE=(CYL,(900,500))
After running EDGUTIL, you receive information about all volumes with conflicting
information. Resolve discrepancies before the migration. For more information about this
utility, see z/OS V1 DFSMSrmm Implementation and Customization Guide, SC26-7405. The
job must be run before the migration starts.
Example F-17 JCL for changing the TCDB to a new TS7700 Virtualization Engine
//****************************************************************
//**** Change TCDB for a scratch volume to a new TS7700 ****
//****************************************************************
//TCDBSCR EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
ALTER Vvolser VOLENTRY LIBRARYNAME(TSname) USEATTRIBUTE(SCRATCH)
//****************************************************************
//**** Change TCDB entry for a private volume to a new TS7700 ****
//**** Also change sgname to the one used (same as on the VTS)****
//****************************************************************
//TCDBPRIV EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
ALTER Vvolser VOLUMEENTRY LIBRARYNAME(TSname) -
USEATTRIBUTE(PRIVATE) STORAGEGROUP(sgname)
Example F-18 JCL for changing volumes in DFSMS/RMM to a new TS7700 Virtualization Engine
//PROCESS EXEC PGM=IKJEFT01,DYNAMNBR=25,
// TIME=100
//ISPLOG DD DUMMY
//SYSPRINT DD SYSOUT=*
//SYSTSPRT DD SYSOUT=*
//SYSTSIN DD *
RMM CV volser LOCATION(TSname) CMOVE FORCE
Even if you specify the FORCE parameter, it takes effect only when necessary. This parameter requires you to be authorized to use a RACF FACILITY class profile named STGADMIN.EDG.FORCE. Verify that you have the required authorization.
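A minimal sketch of that authorization, assuming a RACF group named STGADMIN (a placeholder) is to be allowed to use the FORCE parameter (check the DFSMSrmm documentation for the required access level):
RDEFINE FACILITY STGADMIN.EDG.FORCE UACC(NONE)
PERMIT STGADMIN.EDG.FORCE CLASS(FACILITY) ID(STGADMIN) ACCESS(READ)
SETROPTS RACLIST(FACILITY) REFRESH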
Example F-19 REXX EXEC for updating the library name in the TCDB
/* REXX */
/*************************************************************/
/* ALTERVOL */
/* */
/* Usage: ALTERVOL DSN(volserlist) LIB(libname) */
Remember: z/OS users can define any category from 0x0001 to 0xFEFF (0x0000 and 0xFFxx cannot be used) with the DEVSUPxx member of SYS1.PARMLIB. The appropriate member must be pointed to by IEASYSxx. For z/OS environments, use categories 0x1000 to 0xF000 to avoid potential conflicts with the requirements of other operating systems.
0010 to 007F DFSMS/MVS Reserved. These volume categories can be used for library
partitioning.
00A0 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH00.
00A1 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH01.
00A2 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH02.
00A3 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH03.
00A4 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH04.
00A5 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH05.
00A6 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH06.
00A7 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH07.
00A8 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH08.
00A9 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH09.
00AA Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH10.
00AB Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH11.
00AC Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH12.
00AD Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH13.
00AE Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH14.
00AF Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH15.
00B0 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH16.
00B1 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH17.
00B2 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH18.
00B3 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH19.
00B4 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH20.
00B5 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH21.
00B6 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH22.
00B7 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH23.
00B8 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH24.
00B9 Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH25.
00BA Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH26.
00BB Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH27.
00BC Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH28.
00BD Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH29.
00BE Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH30.
00BF Native z/VSE Indicates that the volume belongs to the VSE category
SCRATCH31.
0100 IBM OS/400® Indicates that the volume has been assigned to category
(MLDD) *SHARE400. Volumes in this category can be shared with all
attached IBM System i and AS/400® systems.
0101 OS/400 (MLDD) Indicates that the volume has been assigned to category
*NOSHARE. Volumes in this category can be accessed only
by the OS/400 system that assigned it to the category.
012C TSM for AIX Indicates a private volume. Volumes in this category are
managed by Tivoli Storage Manager (TSM).
012D TSM for AIX Indicates an IBM 3490 scratch volume. Volumes in this
category are managed by TSM.
012E TSM for AIX Indicates an IBM 3590 scratch volume. Volumes in this
category are managed by TSM.
F00E BTLS Indicates a volume in error. Volumes are assigned to the error
category during demount if the volume serial specified for
demount does not match the external label of the volume
being demounted.
FF01 Virtual Tape Server and IBM TS7700 Virtualization Engine Stacked Volume Insert category. A volume is set to this category when its volume serial number is in the range specified for stacked volumes for any VTS library partition.
FF02 Virtual Tape Server and TS7700 Virtualization Engine Stacked Volume Scratch category 0. This category is reserved for future use for scratch stacked volumes.
FF03 Virtual Tape Server and TS7700 Virtualization Engine Stacked Volume Scratch category 1. This category is used by the VTS for its scratch stacked volumes. This category is not used if the LIC level is 527 or later.
FF04 Virtual Tape Server and TS7700 Virtualization Engine Stacked Volume Private category. This category includes both scratch and private volumes (since VTS LIC level 527).
FF05 Virtual Tape Server and TS7700 Virtualization Engine Stacked Volume Disaster Recovery category. A volume is set to this category when its volume serial number is in the range specified for stacked volumes for any VTS library partition and the Library Manager is in disaster recovery mode.
FF06 Virtual Tape Server and TS7700 Virtualization Engine Used by the VTS as a temporary category for disaster recovery. After a stacked volume in category FF05 is processed, it is put into this category. This category is also used by the PFE tool called “movedata” as a temporary category.
FF07 Virtual Tape Server and TS7700 Virtualization Engine Reserved for future hardware functions.
FF08 Virtual Tape Server Used by the VTS to indicate a Read-Only-Recovery Stacked Volume whose active data cannot be recovered.
Additional support has been provided in APAR OA07505 (Example H-2 on page 922) as well
as in APAR OA24965 (Example H-1 on page 916).
Important: Do not use this state save command for testing purposes. It impacts the
performance of your VTS/ATL because taking the dump in the hardware takes time.
Use the DEVSERV QLIB command to display the subsystems (port IDs) and drives
associated with the specified Library ID. If the Library ID specified is for a composite
library, the command now also displays the distributed library IDs associated with the
composite library.
DOCUMENTATION:
This new function APAR adds support to the DEVSERV command for
a new Query Library option.
DS QL,LIST(,filter)
DS QL,LISTALL(,filter)
DS QL,libid(,filter)
DS QL,dddd,SS
Parameters:
Sub-Parameters:
DS QL,LIST
IEE459I 13.59.01 DEVSERV QLIB 478
The following are defined in the ACTIVE configuration:
10382 15393
DS QL,10382
IEE459I 13.59.09 DEVSERV QLIB 481
The following are defined in the ACTIVE configuration:
LIBID PORTID DEVICES
10382 04 0940 0941 0942 0943 0944 0945 0946 0947
0948 0949 094A 094B 094C 094D 094E 094F
DS QL,10382,DELETE
*04 REPLY 'YES' TO DELETE THE INACTIVE CONFIGURATION FOR
LIBRARY 10382, ANY OTHER REPLY TO QUIT.
IEF196I Reply 'YES' to delete the INACTIVE configuration for
library 10382, any other reply to quit.
R 4,YES
IEE459I 14.01.19 DEVSERV QLIB 490
Inactive configuration for library 10382 successfully deleted
COMMENTS:
CROSS REFERENCE-MODULE/MACRO NAMES TO APARS
IGUDSL01 OA07505
DS QL,LIST(,filter)
DS QL,LISTALL(,filter)
DS QL,libid(,filter)
Parameters:
Sub-Parameters:
DS QL,LIST
IEE459I 13.59.01 DEVSERV QLIB 478
The following are defined in the ACTIVE configuration:
10382 15393
DS QL,10382
IEE459I 13.59.09 DEVSERV QLIB 481
The following are defined in the ACTIVE configuration:
LIBID PORTID DEVICES
10382 04 0940 0941 0942 0943 0944 0945 0946 0947
0948 0949 094A 094B 094C 094D 094E 094F
03 09A0 09A1 09A2 09A3 09A4 09A5 09A6 09A7
09A8 09A9 09AA 09AB 09AC 09AD 09AE 09AF
02 09D0 09D1 09D2 09D3 09D4 09D5 09D6 09D7
09D8 09D9 09DA 09DB 09DC 09DD 09DE 09DF
01 F990 F991 F992 F993 F994 F995 F996 F997
F998 F999 F99A F99B F99C F99D F99E F99F
DISTRIBUTED LIBID(S)
AAAAA BBBBB
DS QL,10382,DELETE
*04 REPLY 'YES' TO DELETE THE INACTIVE CONFIGURATION FOR
LIBRARY 10382, ANY OTHER REPLY TO QUIT.
IEF196I Reply 'YES' to delete the INACTIVE configuration for
library 10382, any other reply to quit.
R 4,YES
IEE459I 14.01.19 DEVSERV QLIB 490
Inactive configuration for library 10382 successfully deleted
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Other publications
These publications are also relevant as further information sources:
DFSMSdfp Utilities, SC26-7414
DFSMShsm Storage Administration Guide, SC35-0421
DFSMS/VM Function Level 221 Removable Media Services User's Guide and Reference,
SC35-0141
FICON Planning and Implementation Guide, SG24-6497
IBM Encryption Key Manager component for the Java platform Introduction, Planning, and
User's Guide, GA76-0418
IBM System Storage Tape System 3592 Introduction and Planning Guide, GA32-0464
IBM System Storage TS1120 and TS1130 Tape Drives and TS1120 Controller Introduction
and Planning Guide, GA32-0555
The documents in IBM Techdocs are active; their content is constantly being changed and
new documents are being created. To ensure that you reference the newest version, search
on the Techdocs website. To access the website, go to:
From the Search drop-down list, select All of the Techdocs Library and enter TS7700.
WP100829 IBM Virtualization Engine TS7700 Series Statistical Data Format White Paper
This document outlines the various statistics records generated by activity on the TS7700
Virtualization Engine and gives a description of each record type.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP100829
WP101092 Virtualization Engine TS7700 Series Copy Export Function User's Guide
One of the key reasons to use tape is for recovery of critical operations in the event of a disaster. The
TS7700, in a grid configuration, provides for automatic, remote replication of data that supports recovery
time and recovery point objectives measured in seconds. If you do not require the recovery times that
can be obtained in a grid configuration, a function called Copy Export is being introduced for the
TS7700. With Copy Export, a secondary copy of the logical volumes written to a TS7700 can be
removed from the TS7700 and taken to an offsite location to be used for disaster recovery. This white
paper describes the use of this function.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101092
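For orientation only (a sketch, not taken from the white paper): after the export list file volume has
been written, the Copy Export operation is started from the z/OS console with the LIBRARY EXPORT
command, as described in 5.3.2, “Implementing Copy Export”. In the following example, the volume
serial ELF001 is a placeholder for the logical volume that contains the export list file:
LIBRARY EXPORT,ELF001
The required export list file records, and the subsequent handling of the exported physical volumes,
are described in the white paper and in 5.3.2.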
WP101094 Virtualization Engine TS7700 Series Bulk Volume Information Retrieval Function User's Guide
The TS7700 Virtualization Engine provides a management interface based on open standards through
which a storage management application can request specific information the TS7700 Virtualization
Engine maintains. The open standards are not currently supported for applications running under z/OS,
so an alternative method is needed to provide the information to mainframe applications. This white
paper describes the use of a facility of the TS7700 Virtualization Engine through which a z/OS
application can obtain that information.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101094
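For orientation, the facility works by writing a small request to a logical volume and then reading the
TS7700 Virtualization Engine's response from that same volume on a later mount. The following JCL
fragment is a minimal sketch under stated assumptions, not the authoritative format: the esoteric unit
name VTAPE and the data set name are placeholders, and the exact request records and response
layouts are defined in the white paper.
//BVIRREQ  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//* Write the two-record BVIR request to a TS7700 logical volume
//SYSUT2   DD DSN=HLQ.BVIR.REQUEST,UNIT=VTAPE,DISP=(NEW,KEEP),
//            LABEL=(1,SL),DCB=(RECFM=F,LRECL=80,BLKSIZE=80)
//SYSUT1   DD *
VTS BULK VOLUME DATA REQUEST
VOLUME MAP
/*
A later job reads the same volume serial back; by that time the TS7700 Virtualization Engine has
placed the requested data (in this sketch, a physical-to-logical volume map) on the volume.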
WP101091 Virtualization Engine TS7700 Series z/OS Host Command Line Request User's Guide
This white paper describes a facility of the TS7700 that supports a new z/OS Library Request host
console command to allow an operator to request information pertaining to the current operational state
of the TS7700, its logical and physical volumes and its physical resources. Although the command is
primarily for information retrieval, it can also be used to initiate outboard operations.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101091
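As a quick illustration (a sketch only; GRIDLIB is a placeholder composite library name, and the full
keyword set is documented in the white paper), the command is entered at the z/OS console in the form
LIBRARY REQUEST,library_name,keyword(s), which can be abbreviated to LI REQ:
LI REQ,GRIDLIB,STATUS,GRID
LI REQ,GRIDLIB,LVOL,VOL001
The first request summarizes the state of the clusters in the grid and the second displays information
about logical volume VOL001. The response is returned as a multi-line console message, so the same
requests can also be issued by automation.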
WP101224 Virtualization Engine TS7700 Series Best Practices - Performance Increments versus Number of
Backend Drives
This document provides the best practices for TS7700 Virtualization Engine performance increments
depending on the number of backend drives.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101224
WP101230 Virtualization Engine TS7700 Series Best Practices - Copy Consistency Points V1.0
This white paper describes best practices for the TS7700 based on theoretical data and practical
experience. Copy Consistency Point recommendations are described for various configurations of two
and three cluster grids.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101230
WP101281 Virtualization Engine TS7700 Series Best Practices - Return-to-Scratch Considerations for
Disaster Recovery Testing with a TS7700 Grid
When performing disaster recovery (DR) testing with a TS7700 grid, you need to consider how to handle
Return-To-Scratch (RTS) processing using a production host. This paper helps you to understand the
scratch selection criteria used. With this knowledge, you will be able to plan for your DR test while RTS
processing is kept active. This paper also addresses other scratch category considerations for a DR test.
These include the fast-ready and Expire-Hold attributes for a scratch category.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101281
WP101382 Virtualization Engine TS7700 Series Best Practices - Cache Management in the TS7720
The TS7720’s capacity is limited by the size of its cache. The intent of this document is to make
recommendations for managing the cache usage in the TS7720. It details the monitoring of cache
usage, messages, and attentions presented as the cache approaches the full state, consequences of
reaching the full state, and the methods used for managing the amount of data stored in the cache.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101382
WP101430 System Storage TS7700 Virtualization Engine TS7720 and TS7740 Performance White Paper
This paper provides performance information for the IBM TS7720 and TS7740. The paper is intended
for use by IBM field personnel and their customers in designing virtual tape solutions for their
applications.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101430
WP101465 Virtualization Engine TS7700 Series Best Practices - Understanding, Monitoring and Tuning the
TS7700 Performance
This document will help you understand the inner workings of the TS7700 so that you can make
educated adjustments to the subsystem to achieve peak performance. This document starts by
describing the flow of data through the subsystem. Next, the various throttles used to regulate the
subsystem are described. Performance monitoring is then discussed, along with how and when to tune
the TS7700 Virtualization Engine.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101465
WP101689 IBM Virtualization Engine TS7700 Series Operator Informational Messages White Paper
During normal and exception processing within an IBM tape library, intervention or action by an operator
or storage administrator is sometimes required. IBM tape libraries have an optional facility that causes
a z/OS console message, CBR3750I, to be issued for these conditions. The purpose of this white paper
is to list the intervention or action messages that are generated on the TS7700 Virtualization Engine and
to indicate the importance of each message and how it needs to be handled.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101689
WP101656 IBM Virtualization Engine TS7700 Series Best Practices - TS7700 Hybrid Grid Usage
This white paper describes the usage of various hybrid TS7700 grid configurations where TS7720s and
TS7740s both exist in the same grid. It describes how hybrid configurations can be used to improve read
hits for recently used volumes and how the TS7720 can be used for additional mount points and for high
availability. It also discusses various considerations such as Retain Copy Mode, Copy Consistency
Points, Cluster Families, Service Outages, and Disaster Recovery considerations for the hybrid grid.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101656
PRS2844 TS7700 HCD and IOCP Definitions With Logical Path Calculator
This presentation illustrates a typical HCD setup for z/OS to be used when installing a TS7740 or
TS7720 subsystem. A sample spreadsheet for calculating the logical paths consumed is also part
of the presentation.
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS2844
activate encryption 202 backup copy database 629
Activate New Feature License 249 backup pool 163
active control data set (ACDS) 156 backup site 380
active data 75, 160, 229, 232, 911 Backup/Restore 278
minimum amount 163 BADBLKSZ 178
threshold 228 bandwidth 141, 702
Active Data Distribution 686 bar code label 162
active log Base Frame 344
DASD data set 445 batch window efficiencies 182
data 445 battery backup units 50
active volume list 74 beginning of tape (BOT) 81
Add Category 234 BMCNTL.BIN 176
Additional 1 TB Cache Enablement 123–124 BMPACKT 179
Additional 100MB/sec Increment 123–124 borrow 221
additional cache controllers 348 Borrow, Keep 221
additional functions 342 Borrow, Return 221
adjusting the TS7700 698 borrowing capability 229
Advanced Encryption Settings 205 borrow-return policy 227
Advanced Library Management System BOT 81–82
See ALMS bottlenecks 180
Advanced Library Manager System 208 boundaries 270
Advanced Policy Management 75 Brocade 151–152
affinity 82 browser 144
Age of Last Data Written 221 buffer credit 149
Allocation and Copy Consistency Point Setting 654 buffering 141
Allocation and Device Allocation Assistance 655 bulk loading 207
Allocation and Scratch Allocation Assistance 655 Bulk Volume Information Retrieval
Allocation Assistance 73, 103 See BVIR
allocation processing 94 BVIR 34, 37, 78, 114, 165, 228, 233, 621, 711, 732, 881
allocation recovery 265 binary format 733
ALMS 54, 193–195, 201, 211 request 708
window 194 response 680
alternate source category 72 data 710, 718
analyze workload 178 BYDEVICES 657
AOTM 86–87, 154
APAR 156, 264, 357
APAR OA32957 90, 664 C
Application-Managed encryption 176 cables 130
architectural capability 91 cabling infrastructure 148
archival 381 cache capacity 270
archive log 444 increments 350
ATL 184, 614, 910 cache controllers 345
environment 184 cache data flow 669–670
ATM switches 141 cache enablements 248
Authorized Program Analysis Report 264 cache expansion 344
AUTOBACKUP 429 cache hits 373, 381
AUTODUMP 434 Cache Increments 250
Automated Library 184 cache increments 350
automated tape library cache management 66
See ATL Cache Management Policies 371
automatic data removal 272 cache management policies 31
automatic removal 270, 272, 383 cache overrun 245
automatic utilization 183 Cache Partition Activity 737
Autonomic Ownership Takeover Manager cache preference group 273
See AOTM cache preference groups 169
cache removal 279
cache residency 100
B cache subsystem 17, 345
B20 VTS model 386 cache thresholds 270
back end 9, 304 cache throughput 696
back-end drives 226, 701 cache upgrade 345, 348
504, 566, 580–581, 591, 610, 641, 643–644, 652, 705, data erase function 76, 166–167
749, 765–775, 802, 881, 901 Data Facility Storage Management Subsystem
function 76–78, 96, 163, 182–183 See DFSMS
job 78 Data Key 170
operation 77–78, 163, 310–312, 315, 317–318, data loss 777
320–323, 427, 705, 768, 773 data management 6, 278
pool 71, 78 data movement 667
sets 78–79 data movement through tape volume cache 667
state 163 data protection 169
volumes 78 data record 715
Copy Export state 163 data replication functionality. 11
Copy Exported physical volumes 182 data retention 381
Copy Exported state 77 data set 156, 182, 243, 423–424, 712, 715, 825
Copy Failure Reporting 15 existing policies 785
Copy Management 61 minimum size 433
copy modes 380 data sets, old 859
Copy Operation 73, 424, 464, 691 data storage requirements 350
copy override 273 data storage values 344
Copy policies 383 data type 421
Copy Policy 242 database backup 441
copy policy management requirements 91 Days Before Secure Data Erase 221
copy priority 169 Days Without Access 221
Copy Time Management 61 Days Without Data Inactivation 222
CPU cycles 183 DB2 172
Cr value 164 DB2 catalog 423
Create Logical Library 196 DB2 database 34
critical data 162 full image copy 446
cross-cluster mount 94, 266 DC 238
cross-site messaging 142 DCT 695
cryptography function 80, 169 dddd,SS (DS) 918, 922
CST 32, 159, 255, 433, 435, 499, 608–609 DDMs 48, 50
emulated cartridges 159 decrypt 251
emulation 159 dedicated pool 167
customer-configured policy 87 default configuration 141
CX7 Cache Drawer 50 Default Encryption Keys 80
CX7s 48 Default key 80
default management class 243
default Storage Class 63
D default storage group 238
D32 54 Deferred 242
DAA 24, 73, 89, 93, 264, 661 deferred (DD) 154
function 88–89 deferred copy consistency point 96, 104, 366
daily production workload 160 deferred copy mode 30, 61, 368, 371, 640, 739
DASD 61, 158, 183, 420, 425, 427, 430, 436, 441, Deferred Copy Throttle 647, 695, 698
443–444, 588, 858 threshold 698
data access 256 value 698
data cartridge 211 Deferred Copy Throttle value 698
Data Cartridges window 211 define
Data Class 32, 191, 238, 247, 255, 308, 420, 827 fast ready 233
ACS routine 420 space reclamation 225
construct 33, 82 defining a logical library 193, 202, 217, 250, 258
definition 165 delete expire settings 235
Name 32, 589 Delete Expired Volume Data 163, 234
parameter Compaction 165 setting 163, 234
policy name 589 Delete Expired Volume Data setting 163
storage construct 159 delete-expired 273
Data Class ACS Dense Wave Division Multiplexers (DWDMs) 149
routines 421 Destination Settings 253
Data Classes Table 248 table 253
data consistency 381 Device Allocation Assistance 15, 73, 88, 93, 373, 661,
data encryption solutions 169
format 131 enhanced statistical reporting 4
EDGINERS program 72 Enterprise Automated Tape Library (ETL) 452
education 183 Enterprise Economy Tape Cartridge 220
EETC 221 Enterprise Extended-Length Tape Cartridge 220
effective cache usage 235 Enterprise Library Controller 118
effective grid cache size 89 Enterprise Tape
eight-character Volser 199 Cartridge 69, 71, 220, 906
Eight-Character-VOLSER support 192 Library Specialist 680
EJECT logical volume 622 entry processing 254
eject stacked volume function 76 environmental conditions 138–139
EKM 169, 172, 176, 252 Environmental operating requirements 139
addresses 173, 250 EOV 423, 432, 435, 439
configuration file 172 EQUAL 655
must be installed first 172 erase, functionality 166
ELC 51 erasure reclamation policy 167
ELC Library Manager 396 EREPMDR 178
Eligible Device Table (EDT) 295 ESCON 35, 53, 149, 304, 390, 395, 398, 401, 405, 408,
empty scratch volume 160 411
empty stacked volume 165 director 157
empty storage cells 207 ETC 221
emulated 3490E volumes 60 ETCL 221
emulated tape volumes 60 Ethernet 681
emulated volumes 10 adapters 141, 343
emulation mode 51, 132, 152, 192 connection 85, 101, 754
Enable dual port grid connection 343 extenders 141
enable dual port grid connection 134 IP address 211
Enable Dual Port Grid Connections 123–124, 128–129 router 39
encrypted data 252 routers 39, 140
encrypted data keys 166 switches 140
encrypted physical volume 166 ETL Specialist 63, 453, 682
encryption 13, 169, 192 logical volume status 468, 474
certificates 80 TS7700 Virtualization Engine Management pages
characteristics 80 684
configuration 124, 135, 143 Expanded Memory 137
configuration enablement 249 expansion drawer 345
enablement 343 expansion frame 345
infrastructure 80 expiration time 235
parameters 80, 205 Expire Hold 74, 234, 237
settings 173, 220 Expire Time 235–237
solution 169–170 expired data 165
support 80, 170 expired logical volume 74
Encryption Capable 202 expired volume attribute 73
Encryption Configuration 124 expired volume management 74
Encryption Configuration (FC9900) 125 expired volumes 74
encryption key 173, 251 Export List File Volume 310–311, 318, 773
server 148 Export Pool 221
Encryption Key Manager 172, 251 exported physical volumes 77
default key 224 export-hold category 78
encryption method 176, 202 External Director connections 153
definition 203
encryption mode 176, 222
encryption-capable 176 F
encryption-capable E05 drives 173 factory-installed 350
encryption-capable tape drives 173 Failover 749
encryption-enabled 173, 176, 251 failover 749
end of file (EOF) 717 failover scenario 754
enhanced cache removal 373 false-end-of-batch 697
enhanced capacity cartridge system tape (ECCST) 32, False-end-of-batch phenomenon 697
255, 433 Fast Host Write premigration 697
Enhanced Library Controller attachment 45 Fast Ready 73, 236
explanation 233
HA1 Frame 630 considerations 675
Hard Partitioning 84 definition of 567
hard partitions 158 Hydra Storage Manager
hardware 191 See HSM
hardware configuration definition 31, 155, 189, 283, 289
Hardware I/O configuration definition 190
Hash 80 I
Hash Label 222 I/O drawers 146
HBA 136, 149, 690 I/O Expansion drawers 118–119
HCD 31, 155, 185, 189, 191, 216, 283–284, 286–294, I/O operation 24, 92, 169, 264, 278, 297
296, 331, 361, 366–367, 370, 374–375, 377–379, I/O station 54, 206–208, 211–212, 615, 620
386–387, 390, 395, 398, 401–402, 405, 408, 411, 802, tape library 54
824, 848–849, 852, 917, 921 window 211
dialog 286 I/O tape volume cache 24, 67, 72–73, 91–92, 94, 96–99,
HCD/IOCP 155, 190 101, 778
HD frame 133 PG0 assignments 67
model S24 133 selection 98
HD slots 133 selection criteria 67
HDR1 record 72 IART 63, 244
Health & Monitoring 217 value 63
hexadecimal 214 IBM 3490 control unit 31
hierarchical storage management (HSM) 42 IBM 3490E
Hierarchical Storage Manager (HSM) 424, 429, 708 device 304
high availability 12, 54, 153, 381 I/O 31
solutions 105 IBM 3490E Tape Drives 60
high capacity media 10 IBM 3494 Tape Library 118, 189, 191
High Density Capacity on Demand (HD CoD) feature 133 IBM 3592 54
High Voltage Differential (HVD) 53 media 206
high watermark setup names 851 IBM 3592 Model EU6 Tape Drive 131
high availability 256 IBM 3592-J1A 31
high-capacity tape cartridges 60 IBM 3592-J1A Tape Drive 55
Historical Statistics 37, 706, 708, 723 IBM 3952 Storage Expansion Frame 39
Historical Summary 685 IBM 3952 Tape Base Frame Model F05 40
HLQs 179 IBM 3952 Tape Frame 40, 119, 138
hNode 17, 91, 638, 707–709 IBM 3953 56, 388
Home Pool 218 IBM 3953 Library Manager 287, 629
home pool 229 IBM 3953 Tape System 51, 190
host IBM Encryption Key Manager 172
allocation 271 IBM Information Infrastructure 13
attachment 149 solutions 13
compression definition 165 IBM Integrated Technology Services 184
configuration 155, 359 IBM Multi-Path Architecture 53
connectivity 104 IBM Security Key Lifecycle Manager 172
definitions 191 IBM SSR 60, 88
I/O operation 755 IBM standard label 32
system 283, 306, 438, 442, 909 IBM Support Center 38
3490E tape drives 306 IBM System p 118–119
host bus adapter IBM System Service Representative (SSR) 34, 192, 201,
See HBA 279, 454, 614, 778, 825
Host Command Line Query Capabilities 65 IBM System Storage TS1120 Tape Drive 55, 57
Host Console Request 376 IBM System Storage TS1130 Tape Drive 6
function 61, 84, 165, 228 IBM System Storage TS3500 Tape Library 7
Host Throughput window 689 IBM System x 53
Host Write Throttle 700 IBM System z 53, 190
HSM 42, 71, 429, 431–432, 436, 707, 709, 735, 738 environment 191–192, 201
HWSNAME 851 host 52, 148, 152, 155, 192, 201, 284
hybrid 264, 272, 359, 379, 383 host support 201
configuration 371, 766 logical library 208
hybrid grid 10, 21, 23, 262, 301, 371–374, 377, 380, 488, operator command 304
568, 675, 677, 679, 762, 765 software 7, 283
tape drive attachment 201
job stream 178 library-specific device type name 849
job throttling 97 library-specific name 849
join 359 LIBSERV JCL 332–333
license key 193, 202
licensed functional characteristics 115
K Licensed Internal Code 133, 341
key labels 81 upgrade 357
key management request 81 limitations 131
key manager 80–81, 172 limited free cache space warning 271–272
key manager services 80 Linear Tape Open (LTO) 52, 206
Key Mode 222 link extenders 149
link failure 256
L list of candidate clusters 266
L22 54 loading 138
LAN 20–21 Local Cache 98, 168, 262, 778
LAN/WAN 142 local cache for non-fast ready mount 262
requirements 140 local cluster 182, 778
larger core diameter 148 Local Cluster for Fast Ready Mounts override 154
laser 148 local copy 167, 262–263, 445–446, 778
latency 141 local site
Lc 164 Cluster 0 101
value 164 only connection 103
LC Duplex 131 local tape volume cache 98–100, 778
connector 41 valid copy 103
LCU 286 local TS7700 Virtualization Engine 106
definitions 395, 398, 402, 405, 408, 411 locality factor 100
virtual devices 296 logical block ID 32
LDG 848, 851 logical control unit 158
LDD 851 See LCU
LDE 851 logical library 190, 192, 195–197, 200, 202, 206, 208,
least recently used 211, 217, 284, 287, 586
See LRU Cartridge Assignment Policy 206
LI 255 control path drive 202
LIBPORT-ID 155, 158 drive assignment window 200
Library command 745 first four drives 202
library control system (LCS) 156, 587 in TS3500 Tape Library 210
library device group (LDG) 848 inventory 208
LIBRARY EJECT command 586 library ID 284
Library Emulation 55 library sequence number 284
Library Emulation Capable 68 name 196, 200
Library Manager 69, 306, 453 partition 207, 211, 619
category 217 sharing between hosts 284
database 386, 394, 396, 398, 902 starting SCSI element address 197
functions 190 tape drives 201
multiple logical scratch pools 280 tape units 586
VOLSER ranges 192 view 453
Library Name 91, 307, 334, 389, 588 logical mount request 182
Library Operator window 193 Logical Mounts window 688
library outages 4 logical partitions 158
Library partition 18 logical volume 10, 24, 68, 81, 156, 158–160, 162–164,
LIBRARY REQUEST 90, 264, 271, 664 166–167, 169, 172, 181, 219, 229, 236, 331, 388, 481,
LIBRARY RESET 254 750, 754, 824, 826
Library Sequence Number 217 copy count 94
library slot capacity 165 current copy 73
Library Specialist 192 data 165
LIBRARY-ID 155 data movement 97
Library-ID 29, 155, 288, 295, 307–308, 361, 366, 370, dual copies 71
375, 390, 395, 398, 401, 404, 407–408, 410–411, 612, EJECT 622
714 following number 160
Library-Managed 176 host request 85
MEDIA1 32 multiple grid configuration 154
MEDIA2 32, 160 multiple logical volumes 180
media-type 208 multi-volume set 159, 432
Memory Upgrade 138, 356 MVS 585, 587, 612, 743
memory upgrade 344 operator commands 585
merge 357, 359, 373
merge process 373
MES 344, 347, 350 N
message exchange 148 native (E05) mode 52, 55, 173
metadata 162 native drive 181, 388
metro distance 381 IDRC compression 435
migrated volumes 10 recommendations 440
migration 10 native longwave Fibre Channel transmitters 148
options 137 native mode 122, 173, 192, 203
Migration to TS7700 Virtualization Engine 386 Native z/VSE 332, 907
MIH 61, 190, 284, 304–305, 366, 370, 375, 395, 398, network 147
402, 405, 408, 411, 700, 825 redundancy 142
settings 190 switches 147
time 305 Network Time Protocol
value 304–305, 375 See NTP
minute interval No Borrow, Keep 221
statistics reports operations 706 No Borrow, Return 221
Miscellaneous Data Records (MDR) 178 No Copy 98, 242
Missing Interrupt Handler No Copy Consistency Point 366
See MIH No Copy option 91
ML2 429, 432–433, 436 non-concurrent 343
Model CC8 Cache Controllers 20 non-encryption-capable 131
Model CX7 Cache Drawers 19 non-fast ready 98
Model L23 55 mount 99
Model S24 133 non-zero
frame 54 delete expired data parameter value 166
Model XS7 Cache Drawers 20 Expire Time 236–237
Modify Pool Properties 227 setting 166
MOUNT FROM CATEGORY 233 Normal reclaim 230
mount process 101 NOSTACK 434
mount vNode 98, 101 NTP 35, 147, 614
mounted tape volume I/O operations 98 server 35
mounting cluster 72 number of logical volumes 342
MOUNTMON 732 number of production clusters 380
MOUNTRetention 440
Move function 76 O
Mozilla Firefox 144 OAM 63, 73, 156, 185, 191, 255, 280, 300–301,
Multi Cluster TS7720 59 307–309, 329, 362, 366–368, 370, 377, 396, 399, 402,
multi-cluster configuration 20, 59, 97, 115, 119, 214 405, 408, 411, 427, 433, 443–444, 452, 587–590, 623,
multi-cluster grid 12, 21, 28–30, 32, 36, 66–67, 85, 99, 626–627, 743, 746, 753, 824, 826–827
101–102, 118, 144, 151, 155, 167, 182, 256, 260, 262, defining new tape library 191
278–279, 287, 304, 306, 315, 330, 339, 357, 372, 436, documentation resources 306
439, 445, 452, 480, 611, 614–615, 689, 691–693, eject logical volume 623
706–709, 712, 731, 739, 763, 788, 854, 857, 861 object 444
configuration 11, 13, 85, 90–91, 111, 150, 358, 452 uses LCS 156
copy control 308 object access method
environment 710, 731 See OAM
installation 213 old data sets 859
operation 296 On Demand business 184
TS7700 Virtualization Engine 306 One to Two TS7740 Cache Drawers 349
installation 306 ONLINE cluster 34
multifile volumes 183 ONLINE peer 34
multi-mode 148 open systems 206
Multimode Fiber 141 attachment 55
multi-platform environment 201 environment 201
Port Address 153 controller 60
port type 153 group 60
port-to-port 141 implementation 60
post-installation tasks 192 RAID 5 9, 48
power 138–139 RAID 6 47
distribution 139 random pattern 166
requirements 139–140 range 255
power supply 139 rapid recall 381
predefined VOLSER range 219 RCLMMAX 704
Prefer Keep 245, 273 Read Configuration Data (RCD) 304
Prefer Local Cache for Non-Fast Ready Mounts 98 read ownership takeover (ROT) 85, 720, 752, 759
Prefer Local Cluster for Fast Ready Mount 154 mode 154
Prefer Local Cluster for Non-Fast Ready Mounts 154 read/write ownership takeover mode 154
Prefer Remove 245, 272 read-only processing 628
preference group 60, 709 real disaster 801
Preference Group 0 67, 73, 89, 244, 273, 279 recall 230
volumes 61 recall takeaway 429
Preference Group 1 67, 244, 273, 278 Recall task 643
preference level 0 61–63 recalled volume 68
preference level 0 volume 61–62 receiving port 149
preference level 1 volume 61–62 reclaim
Preferred Pre-Migration 699 inhibit schedule 231
threshold 699 operations 230, 704
pre-installation planning 190, 356 parameters 219
Premigrate task 643 policies 229
premigrated 71, 78, 81 scheduled time 165
premigration 10, 33, 72, 167, 230 tasks 229
Premigration Management 60 threshold 229
Pre-Migration Throttling threshold 699–700 threshold setting 163
pre-owned volumes 85 RECLAIM keyword 704
PRESTAGE 362, 368, 371, 719, 888 Reclaim Pool 221
prevent reclamation 228 Reclaim task 643
Preventive Service Planning 357 Reclaim Threshold 222
primary key manager address 251 reclaim threshold 225, 704
primary key server 174 Reclaim Threshold Percentage 163, 232–233
Primary Pool 239–240 reclaimed physical volumes 76
prior generation 630 reclamation 76–77, 163, 165, 167, 229, 231, 233
prioritizing copies within the grid 648 Inhibit Reclaim Schedule 228
priority move 230 interruption 231
private stacked volumes 70 level 227
production cluster 381 methods 229
production site 380–381 operation 81
production systems 153 policies 70
PSP 357 prevent 228
buckets 156 process 76, 165, 167, 233
PTF 156 setting 163
Pv value 164 tasks 162, 226
threshold 70, 74
reconciliation process 74
Q record types 708
QLIB List 295, 921 14, 15, 21, and 30 178
Query Library (QL) 302–303, 919 recording technology parameters 246
query tape (QT) 288 recovery
questions for five- and six-cluster configurations 380 objectives 66
scenarios 628
R time 60, 279, 763
R1.5 118 Recovery Point Objective (RPO) 79
R1.6 11–12 RECYCLE 421, 423, 436
R2.0 systems 115 Redbooks website 925
RAID 35 Contact us xix
mode 34 source TS7740 77
Set expiration 234 space reclamation 225
SETNAME statement 850 SSIC 151
SETSYS 431–433, 436 SSR 34, 38, 60, 87–88, 98, 112, 132, 185, 192–193, 201,
seven-drawer 346 212, 214, 279, 286–288, 356, 361, 366, 369–370, 373,
SG 238 375, 377–379, 387, 391–392, 394, 398, 401, 404, 407,
sharing 157 410, 454, 456, 477, 566, 629–632, 703, 752, 754–761,
sharing and partitioning 158 763, 769, 830
sharing network switches 141 SSRE 172
shortwave 343 stack multiple files 183
shortwave fibre Ethernet 343, 357 stacked cartridge 98
shredding encryption keys 166 stacked media 157
shredding user data 112 stacked physical volume 182
Single Cluster 27–29, 155, 213, 240, 284, 294, 306, 360, stacked volume 10, 30, 68–69, 71–74, 79, 183
375, 377, 393, 397–400, 402–403, 405–406, 419, 452, add 630
581, 614, 667–668, 833, 847, 855–856 function 167
configuration 76 Stacked Volume Pool properties 70
configuration requirements 118–119 stacked volume pools 33
hardware components 452 stacked volumes 33, 69–70, 229, 232, 319, 438, 911
TS7700 Virtualization Engine 131 stacking 10
single cluster grid 20, 27, 29, 71, 213, 261, 297, 304, 306, stand-alone 357
321, 362, 374, 389, 393–396, 398–404, 406, 408, 411, stand-alone mode 442
429, 766, 849 source device 443
configuration 28 Stand-Alone Services 441, 443
stand-alone TS7700 Virtualization Engine 216 installation procedure 442
single library 155 stand-alone TS7700 Virtualization Engine 27
single point of failure 34, 104 statistical data 706
single point of loss 381 status of read-only 167
single use feature 112 Stop Threshold 275
single-port adapters 85 storage administrator training 192
six-cluster configurations 379 storage administrators 184
six-cluster grid 21, 138, 214, 358 storage cells 208
six-drawer 350 Storage Class 238, 244–246, 255
SMF 178 table 246
SMF records 176, 178–179, 421 Storage Class (SC) 63, 68, 156, 191, 243, 627, 827
SMIT 87, 278–279, 367, 370, 377–378 ACS routine 156, 420–421
SMS 30, 32, 156, 185, 246, 279, 284–288, 296, attribute 63
300–301, 330, 361–362, 366, 368–370, 374–375, cache management effect 279
377–379, 386–387, 394–396, 398, 401, 404, 407–408, construct 67–68, 279, 376, 602–603, 700, 721, 786
410–411, 421–423, 427, 432, 436, 441, 585–587, 589, definition 63
613, 626–627, 743–744, 746–747, 764, 775, 793, 796, name 63, 589
799, 802, 824–825, 827 policy name 589
classes 157 Storage Expansion frame 344–345, 348
construct 69, 284, 432 Storage Expansion Frame Cache Controller 47
SMS Tape Design 184 Storage Group 238, 255
SMS-managed 156 storage group 240
SMS-managed z/OS environment 206 Storage Group (SG) 68, 70, 156–157, 431, 588, 827
SNMP 147, 252 construct name 81
destination 253 constructs 67
settings 253 definitions 395, 401, 404, 407, 410
traps 252 distributed libraries 308
version 253 library names 588
software 191 separate pools 78
enhancements 373 state 154
environment 156 Storage Groups table 239
maintenance 156 Storage Management Subsystem 589
requirements 155 Storage Pool 238
support 357 storage pool 68, 169, 440
Solutions Conversion (SC) 160 subnet 142
source cluster 153 subnet mask 143
drives 55 97–99, 111, 113, 117–119, 130, 134–139, 148–149, 152,
TS1120 Tape Drive 51–52, 55, 80, 131–132, 173, 176, 155, 157–160, 162, 176–184, 189–191, 217, 219,
389, 452, 583 228–229, 233, 235, 237, 287, 308, 331, 335, 420, 427,
Model E05 57, 173 431, 433, 439, 444, 447, 452, 457, 621, 637, 708, 714,
native mode 132 749–750, 753–754, 758, 777, 823–824, 910
TS1130 51, 55, 131 activity 228, 733
TS1130 Tape Drive 51–52, 57, 131, 169 architecture 17
3592-E06 131 availability characteristics 139
E06 132 back-end cartridges 207
Model E06 173 cache 180–181
native mode 132 hit 181
TS3000 Service Console 39–40, 118 Cluster 0 759
TS3000 System Console 40, 46, 86–88, 118, 120, Cluster 1 757
122–123, 131–132, 145, 183, 752 component upgrades 133
external 119 configuration 40
with Keyboard/Display 119 database 37
TS3500 18, 31, 39 default behavior 777
TS3500 Tape Library 15, 22, 29, 51–56, 69, 117–118, family 5
132, 156, 189–194, 206–207, 209–212, 217, 219, 403, grid 112, 144, 153, 182
444, 452–453, 630, 680–682, 684, 754, 823 configuration 21, 104, 154, 182, 693, 750, 779
3953 56, 190 failover test scenario 753
attachment 55 subsystem 625
Cleaning Cartridge Usage 585 Hold function 300
definitions 192 host definition 286
frames 53, 202 implementation 184, 190
High Density frame 54, 132 installation 185
IBM 3592 Tape Drives and FC switches 40 Integrated Library Manager 59
IBM System z internal management functions 229
attachment 55 key benefits 181
level 207 LAN/WAN requirements 140
logical library partition 207, 620 local 182
Model D23 54 logical volumes 58, 159
Model D53 54 main management concepts 59
Model L23 54 management 37
Model L53 54 management interface 58, 78, 86, 94, 209
Model S24 54 definitions 207
Model S54 54 modular design 36
needs cleaning 211 multi-cluster grid 143
operator panel 193 environment 144
operators 176 multiple ranges 161
physical cartridges 219 node 40, 85
TS7700 Virtualization Engine owned physical vol- nomenclature 5
umes 192 online drive 827
unassigned volumes 209 overall performance 735
TS3500 Tape Library Specialist 33, 183, 190, 192–193, overall throughput 182
211, 584, 681 physical tape drives 31
web interface 210 physical volumes 164
Welcome page 453 Policy Construct definitions 58
Welcome window 195 pre-installation 131, 185
windows 201 primary attributes 183
TS3500 Tape Library Web Specialist 201 R1.3 166
TS7700 15, 26, 67, 87, 105 R1.6 45
cluster 8, 66 R1.7 4, 15, 21, 39, 48, 59, 80, 115–116
grid TCP/IP network infrastructure 140 clusters 115
tuning 695 grid 20
TS7700 Introduction and Planning Guide 131 Release 1.6 82
TS7700 using Selective Write Protect 785 remote 182
TS7700 Virtualization 63 replication priority 67
TS7700 Virtualization Engine 3–6, 8–13, 15, 17, 24, 26, same volume 719
31–33, 35–37, 39–41, 55, 58–61, 63, 67–68, 72–73, 86, server 7, 41–42
763, 765, 787, 791, 794 full capacity 440
configuration 22, 103–104, 153, 406, 765 initial creation 260
in multi-cluster grid 12 insert processing 330
two-cluster hybrid 382 remaining space 434
virtualization 10
Virtualization Engine 4, 10
U performance 680
Ultra2/Wide High 53 R1.6 members 115
unassigned category 210 virtualization node
unassigned drives 200 See vNode
unassigned physical volumes 217 vital information 13
unencrypted 166 VM 156
physical volume 166 system 334
ungrid 111 VM guest (VGS) 334, 336
Unit Control Block (UCB) 295 VM/ESA
UNITNAME 848, 852, 859 DGTVCNTL DATA 331
Universal Time (UTC) 231 LIBRCMS 334
update 359 LIBSERV-VGS 334
upgrade 356 RMCONFIG DATA 331
configuration 348 specific mount 331
grid capacity 115 VGS 334
requirements 344 VSE/ESA guest 334
URL 144, 213 VMA 179
URL line 583 vNode 17, 24, 60, 91–92, 96–97, 99, 101, 350, 689, 707
library URL 211 cluster 98
usable capacity 348 vNode Host Adapter Activity 736
user roles 26 VOLID 162
VOLSER 32, 72–73, 158–159, 161, 192, 205–208,
V 217–219, 255, 280, 282, 302, 317, 337, 375, 421,
V07 133, 136 424–425, 436, 438, 472, 480–481, 484, 486, 488, 490,
valid copy 101, 168, 263, 753, 778 492, 494–495, 498–499, 507, 509–510, 512, 514–515,
VEB 133, 136 517–518, 520–521, 572, 581, 609, 619–620, 622, 631,
VEHGRXCL 732 682, 709, 719, 726, 836, 859, 912–913
tool 739 entry fields 219
VEHSTATS 695, 710, 732 range 70, 191, 218–219, 782, 785
VEHSTATS_Model 740 ranges 219
VEPSTATS 732 VOLSER Ranges Table 217–219
virtual device 20, 85, 155, 286, 290, 442, 753–754 volume
first set 290 category, defining 905
IPL volume 443 copying 153
second set 291 counts by media type 165
TS7700 Virtualization Engine Clusters 751 map 712
TS7740 Virtualization Engine Clusters 85 ownership 85
virtual volume 11 ownership takeover 86
Virtual Device Activity 735 premigration requests 72
Virtual Device Allocation 653 processing times 182
virtual devices 155 Volume Copy Retention Group 245, 376
virtual drive 9, 31, 104, 157, 309, 440, 735 Volume Copy Retention Time 245, 376
maximum number 754 Volume Ownership 256
preferred path 31 Volume Pool 229
virtual IP 144, 213 Volume Removal 267
virtual machine (VM) 334 Volume removal policies 272
z/VM guest 334 Volume Removal Policy 10
virtual tape devices 153 volume Rewind/Unload time 92
virtual tape drives 9 Volume Status
Virtual Tape Server Information 718
See VTS request 719
virtual tape subsystem 9 volumes tagged for erasure 167
virtual volume 10–11, 63, 157, 337, 434, 640 VPD 253
data 31 VSAM 447
W
WAN 142
interconnection 357
IP address 144
web browser 144, 193, 212–213
web interface 211
Welcome Page 211
wireless protocols 184
withdrawn 341
features 352
hardware 352
work item 200
workflow management 82
functions 66
workload balancing 373
World Wide Identifier 82
WORM 82, 132, 161
media 82
tape 82
wrapped data keys 170
Write Once Read Many
See WORM
Write Ownership Takeover (WOT) 86, 752
Write-Mount Count 82
Write-Protect Mode 535
Z
z/OS 156–157, 191, 254, 283, 583, 612
allocation process 90
DFSMS 234, 585, 745
environment 73
host 217
Host Console Request
function 68
host software 72–73
V1R11 88–89
workload 63
Back cover

Integrate tape drives and IBM System p server into a storage hierarchy
Manage your storage hierarchy with advanced functions
Take advantage of 5-way and 6-way grids

This IBM Redbooks publication highlights TS7700 Virtualization Engine Release 2.0. It is
intended for system architects who want to integrate their storage systems for smoother
operation. The IBM Virtualization Engine TS7700 offers a modular, scalable, and
high-performing architecture for mainframe tape virtualization for the IBM System z
environment. It integrates 3592 Tape Drives, high-performance disks, and the new
IBM System p server into a storage hierarchy. This storage hierarchy is managed by robust
storage management firmware with extensive self-management capability. It includes the
following advanced functions:
Policy management to control physical volume pooling
Cache management
Dual copy, including across a grid network
Copy mode control

The TS7700 Virtualization Engine offers enhanced statistical reporting. It also includes a
standards-based management interface for TS7700 Virtualization Engine management.

The new IBM Virtualization Engine TS7700 Release 2.0 introduces the next generation of
TS7700 Virtualization Engine servers for System z tape:
IBM Virtualization Engine TS7720 Server Model VEB
IBM Virtualization Engine TS7740 Server Model V07

These Virtualization Engines are based on IBM POWER7 technology. They offer improved
performance for most System z tape workloads compared to the first generation of TS7700
Virtualization Engine servers.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization.
Experts from IBM, Customers and Partners from around the world create timely technical
information based on realistic scenarios. Specific recommendations are provided to help you
implement IT solutions more effectively in your environment.

For more information:
ibm.com/redbooks