Jon Tate
Maximilian Hart
Hartmut Lonzer
Tarik Jose Maluf
Libor Miklas
Jon Parkes
Anthony Saine
Lev Sturmer
Marcin Tabinowski
Redbooks
December 2015
SG24-7933-04
Note: Before using this information and the product it supports, read the information in “Notices” on
page xvii.
This edition applies to IBM SAN Volume Controller and IBM Spectrum Virtualize software Version 7.6 and the
IBM SAN Volume Controller 2145-DH8.
Note: This book is based on a pre-GA version of a product and may not apply when the product becomes
generally available. We recommend that you consult the product documentation or follow-on versions of
this redbook for more current information.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xviii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxv
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, AIX 5L™, DB2®, developerWorks®, DS4000®, DS5000™, DS8000®, Easy Tier®, FlashCopy®, FlashSystem™, GPFS™, HACMP™, HyperSwap®, IBM®, IBM FlashSystem®, IBM Spectrum™, IBM Spectrum Accelerate™, IBM Spectrum Control™, IBM Spectrum Protect™, IBM Spectrum Scale™, IBM Spectrum Virtualize™, POWER®, pureScale®, Real-time Compression™, Redbooks®, Redbooks (logo)®, Storwize®, System Storage®, Tivoli®, WebSphere®, and XIV®
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
This IBM® Redbooks® publication is a detailed technical guide to the IBM System Storage®
SAN Volume Controller, powered by IBM Spectrum™ Virtualize Version 7.6.
The SAN Volume Controller (SVC) is a virtualization appliance solution, which maps
virtualized volumes that are visible to hosts and applications to physical volumes on storage
devices. Each server within the storage area network (SAN) has its own set of virtual storage
addresses that are mapped to physical addresses. If the physical addresses change, the
server continues running by using the same virtual addresses that it had before. Therefore,
volumes or storage can be added or moved while the server is still running.
The IBM virtualization technology improves the management of information at the “block”
level in a network, which enables applications and servers to share storage devices on a
network.
Authors
This book was produced by a team of specialists from around the world working at the
International Technical Support Organization, San Jose Center.
Frank Enders
Torben Jensen
Hartmut Lonzer
Libor Miklas
Marcin Tabinowski
Ian Boden
Paul Cashman
John Fairhurst
Carlos Fuente
Katja Gebuhr
Paul Merrison
Nicholas Sunderland
Dominic Tomkins
Barry Whyte
Stephen Wright
IBM Hursley, UK
Nick Clayton
IBM Systems & Technology Group, UK
Navin Manohar
IBM Systems, TX, US
Special thanks to the Brocade Communications Systems staff in San Jose, California for their
unparalleled support of this residency in terms of equipment and support in many areas:
Jim Baldyga
Silviano Gaona
Sangam Racherla
Brian Steffler
Marcus Thordal
Brocade Communications Systems
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-7933-04
for Implementing the IBM System Storage SAN Volume Controller with IBM Spectrum
Virtualize V7.6
as created or updated on February 4, 2016.
New information
Encryption
Distributed RAID
New software and hardware description
Changed information
IBM Spectrum Virtualize rebranding
GUI and CLI
Compression
The focus of this publication is virtualization at the disk layer, which is referred to as
block-level virtualization, or the block aggregation layer. A description of file system
virtualization is beyond the scope of this book.
For more information about file system virtualization, see the following resources:
IBM Spectrum Scale™ (based on IBM General Parallel File System, GPFS™):
http://www.ibm.com/systems/storage/spectrum/scale
The Storage Networking Industry Association’s (SNIA) block aggregation model provides a
useful overview of the storage domain and its layers, as shown in Figure 1-1 on page 3. It
illustrates three layers of a storage domain: the file, block aggregation, and block subsystem
layers.
The model splits the block aggregation layer into three sublayers. Block aggregation can be
realized within hosts (servers), in the storage network (storage routers and storage
controllers), or in storage devices (intelligent disk arrays).
The IBM implementation of a block aggregation solution is IBM Spectrum Virtualize software,
running on IBM SAN Volume Controller (SVC) and IBM Storwize family. The SVC is
implemented as a clustered appliance in the storage network layer. For more information
about the reasons why IBM chose to develop its SVC with IBM Spectrum Virtualize in the
storage network layer, see Chapter 2, “IBM SAN Volume Controller” on page 11.
The key concept of virtualization is to decouple the storage from the storage functions that
are required in the storage area network (SAN) environment.
Decoupling means abstracting the physical location of data from the logical representation of
the data. The virtualization engine presents logical entities to the user and internally manages
the process of mapping these entities to the actual location of the physical storage.
The actual mapping that is performed depends on the specific implementation, such as the
granularity of the mapping, which can range from a small fraction of a physical disk up to the
full capacity of a physical disk. A single block of information in this environment is identified by
its logical unit number (LUN), which is the physical disk, and an offset within that LUN, which
is known as a logical block address (LBA).
The term physical disk is used in this context to describe a piece of storage that might be
carved out of a RAID array in the underlying disk subsystem.
Specific to the IBM Spectrum Virtualize implementation, the address space that a logical entity presents to hosts is referred to as a volume. The arrays of physical disks are referred to as managed disks (MDisks).
The server and application are aware of the logical entities only, and they access these
entities by using a consistent interface that is provided by the virtualization layer.
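The following Python sketch illustrates this mapping concept. It is not SVC code; the extent size, MDisk names, and mapping table are illustrative assumptions that show how a logical block address on a volume can be translated into a physical location on a managed disk.

EXTENT_SIZE_BLOCKS = 16 * 1024 * 2  # a 16 MiB extent expressed in 512-byte blocks (assumed size)

# Illustrative mapping table: volume extent number -> (MDisk name, MDisk extent number)
volume_map = {
    0: ("mdisk0", 7),
    1: ("mdisk1", 3),
    2: ("mdisk0", 8),
}

def virtual_to_physical(volume_lba: int) -> tuple[str, int]:
    """Translate a volume LBA into an (MDisk, MDisk LBA) pair."""
    extent = volume_lba // EXTENT_SIZE_BLOCKS   # which volume extent the LBA falls in
    offset = volume_lba % EXTENT_SIZE_BLOCKS    # offset inside that extent
    mdisk, mdisk_extent = volume_map[extent]    # look up the backing physical extent
    return mdisk, mdisk_extent * EXTENT_SIZE_BLOCKS + offset

print(virtual_to_physical(40_000))              # -> ('mdisk1', 105536)

The host sees only the volume address space; if the mapping table changes (for example, during a migration), the host continues to use the same logical addresses.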
The functionality of a volume that is presented to a server, such as expanding or reducing the
size of a volume, mirroring a volume, creating an IBM FlashCopy®, and thin provisioning, is
implemented in the virtualization layer. It does not rely in any way on the functionality that is
provided by the underlying disk subsystem. Data that is stored in a virtualized environment is
stored in a location-independent way, which allows a user to move or migrate data between
physical locations, which are referred to as storage pools.
The SVC delivers these functions in a homogeneous way on a scalable and highly available
platform over any attached storage and to any attached server.
You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs relate to managing the storage system.
But how much of the management of multiple systems, with separate interfaces, can be
handled as a single entity? In a non-virtualized storage environment, every system is an
“island” that must be managed separately.
Because IBM Spectrum Virtualize provides advanced functions, such as mirroring and FlashCopy, there is no need to purchase them again for each attached disk subsystem that sits behind the SVC.
Today, it is typical that open systems run at less than 50% of the usable capacity that is
provided by the RAID disk subsystems. The use of the installed raw capacity in the disk
subsystems shows usage numbers of less than 35%, depending on the RAID level that is
used. A block-level virtualization solution, such as IBM Spectrum Virtualize, can allow
capacity usage to increase to approximately 75 - 80%.
With IBM Spectrum Virtualize, free space does not need to be maintained and managed within each storage subsystem, which further increases capacity utilization.
We briefly summarize the major hardware changes of the last two years because we still consider them worth mentioning.
The 2145-DH8 and SVC Small Form Factor (SFF) Expansion Enclosure Model 24F deliver
increased performance, expanded connectivity, compression acceleration, and additional
internal flash storage capacity.
A 2145-DH8, which is based on IBM System x server technology, consists of one Xeon E5 v2
eight-core 2.6 GHz processor and 32 GB of memory. It includes three 1 Gb Ethernet ports as
standard for 1 Gbps iSCSI connectivity and supports up to four 16 Gbps Fibre Channel (FC)
I/O adapter cards for 16 Gbps FC or 10 Gbps iSCSI/Fibre Channel over Ethernet (FCoE)
connectivity (Converged Network Adapters). Up to three
quad-port 16 Gbps native FC cards are supported. It also includes two integrated AC power
supplies and battery units, replacing the uninterruptible power supply feature that was
required on the previous generation storage engine models.
The front view of the two-node cluster based on the 2145-DH8 is shown in Figure 1-3.
supports one four-port 10GbE card (iSCSI or FCoE) and one dual-port 12 Gbps
serial-attached SCSI (SAS) card for flash drive expansion unit attachment (model
2145-24F).
– Improved Real-time Compression engine (RACE), with the processing offloaded to the secondary dedicated processor and using 36 GB of dedicated memory cache. At a minimum, one Compression Accelerator card must be installed (supporting up to 200 compressed volumes); two Compression Accelerator cards allow up to 512 compressed volumes.
– Optional 2U expansion enclosure 2145-24F with up to 24 flash drives (200, 400, 800,
or 1600 GB).
V7.3 includes the following software enhancements:
– Extended Easy Tier functionality with a storage pool balancing mode within the same tier. It moves or exchanges extents between highly utilized and lightly utilized managed disks (MDisks) within a storage pool, therefore increasing the read and write performance of the volumes. This function is enabled automatically in the SVC, does not need any license, and cannot be disabled by the administrator.
– The SVC cache rearchitecture splits the original single cache into upper and lower caches of different sizes. The upper cache uses up to 256 MB, and the lower cache uses up to 64 GB of installed memory, allocated across both processors (if installed). In addition, 36 GB of memory is always allocated to Real-time Compression if it is enabled.
– Near-instant prepare for FlashCopy, due to the presence of the lower cache. Multiple snapshots of a golden image now share cache data (instead of maintaining N separate copies).
user. This behavior is enabled by system-wide policy settings. The detailed volume
view contains the new field that indicates when the volume was last accessed.
– A user can replace a failed flash drive by removing it from the 2145-24F expansion unit and installing a replacement drive, without requiring the Directed Maintenance Procedure (DMP) to supervise the action. When the user determines that the fault LED is illuminated for a drive, they can reseat or replace the drive in that slot. The system automatically performs the drive hardware validation tests and promotes the unit into the configuration if these checks pass.
– The 2145-DH8 with SVC software version 7.4 and higher supports the T10 Data
Integrity Field between the internal RAID layer and the drives that are attached to
supported enclosures.
– The SVC supports a 4096-byte native drive block size without requiring clients to change their block size. The SVC supports an intermix of 512-byte and 4096-byte native drive block sizes within an array. The GUI recognizes drives with different block sizes and represents them with different classes.
– The SVC 2145-DH8 improves the performance of Real-time Compression and provides up to double the I/O operations per second (IOPS) on the model DH8 when it is equipped with both Compression Accelerator cards. It introduces two separate software compression engines (RACE), taking advantage of the multi-core controller architecture. Hardware resources are shared between both RACE engines.
– Adds virtual LAN (VLAN) support for iSCSI and IP replication. When VLAN is enabled
and its ID is configured for the IP addresses that are used for either iSCSI host attach
or IP replication on the SVC, appropriate VLAN settings on the Ethernet network and
servers must also be correctly configured to avoid connectivity issues. After the VLANs
are configured, changes to their settings disrupt the iSCSI or IP replication traffic to
and from the SVC.
– New informational fields are added to the CLI output of the lsportip, lsportfc, and
lsportsas commands, indicating the physical port locations of each logical port in the
system.
Software changes:
– Visual and functional enhancements to the graphical user interface (GUI), including a changed layout of the context menus and an integrated performance meter on the main page.
– Implementation of Distributed RAID (DRAID), which differs from traditional RAID arrays by eliminating dedicated spare drives; the spare capacity is instead spread across the disks, which makes the reconstruction of a failed disk faster.
– Introduction of software encryption, enabled by IBM Spectrum Virtualize and using the AES 256-XTS algorithm. Encryption is enabled at the storage pool level; all newly created volumes in such a pool are automatically encrypted. An encryption license and USB flash drives are required.
– The Comprestimator tool, which is included in the IBM Spectrum Virtualize software. It provides statistics to estimate potential storage savings. Available from the CLI, it does not need a compression license and does not trigger any compression process. It uses the same estimation algorithm as the external host-based application, so the results are similar.
– Enhanced GUI wizard for the initial configuration of a HyperSwap topology. IBM Spectrum Virtualize now allows an IP-attached quorum disk in a HyperSwap system configuration.
– Increased maximum number of iSCSI hosts attached to the system to 2048 (512 host
IQNs per I/O group) with a maximum of four iSCSI sessions per SVC node (8 per I/O
group).
– Improved and optimized read I/O performance in a HyperSwap system configuration by reading in parallel from the primary and secondary local volume copies. Both copies must be in a synchronized state.
– Extended support for VMware Virtual Volumes (VVols). Using IBM Spectrum Virtualize, you can manage a one-to-one mapping of VM drives to SVC volumes, which eliminates the I/O contention of a single, shared volume (datastore).
– Customizable login banner. Using CLI commands, you can now define a welcome message or an important disclaimer to show to users at login. This banner is shown in the GUI and CLI login windows.
The IBM SAN Volume Controller 2145-DH8 ships with preloaded IBM Spectrum Virtualize 7.6
software. Downgrading the software to software version 7.2 or lower is not supported. The
2145-DH8 rejects any attempt to install a version that is lower than 7.3. With specific hardware installed (four-port 16 Gbps FC HBAs) or with specific software configurations (HyperSwap, enabled encryption, more than 512 iSCSI hosts, and so on), it is not possible to install software versions lower than 7.6.
1.4 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major
storage vendors offer storage virtualization products. The use of storage virtualization as the
foundation for a flexible and reliable storage solution helps enterprises to better align
business and IT by optimizing the storage infrastructure and storage management to meet
business demands.
IBM Spectrum Virtualize, running on the IBM SAN Volume Controller, is a mature, eighth-generation virtualization solution that uses open standards and complies with the SNIA storage model. The SVC is an appliance-based, in-band block virtualization solution in which intelligence (including advanced storage functions) is migrated from individual storage devices to the storage network.
IBM Spectrum Virtualize can improve the utilization of your storage resources, simplify your
storage management, and improve the availability of business applications.
We present a brief history of the SVC product, and then provide an architectural overview.
After we define SVC terminology, we describe software and hardware concepts and the other
functionalities that are available with the newest release.
Finally, we provide links to websites where you can obtain more information about the SVC.
One goal of this project was to create a system that was almost exclusively composed of off-the-shelf standard parts. As with any enterprise-level storage control system, it had to deliver a level of performance and availability that was comparable to the highly optimized storage controllers of previous generations. The idea of building a storage control system that is based on a scalable cluster of lower-performance servers, instead of a monolithic architecture of two nodes, remains compelling.
COMPASS also had to address a major challenge for the heterogeneous open systems
environment, namely to reduce the complexity of managing storage on block devices.
The first documentation that covered this project was released to the public in 2003 in the
form of the IBM Systems Journal, Vol. 42, No. 2, 2003, “The software architecture of a SAN
storage control system”, by J. S. Glider, C. F. Fuente, and W. J. Scales. The article is available
at this website:
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5386853
The results of the COMPASS project defined the fundamentals for the product architecture.
The first release of the IBM System Storage SAN Volume Controller was announced in July
2003.
Each of the following releases brought new and more powerful hardware nodes, which
approximately doubled the I/O performance and throughput of its predecessors, provided new
functionality, and offered more interoperability with new elements in host environments, disk
subsystems, and the storage area network (SAN).
The most recently released hardware node, the 2145-DH8, is based on an IBM System x
3650 M4 server technology with the following features:
One 2.6 GHz Intel Xeon Processor E5-2650 v2 with eight processor cores (A second
processor is optional.)
Up to 64 GB of cache
Up to three four-port 8 Gbps Fibre Channel (FC) cards
Up to four two-port 16 Gbps FC cards
Up to four four-port 16 Gbps FC cards
One four-port 10 Gbps iSCSI/Fibre Channel over Ethernet (FCoE) card
One 12 Gbps serial-attached SCSI (SAS) Expansion card for an additional two SAS
expansions
Three 1 Gbps ports for management and iSCSI host access
One technician port
Two battery packs
The SVC node can support up to two external Expansion Enclosures for Flash Cards.
The following major approaches are used today for the implementation of block-level
aggregation and virtualization:
Symmetric: In-band appliance
Virtualization splits the storage that is presented by the storage systems into smaller
chunks that are known as extents. These extents are then concatenated, by using various
policies, to make virtual disks (volumes). With symmetric virtualization, host systems can
be isolated from the physical storage. Advanced functions, such as data migration, can run
without the need to reconfigure the host. With symmetric virtualization, the virtualization
engine is the central configuration point for the SAN. The virtualization engine directly
controls access to the storage and to the data that is written to the storage. As a result,
locking functions that provide data integrity and advanced functions, such as cache and
Copy Services, can be run in the virtualization engine itself. Therefore, the virtualization
engine is a central point of control for device and advanced function management.
Symmetric virtualization allows you to build a firewall in the storage network. Only the
virtualization engine can grant access through the firewall.
Symmetric virtualization can have disadvantages. The main disadvantage that is
associated with symmetric virtualization is scalability. Scalability can cause poor
performance because all input/output (I/O) must flow through the virtualization engine. To
solve this problem, you can use an n-way cluster of virtualization engines that has failover
capacity. You can scale the additional processor power, cache memory, and adapter
bandwidth to achieve the level of performance that you want. Additional memory and
processing power are needed to run advanced services, such as Copy Services and
caching.
The SVC uses symmetric virtualization. Single virtualization engines, which are known as
nodes, are combined to create clusters. Each cluster can contain between two and eight
nodes.
Asymmetric: Out-of-band or controller-based
With asymmetric virtualization, the virtualization engine is outside the data path and performs a metadata-style service. The metadata server contains all the mapping and locking tables; the storage devices contain only data. In asymmetric virtual storage networks, the data flow is separated from the control flow, and a separate network or SAN link is used for control purposes. Because the flow of control is separated from the flow of data, I/O operations can use the full bandwidth of the SAN.
Asymmetric virtualization can have the following disadvantages:
– Data is at risk to increased security exposures, and the control network must be
protected with a firewall.
– Metadata can become complicated when files are distributed across several devices.
– Each host that accesses the SAN must know how to access and interpret the
metadata. Specific device drivers or agent software must therefore be running on each
of these hosts.
– The metadata server cannot run advanced functions, such as caching or Copy
Services, because it only knows about the metadata and not about the data itself.
The controller-based approach has high functionality, but it fails in terms of scalability or
upgradeability. Because of the nature of its design, no true decoupling occurs with this
approach, which becomes an issue for the lifecycle of this solution, such as with a controller.
Data migration issues and questions are challenging, such as how to reconnect the servers to
the new controller, and how to reconnect them online without any effect on your applications.
Be aware that with this approach, you not only replace a controller but also implicitly replace
your entire virtualization solution. In addition to replacing the hardware, updating or
repurchasing the licenses for the virtualization feature, advanced copy functions, and so on
might be necessary.
Only the fabric-based appliance solution provides an independent and scalable virtualization
platform that can provide enterprise-class copy services and that is open for future interfaces
and protocols. By using the fabric-based appliance solution, you can choose the disk
subsystems that best fit your requirements, and you are not locked into specific SAN
hardware.
For these reasons, IBM chose the SAN or fabric-based appliance approach for the
implementation of the SVC.
On the SAN storage that is provided by the disk subsystems, the SVC can offer the following
services:
Creates a single pool of storage
Provides logical unit virtualization
Manages logical volumes
Mirrors logical volumes
The hosts cannot see or operate on the same physical storage (logical unit number (LUN))
from the RAID controller that is assigned to the SVC. Storage controllers can be shared
between the SVC and direct host access if the same LUNs are not shared. The zoning
capabilities of the SAN switch must be used to create distinct zones to ensure that this rule is
enforced.
SAN fabrics can include standard FC, FC over Ethernet, iSCSI over Ethernet, or possible
future types.
Figure 2-2 shows a conceptual diagram of a storage system that uses the SVC. It shows
several hosts that are connected to a SAN fabric or LAN. In practical implementations that
have high-availability requirements (most of the target clients for the SVC), the SAN fabric
“cloud” represents a redundant SAN. A redundant SAN consists of a fault-tolerant
arrangement of two or more counterpart SANs, which provide alternative paths for each
SAN-attached device.
Both scenarios (the use of a single network and the use of two physically separate networks)
are supported for iSCSI-based and LAN-based access networks to the SVC. Redundant
paths to volumes can be provided in both scenarios.
For simplicity, Figure 2-2 shows only one SAN fabric and two zones: host and storage. In a
real environment, it is a preferred practice to use two redundant SAN fabrics. The SVC can be
connected to up to four fabrics. For more information about zoning, see Chapter 3, “Planning
and configuration” on page 83.
A clustered system of SVC nodes that are connected to the same fabric presents logical
disks or volumes to the hosts. These volumes are created from managed LUNs or managed
disks (MDisks) that are presented by the RAID disk subsystems.
Hosts are not permitted to operate on the RAID LUNs directly, and all data transfer happens
through the SVC nodes. This design is commonly referred to as symmetric virtualization.
LUNs that are not processed by the SVC can still be provided to the hosts.
For iSCSI-based host access, the use of two networks and separating iSCSI traffic within the
networks by using a dedicated virtual local area network (VLAN) path for storage traffic
prevents any IP interface, switch, or target port failure from compromising the host servers’
access to the volumes’ LUNs.
Storage pool (pool): Previously known as a managed disk (MDisk) group. A collection of storage that identifies an underlying set of resources. These resources provide the capacity and management requirements for a volume or set of volumes.
For more information about the terms and definitions that are used in the SVC environment,
see Appendix B, “Terminology” on page 923.
The SAN is zoned so that the application servers cannot see the back-end physical storage,
which prevents any possible conflict between the SVC and the application servers that are
trying to manage the back-end storage. The SVC is based on the components that are
described next.
2.4.1 Nodes
Each SVC hardware unit is called a node. The node provides the virtualization for a set of
volumes, cache, and copy services functions. The SVC nodes are deployed in pairs (cluster)
and multiple pairs make up a clustered system or system. A system can consist of 1 - 4 SVC
node pairs.
One of the nodes within the system is known as the configuration node. The configuration
node manages the configuration activity for the system. If this node fails, the system chooses
a new node to become the configuration node.
Because the nodes are installed in pairs, each node provides a failover function to its partner
node if a node fails.
A specific volume is always presented to a host server by a single I/O Group of the system.
The I/O Group can be changed.
When a host server performs I/O to one of its volumes, all the I/Os for a specific volume are
directed to one specific I/O Group in the system. Also, under normal conditions, the I/Os for
that specific volume are always processed by the same node within the I/O Group. This node
is referred to as the preferred node for this specific volume.
Both nodes of an I/O Group act as the preferred node for their own specific subset of the total
number of volumes that the I/O Group presents to the host servers. A maximum of
2,048 volumes per I/O Group is allowed. In an eight-node cluster, the maximum is 8,192
volumes. However, both nodes also act as failover nodes for their respective partner node
within the I/O Group. Therefore, a node takes over the I/O workload from its partner node, if
required.
Therefore, in an SVC-based environment, the I/O handling for a volume can switch between
the two nodes of the I/O Group. For this reason, it is mandatory for servers that are connected
through FC to use multipath drivers to handle these failover situations.
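The following Python sketch models this behavior at a conceptual level. The node names and failover logic are simplified assumptions rather than SVC internals; the sketch only shows how I/O for a volume follows its preferred node and falls back to the partner node when the preferred node is offline.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    online: bool = True

@dataclass
class IOGroup:
    node_a: Node
    node_b: Node

    def serving_node(self, preferred: Node) -> Node:
        """Return the node that currently services a volume's I/O."""
        partner = self.node_b if preferred is self.node_a else self.node_a
        if preferred.online:
            return preferred
        if partner.online:
            return partner                     # failover to the partner node
        raise RuntimeError("I/O Group offline: both nodes are down")

iog = IOGroup(Node("node1"), Node("node2"))
print(iog.serving_node(iog.node_a).name)       # node1 (normal operation)
iog.node_a.online = False
print(iog.serving_node(iog.node_a).name)       # node2 (partner takes over)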
The SVC I/O Groups are connected to the SAN so that all application servers that are
accessing volumes from this I/O Group have access to this group. Up to 512 host server
objects can be defined per I/O Group. The host server objects can access volumes that are
provided by this specific I/O Group.
If required, host servers can be mapped to more than one I/O Group within the SVC system;
therefore, they can access volumes from separate I/O Groups. You can move volumes
between I/O Groups to redistribute the load between the I/O Groups. Modifying the I/O Group
that services the volume can be done concurrently with I/O operations if the host supports
nondisruptive volume move. It also requires a rescan at the host level to ensure that the
multipathing driver is notified that the allocation of the preferred node changed and the ports
by which the volume is accessed changed. This modification can be done in the situation
where one pair of nodes becomes overused.
2.4.3 System
The system or clustered system consists of 1 - 4 I/O Groups. Certain configuration limitations
are then set for the individual system. For example, the maximum number of volumes that is
supported per system is 8,192 (having a maximum of 2,048 volumes per I/O Group), or the
maximum managed disk that is supported is ~28 PiB per system.
All configuration, monitoring, and service tasks are performed at the system level.
Configuration settings are replicated to all nodes in the system. To facilitate these tasks, a
management IP address is set for the system.
A process is provided to back up the system configuration data onto disk so that it can be
restored if there is a disaster. This method does not back up application data. Only SVC
system configuration information is backed up.
For the purposes of remote data mirroring, two or more systems must form a partnership
before relationships between mirrored volumes are created.
For more information about the maximum configurations that apply to the system, I/O Group,
and nodes, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1005251
Stretched systems
A stretched system is an extended high availability (HA) method that is supported by the SVC
to enable I/O operations to continue after the loss of half of the system. A stretched system is
also sometimes referred to as a split system because one-half of the system and I/O Group is
usually in a geographically distant location from the other, often 10 kilometers (6.2 miles) or
more. The maximum distance is approximately 300 km (186.4 miles). It depends on the
round-trip time, which must not be greater than 80 ms. With version 7.4, this round-trip time is
enhanced to 250 ms. A third site is required to host an FC storage system that provides a
quorum disk. This storage system can also be used for other purposes than to act only as a
quorum disk.
Note: The site attribute in the node and controller object needs to be set in an enhanced
stretched system.
2.4.5 MDisks
The SVC system and its I/O Groups view the storage that is presented to the SAN by the
back-end controllers as a number of disks or LUNs, which are known as managed disks or
MDisks. Because the SVC does not attempt to provide recovery from physical disk failures
within the back-end controllers, an MDisk often is provisioned from a RAID array. However,
the application servers do not see the MDisks at all. Instead, they see a number of logical
disks, which are known as virtual disks or volumes, which are presented by the SVC I/O
Groups through the SAN (FC/FCoE) or LAN (iSCSI) to the servers.
The MDisks are placed into storage pools where they are divided into a number of extents,
which are 16 MiB - 8192 MiB, as defined by the SVC administrator.
For more information about the total storage capacity that is manageable per system
regarding the selection of extents, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1005251#_Extents
A volume is host-accessible storage that was provisioned out of one storage pool; or, if it is a
mirrored volume, out of two storage pools.
The maximum size of an MDisk is 1 PiB. An SVC system supports up to 4096 MDisks
(including internal RAID arrays). At any point, an MDisk is in one of the following three modes:
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and has no metadata that is stored
on it. The SVC does not write to an MDisk that is in unmanaged mode, except when it
attempts to change the mode of the MDisk to one of the other modes. The SVC can see
the resource, but the resource is not assigned to a storage pool.
Managed MDisk
Managed mode MDisks are always members of a storage pool, and they contribute
extents to the storage pool. Volumes (if not operated in image mode) are created from
these extents. MDisks that are operating in managed mode might have metadata extents
that are allocated from them and can be used as quorum disks. This mode is the most
common and normal mode for an MDisk.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume by
using virtualization. This mode is provided to satisfy the following major usage scenarios:
– Image mode allows the virtualization of MDisks that already contain data that was
written directly and not through an SVC; rather, it was created by a direct-connected
host.
This mode allows a client to insert the SVC into the data path of an existing storage
volume or LUN with minimal downtime. For more information about the data migration
process, see Chapter 6, “Data migration” on page 237.
Image mode allows a volume that is managed by the SVC to be used with the native
copy services function that is provided by the underlying RAID controller. To avoid the
loss of data integrity when the SVC is used in this way, it is important that you disable
the SVC cache for the volume.
– The SVC provides the ability to migrate to image mode, which allows the SVC to export
volumes and access them directly from a host without the SVC in the path.
Each MDisk that is presented from an external disk controller has an online path count
that is the number of nodes that has access to that MDisk. The maximum count is the
maximum number of paths that is detected at any point by the system. The current count
is what the system sees at this point. A current value that is less than the maximum can
indicate that SAN fabric paths were lost. For more information, see 2.5.1, “Image mode
volumes” on page 29.
SSDs that are in SVC 2145-CG8 or Flash space, which are presented by the external
Flash Enclosures of the SVC 2145-DH8 nodes, are presented to the cluster as MDisks. To
determine whether the selected MDisk is an SSD/Flash, click the link on the MDisk name
to display the Viewing MDisk Details panel.
If the selected MDisk is an SSD/Flash that is on an SVC, the Viewing MDisk Details panel
displays values for the Node ID, Node Name, and Node Location attributes. Alternatively,
you can select Work with Managed Disks → Disk Controller Systems from the
portfolio. On the Viewing Disk Controller panel, you can match the MDisk to the disk
controller system that has the corresponding values for those attributes.
Three candidate quorum disks exist. However, only one quorum disk is active at any time. For
more information about quorum disks, see 2.11.1, “Quorum disks” on page 55.
Therefore, a storage tier attribute is assigned to each MDisk, with the default being
generic_hdd.
At any point, an MDisk can be a member in one storage pool only, except for image mode
volumes. For more information, see 2.5.1, “Image mode volumes” on page 29.
Figure 2-3 on page 23 shows the relationships of the SVC entities to each other.
(Figure 2-3 shows example storage pools, such as Storage_Pool_01, Storage_Pool_02, Pool_SSDN7, and Pool_SSDN8, which are built from MDisks such as MD1 - MD5 and SSD-based MDisks.)
Each MDisk in the storage pool is divided into a number of extents. The size of the extent is
selected by the administrator when the storage pool is created and cannot be changed later.
The size of the extent is 16 MiB - 8192 MiB.
It is a preferred practice to use the same extent size for all storage pools in a system. This
approach is a prerequisite for supporting volume migration between two storage pools. If the
storage pool extent sizes are not the same, you must use volume mirroring (2.5.4, “Mirrored
volumes” on page 32) to copy volumes between pools.
The SVC limits the number of extents in a system to 2^22 (about 4 million). Because the number of
addressable extents is limited, the total capacity of an SVC system depends on the extent
size that is chosen by the SVC administrator. The capacity numbers that are specified in
Table 2-2 on page 24 for an SVC system assume that all defined storage pools were created
with the same extent size.
For example, with an extent size of 128 MiB, the maximum volume capacity is 16,384 GiB (16 TiB), the maximum thin-provisioned volume capacity is 16,000 GiB, the maximum MDisk capacity is 16,384 GiB (16 TiB), and the total storage capacity that is manageable per system is 512 TiB. The total capacity values assume that all of the storage pools in the system use the same extent size.
For most systems, a capacity of 1 - 2 PiB is sufficient. A preferred practice is to use 256 MiB
for larger clustered systems. The default extent size is 1,024 MB.
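The following Python sketch shows the back-of-the-envelope arithmetic behind these capacity figures. It assumes the limit of approximately 2^22 addressable extents that is described above and simply multiplies it by a selection of extent sizes.

MAX_EXTENTS = 2 ** 22                          # about 4 million extents per system

def total_capacity_tib(extent_size_mib: int) -> float:
    """Total manageable capacity in TiB for a given extent size in MiB."""
    return MAX_EXTENTS * extent_size_mib / (1024 * 1024)

for size in (128, 256, 1024, 8192):
    print(f"{size:>5} MiB extents -> {total_capacity_tib(size):>8.0f} TiB")
# 128 MiB  ->   512 TiB
# 256 MiB  ->  1024 TiB (1 PiB)
# 1024 MiB ->  4096 TiB (4 PiB, the default extent size)
# 8192 MiB -> 32768 TiB (32 PiB)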
For more information, see IBM System Storage SAN Volume Controller and Storwize V7000
Best Practices and Performance Guidelines, SG24-7521, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Multitiered storage pools are used to enable the automatic migration of extents between disk
tiers by using the IBM SVC Easy Tier function.
2.4.9 Volumes
Volumes are logical disks that are presented to the host or application servers by the SVC.
The hosts cannot see the MDisks; they can see only the logical volumes that are created from
combining extents from a storage pool.
Striped mode is the best method to use for most cases. However, sequential extent allocation
mode can slightly increase the sequential performance for certain workloads.
Figure 2-4 shows the striped volume mode and sequential volume mode. How the extent
allocation from the storage pool differs also is shown.
You can allocate the extents for a volume in many ways. The process is under full user control
when a volume is created and the allocation can be changed at any time by migrating single
extents of a volume to another MDisk within the storage pool.
have a three-tier implementation. We support three kinds of tier attributes. Easy Tier monitors
the host I/O activity and latency on the extents of all volumes with the Easy Tier function that
is turned on in a multitier storage pool over a 24-hour period.
Next, it creates an extent migration plan that is based on this activity and then dynamically
moves high-activity or hot extents to a higher disk tier within the storage pool. It also moves
extents whose activity dropped off or cooled down from the high-tier MDisks back to a
lower-tiered MDisk.
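The following Python sketch is a deliberately simplified model of this idea and is not the actual Easy Tier algorithm. The extent identifiers, I/O counts, and tier capacity are illustrative assumptions; the sketch only shows how ranking extents by recent activity yields a promotion and demotion plan.

# extent id -> I/O count observed over the last monitoring period (illustrative values)
heat = {"e0": 15000, "e1": 120, "e2": 9800, "e3": 40, "e4": 22000}
on_fast_tier = ["e1", "e3"]                    # extents currently on the SSD/flash tier
FAST_TIER_SLOTS = 2                            # assumed capacity of the faster tier

hottest = sorted(heat, key=heat.get, reverse=True)[:FAST_TIER_SLOTS]
promote = [e for e in hottest if e not in on_fast_tier]
demote = [e for e in on_fast_tier if e not in hottest]

print("promote to the faster tier:", promote)  # ['e4', 'e0']
print("demote to the slower tier:", demote)    # ['e1', 'e3']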
Easy Tier: The Easy Tier function can be turned on or off at the storage pool level and
volume level.
The Easy Tier function can make it more appropriate to use smaller storage pool extent sizes.
The usage statistics file can be offloaded from the SVC nodes. Then, you can use the IBM Storage Tier Advisor Tool (STAT) to create a summary report. STAT is available at no charge on the web at the following link:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000935
Easy Tier creates a migration report every 24 hours on the number of extents that can be
moved if the pool were a multitiered storage pool. Therefore, although Easy Tier extent
migration is not possible within a single-tier pool, the Easy Tier statistical measurement
function is available.
The usage statistics file can be offloaded from the SVC configuration node by using the GUI
(click Settings → Support). Figure 2-5 on page 27 shows you an example of what the file
looks like.
If you cannot find the heat map file on one node, check the other node. Select the other node to locate the heat map file (Figure 2-6).
Then, you can use the IBM Storage Tier Advisor Tool (STAT) to create the statistics report. A web browser is used to view the STAT output. For more information about the STAT utility, see the following web page:
https://ibm.biz/BdEzve
For more information about Easy Tier functionality and generating statistics by using IBM
STAT, see Chapter 8, “Advanced features for storage efficiency” on page 425.
2.4.12 Hosts
Volumes can be mapped to a host to allow access for a specific server to a set of volumes. A
host within the SVC is a collection of host bus adapter (HBA) worldwide port names
(WWPNs) or iSCSI-qualified names (IQNs) that are defined on the specific server.
Note: iSCSI names are internally identified by “fake” WWPNs, or WWPNs that are
generated by the SVC. Volumes can be mapped to multiple hosts, for example, a volume
that is accessed by multiple hosts of a server system.
iSCSI is an alternative means of attaching hosts. However, all communication with back-end
storage subsystems (and with other SVC systems) is still through FC.
Node failover can be handled without having a multipath driver that is installed on the iSCSI
server. An iSCSI-attached server can reconnect after a node failover to the original target
IP address, which is now presented by the partner node. To protect the server against link
failures in the network or HBA failures, the use of a multipath driver is mandatory.
Volumes are LUN-masked to the host’s HBA WWPNs by a process called host mapping.
Mapping a volume to the host makes it accessible to the WWPNs or iSCSI names (IQNs) that
are configured on the host object.
For an iSCSI (SCSI over Ethernet) connection, the IQN identifies the iSCSI target (destination) adapter. Host objects can have IQNs and WWPNs.
Certain configuration limits exist in the SVC, including the following list of important limits. For
the most current information, see the SVC support site.
Sixteen worldwide node names (WWNNs) per storage subsystem
A maximum MDisk size of 1 PiB
A maximum extent size of 8,192 MiB
Long object names can be up to 63 characters
Volume extents can be migrated at run time to another MDisk or storage pool.
Volumes can be created as fully allocated or thin-provisioned. A conversion from a fully
allocated to a thin-provisioned volume and vice versa can be done at run time.
Volumes can be stored in multiple storage pools (mirrored) to make them resistant to disk
subsystem failures or to improve the read performance.
Volumes can be mirrored synchronously or asynchronously for longer distances. An SVC
system can run active volume mirrors to a maximum of three other SVC systems, but not
from the same volume.
Volumes can be copied by using FlashCopy. Multiple snapshots and quick restore from
snapshots (reverse FlashCopy) are supported.
Volumes can be compressed.
Volumes can be virtual. The system supports VMware vSphere Virtual Volumes,
sometimes referred to as VVols, which allow VMware vCenter to manage system objects
like volumes and pools. The system administrator can create these objects and assign
ownership to VMware administrators to simplify management of these objects.
Volumes have two major modes: managed mode and image mode. Managed mode volumes
have two policies: the sequential policy and the striped policy. Policies define how the extents
of a volume are allocated from a storage pool.
Image mode provides a one-to-one mapping between the logical block addresses (LBAs)
between a volume and an MDisk. Image mode volumes have a minimum size of one block
(512 bytes) and always occupy at least one extent.
An image mode MDisk is mapped to one, and only one, image mode volume.
The volume capacity that is specified must be equal to the size of the image mode MDisk.
When you create an image mode volume, the specified MDisk must be in unmanaged mode
and must not be a member of a storage pool. The MDisk is made a member of the specified
storage pool (Storage Pool_IMG_xxx) as a result of creating the image mode volume.
The SVC also supports the reverse process in which a managed mode volume can be
migrated to an image mode volume. If a volume is migrated to another MDisk, it is
represented as being in managed mode during the migration and is only represented as an
image mode volume after it reaches the state where it is a straight-through mapping.
An image mode MDisk is associated with exactly one volume. The last extent is partial (not
filled) if the size of the (image mode) MDisk is not an exact multiple of the storage pool’s extent size. An image
mode volume is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and
it does not have any SVC metadata extents that are assigned to it. Managed or image mode
MDisks are always members of a storage pool.
It is a preferred practice to put image mode MDisks in a dedicated storage pool and use a
special name for it (for example, Storage Pool_IMG_xxx). The extent size that is chosen for
this specific storage pool must be the same as the extent size into which you plan to migrate
the data. All of the SVC copy services functions can be applied to image mode disks. See
Figure 2-7 on page 30.
Figure 2-8 on page 31 shows this mapping. It also shows a volume that consists of several
extents that are shown as V0 - V7. Each of these extents is mapped to an extent on one of the
MDisks: A, B, or C. The mapping table stores the details of this indirection. Several of the
MDisk extents are unused. No volume extent maps to them. These unused extents are
available for use in creating volumes, migration, expansion, and so on.
The allocation of a specific number of extents from a specific set of MDisks is performed by
the following algorithm: if the set of MDisks from which to allocate extents contains more than
one MDisk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no
free extents when its turn arrives, its turn is missed and the round-robin moves to the next
MDisk in the set that has a free extent.
When a volume is created, the first MDisk from which to allocate an extent is chosen in a
pseudo-random way rather than by choosing the next disk in a round-robin fashion. The
pseudo-random algorithm avoids the situation where the “striping effect” that is inherent in a
round-robin algorithm places the first extent for many volumes on the same MDisk. Placing
the first extent of a number of volumes on the same MDisk can lead to poor performance for
workloads that place a large I/O load on the first extent of each volume, or that create multiple
sequential streams.
Having cache-disabled volumes makes it possible to use the native copy services in the
underlying RAID array controller for MDisks (LUNs) that are used as SVC image mode
volumes. Using SVC Copy Services instead of the underlying disk controller copy services
gives better results.
The two copies of the volume often are allocated from separate storage pools or by using
image-mode copies. The volume can participate in FlashCopy and remote copy relationships;
it is serviced by an I/O Group; and it has a preferred node.
Each copy is not a separate object and cannot be created or manipulated except in the
context of the volume. Copies are identified through the configuration interface with a copy ID
of their parent volume. This copy ID can be 0 or 1.
This feature provides a point-in-time copy functionality that is achieved by “splitting” a copy
from the volume. However, the mirrored volume feature does not address other forms of
mirroring that are based on remote copy, such as IBM HyperSwap, which mirrors volumes
across I/O Groups or clustered systems. It is also not intended to manage mirroring or remote
copy functions in back-end controllers.
A second copy can be added to a volume with a single copy or removed from a volume with
two copies. Checks prevent the accidental removal of the only remaining copy of a volume. A
newly created, unformatted volume with two copies initially has the two copies in an
out-of-synchronization state. The primary copy is defined as “fresh” and the secondary copy
is defined as “stale”.
The synchronization process updates the secondary copy until it is fully synchronized. This
update is done at the default “synchronization rate” or at a rate that is defined when the
volume is created or modified. The synchronization status for mirrored volumes is recorded
on the quorum disk.
If a two-copy mirrored volume is created with the format parameter, both copies are formatted
in parallel and the volume comes online when both operations are complete with the copies in
sync.
If mirrored volumes are expanded or shrunk, all of their copies are also expanded or shrunk.
If it is known that MDisk space (which is used for creating copies) is already formatted or if the
user does not require read stability, a “no synchronization” option can be selected that
declares the copies as “synchronized” (even when they are not).
To minimize the time that is required to resynchronize a copy that is out of sync, only the
256 KiB grains that were written to since the synchronization was lost are copied. This
approach is known as an incremental synchronization. Only the changed grains must be
copied to restore synchronization.
Important: An unmirrored volume can be migrated from one location to another by adding
a second copy to the wanted destination, waiting for the two copies to synchronize, and
then removing the original copy 0. This operation can be stopped at any time. The two
copies can be in separate storage pools with separate extent sizes.
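The following CLI sequence sketches this migration approach; the pool and volume names are examples only:
addvdiskcopy -mdiskgrp POOL_NEW VOL_APP01
lsvdisksyncprogress VOL_APP01
rmvdiskcopy -copy 0 VOL_APP01
Run rmvdiskcopy only after lsvdisksyncprogress reports that the new copy is fully synchronized.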
Where there are two copies of a volume, one copy is known as the primary copy. If the
primary is available and synchronized, reads from the volume are directed to it. The user can
select the primary when the volume is created or can change it later.
Placing the primary copy on a high-performance controller maximizes the read performance
of the volume.
Figure 2-10 Data flow for write I/O processing in a mirrored volume in the SVC
As shown in Figure 2-10, all the writes are sent by the host to the preferred node for each
volume (1); then, the data is mirrored to the cache of the partner node in the I/O Group (2),
and acknowledgment of the write operation is sent to the host (3). The preferred node then
destages the written data to the two volume copies (4).
With version 7.3, the cache architecture changed from an upper-cache design to a two-layer
cache design. With this change, the data is only written once and is then directly destaged
from the controller to the locally attached disk system. Figure 2-11 shows the data flow in a
stretched environment.
Figure 2-11 Data flow in a stretched environment (write data with location is mirrored between the upper cache (UCA) of the preferred node at Site 1 and the non-preferred node at Site 2)
For more information about the change, see Chapter 6 of IBM System Storage SAN Volume
Controller and Storwize V7000 Best Practices and Performance Guidelines, SG24-7521.
A volume with copies can be checked to see whether all of the copies are identical or
consistent. If a medium error is encountered while it is reading from one copy, it is repaired by
using data from the other copy. This consistency check is performed asynchronously with
host I/O.
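For example, the copies of a mirrored volume can be checked from the CLI with the repairvdiskcopy command; the volume name is an example only, and only one option can be specified at a time:
repairvdiskcopy -validate VOL_APP01
lsrepairvdiskcopyprogress VOL_APP01
The -validate option only reports differences, whereas the -resync option corrects them by using the synchronized copy.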
Important: Mirrored volumes can be taken offline if there is no quorum disk available. This
behavior occurs because the synchronization status for mirrored volumes is recorded on
the quorum disk.
Mirrored volumes use bitmap space at a rate of 1 bit per 256 KiB grain, which translates to
1 MiB of bitmap space supporting 2 TiB of mirrored volumes. The default allocation of bitmap
space is 20 MiB, which supports 40 TiB of mirrored volumes. If all 512 MiB of variable bitmap
space is allocated to mirrored volumes, 1 PiB of mirrored volumes can be supported.
The sum of all bitmap memory allocation for all functions except FlashCopy must not exceed
552 MiB.
Therefore, the real capacity determines the quantity of MDisk extents that is initially allocated
to the volume. The virtual capacity is the capacity of the volume that is reported to all other
SVC components (for example, FlashCopy, cache, and remote copy) and to the host servers.
The real capacity is used to store the user data and the metadata for the thin-provisioned
volume. The real capacity can be specified as an absolute value or a percentage of the virtual
capacity.
Thin-provisioned volumes can be used as volumes that are assigned to the host, by
FlashCopy to implement thin-provisioned FlashCopy targets, and with the mirrored volumes
feature.
When a thin-provisioned volume is initially created, a small amount of the real capacity is
used for initial metadata. Write I/Os to grains of the thin volume that were not previously
written cause grains of the real capacity to be used to store metadata and the actual
user data. Write I/Os to grains that were previously written update the grain where the data
was previously written.
The grain size is defined when the volume is created. The grain size can be 32 KiB, 64 KiB,
128 KiB, or 256 KiB. The default grain size is 256 KiB, which is the recommended option. If
you select 32 KiB for the grain size, the volume size cannot exceed 260 TiB. The grain size
cannot be changed after the thin-provisioned volume is created. Generally, smaller grain sizes
save space, but they require more metadata access, which can adversely affect performance.
If you do not use the thin-provisioned volume as a FlashCopy source or target volume, use
256 KiB to maximize performance. If you use the thin-provisioned volume as a FlashCopy
source or target volume, specify the same grain size for the volume and for the FlashCopy
function.
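For example, a thin-provisioned volume with the default 256 KiB grain size might be created from the CLI as follows; the pool name, volume name, and capacities are examples only:
mkvdisk -mdiskgrp POOL_SAS01 -iogrp 0 -size 500 -unit gb -rsize 2% -autoexpand -grainsize 256 -name THIN_VOL01
In this sketch, -rsize sets the initial real capacity as a percentage of the virtual capacity, and -autoexpand allows the system to grow the real capacity automatically.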
Thin-provisioned volumes store user data and metadata. Each grain of data requires
metadata to be stored. Therefore, the I/O rates that are obtained from thin-provisioned
volumes are less than the I/O rates that are obtained from fully allocated volumes.
The metadata storage overhead is never greater than 0.1% of the user data. The overhead is
independent of the virtual capacity of the volume. If you are using thin-provisioned volumes in
a FlashCopy map, use the same grain size as the map grain size for the best performance. If
you are using the thin-provisioned volume directly with a host system, use a small grain size.
The real capacity of a thin volume can be changed if the volume is not in image mode.
Increasing the real capacity allows a larger amount of data and metadata to be stored on the
volume. Thin-provisioned volumes use the real capacity that is provided in ascending order as
new data is written to the volume. If the user initially assigns too much real capacity to the
volume, the real capacity can be reduced to free storage for other uses.
A thin-provisioned volume can be configured to autoexpand. This feature causes the SVC to
automatically add a fixed amount of more real capacity to the thin volume as required.
Therefore, autoexpand attempts to maintain a fixed amount of unused real capacity for the
volume, which is known as the contingency capacity.
The contingency capacity is initially set to the real capacity that is assigned when the volume
is created. If the user modifies the real capacity, the contingency capacity is reset to be the
difference between the used capacity and real capacity.
A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is fully used and the volume must expand.
Autoexpand does not cause the real capacity to grow much beyond the virtual capacity. The
real capacity can be manually expanded to more than the maximum that is required by the
current virtual capacity, and the contingency capacity is recalculated.
To support the auto expansion of thin-provisioned volumes, the storage pools from which they
are allocated have a configurable capacity warning. When the used capacity of the pool
exceeds the warning capacity, a warning event is logged. For example, if a warning of 80% is
specified, the event is logged when 20% of the free capacity remains.
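For example, the capacity warning threshold of a storage pool can be set from the CLI; the pool name is an example only:
chmdiskgrp -warning 80% POOL_SAS01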
The SVC I/O governing feature can be used to satisfy a quality of service (QoS) requirement or a
contractual obligation (for example, if a client agrees to pay for I/Os performed, but does not
pay for I/Os beyond a certain rate). Only Read, Write, and Verify commands that access the
physical medium are subject to I/O governing.
The governing rate can be set in I/Os per second or MB per second. It can be altered by
changing the throttle value by running the chvdisk command and specifying the -rate
parameter.
I/O governing: I/O governing on Metro Mirror or Global Mirror secondary volumes does
not affect the data copy rate from the primary volume. Governing has no effect on
FlashCopy or data migration I/O rates.
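The following commands sketch how the throttle can be set with the chvdisk command that is named above; the volume name is an example only, and the -unitmb flag (which expresses the rate in MBps instead of I/Os per second) is an assumption about the command syntax:
chvdisk -rate 2000 VOL_APP01
chvdisk -rate 200 -unitmb VOL_APP01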
An I/O budget is expressed as a number of I/Os (or MBs) over a minute. The budget is evenly
divided among all SVC nodes that service that volume, which means among the nodes that
form the I/O Group of which that volume is a member.
The algorithm operates two levels of policing. While a volume on each SVC node receives I/O
at a rate lower than the governed level, no governing is performed. However, when the I/O
rate exceeds the defined threshold, the policy is adjusted. A check is made every minute to
see that each node is receiving I/O below the threshold level. Whenever this check shows that
the host exceeded its limit on one or more nodes, policing begins for new I/Os.
This algorithm might cause I/O to backlog in the front end, which might eventually cause a
Queue Full Condition to be reported to hosts that continue to flood the system with I/O. If a
host stays within its 1-second budget on all nodes in the I/O Group for 1 minute, the policing is
relaxed and monitoring takes place over the 1-minute period as before.
2.6 HyperSwap
The IBM HyperSwap function is a high availability feature that provides dual-site, active-active
access to a volume. Active-active volumes have a copy in one site and a copy at another site.
Data that is written to the volume is automatically sent to both copies so that either site can
provide access to the volume if the other site becomes unavailable. Active-active
relationships are made between the copies at each site. These relationships automatically
run and switch direction according to which copy or copies are online and up to date.
Relationships can be grouped into consistency groups just like Metro Mirror and Global Mirror
relationships. The consistency groups fail over consistently as a group based on the state of
all copies in the group. An image that can be used for disaster recovery is
maintained at each site.
When the system topology is set to hyperswap, each node, controller,
and host map object in the system configuration must have a site attribute set to 1 or 2. Both
nodes of an I/O group must be at the same site. This site must be the same site as the site of
the controllers that provide the managed disks to that I/O group. When managed disks are
added to storage pools, their site attributes must match. This requirement ensures that each
copy in the active-active relationship is fully independent and is located at a distinct site.
When you plan your network, consideration must be given to the type of RAID configuration
that you use. SAN Volume Controller supports either a non-distributed array or a distributed
array configuration. We discuss DRAID in greater depth in Implementing the IBM Storwize
V7000 and IBM Spectrum Virtualize V7.6, SG24-7938.
The concept of distributed RAID is to distribute an array with width W across a set of X drives.
For example, you might have a 2+P RAID-5 array that is distributed across a set of 40 drives.
The array type and width define the level of redundancy. In the previous example, there is a
33% capacity overhead for parity. If an array stride needs to be rebuilt, two component strips
must be read to rebuild the data for the third component. The set size defines how many
drives are used by the distributed array, and performance and usable capacity scale
according to the number of drives in the set. The other key feature of a distributed array is
that, instead of having a dedicated hot-spare drive, the set includes spare strips that are
also distributed across the set of drives. The data and spares are distributed such that, if one
drive in the set fails, redundancy can be restored by rebuilding data onto the spare strips at a
rate much greater than the rate of a single component.
Distributed arrays are used to create large-scale internal managed disks. They can manage 4
- 128 drives and contain their own rebuild areas to accomplish error recovery when drives fail.
As a result, rebuild times are dramatically reduced, which lowers the exposure volumes have
to the extra load of recovering redundancy. Because the capacity of these managed disks is
potentially so great, when they are configured in the system the overall limits change in order
to allow them to be virtualized. For every distributed array, the space for 16 MDisk extent
allocations is reserved and therefore 15 other MDisk identities are removed from the overall
pool of 4096. Distributed arrays also aim to provide a uniform performance level. To achieve
this performance, a distributed array can contain multiple drive classes if the drives are similar
(for example, the drives have the same attributes but larger capacities). All the drives
in a distributed array must come from the same I/O group to maintain a simple configuration
model.
Distributed RAID 5
Distributed RAID 5 arrays stripe data over the member drives with one parity strip on every
stripe. These distributed arrays can support 4 - 128 drives. RAID 5 distributed arrays can
tolerate the failure of one member drive.
Distributed RAID 6
Distributed RAID 6 arrays stripe data over the member drives with two parity strips on every
stripe. These distributed arrays can support 6 - 128 drives. A RAID 6 distributed array can
tolerate any two concurrent member drive failures.
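As an illustration, a distributed RAID 6 array might be created from the CLI with a command similar to the following; the drive class ID, drive count, stripe width, number of rebuild areas, and pool name are examples only:
mkdistributedarray -level raid6 -driveclass 0 -drivecount 48 -stripewidth 12 -rebuildareas 2 POOL_DRAID01
The lsdriveclass command can be used to identify the available drive classes before the array is created.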
Figure 2-14 on page 42 shows a distributed array that contains a failed drive. To recover data,
the data is read from multiple drives. The recovered data is then written to the rebuild areas,
which are distributed across all of the drives in the array. The remaining rebuild areas are
distributed across all drives.
2.8 Encryption
SAN Volume Controller 2145-DH8 system provides optional encryption of data at rest, which
protects against the potential exposure of sensitive user data and user metadata that is
stored on discarded, lost, or stolen storage devices. Encryption of system data and system
metadata is not required, so system data and metadata are not encrypted.
Encryption-enabled describes a system where a secret key is configured and used. The key
must be used to unlock the encrypted data enabling access control.
Access-control-enabled describes a system where the secret key must also be provided to an entity, for example, an encrypted flash module, to unlock and operate that entity. The system permits access control
enablement only when it is encryption-enabled. A system that is encryption-enabled can
optionally also be access-control-enabled to provide functional security.
The Protection Enablement Process (PEP) transitions the system from a state that is not
protection-enabled to a state that is protection-enabled. The PEP requires that the customer
provide a secret key to access the system. This secret key must be resiliently stored and
backed up externally to the system; for example, it may be stored on USB flash drives. PEP is
not merely activating a feature using the management GUI or CLI. To avoid loss of data that
was written to the system before the PEP occurs, the customer must move all of the data to
be retained off of the system before the PEP is initiated. After PEP has completed, the
customer must move the data back onto the system. The PEP is performed during the system
initialization process, if encryption is activated. The system does not support Application
Managed Encryption (AME).
To encrypt data that is stored on drives, the nodes capable of encryption must be licensed
and configured to use encryption. When encryption is activated and enabled on the system,
valid encryption keys must be present on the system when the system unlocks the drives or
the user generates a new key. The encryption key must be stored on USB flash drives that
contain a copy of the key that was generated when encryption was enabled. Without these
keys, user data on the drives cannot be accessed.
Before activating and enabling encryption, you must determine the method of accessing key
information during times when the system requires an encryption key to be present. The
system requires an encryption key to be present during the following operations:
System power-on
System reboot
User-initiated rekey operations
USB flash drives are never inserted into the system except as required
For the most secure operation, do not keep the USB flash drives inserted into the nodes on
the system. However, this method requires that you manually insert the USB flash drives that
contain copies of the encryption key into the nodes during operations in which the system
requires an encryption key to be present, so that the data can be accessed. After the system
completes unlocking the drives, the USB flash drives must be removed and stored securely to
prevent theft or loss.
The iSCSI function is a software function that is provided by the SVC code, not hardware.
In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP
network, which is based on IP routers and Ethernet switches. iSCSI is a block-level protocol
that encapsulates SCSI commands into TCP/IP packets; therefore, it uses an existing IP
network instead of requiring expensive FC HBAs and a SAN fabric infrastructure.
A pure SCSI architecture is based on the client/server model. A client (for example, server or
workstation) starts read or write requests for data from a target server (for example, a data
storage system). Commands, which are sent by the client and processed by the server, are
put into the Command Descriptor Block (CDB). The server runs a command and completion
is indicated by a special signal alert.
The major functions of iSCSI include encapsulation and the reliable delivery of CDB
transactions between initiators and targets through the TCP/IP network, especially over a
potentially unreliable IP network.
The following concepts of names and addresses are carefully separated in iSCSI:
An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An
iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms
initiator name and target name also refer to an iSCSI name.
An iSCSI address specifies not only the iSCSI name of an iSCSI node, but a location of
that node. The address consists of a host name or IP address, a TCP port number (for the
target), and the iSCSI name of the node. An iSCSI node can have any number of
addresses, which can change at any time, particularly if they are assigned by way of
Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node
and provides statically allocated IP addresses.
Each iSCSI node, that is, an initiator or target, has a unique IQN, which can be up to
255 bytes. The IQN is formed according to the rules that were adopted for Internet nodes.
The iSCSI qualified name format is defined in RFC3720 and contains (in order) the following
elements:
The string iqn
A date code that specifies the year and month in which the organization registered the
domain or subdomain name that is used as the naming authority string
The organizational naming authority string, which consists of a valid, reversed domain or a
subdomain name
Optional: A colon (:), followed by a string of the assigning organization’s choosing, which
must make each assigned iSCSI name unique
For the SVC, the IQN for its iSCSI target is specified as shown in the following example:
iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
On a Microsoft Windows server, the IQN (that is, the name for the iSCSI initiator), can be
defined as shown in the following example:
iqn.1991-05.com.microsoft:<computer name>
The IQNs can be abbreviated by using a descriptive name, which is known as an alias. An
alias can be assigned to an initiator or a target. The alias is independent of the name and
does not have to be unique. Because it is not unique, the alias must be used in a purely
informational way. It cannot be used to specify a target at login or during authentication.
Targets and initiators can have aliases.
An iSCSI name provides the correct identification of an iSCSI device irrespective of its
physical location. The IQN is an identifier, not an address.
Caution: Before you change system or node names for an SVC system that has servers
that are connected to it by way of iSCSI, be aware that because the system and node
name are part of the SVC’s IQN, you can lose access to your data by changing these
names. The SVC GUI displays a warning, but the CLI does not display a warning.
The iSCSI session, which consists of a login phase and a full feature phase, is completed with
a special command.
The login phase of the iSCSI is identical to the FC port login process (PLOGI). It is used to
adjust various parameters between two network entities and to confirm the access rights of
an initiator.
If the iSCSI login phase is completed successfully, the target confirms the login for the
initiator; otherwise, the login is not confirmed and the TCP connection breaks.
When the login is confirmed, the iSCSI session enters the full feature phase. If more than one
TCP connection was established, iSCSI requires that each command and response pair must
go through one TCP connection. Therefore, each separate read or write command is carried
out without the necessity to trace each request for passing separate flows. However, separate
transactions can be delivered through separate TCP connections within one session.
Figure 2-15 shows an overview of the various block-level storage protocols and the position of
the iSCSI layer.
SVC nodes have up to six standard Ethernet ports. These ports provide 1 Gbps support or,
with the optional Ethernet card, 10 Gbps support. System management is possible only over
the 1 Gbps ports.
Figure 2-16 shows an overview of the IP addresses on an SVC node port and how these IP
addresses are moved between the nodes of an I/O Group.
The management IP addresses and the iSCSI target IP addresses fail over to the partner
node N2 if node N1 fails (and vice versa). The iSCSI target IPs fail back to their corresponding
ports on node N1 when node N1 is running again.
It is a preferred practice to keep all of the eth0 ports on all of the nodes in the system on the
same subnet. The same practice applies for the eth1 ports; however, it can be a separate
subnet to the eth0 ports.
You can configure a maximum of 512 iSCSI hosts per I/O Group per SVC because of IQN
limits, and a maximum of 2048 host objects for the complete system.
A CHAP secret can be assigned to each SVC host object. The host must then use CHAP
authentication to begin a communications session with a node in the system. A CHAP secret
can also be assigned to the system.
Volumes are mapped to hosts, and LUN masking is applied by using the same methods that
are used for FC LUNs.
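For example, an iSCSI host object with a CHAP secret can be defined and mapped from the CLI as follows; the host name, IQN, secret, and volume name are examples only:
mkhost -name WIN_ISCSI01 -iscsiname iqn.1991-05.com.microsoft:win-host01
chhost -chapsecret mysecret01 WIN_ISCSI01
mkvdiskhostmap -host WIN_ISCSI01 VOL_APP02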
Because iSCSI can be used in networks where data security is a concern, the specification
allows for separate security methods. For example, you can set up security through a method,
such as IPSec, which is not apparent for higher levels, such as iSCSI, because it is
implemented at the IP level. For more information about securing iSCSI, see Securing Block
Storage Protocols over IP, RFC3723, which is available at this website:
http://tools.ietf.org/html/rfc3723
If FC-attached hosts see their FC target and volumes go offline, for example, because of a
problem in the target node, its ports, or the network, the host must use a separate SAN path
to continue I/O. Therefore, a multipathing driver is always required on the host.
iSCSI-attached hosts see a pause in I/O when a (target) node is reset, but (this action is the
key difference) the host is reconnected to the same IP target that reappears after a short
period and its volumes continue to be available for I/O. iSCSI allows failover without host
multipathing. To achieve this failover without host multipathing, the partner node in the I/O
Group takes over the port IP addresses and iSCSI names of a failed node.
A host multipathing driver for iSCSI is required if you want the following capabilities:
Protecting a server from network link failures
Protecting a server from network failures if the server is connected through two separate
networks
Providing load balancing on the server’s network links
Copy services functions are implemented within a single SVC system (FlashCopy and image mode
migration) or between SVC systems, or between SVC and Storwize systems (Metro Mirror and Global
Mirror). To use the Metro Mirror and Global Mirror functions, you must have the remote copy
license installed on each side.
You can create partnerships with the SVC and Storwize systems to allow Metro Mirror and
Global Mirror to operate between the two systems. To create these partnerships, both
clustered systems must be at version 6.3.0 or later.
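As a sketch of that configuration, the following CLI sequence creates an FC partnership and a Global Mirror relationship; the remote system name, bandwidth values, and volume names are examples only, and the partnership parameters that are shown are assumptions about the command syntax:
mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 SVC_DR
mkrcrelationship -master VOL_APP01 -aux VOL_APP01_DR -cluster SVC_DR -global -name GM_REL01
startrcrelationship GM_REL01
Omitting the -global parameter creates a Metro Mirror (synchronous) relationship instead.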
A clustered system is in one of two layers: the replication layer or the storage layer. The SVC
system is always in the replication layer. The Storwize system is in the storage layer by
default, but the system can be configured to be in the replication layer instead.
Figure 2-17 shows an example of the layers in an SVC and Storwize clustered-system
partnership.
Within the SVC, both intracluster copy services functions (FlashCopy and image mode
migration) operate at the block level. Intercluster functions (Global Mirror and Metro Mirror)
operate at the volume layer. A volume is the container that is used to present storage to host
systems. Operating at this layer allows the Advanced Copy Services functions to benefit from
caching at the volume layer and helps facilitate the asynchronous functions of Global Mirror
and lessen the effect of synchronous Metro Mirror.
Operating at the volume layer also allows Advanced Copy Services functions to operate
above and independently of the function or characteristics of the underlying disk subsystems
that are used to provide storage resources to an SVC system. Therefore, if the physical
storage is virtualized with an SVC or Storwize and the backing array is supported by the SVC
or Storwize, you can use disparate backing storage.
FlashCopy: Although FlashCopy operates at the block level, this level is the block level of
the SVC, so the physical backing storage can be anything that the SVC supports. However,
performance is limited to the slowest performing storage that is involved in FlashCopy.
Metro Mirror is the IBM branded term for synchronous remote copy function. Global Mirror
is the IBM branded term for the asynchronous remote copy function.
Synchronous remote copy ensures that updates are physically committed (not in volume
cache) in both the primary and the secondary SVC clustered systems before the application
considers the updates complete. Therefore, the secondary SVC clustered system is fully
up-to-date if it is needed in a failover.
However, the application is fully exposed to the latency and bandwidth limitations of the
communication link to the secondary system. In a truly remote situation, this extra latency can
have a significantly adverse effect on application performance; therefore, a limitation of 300
kilometers (~186 miles) exists on the distance of Metro Mirror. This distance induces latency
of approximately 5 microseconds per kilometer, which does not include the latency that is
added by the equipment in the path.
The nature of synchronous remote copy is that latency for the distance and the equipment in
the path is added directly to your application I/O response times. The overall latency for a
complete round trip must not exceed 80 milliseconds. With version 7.4, this maximum supported
round-trip time was increased to 250 ms for certain configurations. Figure 2-18 shows a list of
the supported round-trip times.
Special configuration guidelines for SAN fabrics are used for data replication. The distance
and available bandwidth of the intersite links must be considered. For more information about
these guidelines, see the SVC Support Portal, which is available at this website:
https://ibm.biz/BdEzB5
In asynchronous remote copy, the application is provided acknowledgment that the write is
complete before the write is committed (written to backing storage) at the secondary site.
Therefore, on a failover, certain updates (data) might be missing at the secondary site.
The application must have an external mechanism for recovering the missing updates or
recovering to a consistent point (which is usually a few minutes in the past). This mechanism
can involve user intervention, but in most practical scenarios, it must be at least partially
automated.
Recovery on the secondary site involves assigning the Global Mirror targets from the SVC
target system to one or more hosts (depending on your disaster recovery design), making
those volumes visible on the hosts, and creating any required multipath device definitions.
The application must then be started and a recovery procedure to either a consistent point in
time or recovery of the missing updates must be performed. For this reason, the initial state of
Global Mirror targets is called crash consistent. This term might sound daunting, but it merely
means that the data on the volumes appears to be in the same state as though an application
crash occurred.
In asynchronous remote copy with cycling mode (Change Volumes), changes are tracked and
copied to intermediate Change Volumes where needed. Changes are transmitted to the
secondary site periodically. The secondary volumes are much further behind the primary
volume, and more data must be recovered if there is a failover. Because the data transfer can
be smoothed over a longer time period, however, lower bandwidth is required to provide an
effective solution.
Because most applications, such as databases, have mechanisms for dealing with this type of
data state for a long time, it is a fairly mundane operation (depending on the application). After
this application recovery procedure is finished, the application starts normally.
RPO: When you are planning your Recovery Point Objective (RPO), you must account for
application recovery procedures, the length of time that they take, and the point to which
the recovery procedures can roll back data.
Although Global Mirror on an SVC can provide typically subsecond RPO times, the
effective RPO time can be up to 5 minutes or longer, depending on the application
behavior.
Most clients aim to automate the failover or recovery of the remote copy through failover
management software. The SVC provides Simple Network Management Protocol (SNMP)
traps and interfaces to enable this automation. IBM Support for automation is provided by IBM
Spectrum Control™.
The documentation is available at the IBM Spectrum Control Knowledge Center at this
website:
http://www.ibm.com/support/knowledgecenter/SS5R93/welcome
2.10.2 FlashCopy
FlashCopy is the IBM branded name for the point-in-time copy function, which is sometimes called
Time-Zero (T0) copy. This function makes a copy of the blocks on a source volume and can duplicate
them on 1 - 256 target volumes.
FlashCopy: When the multiple target capability of FlashCopy is used, if any other copy (C)
is started while an existing copy is in progress (B), C has a dependency on B. Therefore, if
you end B, C becomes invalid.
FlashCopy works by creating one or two (for incremental operations) bitmaps to track
changes to the data on the source volume. This bitmap is also used to present an image of
the source data at the point that the copy was taken to target hosts while the actual data is
being copied. This capability ensures that copies appear to be instantaneous.
Bitmap: In this context, bitmap refers to a special programming data structure that is used
to compactly store Boolean values. Do not confuse this definition with the popular image
file format.
If your FlashCopy targets have existing content, the content is overwritten during the copy
operation. Also, the “no copy” (copy rate 0) option, in which only changed data is copied,
overwrites existing content. After the copy operation starts, the target volume appears to have
the contents of the source volume as it existed at the point that the copy was started.
Although the physical copy of the data takes an amount of time that varies based on system
activity and configuration, the resulting data at the target appears as though the copy was
made instantaneously.
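For example, a FlashCopy mapping can be created and started from the CLI as follows; the volume and mapping names are examples only, and a copy rate of 0 corresponds to the “no copy” option that is described above:
mkfcmap -source VOL_APP01 -target VOL_APP01_FC -copyrate 50 -name FCMAP01
startfcmap -prep FCMAP01
The -prep flag prepares the mapping (flushes the cache for the source volume) before the copy is started.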
The SVC also permits source and target volumes for FlashCopy to be thin-provisioned
volumes. FlashCopies to or from thinly provisioned volumes allow the duplication of data
while using less space. These types of volumes depend on the rate of change of the data.
Typically, these types of volumes are used in situations where time is limited. Over time, they
might fill the physical space that they were allocated. Reverse FlashCopy enables target
volumes to become restore points for the source volume without breaking the FlashCopy
relationship and without having to wait for the original copy operation to complete. The SVC
supports multiple targets and therefore multiple rollback points.
In most practical scenarios, the FlashCopy functionality of the SVC is integrated into a
process or procedure that allows the benefits of the point-in-time copies to be used to
address business needs. IBM offers IBM Spectrum Protect Snapshot for this functionality. For
more information about IBM Spectrum Protect Snapshot, see this website:
http://www.ibm.com/software/products/en/spectrum-protect-snapshot
Most clients aim to integrate the FlashCopy feature for point-in-time copies and quick
recovery of their applications and databases.
Image mode migration works by establishing a one-to-one static mapping of volumes and
MDisks. This mapping allows the data on the MDisk to be presented directly through the
volume layer and allows the data to be moved between volumes and the associated backing
MDisks. This function provides a facility to use the SVC as a migration tool in situations where
you otherwise have no recourse, such as migrating from Vendor A hardware to Vendor B hardware
when the two systems have no other compatibility.
Volume mirroring migration is a clever use of the facility that the SVC offers to mirror data on
a volume between two sets of storage pools. As with the logical volume management portion
of certain operating systems, the SVC can mirror data transparently between two sets of
physical hardware. You can use this feature to move data between MDisk groups with no host
I/O interruption by removing the original copy after the mirroring is completed. This feature is
much more limited than FlashCopy and must not be used where FlashCopy is appropriate.
Instead, use this function as an infrequent-use, hardware-refresh aid, because you now can
move between your old storage system and new storage system without interruption.
Careful planning: When you are migrating by using the volume mirroring migration, your
I/O rate is limited to the slowest of the two MDisk groups that are involved. Therefore,
planning carefully to avoid affecting the live systems is imperative.
Resources on the clustered system act as highly available versions of unclustered resources.
If a node (an individual computer) in the system is unavailable or too busy to respond to a
request for a resource, the request is passed transparently to another node that can process
the request. The clients are unaware of the exact locations of the resources that they use.
The SVC is a collection of up to eight nodes, which are added in pairs that are known as I/O
Groups. These nodes are managed as a set (system), and they present a single point of
control to the administrator for configuration and service activity.
The eight-node limit for an SVC system is a limitation that is imposed by the microcode and
not a limit of the underlying architecture. Larger system configurations might be available in
the future.
Although the SVC code is based on a purpose-optimized Linux kernel, the clustered system
feature is not based on Linux clustering code. The clustered system software within the SVC,
that is, the event manager cluster framework, is based on the outcome of the COMPASS
research project. It is the key element that isolates the SVC application from the underlying
hardware nodes. The clustered system software makes the code portable. It provides the
means to keep the single instances of the SVC code that are running on separate systems’
nodes in sync. Therefore, restarting nodes during a code upgrade, adding new nodes,
removing old nodes from a system, or node failures do not affect the SVC’s availability.
All active nodes of a system must know that they are members of the system, especially in
situations where it is key to have a solid mechanism to decide which nodes form the active
system, such as the split-brain scenario where single nodes lose contact with other nodes. A
worst case scenario is a system that splits into two separate systems.
Within an SVC system, the voting set and a quorum disk are responsible for the integrity of
the system. If nodes are added to a system, they are added to the voting set. If nodes are
removed, they are removed quickly from the voting set. Over time, the voting set and the
nodes in the system can completely change so that the system migrates onto a separate set
of nodes from the set on which it started.
The SVC clustered system implements a dynamic quorum. Following a loss of nodes, if the
system can continue to operate, it adjusts the quorum requirement so that further node failure
can be tolerated.
The node with the lowest Node Unique ID in a system becomes the boss node for the group of nodes,
and it proceeds to determine (from the quorum rules) whether the nodes can operate as the
system. This node also presents the maximum of two system (cluster) IP addresses on one or both
of its Ethernet ports to allow access for system management.
Quorum disk placement: If possible, the SVC places the quorum candidates on separate
disk subsystems. However, after the quorum disk is selected, no attempt is made to ensure
that the other quorum candidates are presented through separate disk subsystems.
You can list the quorum disk candidates and the active quorum disk in a system by using the
lsquorum command.
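For example, the following commands list the quorum disk candidates and then assign a specific MDisk as quorum candidate index 2; the MDisk name is an example only:
lsquorum
chquorum -mdisk mdisk5 2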
When the set of quorum disk candidates is chosen, it is fixed. However, a new quorum disk
candidate can be chosen in one of the following conditions:
When the administrator requests that a specific MDisk becomes a quorum disk by using
the chquorum command
When an MDisk that is a quorum disk is deleted from a storage pool
When an MDisk that is a quorum disk changes to image mode
For disaster recovery purposes, a system must be regarded as a single entity, so the system
and the quorum disk must be colocated.
Special considerations are required for the placement of the active quorum disk for a
stretched or split cluster and split I/O Group configurations. For more information, see the
IBM Knowledge Center.
Important: Running an SVC system without a quorum disk can seriously affect your
operation. A lack of available quorum disks for storing metadata prevents any migration
operation (including a forced MDisk delete).
Mirrored volumes can be taken offline if no quorum disk is available. This behavior occurs
because the synchronization status for mirrored volumes is recorded on the quorum disk.
During the normal operation of the system, the nodes communicate with each other. If a node
is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the system. If a
node fails for any reason, the workload that is intended for the node is taken over by another
node until the failed node is restarted and readmitted into the system (which happens
automatically).
If the microcode on a node becomes corrupted, which results in a failure, the workload is
transferred to another node. The code on the failed node is repaired, and the node is
readmitted into the system (which is an automatic process).
IP quorum configuration
In a stretched configuration or HyperSwap configuration, you must use a third, independent
site to house quorum devices. To use a quorum disk as the quorum device, this third site must
use Fibre Channel connectivity together with an external storage system. Sometimes, Fibre
Channel connectivity is not possible. In that case, an IP-based quorum application can be used
as the quorum device for the third site, and no Fibre Channel connectivity is required. In a
local environment, no extra hardware or networking, such as Fibre Channel or SAS-attached
storage, is required beyond what is normally provisioned within a system. Java applications
are run on hosts at the third site.
However, there are strict requirements on the IP network and some disadvantages with using
IP quorum applications. Unlike quorum disks, all IP quorum applications must be reconfigured
and redeployed to hosts when certain aspects of the system configuration change. These aspects
include adding or removing a node from the system or changing node service IP addresses.
For stable quorum resolution, the IP network must provide the following requirements:
Connectivity from the hosts to the service IP addresses of all nodes. The network must also
deal with possible security implications of exposing the service IP addresses, because this
connectivity can also be used to access the service GUI if IP quorum is configured incorrectly.
Port 1260 is used by IP quorum applications to communicate from the hosts to all nodes.
The maximum round-trip delay must not exceed 80 ms, which means 40 ms in each direction.
A minimum bandwidth of 2 megabytes per second is guaranteed for node-to-quorum traffic.
Even with IP quorum applications at the third site, quorum disks at site one and site two are
required because they are used to store metadata. To provide quorum resolution, use the
mkquorumapp command to generate a Java application that is copied from the system and run on
a host at a third site. The maximum number of applications that can be deployed is five.
Currently, the supported Java Runtime Environments are IBM Java 7.1 and IBM Java 8.
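As a sketch of this deployment, the quorum application is generated on the system, copied to a host at the third site, and started there; the file location and target directory are examples only:
mkquorumapp
scp superuser@<system_ip>:/dumps/ip_quorum.jar /opt/ipquorum/
java -jar /opt/ipquorum/ip_quorum.jar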
2.11.3 Cache
The primary benefit of storage cache is to improve I/O response time. Reads and writes to a
magnetic disk drive experience seek time and latency time at the drive level, which can result
in 1 ms - 10 ms of response time (for an enterprise-class disk).
The 2145-DH8 nodes that are used in the SVC provide 32 GiB of memory per node, or optionally
64 GiB per node with the second CPU installed, which offers more processor power and memory for
the Real-time Compression (RtC) feature. With the 64 GiB option, this amounts to 128 GiB per
I/O Group, or up to 512 GiB per SVC system. The SVC provides a flexible cache model, and the
node’s memory can be used as read or write cache.
as read or write cache. The size of the write cache is limited to a maximum of 12 GiB of the
node’s memory. Depending on the current I/O conditions on a node, the entire 32 GiB of
memory can be fully used as read cache.
Cache is allocated in 4 KiB segments. A segment holds part of one track. A track is the unit of
locking and destaging granularity in the cache. The cache virtual track size is 32 KiB (eight
segments). A track might be only partially populated with valid pages. The SVC coalesces
writes up to a 256 KiB track size if the writes are in the same tracks before destage. For
example, if 4 KiB is written into a track and another 4 KiB is written to another location in the
same track, the two writes can be destaged together. Therefore, the blocks that are written from
the SVC to the disk subsystem can be any size between 512 bytes and 256 KiB. The large cache and advanced cache
management algorithms within the 2145-DH8 allow it to improve on the performance of many
types of underlying disk technologies. The SVC’s capability to manage, in the background,
the destaging operations that are incurred by writes (in addition to still supporting full data
integrity) assists with SVC’s capability in achieving good database performance.
The cache is separated into two layers: an upper cache and a lower cache.
Figure 2-19 shows the separation of the upper and lower cache.
Figure 2-19 Separation of the upper cache and the lower cache, which contains the cache algorithm intelligence
The upper cache delivers the following functionality, which allows the SVC to streamline data
write performance:
Provides fast write response times to the host by being as high up in the I/O stack as
possible
Provides partitioning
Combined, the two levels of cache also deliver the following functionality:
Pins data when the LUN goes offline
Provides enhanced statistics for Tivoli® Storage Productivity Center and maintains
compatibility with an earlier version
Provides trace for debugging
Reports medium errors
Resynchronizes cache correctly and provides the atomic write functionality
Ensures that other partitions continue operation when one partition becomes 100% full of
pinned data
Supports fast-write (two-way and one-way), flush-through, and write-through
Integrates with T3 recovery procedures
Depending on the size, age, and technology level of the disk storage system, the total
available cache in the SVC can be larger, smaller, or about the same as the cache that is
associated with the disk storage. Because hits to the cache can occur in either the SVC or the
disk controller level of the overall system, the system as a whole can take advantage of the
larger amount of cache wherever the cache is located. Therefore, if the storage controller
level of the cache has the greater capacity, expect hits to this cache to occur, in addition to
hits in the SVC cache.
Also, regardless of their relative capacities, both levels of cache tend to play an important role
in allowing sequentially organized data to flow smoothly through the system. The SVC cannot
increase the throughput potential of the underlying disks in all cases because this increase
depends on both the underlying storage technology and the degree to which the workload
exhibits hotspots or sensitivity to cache size or cache algorithms.
The GUI and a web server are installed in the SVC system nodes. Therefore, any browser
can access the management GUI if the browser is pointed at the system IP address.
Management console
The management console for the SVC was previously the IBM System Storage Productivity
Center (SSPC). This appliance is no longer needed because the SVC can be managed through its
internal management GUI.
Remote authentication means that the validation of a user’s permission to access the
SVC’s management CLI or GUI is performed at a remote authentication server. That is,
except for the superuser account, no local user account administration is necessary on the
SVC.
You can use an existing user management system in your environment to control the SVC
user access, which implements a single sign-on (SSO) for the SVC.
Users that are authenticated by an LDAP server can log in to the SVC web-based GUI and
the CLI. Unlike remote authentication through Tivoli Integrated Portal, users do not need to be
configured locally for CLI access. An SSH key is not required for CLI login in this scenario,
either. However, locally administered users can coexist with remote authentication enabled.
The default administrative user that uses the name superuser must be a local user. The
superuser cannot be deleted or manipulated, except for the password and SSH key.
If multiple LDAP servers are available, you can assign multiple LDAP servers to improve
availability. Authentication requests are processed by those LDAP servers that are marked as
preferred unless the connections fail or a user is not found. Requests are distributed across
all preferred servers for load balancing in a round-robin fashion.
A user that is authenticated remotely by an LDAP server is granted permissions on the SVC
according to the role that is assigned to the group of which it is a member. That is, any SVC
user group with its assigned role, for example, CopyOperator, must exist with an identical
name on the SVC system and on the LDAP server, if users in that role are to be authenticated
remotely.
In the following example, we demonstrate LDAP user authentication that uses a Microsoft
Windows Server domain controller that is acting as an LDAP server.
4. You must configure the following parameters in the Configure Remote Authentication
window, as shown in Figure 2-22 and Figure 2-23 on page 62:
– For LDAP Type, select Microsoft Active Directory. (For an OpenLDAP server, select
Other for the type of LDAP server.)
– For Security, choose None. (If your LDAP server requires a secure connection, select
Transport Layer Security; the LDAP server’s certificate is configured later.)
– Click Advanced Settings to expand the bottom part of the window. Leave the User
Name and Password fields empty if your LDAP server supports anonymous bind. For
our MS AD server, we enter the credentials of an existing user on the LDAP server with
permission to query the LDAP directory. You can enter this information in the format of
an email address, for example, [email protected], or in the distinguished
format, for example, cn=Administrator,cn=users,dc=itso,dc=corp. Note the common
name portion cn=users for MS AD servers.
– If your LDAP server uses attributes that differ from the predefined attributes, you can
edit them here. You do not need to edit the attributes when MS AD is used as the LDAP
service.
5. Figure 2-24 shows the Configure Remote Authentication window, where we configure the
following LDAP server details:
– Enter the IP address of at least one LDAP server.
– Although it is marked as optional, it might be required to enter a Base DN in the
distinguished name format, which defines the starting point in the directory at which to
search for users, for example, dc=itso,dc=corp.
– You can add more LDAP servers by clicking the plus (+) icon.
– Check Preferred if you want to use preferred LDAP servers.
Now that remote authentication is enabled and configured on the SVC, we work with the
user groups. For remote authentication through LDAP, no local SVC users are maintained, but
the user groups must be set up correctly. You can use the existing built-in SVC user groups
or groups that are created in SVC user management. However, the use of self-defined groups
might be advisable to prevent the SVC default groups from interfering with the existing group
names on the LDAP server. Any user group, whether built-in or self-defined, must be enabled
for remote authentication.
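As a hedged alternative to the GUI steps that follow, such a self-defined user group can be created with remote authentication enabled from the CLI; the syntax mirrors the mkusergrp example that is shown later in this chapter:
svctask mkusergrp -name SVC_LDAP_CopyOperator -remote -role CopyOperator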
2. In the Create User Group window that is shown in Figure 2-25, complete the following
steps:
Next, we complete the following steps to create a group with the same name on the LDAP
server, that is, in the Active Directory Domain:
1. On the Domain Controller, start the Active Directory Users and Computers management
console and browse your domain structure to the entity that contains the user groups.
Click the Create new user group icon as highlighted in Figure 2-26 on page 64 to create
a group.
2. Enter the same name, SVC_LDAP_CopyOperator, in the Group Name field, as shown in
Figure 2-27. (The name is case sensitive.) Select the correct Group scope for your
environment and select Security for Group type. Click OK.
3. Edit the user’s properties so that the user can log in to the SVC. Make the user a member
of the appropriate user group for the intended SVC role, as shown in Figure 2-28 on
page 65, and click OK to save and apply the settings.
We are now ready to authenticate the users for the SVC through the remote server. To ensure
that everything works correctly, we complete the following steps to run a few tests to verify the
communication between the SVC and the configured LDAP service:
1. Select Settings → Security, and then select Global Actions → Test LDAP
Connections, as shown in Figure 2-29.
2. We test a real user authentication attempt. Select Settings → Security, then select
Global Actions → Test LDAP Authentication, as shown in Figure 2-31.
3. As shown in Figure 2-32, enter the User Credentials of a user that was defined on the
LDAP server, and then click Test.
Both the LDAP connection test and the LDAP authentication test must complete successfully
to ensure that the LDAP authentication works correctly. In our example, an error message
points to user authentication problems during the LDAP authentication test. It might help to
analyze the LDAP server’s response outside of the SVC. You can use any native LDAP query
tool, for example, the no-charge software LDAPBrowser tool, which is available at this
website:
http://www.ldapbrowser.com/
For a pure MS AD environment, you can use the Microsoft Sysinternals ADExplorer tool,
which is available at this website:
http://technet.microsoft.com/en-us/sysinternals/bb963907
Assuming that the LDAP connection and the authentication test succeeded, users can log in
to the SVC GUI and CLI by using their network credentials, for example, their Microsoft
Windows domain user name and password.
Figure 2-33 shows the web GUI login window with the Windows domain credentials entered.
A user can log in with their short name (that is, without the domain component) or with the
fully qualified user name in the form of an email address.
After a successful login, the user name is displayed in a welcome message in the upper-right
corner of the window, as highlighted in Figure 2-34 on page 68.
Logging in by using the CLI is possible with the short user name or the fully qualified name.
The lscurrentuser CLI command displays the user name of the currently logged in user and
their role.
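As a hedged sketch, a remotely authenticated user might see output similar to the following example; the user name is hypothetical and the exact output format depends on the code level:
IBM_2145:ITSO_SVC:ITSOuser>lscurrentuser
name ITSOuser
role CopyOperator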
Forbidden characters are the single quotation mark (‘), colon (:), percent symbol (%),
asterisk (*), comma (,), and double quotation marks (“).
Passwords for local users can be up to 64 printable ASCII characters. There are no forbidden
characters; however, passwords cannot begin or end with blanks.
The superuser password can be reset by using the technician port on SAN Volume Controller 2145-DH8 nodes or the front panel on earlier models of the system. To
meet varying security requirements, this functionality can be enabled or disabled using the
CLI. However, disabling the superuser makes the system inaccessible if all the users forget
their passwords or lose their SSH keys.
To register an SSH key for the superuser to provide command-line access, select Service
Assistant Tool → Configure CLI Access to assign a temporary key. However, the key is lost
during a node restart. The permanent way to add the key is through the normal GUI; select
User Management → superuser → Properties to register the SSH key for the superuser.
The superuser is always a member of user group 0, which has the most privileged role within
the SVC.
User groups are used for local and remote authentication. Because the SVC knows of five
roles, by default, five user groups are defined in an SVC system, as shown in Table 2-4.
ID    User group       Role
0     SecurityAdmin    SecurityAdmin
1     Administrator    Administrator
2     CopyOperator     CopyOperator
3     Service          Service
4     Monitor          Monitor
The access rights for a user who belongs to a specific user group are defined by the role that
is assigned to the user group. It is the role that defines what a user can or cannot do on an
SVC system.
Table 2-5 on page 70 shows the roles ordered (from the top) by the least privileged Monitor
role down to the most privileged SecurityAdmin role. The NasSystem role has no special user
group.
VASA Provider: Users with this role can manage VMware vSphere Virtual Volumes.
Administrator: All commands, except chauthservice, chldap, chldapserver, chsecurity,
chuser, chusergrp, mkldapserver, mkuser, mkusergrp, rmldapserver, rmuser, rmusergrp,
and setpwdreset
Service: All commands that are allowed for the Monitor role and applysoftware,
setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk,
includemdisk, clearerrlog, cleardumps, settimezone, stopcluster,
startstats, stopstats, and setsystemtime
CopyOperator: All commands that are allowed for the Monitor role and prestartfcconsistgrp,
startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap,
startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp,
switchrcconsistgrp, chrcconsistgrp, startrcrelationship,
stoprcrelationship, switchrcrelationship, chrcrelationship, and
chpartnership
SecurityAdmin: All commands, except those commands that are allowed by the NasSystem
role
Local users: Local users are created for each SVC system. Each user has a name, which
must be unique across all users in one system.
If you want to allow access for a user on multiple systems, you must define the user in each
system with the same name and the same privileges.
A local user always belongs to only one user group. Figure 2-36 on page 71 shows an
overview of local authentication within the SVC.
Remote users must be defined in the SVC system only if command-line access is required.
No local user is required for GUI-only remote access. For command-line access, the user
must be defined locally with the remote authentication flag enabled and a password set.
Remote users cannot belong to any user group because the remote authentication service,
for example, an LDAP directory server, such as IBM Tivoli Directory Server or Microsoft
Active Directory, delivers the user group information.
The authentication service that is supported by the SVC is the Tivoli Embedded Security
Services server component level 6.2.
The Tivoli Embedded Security Services server provides the following key features:
Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in
use, which means that the SVC communicates only with Tivoli Embedded Security
Services to get its authentication information. The protocol that is used to access the
central directory and the kind of directory system that is used are not apparent to the
SVC.
Tivoli Embedded Security Services provides a secure token facility that is used to enable
single sign-on (SSO). SSO means that users do not have to log in multiple times when
they are using what appears to them to be a single system. SSO is used within Tivoli
Productivity Center. When the SVC access is started from within Tivoli Productivity
Center, the user does not have to log in to the SVC because the user logged in to Tivoli
Productivity Center.
The SVC supports an HTTP or HTTPS connection to the Tivoli Embedded Security
Services server. If the HTTP option is used, the user and password information is
transmitted in clear text over the IP network.
2. Configure user groups on the system that match those user groups that are used by the
authentication service. For each group of interest that is known to the authentication
service, an SVC user group must exist with the same name and the remote setting
enabled.
For example, you can have a group that is called sysadmins, whose members require the
SVC Administrator role. Configure this group by using the following command:
svctask mkusergrp -name sysadmins -remote -role Administrator
If none of a user’s groups match any of the SVC user groups, the user is not permitted to
access the system.
3. Configure users that do not require SSH access. Any SVC users that use the remote
authentication service and do not require SSH access must be deleted from the system.
The superuser cannot be deleted; it is a local user and cannot use the remote
authentication service.
4. Configure users that require SSH access. Any SVC users that use the remote
authentication service and require SSH access must have their remote setting enabled
and the same password set on the system and the authentication service. The remote
setting instructs the SVC to consult the authentication service for group information after
the SSH key authentication step to determine the user’s role. The user’s password must be
configured on the system in addition to the authentication service because of a limitation in
the Tivoli Embedded Security Services server software.
5. Configure the system time. For correct operation, the SVC system and the system that is
running the Tivoli Embedded Security Services server must have the same view of the
current time. The easiest way is to have them both use the same Network Time Protocol
(NTP) server.
Note: Failure to follow this step can lead to poor interactive performance of the SVC
user interface or incorrect user-role assignments.
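As a hedged sketch, the NTP server can be set from the CLI; the IP address that is shown is hypothetical and the parameter name might differ by code level:
svctask chsystem -ntpip 9.71.42.10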
Also, Tivoli Storage Productivity Center uses the Tivoli Integrated Portal infrastructure and its
underlying IBM WebSphere® Application Server capabilities to use an LDAP registry and
enable SSO.
The new SVC 2145-DH8 Storage Engine has the following key hardware features:
One or two Intel Xeon E5 v2 Series eight-core processors, each with 32 GB memory
16 Gb FC, 8 Gb FC, 10 Gb Ethernet, and 1 Gb Ethernet I/O ports for FC, iSCSI, and Fibre
Channel over Ethernet (FCoE) connectivity
Optional feature: Hardware-assisted compression acceleration
Optional feature: 12 Gb SAS expansion enclosure attachment for internal flash storage
Two integrated battery units
Model 2145-DH8 includes three 1 Gb Ethernet ports as standard for iSCSI connectivity. Model
2145-DH8 can be configured with up to four I/O adapter features that provide up to sixteen
16 Gb FC ports, up to twelve 8 Gb FC ports, up to four 10 Gb Ethernet (iSCSI/Fibre
Channel over Ethernet (FCoE)) ports, or a mixture of the above. For more information, see the
optional feature section in the knowledge center:
https://ibm.biz/BdHnKF
Real-time Compression workloads can benefit from Model 2145-DH8 configurations with two
eight-core processors with 64 GB of memory (total system memory). Compression workloads
can also benefit from the hardware-assisted acceleration that is offered by the addition of up
to two compression accelerator cards. The SVC Storage Engines can be clustered to help
deliver greater performance, bandwidth, and scalability. An SVC clustered system can contain
up to four node pairs or I/O Groups. Model 2145-DH8 storage engines can be added into
existing SVC clustered systems that include previous generation storage engine models.
For more information, see IBM SAN Volume Controller Software Installation and
Configuration Guide, GC27-2286.
For more information about integration into existing clustered systems, compatibility, and
interoperability with installed nodes and uninterruptible power supplies, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1002999
The SVC 2145-DH8 includes preinstalled IBM Spectrum Virtualize 7.6 software.
Figure 2-38 shows the front view of the SVC 2145-DH8 node.
The actual port speed for each of the ports can be displayed through the GUI, CLI, the node’s
front panel, and by light-emitting diodes (LEDs) that are placed at the rear of the node.
For more information, see SAN Volume Controller Model 2145-DH8 Hardware Installation
Guide, GC27-6490. The PDF is at this website:
https://ibm.biz/BdEzM7
The SVC imposes no limit on the FC optical distance between SVC nodes and host servers.
FC standards, with small form-factor pluggable optics (SFP) capabilities and cable type,
dictate the maximum FC distances that are supported.
If longwave SFPs are used in the SVC nodes, the longest supported FC link between the
SVC and switch is 40 km (24.85 miles).
Table 2-6 shows the cable length that is supported by shortwave SFPs.
FC speed               OM1 (62.5 µm)       OM2 (50 µm)         OM3 (50 µm)         OM4 (50 µm)
2 Gbps FC              150 m (492.1 ft)    300 m (984.3 ft)    500 m (1640.5 ft)   N/A
4 Gbps FC              70 m (229.7 ft)     150 m (492.1 ft)    380 m (1246.9 ft)   400 m (1312.3 ft)
8 Gbps FC (limiting)   20 m (68.1 ft)      50 m (164 ft)       150 m (492.1 ft)    190 m (623.4 ft)
16 Gbps FC             15 m (49.2 ft)      35 m (114.8 ft)     100 m (328.1 ft)    125 m (410.1 ft)
Table 2-7 shows the applicable rules that relate to the number of inter-switch link (ISL) hops
that is allowed in a SAN fabric between the SVC nodes or the system.
Between SVC nodes in the same I/O Group: 0 (connect to the same switch)
Between SVC nodes in the same clustered system: 0 (connect to the same switch)
Between SVC nodes and the storage subsystem: 1 (recommended: 0, connect to the same switch)
Between SVC nodes and hosts: maximum 3
The system configuration node can be accessed over the Technician port. In addition, the
clustered system can be managed by SSH clients or GUIs on management workstations on
separate physical IP networks. This capability provides redundancy if one of these
IP networks fails.
Support for iSCSI introduces one other IPv4 and one other IPv6 address for each SVC node
port. These IP addresses are independent of the system configuration IP addresses. An IP
address overview is shown in Figure 2-16 on page 47.
If your SVC has the 10 Gbit features, FCoE support is added with an upgrade to version
6.4 or later. The same 10 Gbit ports are iSCSI and FCoE capable. Regarding transport speed, the
FCoE ports compare with the native Fibre Channel ports (8 Gbit versus 10 Gbit), and recent
enhancements to the iSCSI support mean that iSCSI performance levels are similar to Fibre
Channel performance levels.
The actual times that are shown are not that important, but a dramatic difference exists
between accessing data that is in cache and data that is on an external disk.
We added a second scale to Figure 2-39, which gives you an idea of how long it takes to
access the data in a scenario where a single CPU cycle takes 1 second. This scale gives you
an idea of the importance of future storage technologies closing or reducing the gap between
access times for data that is stored in cache/memory versus access times for data that is
stored on an external medium.
Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown
remarkable progress in capacity growth, form factor and size reduction, price decrease
(cost per GB), and reliability.
However, the number of I/Os that a disk can handle and the response time that it takes to
process a single I/O did not improve at the same rate, although they certainly did improve. In
real environments, today’s enterprise-class FC and serial-attached SCSI (SAS) disks deliver
up to approximately 200 IOPS per disk with an average response time (latency) of
approximately 6 ms per I/O. By comparison, a nearline SAS disk delivers approximately
90 IOPS.
Today’s rotating disks continue to advance in capacity (several TBs), form factor/footprint
(8.89 cm (3.5 inches), 6.35 cm (2.5 inches), and 4.57 cm (1.8 inches)), and price (cost per
GB), but they are not getting much faster.
The limiting factor is the number of revolutions per minute (RPM) that a disk can perform
(approximately 15,000). This factor defines the time that is required to access a specific data
block on a rotating device. Small improvements likely will occur in the future; however, a
significant step, such as doubling the RPM (if technically even possible), inevitably has an
associated increase in power usage and price that will be an inhibitor.
Enterprise-class Flash Drives typically deliver 85,000 read and 36,000 write IOPS with typical
latencies of 50 µs for reads and 800 µs for writes. Their form factors of 4.57 cm (1.8 inches),
6.35 cm (2.5 inches), and 8.89 cm (3.5 inches) and their interfaces (FC/SAS/SATA) make
them easy to integrate into existing disk shelves.
Today’s Flash Drive technology is only a first step into the world of high-performance
persistent semiconductor storage. A group of the approximately 10 most promising
technologies is collectively referred to as Storage Class Memory (SCM).
For a comprehensive overview of the Flash Drive technology in a subset of the well-known
Storage Networking Industry Association (SNIA) Technical Tutorials, see these websites:
http://www.snia.org/education/tutorials/2010/spring#solid
http://www.snia.org/education/tutorials/fms2015
When these technologies become a reality, it will fundamentally change the architecture of
today’s storage infrastructures.
Internal Flash Drives can be configured in the following two RAID levels:
RAID 1 - RAID 10: In this configuration, one half of the mirror is in each node of the I/O
Group, which provides redundancy if a node failure occurs.
RAID 0: In this configuration, all the drives are assigned to the same node. This
configuration is intended to be used with Volume Mirroring because no redundancy is
provided if a node failure occurs.
The Flash MDisks can then be placed into a single Flash Drive tier storage pool.
High-workload volumes can be manually selected and placed into the pool to gain the
performance benefits of Flash Drives.
For a more effective use of Flash Drives, place the Flash Drive MDisks into a multitiered
storage pool that is combined with HDD MDisks (generic_hdd tier). Then, with Easy Tier
turned on, Easy Tier automatically detects and migrates high-workload extents onto the
solid-state MDisks.
For more information about IBM Flash Storage, see this website:
http://www.ibm.com/systems/storage/flash/
2.15.2 SAN Volume Controller 7.6 supported hardware list, device driver, and
firmware levels
As with all new software versions, version 7.6 offers functional enhancements, new hardware
that can be integrated into existing or new SVC systems, and interoperability enhancements
or new support for servers, SAN switches, and disk subsystems. For the most current
information, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
There have been changes in the way that you can work with volumes. These changes
include:
– Changing the system topology
– Enabling volume protection
– Disabling volume protection
– Adding a copy to a volume
– Adding a copy to a basic volume
– Deleting a copy from a volume
– Deleting a copy from a basic volume
– Deleting a volume
– Deleting an image mode volume
– Adding a copy to a HyperSwap volume or a stretched volume
– Deleting a copy from a HyperSwap volume or a stretched volume
We also review the implications for your storage network and describe performance
considerations.
Important: At the time of writing, the statements provided in this book are correct, but they
might change. Always verify any statements that are made in this book with the SAN
Volume Controller supported hardware list, device driver, firmware, and recommended
software levels that are available at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
To achieve the most benefit from the SVC, pre-installation planning must include several
important steps. These steps ensure the SVC provides the best possible performance,
reliability, and ease of management for your application needs. The correct configuration also
helps minimize downtime by avoiding changes to the SVC and the storage area network
(SAN) environment to meet future growth needs.
Note: For more information, see the Pre-sale Technical and Delivery Assessment (TDA)
document that is available at this website:
https://www.ibm.com/partnerworld/wps/servlet/mem/ContentHandler/salib_SA572/lc=
en_ALL_ZZ
A pre-sale TDA needs to be conducted before a final proposal is submitted to a client and
must be conducted before an order is placed to ensure that the configuration is correct and
the solution that is proposed is valid. The preinstall System Assurance Planning Review
(SAPR) Package includes various files that are used in preparation for an SVC preinstall
TDA. A preinstall TDA needs to be conducted shortly after the order is placed and before
the equipment arrives at the client’s location to ensure that the client’s site is ready for the
delivery and responsibilities are documented regarding the client and IBM or the IBM
Business Partner roles in the implementation.
Tip: For more information about the topics that are described, see the following resources:
IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which
is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Complete the following tasks when you are planning for the SVC:
Collect and document the number of hosts (application servers) to attach to the SVC, the
traffic profile activity (read or write, sequential or random), and the performance
requirements in I/Os per second (IOPS).
Collect and document the following storage requirements and capacities:
– The total back-end storage that is present in the environment to be provisioned on the
SVC
– The total back-end new storage to be provisioned on the SVC
– The required virtual storage capacity that is used as a fully managed virtual disk
(volume) and used as a Space-Efficient (SE) volume
– The required storage capacity for local mirror copy (volume mirroring)
– The required storage capacity for point-in-time copy (FlashCopy)
– The required storage capacity for remote copy (Metro Mirror and Global Mirror)
– The required storage capacity for compressed volumes
– The required storage capacity for encrypted volumes
– Per host: Storage capacity, the host logical unit number (LUN) quantity, and sizes
Define the local and remote SAN fabrics and systems in the cluster if a remote copy or a
secondary site is needed.
Define the number of systems in the cluster and the number of pairs of nodes (1 - 4) for
each system. Each pair of nodes (an I/O Group) is the container for the volume. The
number of necessary I/O Groups depends on the overall performance requirements.
Design the SAN according to the requirement for high availability and best performance.
Consider the total number of ports and the bandwidth that is needed between the host and
the SVC, the SVC and the disk subsystem, between the SVC nodes, and for the
inter-switch link (ISL) between the local and remote fabric.
Note: Check and carefully count the required ports for extended links. Especially in a
stretched cluster environment, you might need many of the higher-cost longwave
gigabit interface converters (GBICs).
Design the iSCSI network according to the requirements for high availability and best
performance. Consider the total number of ports and bandwidth that is needed between
the host and the SVC.
Determine the SVC service IP address.
Determine the IP addresses for the SVC system and for the host that connects through
iSCSI.
Determine the IP addresses for IP replication.
Define a naming convention for the SVC nodes, host, and storage subsystem.
Define the managed disks (MDisks) in the disk subsystem.
Define the storage pools. The storage pools depend on the disk subsystem that is in place
and the data migration requirements.
Plan the logical configuration of the volume within the I/O Groups and the storage pools to
optimize the I/O load between the hosts and the SVC.
Plan for the physical location of the equipment in the rack.
You must plan for two separate power sources if you have a redundant ac-power switch,
which is available as an optional feature.
An SVC node (2145-CF8 or 2145-CG8) is one Electronic Industries Association (EIA) unit
high; an SVC 2145-DH8 node is two EIA units high.
Other hardware devices can be in the same SVC rack, such as IBM Storwize V7000, IBM
Storwize V3700, SAN switches, an Ethernet switch, and other devices.
You must consider the maximum power rating of the rack; do not exceed it. For more
information about the power requirements, see this website:
https://ibm.biz/BdHnKF
2145 UPS-1U
The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high, is included,
and can operate on the following node types only:
SVC 2145-CF8
SVC 2145-CG8
When the 2145 UPS-1U is configured, the voltage that is supplied to it must be 200 - 240 V,
single phase.
Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external
protection.
2145-DH8
This model includes two integrated AC power supplies and battery units, replacing the
uninterruptible power supply feature that was required on the previous generation storage
engine models.
The functionality of uninterruptible power supply units is provided by internal batteries, which
are delivered with each node’s hardware. The batteries ensure that, if a disruption or external
power loss occurs, there is sufficient internal power to copy the physical memory to a file in
the file system on the node’s internal disk drive, so that the contents can be recovered when
external power is restored.
After dumping the content of the non-volatile part of the memory to disk, the SVC node shuts
down.
For more information about the 2145-DH8 Model refer to IBM SAN Volume Controller
2145-DH8 Introduction and Implementation, SG24-8229, on this website:
http://www.redbooks.ibm.com/abstracts/SG248229.html
Important: Do not share the SVC uninterruptible power supply unit with any other devices.
Figure 3-1 on page 88 shows a power cabling example for the 2145-CG8.
You must follow the guidelines for Fibre Channel (FC) cable connections. Occasionally, the
introduction of a new SVC hardware model brings internal changes; one example is the
worldwide port name (WWPN) to physical port mapping. The 2145-CF8 and 2145-CG8
have the same mapping.
Figure 3-3 on page 90 shows a sample layout in which nodes within each I/O Group are split
between separate racks. This layout protects against power failures and other events that
affect only a single rack.
The SVC 2145-DH8 node introduces a new feature called a Technician port. Ethernet port 4
is allocated as the Technician service port, and is marked with a T. All initial configuration for
each node is performed via the Technician port. The port broadcasts a DHCP service so that
a notebook or computer is automatically assigned an IP address on connection to the port.
After the cluster configuration has been completed, the Technician port automatically routes
the connected user directly to the service GUI.
Note: The default IP address for the Technician port on a 2145-DH8 Node is 192.168.0.1.
If the Technician port is connected to a switch, it is disabled and an error is logged.
Each SVC node requires one Ethernet cable to connect it to an Ethernet switch or hub. The
cable must be connected to port 1. A 10/100/1000 Mb Ethernet connection is required for
each cable. Both Internet Protocol Version 4 (IPv4) and Internet Protocol Version 6 (IPv6) are
supported.
Note: For increased redundancy, an optional second Ethernet connection is supported for
each SVC node. This cable is connected to Ethernet port 2.
To ensure system failover operations, Ethernet port 1 on all nodes must be connected to the
same set of subnets. If used, Ethernet port 2 on all nodes must also be connected to the
same set of subnets. However, the subnets for Ethernet port 1 do not have to be the same as
Ethernet port 2.
Each SVC cluster has a Cluster Management IP address as well as a Service IP address for
each node in the cluster. See Example 3-1 for details.
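As a hedged sketch, the cluster management IP configuration can be listed and changed from the CLI; the addresses that are shown are hypothetical:
svcinfo lssystemip
svctask chsystemip -clusterip 10.18.228.80 -gw 10.18.228.1 -mask 255.255.255.0 -port 1
The service IP address of each node can be set in a similar way, for example, with the chserviceip command or through the Service Assistant.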
Each node in an SVC clustered system needs to have at least one Ethernet connection.
Support for iSCSI provides one other IPv4 and one other IPv6 address for each Ethernet port
on every node. These IP addresses are independent of the clustered system configuration
IP addresses.
The SVC Model 2145-CG8 optionally can have a serial-attached SCSI (SAS) adapter with
external ports disabled or a high-speed 10 Gbps Ethernet adapter with two ports. Two more
IPv4 or IPv6 addresses are required in both cases.
When accessing the SVC through the GUI or Secure Shell (SSH), choose one of the
available IP addresses to which to connect. No automatic failover capability is available. If one
network is down, use an IP address on the alternate network. Clients might be able to use the
intelligence in domain name servers (DNSs) to provide partial failover.
The SAN fabric is zoned to allow the SVC nodes to see each other and the disk
subsystems, and to allow the hosts to see the SVC nodes. The hosts cannot directly see or
operate LUNs on the disk subsystems that are assigned to the SVC system. The SVC nodes
within an SVC system must see each other and all of the storage that is assigned to the SVC
system.
The zoning capabilities of the SAN switch are used to create three distinct zones. The
software version 7.6 supports 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps FC fabric, depending on
the hardware platform and on the switch where the SVC is connected. In an environment
where you have a fabric with multiple-speed switches, the preferred practice is to connect the
SVC and the disk subsystem to the switch operating at the highest speed.
All SVC nodes in the SVC clustered system are connected to the same SANs, and they
present volumes to the hosts. These volumes are created from storage pools that are
composed of MDisks that are presented by the disk subsystems.
Additionally, there is benefit in isolating remote replication traffic on dedicated ports to
ensure that problems that affect the cluster-to-cluster interconnect do not adversely affect
ports on the primary cluster and, in turn, the performance of workloads that run on the
primary cluster.
IBM recommends the following port designations for isolating both local node-to-node traffic
and remote (replication) node-to-node traffic, as shown in Table 3-1 on page 94.
Important: Be careful when you perform the zoning so that inter-node ports are not used
for Host/Storage traffic in the 8-port and 12-port configurations.
This recommendation provides the wanted traffic isolation while also simplifying migration
from existing configurations with only 4 ports, or even later migrating from 8-port or 12-port
configurations to configurations with additional ports. More complicated port mapping
configurations that spread the port traffic across the adapters are supported and can be
considered, but these approaches do not appreciably increase availability of the solution
since the mean time between failures (MTBF) of the adapter is not significantly less than that
of the non-redundant node components.
Note that while it is true that alternate port mappings that spread traffic across HBAs may
allow adapters to come back online following a failure, they will not prevent a node from going
offline temporarily to reboot and attempt to isolate the failed adapter and then rejoin the
cluster. Our recommendation takes all these considerations into account with a view that the
greater complexity may lead to migration challenges in the future and the simpler approach is
best.
The SAN configurations that use intercluster Metro Mirror and Global Mirror relationships
require the following other switch zoning considerations:
For each node in a clustered system, zone exactly two FC ports to exactly two FC ports
from each node in the partner clustered system.
If dual-redundant ISLs are available, evenly split the two ports from each node between
the two ISLs. That is, exactly one port from each node must be zoned across each ISL.
Local clustered system zoning continues to follow the standard requirement for all ports on
all nodes in a clustered system to be zoned to one another.
Important: Failure to follow these configuration rules exposes the clustered system to
an unwanted condition that can result in the loss of host access to volumes.
If an intercluster link becomes severely and abruptly overloaded, the local FC fabric can
become congested so that no FC ports on the local SVC nodes can perform local
intracluster heartbeat communication. This situation can, in turn, result in the nodes
experiencing lease expiry events. In a lease expiry event, a node reboots to attempt to
reestablish communication with the other nodes in the clustered system. If the leases
for all nodes expire simultaneously, a loss of host access to volumes can occur during
the reboot events.
Configure your SAN so that FC traffic can be passed between the two clustered systems.
To configure the SAN this way, you can connect the clustered systems to the same SAN,
merge the SANs, or use routing technologies.
Configure zoning to allow all of the nodes in the local fabric to communicate with all of the
nodes in the remote fabric.
Optional: Modify the zoning so that the hosts that are visible to the local clustered system
can recognize the remote clustered system. This capability allows a host to have access to
data in the local and remote clustered systems.
Verify that clustered system A cannot recognize any of the back-end storage that is owned
by clustered system B. A clustered system cannot access logical units (LUs) that a host or
another clustered system can also access.
Figure 3-6 on page 96 shows an example of the SVC, host, and storage subsystem
connections.
A storage controller can present LUNs to the SVC (as MDisks) and to other hosts in the
SAN. However, in this case, it is better to avoid sharing storage ports between the SVC
and hosts.
Mixed port speeds are not permitted for intracluster communication. All node ports within
a clustered system must be running at the same speed.
ISLs are not to be used for intracluster node communication or node-to-storage controller
access.
The switch configuration in an SVC fabric must comply with the switch manufacturer’s
configuration rules, which can impose restrictions on the switch configuration. For
example, a switch manufacturer might limit the number of supported switches in a SAN.
Operation outside of the switch manufacturer’s rules is not supported.
Host bus adapters (HBAs) in dissimilar hosts or dissimilar HBAs in the same host must be
in separate zones. For example, IBM AIX and Microsoft hosts must be in separate zones.
In this case, “dissimilar” means that the hosts are running separate operating systems or
are using separate hardware platforms. Therefore, various levels of the same operating
system are regarded as similar. This requirement is a SAN interoperability issue, rather
than an SVC requirement.
Host zones are to contain only one initiator (HBA) each, and as many SVC node ports as
you need, depending on the high availability and performance that you want from your
configuration.
You can use the lsfabric command to generate a report that displays the connectivity
between nodes and other controllers and hosts. This report is helpful for diagnosing SAN
problems.
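As a hedged sketch, the report can be generated for the whole fabric or filtered for a single host; the host name that is shown is hypothetical:
svcinfo lsfabric
svcinfo lsfabric -host ITSO_HOST1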
Zoning examples
Figure 3-7 shows an SVC clustered system zoning example.
A second zoning example shows host zones for an IBM Power Systems host: each host zone contains one Power Systems host port and one port per SVC node (for example, zone P1 in fabric 21: 21,1 - 11,0 - 11,1; zone P2 in fabric 22: 22,1 - 12,2 - 12,3).
Standard iSCSI host connection procedures can be used to discover and configure the
SVC as an iSCSI target.
Next, we describe several ways in which you can configure the SVC 6.1 or later.
Figure 3-10 shows the use of IPv4 management and iSCSI addresses in the same subnet.
You can set up the equivalent configuration with only IPv6 addresses.
Figure 3-11 shows the use of IPv4 management and iSCSI addresses in two separate
subnets.
Figure 3-13 on page 101 shows the use of a redundant network and a third subnet for
management.
Figure 3-14 shows the use of a redundant network for iSCSI data and management.
Important: During the individual VLAN configuration for each IP address, if the VLAN
settings for the local and failover ports on two nodes of an I/O Group differ, the switches
must be configured so that failover VLANs are configured on the local switch ports, too.
Then, the failover of IP addresses from the failing node to the surviving node succeeds. If
this configuration is not done, paths are lost to the SVC storage during a node failure.
3.3.4 IP Mirroring
One of the most important new functions of version 7.2 in the Storwize family is IP replication,
which enables the use of lower-cost Ethernet connections for remote mirroring. The capability
is available as a licensable option (Metro or Global Mirror) on all Storwize family systems. The
new function is transparent to servers and applications in the same way that traditional
FC-based mirroring is transparent. All remote mirroring modes (Metro Mirror, Global Mirror,
and Global Mirror with changed volumes) are supported.
The configuration of the system is straightforward. Storwize family systems normally can find
each other in the network and can be selected from the GUI. IP replication includes
Bridgeworks SANSlide network optimization technology and is available at no additional
charge. Remote mirror is a licensable option but the price does not change with IP replication.
Existing remote mirror users have access to the new function at no additional charge.
IP connections that are used for replication can have a long latency (the time to transmit a
signal from one end to the other), which can be caused by distance or by many hops between
switches and other appliances in the network. Traditional replication solutions transmit data,
wait for a response, and then transmit more data, which can result in network usage as low as
20% (based on IBM measurements). This situation gets worse as the latency gets longer.
Bridgeworks SANSlide technology that is integrated with the IBM Storwize family requires no
separate appliances; therefore, no other costs and configuration are necessary. It uses AI
technology to transmit multiple data streams in parallel and adjusts automatically to changing
network environments and workloads.
Because SANSlide does not use compression, it is independent of application or data type.
Most importantly, SANSlide improves network bandwidth usage up to 3x, so clients might be
able to deploy a less costly network infrastructure or use faster data transfer to speed
replication cycles, improve remote data currency, and recover more quickly.
Note: The limiting factor of the distance is the round-trip time. The maximum supported
round-trip time between sites is 80 milliseconds (ms) for a 1 Gbps link. For a 10 Gbps link,
the maximum supported round-trip time between sites is 10 ms.
Figure 3-15 shows the schematic way to connect two sides through IP mirroring.
Figure 3-16 on page 104 and Figure 3-17 on page 105 show configuration possibilities for
connecting two sites through IP mirroring. Figure 3-16 on page 104 shows the configuration
with single links.
The administrator must configure at least one port on each site to use with the link.
Configuring more than one port means that replication continues, even if a node fails.
Figure 3-17 shows a redundant IP configuration with two links.
As shown in Figure 3-17, the following replication group setup for dual redundant links is
used:
Replication Group 1: Four IP addresses, each on a different node (green)
Replication Group 2: Four IP addresses, each on a different node (orange)
Figure 3-18 shows the configuration of an IP partnership. You can obtain the requirements to
set up an IP partnership at:
https://ibm.biz/BdHnKF
clusters must have at least one port that is configured with the same group ID and they
must be accessible to each other.
RC login
An RC login is a bidirectional full-duplex data path between two SVC clusters that are Remote
Copy partners. This path is between an IP address pair, one local and one remote. An RC
login carries Remote Copy traffic that consists of host writes, background copy traffic
during initial synchronization within a relationship, periodic updates in Global Mirror with
changed volumes relationships, and so on.
Path configuration
Path configuration is the act of setting up RC logins between two partnered SVC systems.
The selection of IP addresses to be used for RC logins is based on certain rules that are
specified in the Preferred practices section. Most of those rules are driven by constraints
and requirements from a vendor-supplied link management library. A simple algorithm is
run by each SVC system to arrive at the list of RC logins that must be established. Local
and remote SVC clusters are expected to arrive at the same IP address pairs for RC login
creation, even though they run the algorithm independently.
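As a hedged sketch, an IP partnership can be created from the CLI after the replication port groups are configured; the remote cluster IP address and the bandwidth values are hypothetical:
svctask mkippartnership -type ipv4 -clusterip 10.20.30.40 -linkbandwidthmbits 100 -backgroundcopyrate 50
Run the equivalent command on the partner system so that the partnership becomes fully configured.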
Preferred practices
The following preferred practices are suggested for IP replication:
Configure two physical links between sites for redundancy.
Configure Ethernet ports that are dedicated for Remote Copy. Do not allow iSCSI host
attach for these Ethernet ports.
Configure remote copy port group IDs on both nodes for each physical link to survive node
failover.
A minimum of four nodes are required for dual redundant links to work across node
failures. If a node failure occurs on a two-node system, one link is lost.
Do not zone two SVC systems together over FC/FCoE when an IP partnership exists.
Configure CHAP secret-based authentication, if required.
The maximum supported round-trip time between sites is 80 ms for a 1 Gbps link.
The maximum supported round-trip time between sites is 10 ms for a 10 Gbps link.
For IP partnerships, the recommended method of copying is Global Mirror with changed
volumes because of the performance benefits. Also, Global Mirror and Metro Mirror might
be more susceptible to the loss of synchronization.
The amount of inter-cluster heartbeat traffic is 1 Mbps per link.
The minimum bandwidth requirement for the inter-cluster link is 10 Mbps. However, this
bandwidth scales up with the amount of host I/O that you choose to use.
For more information, see IBM SAN Volume Controller and Storwize Family Native IP
Replication, REDP-5103:
http://www.redbooks.ibm.com/abstracts/redp5103.html
For more information about supported storage subsystems, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
Apply the following general guidelines for back-end storage subsystem configuration
planning:
In the SAN, storage controllers that are used by the SVC clustered system must be
connected through SAN switches. Direct connection between the SVC and the storage
controller is not supported.
Multiple connections are allowed from the redundant controllers in the disk subsystem to
improve data bandwidth performance. It is not mandatory to have a connection from each
redundant controller in the disk subsystem to each counterpart SAN, but it is a preferred
practice. For example, canister A in a Storwize V3700 subsystem can be connected to SAN
A only, or to SAN A and SAN B. Also, canister B in the Storwize V3700 subsystem can be
connected to SAN B only, or to SAN B and SAN A.
Stretched System configurations are supported by certain rules and configuration
guidelines. For more information, see 3.3.7, “Stretched cluster configuration” on page 111.
All SVC nodes in an SVC clustered system must be able to see the same set of ports from
each storage subsystem controller. Violating this guideline causes the paths to become
degraded. This degradation can occur as a result of applying inappropriate zoning and
LUN masking. This guideline has important implications for a disk subsystem, such as
DS3000, Storwize V3700, Storwize V5000, or Storwize V7000, which imposes exclusivity
rules as to which HBA WWPNs a storage partition can be mapped to.
MDisks within storage pools: Software version 6.1 and later provide for better load
distribution across paths within storage pools.
In previous code levels, the path to MDisk assignment was made in a round-robin fashion
across all MDisks that are configured to the clustered system. With that method, no
attention is paid to how MDisks within storage pools are distributed across paths.
Therefore, it is possible and even likely that certain paths are more heavily loaded than
others.
This condition is more likely to occur with a smaller number of MDisks in the storage pool.
Starting with software version 6.1, the code contains logic that considers MDisks within
storage pools. Therefore, the code more effectively distributes their active paths that are
based on the storage controller ports that are available.
The detectmdisk command must be run following the creation or modification (addition
or removal of MDisks) of storage pools so that paths are redistributed, as shown in the CLI
sketch below.
If you do not have a storage subsystem that supports the SVC round-robin algorithm, ensure
that the number of MDisks per storage pool is a multiple of the number of storage ports that
are available. This approach ensures sufficient bandwidth to the storage controller and an
even balance across storage controller ports.
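As a hedged sketch, the path redistribution can be triggered and the result verified from the CLI; the output columns depend on the code level:
svctask detectmdisk
svcinfo lsmdisk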
In general, configure disk subsystems as though no SVC exists. However, we suggest the
following specific guidelines:
Disk drives:
– Exercise caution with large disk drives so that you do not have too few spindles to
handle the load.
– RAID 5 is suggested for most workloads.
Array sizes:
– An array size of 8+P or 4+P is suggested for the IBM DS4000® and DS5000™
families, if possible.
– Use the DS4000 segment size of 128 KB or larger to help the sequential performance.
– Upgrade to EXP810 drawers, if possible.
– Create LUN sizes that are equal to the RAID array and rank size. If the array size is
greater than 2 TB and the disk subsystem does not support MDisks that are larger than
2 TB, create the minimum number of LUNs of equal size.
– An array size of 7+P is suggested for the V3700, V5000, and V7000 Storwize families.
– When you are adding more disks to a subsystem, consider adding the new MDisks to
existing storage pools versus creating more small storage pools.
– Auto balancing was introduced in version 7.3 to restripe volume extents evenly across
all MDisks in the storage pools.
– A maximum of 1,024 worldwide node names (WWNNs) are available per cluster:
• EMC DMX/SYMM, all HDS, and SUN/HP HDS clones use one WWNN per port.
Each WWNN appears as a separate controller to the SVC.
• IBM, EMC CLARiiON, and HP use one WWNN per subsystem. Each WWNN
appears as a single controller with multiple ports/worldwide port names (WWPNs),
for a maximum of 16 ports/WWPNs per WWNN.
DS8000 that uses four or eight of the four-port HA cards:
– Use ports 1 and 3 or ports 2 and 4 on each card. (It does not matter for 8 Gb cards.)
This setup provides eight or 16 ports for the SVC use.
– Use eight ports minimum, up to 40 ranks.
– Use 16 ports for 40 or more ranks. Sixteen is the maximum number of ports.
DS4000/DS5000 (EMC CLARiiON/CX):
– Both systems have the preferred controller architecture, and the SVC supports this
configuration.
– Use a minimum of four ports, and preferably eight or more ports, up to a maximum of
16 ports, so that more ports equate to more concurrent I/O that is driven by the SVC.
– Support is available for mapping controller A ports to Fabric A and controller B ports to
Fabric B or cross-connecting ports to both fabrics from both controllers. The
cross-connecting approach is preferred to avoid Automatic Volume Transfer
(AVT)/Trespass from occurring if a fabric fails or all paths to a fabric fail.
DS3400 subsystems: Use a minimum of four ports.
Storwize family: Use a minimum of four ports, and preferably eight ports.
IBM XIV requirements and restrictions:
– The use of XIV extended functions, including snaps, thin provisioning, synchronous
replication (native copy services), and LUN expansion of LUNs that are presented to
the SVC, is not supported.
– A maximum of 511 LUNs from one XIV system can be mapped to an SVC clustered
system.
Full 15 module XIV recommendations (161 TB usable):
– Use two interface host ports from each of the six interface modules.
– Use ports 1 and 3 from each interface module and zone these 12 ports with all SVC
node ports.
– Create 48 LUNs of equal size, each of which is a multiple of 17 GB. This configuration
creates approximately 1632 GB if you are using the entire full frame XIV with the SVC.
– Map LUNs to the SVC as 48 MDisks and add all of them to the single XIV storage pool
so that the SVC drives the I/O to four MDisks and LUNs for each of the 12 XIV FC
ports. This design provides a good queue depth on the SVC to drive XIV adequately.
Six module XIV recommendations (55 TB usable):
– Use two interface host ports from each of the two active interface modules.
– Use ports 1 and 3 from interface modules 4 and 5. (Interface module 6 is inactive.)
Also, zone these four ports with all SVC node ports.
– Create 16 LUNs of equal size, each of which is a multiple of 17 GB. This configuration
creates approximately 1632 GB if you are using the entire XIV with the SVC.
– Map the LUNs to the SVC as 16 MDisks and add all of them to the single XIV storage
pool so that the SVC drives I/O to four MDisks and LUNs per each of the four XIV FC
ports. This design provides a good queue depth on the SVC to drive the XIV
adequately.
Nine module XIV recommendations (87 TB usable):
– Use two interface host ports from each of the four active interface modules.
– Use ports 1 and 3 from interface modules 4, 5, 7, and 8. (Interface modules 6 and 9 are
inactive.) Also, zone these eight ports with all of the SVC node ports.
– Create 26 LUNs of equal size, each of which is a multiple of 17 GB. This design
creates approximately 1632 GB if you are using the entire XIV with the
SVC.
– Map the LUNs to the SVC as 26 MDisks and map all of them to the single XIV storage
pool so that the SVC drives I/O to three MDisks and LUNs on each of the six ports and
four MDisks and LUNs on the other two XIV FC ports. This design provides a useful
queue depth on the SVC to drive the XIV adequately.
Configure XIV host connectivity for the SVC clustered system:
– Create one host definition on the XIV, and include all SVC node WWPNs.
– You can create clustered system host definitions (one per I/O Group), but the
preceding method is easier to configure.
– Map all LUNs to all SVC node WWPNs.
If a node fails within an I/O Group, the clustered system continues to run in a degraded configuration. The remaining node operates in write-through mode, which means that the data is written directly to the disk subsystem (the cache is disabled for write I/O).
The uninterruptible power supply (for CF8 and CG8) unit must be in the same rack as the
node to which it provides power, and each uninterruptible power supply unit can have only
one connected node.
The FC SAN connections between the SVC node and the switches are optical fiber. These
connections can run at 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps (DH8), depending on your
SVC and switch hardware. The 2145-CG8, 2145-CF8 and 2145-DH8 SVC nodes
auto-negotiate the connection speed with the switch.
The SVC node ports must be connected to the FC fabric only. Direct connections between
the SVC and the host, or the disk subsystem, are unsupported.
Two SVC clustered systems cannot have access to the same LUNs within a disk
subsystem. Configuring zoning so that two SVC clustered systems have access to the
same LUNs (MDisks) will likely result in data corruption.
The two nodes within an I/O Group can be co-located (within the same set of racks) or can
be in separate racks and separate rooms. For more information, see 3.3.7, “Stretched
cluster configuration” on page 111.
The SVC uses three MDisks as quorum disks for the clustered system. A preferred
practice for redundancy is to have each quorum disk in a separate storage subsystem,
where possible. The current locations of the quorum disks can be displayed by using the lsquorum command and relocated by using the chquorum command (a brief CLI sketch follows this list).
ISL configuration:
– ISLs are located between the SVC nodes.
– Maximum distance is similar to Metro Mirror distances.
– Physical requirements are similar to Metro Mirror requirements.
– ISL distance extension with active and passive WDM devices is supported.
Figure 3-20 shows an example of a stretched cluster with ISL configuration.
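The following is a minimal CLI sketch of checking and adjusting quorum disk placement. The MDisk ID and quorum index are examples only, and the exact chquorum options should be verified against your software level:

# List the three quorum disk candidates and show which one is active
lsquorum
# Move quorum index 2 onto MDisk 5 (example IDs only)
chquorum -mdisk 5 2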
Use the stretched cluster configuration with the volume mirroring option to realize an
availability benefit. After volume mirroring is configured, use the
lscontrollerdependentvdisks command to validate that the volume mirrors are on separate
storage controllers. Having the volume mirrors on separate storage controllers ensures that
access to the volumes is maintained if a storage controller is lost.
When you are implementing a stretched cluster configuration, two of the three quorum disks
can be co-located in the same room where the SVC nodes are located. However, the active
quorum disk must be in a separate room. This configuration ensures that a quorum disk is
always available, even after a single-site failure.
For stretched cluster configuration, configure the SVC in the following manner:
Site 1: Half of the SVC clustered system nodes and one quorum disk candidate
Site 2: Half of the SVC clustered system nodes and one quorum disk candidate
Site 3: Active quorum disk
When a stretched cluster configuration is used with volume mirroring, this configuration
provides a high-availability solution that is tolerant of a failure at a single site. If the primary or
secondary site fails, the remaining sites can continue performing I/O operations.
For more information about stretched cluster configurations, see Appendix C, “Stretched
Cluster” on page 939.
For more information, see IBM SAN Volume Controller Enhanced Stretched Cluster with
VMware, SG24-8211:
http://www.redbooks.ibm.com/abstracts/sg248211.html
MDisks in the SVC are LUNs that are assigned from the underlying disk subsystems to the
SVC and can be managed or unmanaged. A managed MDisk is an MDisk that is assigned to
a storage pool. Consider the following points:
A storage pool is a collection of MDisks. An MDisk can be contained only within a single
storage pool.
Since software version 7.5, the SVC supports up to 1,024 storage pools.
The number of volumes that can be allocated from a storage pool is unlimited; however, an I/O Group is limited to 2,048 volumes, and the clustered system limit is 8,192 volumes.
Volumes are associated with a single storage pool, except in cases where a volume is
being migrated or mirrored between storage pools.
The SVC supports extent sizes of 16 MiB, 32 MiB, 64 MiB, 128 MiB, 256 MiB, 512 MiB, 1024
MiB, 2048 MiB, 4096 MiB, and 8192 MiB. Support for extent sizes 4096 MiB and 8192 MiB
was added in SVC 6.1. The extent size is a property of the storage pool and is set when the
storage pool is created. All MDisks in the storage pool have the same extent size, and all
volumes that are allocated from the storage pool have the same extent size. The extent size
of a storage pool cannot be changed. If you want another extent size, the storage pool must
be deleted and a new storage pool configured.
Table 3-2 on page 115 lists all of the available extent sizes in an SVC.
Table 3-2 Extent size and total storage capacities per system
Extent size (MiB) Total storage capacity manageable per system
16        64 TiB
32        128 TiB
64        256 TiB
128       512 TiB
256       1 PiB
512       2 PiB
1024      4 PiB
2048      8 PiB
4096      16 PiB
8192      32 PiB
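Because the extent size cannot be changed after creation, it is specified when the storage pool is created. The following is a minimal CLI sketch; the pool name and the 1024 MiB extent size are example values, not a recommendation:

# Create a storage pool with a 1024 MiB extent size
mkmdiskgrp -name Pool_DS8K_01 -ext 1024
# Confirm the extent size of the new pool
lsmdiskgrp Pool_DS8K_01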
Number of storage pools     Upper limit of write cache data
1                           100%
2                           66%
3                           40%
4                           30%
5 or more                   25%
Consider the rule that no single partition can occupy more than its upper limit of cache
capacity with write data. These limits are upper limits, and they are the points at which the
SVC cache starts to limit incoming I/O rates for volumes that are created from the storage
pool. If a particular partition reaches this upper limit, the net result is the same as a global
cache resource that is full. That is, the host writes are serviced on a one-out-one-in basis
because the cache destages writes to the back-end disks.
However, only writes that are targeted at the full partition are limited. All I/O that is
destined for other (non-limited) storage pools continues as normal. The read I/O requests
for the limited partition also continue normally. However, because the SVC is destaging
write data at a rate that is greater than the controller can sustain (otherwise, the partition
does not reach the upper limit), read response times are also likely affected.
The storage pool defines which MDisks that are provided by the disk subsystem make up the
volume. The I/O Group, which is made up of two nodes, defines which SVC nodes provide I/O
access to the volume.
Important: No fixed relationship exists between I/O Groups and storage pools.
Important: Keep a warning level on the used capacity so that it provides adequate
time to respond and provision more physical capacity.
– When you create a thin-provisioned volume, you can choose the grain size for
allocating space in 32 KiB, 64 KiB, 128 KiB, or 256 KiB chunks. The grain size that you
select affects the maximum virtual capacity for the thin-provisioned volume. The default
grain size is 256 KiB, which is the recommended option. If you select 32 KiB for the
grain size, the volume size cannot exceed 260,000 GiB.
The grain size cannot be changed after the thin-provisioned volume is created.
Generally, smaller grain sizes save space but require more metadata access, which
can adversely affect performance. If you are not going to use the thin-provisioned
volume as a FlashCopy source or target volume, use 256 KiB to maximize
performance. If you are going to use the thin-provisioned volume as a FlashCopy source or target volume, specify the same grain size for the volume and for the FlashCopy function. (A brief CLI sketch follows these guidelines.)
– Thin-provisioned volumes require more I/Os because of directory accesses. For truly
random workloads with 70% read and 30% write, a thin-provisioned volume requires
approximately one directory I/O for every user I/O.
– The directory is two-way write-back-cached (as with the SVC fastwrite cache), so
certain applications perform better.
– Thin-provisioned volumes require more processor resources, so the performance per I/O Group can also be reduced.
– A thin-provisioned volume feature that is called zero detect provides clients with the
ability to reclaim unused allocated disk space (zeros) when they are converting a fully
allocated volume to a thin-provisioned volume by using volume mirroring.
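As an illustration of the grain size and warning threshold guidelines, the following minimal sketch creates a thin-provisioned volume. The pool name, size, and thresholds are example values:

# 100 GiB virtual capacity, 2% real capacity, autoexpand, 256 KiB grain size, warning at 80% of real capacity
mkvdisk -mdiskgrp Pool_DS8K_01 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80%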
Volume mirroring guidelines:
– Create or identify two separate storage pools to allocate space for your mirrored
volume.
– Allocate the storage pools that contain the mirrors from separate storage controllers.
– If possible, use a storage pool with MDisks that share characteristics. Otherwise, the
volume performance can be affected by the poorer performing MDisk.
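A mirrored volume with one copy in each of two storage pools can be created in a single command. The following is a minimal sketch; the pool and volume names are example values, and each pool is assumed to be backed by a different storage controller:

# Create a 100 GiB volume with two copies in separate storage pools
mkvdisk -mdiskgrp Pool_DS8K_01:Pool_V7K_01 -copies 2 -iogrp 0 -size 100 -unit gb -name MIRR_VOL01
# Verify that the two copies are in different pools
lsvdiskcopy MIRR_VOL01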
It is best to use zoning to limit the pathing to four paths. The hosts must run a multipathing
device driver to limit the pathing back to a single device. The multipathing driver that is
supported and delivered by SVC is the IBM Subsystem Device Driver (SDD). Native
multipath I/O (MPIO) drivers on selected hosts are supported. For more operating
system-specific information about MPIO support, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
The current version of the Subsystem Device Driver Device Specific Module (SDDDSM) for IBM products is available at this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000350
The number of paths to a volume from a host to the nodes in the I/O Group that owns the
volume must not exceed eight, even if eight is not the maximum number of paths that are
supported by the multipath driver (SDD supports up to 32). To restrict the number of paths
to a host volume, the fabrics must be zoned so that each host FC port is zoned to no more
than two ports from each SVC node in the I/O Group that owns the volume.
Multipathing: We suggest the following number of paths per volume (n+1 redundancy):
With two HBA ports, zone the HBA ports to the SVC ports 1 - 2 for a total of four
paths.
With four HBA ports, zone the HBA ports to the SVC ports 1 - 1 for a total of four
paths.
Optional (n+2 redundancy): With four HBA ports, zone the HBA ports to the SVC
ports 1 - 2 for a total of eight paths.
We use the term HBA port to describe the SCSI Initiator. We use the term SAN Volume
Controller port to describe the SCSI target.
The maximum number of host paths per volume must not exceed eight.
If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports
to maximize high availability and performance.
To configure greater than 256 hosts, you must configure the host to I/O Group mappings
on the SVC. Each I/O Group can contain a maximum of 512 hosts, so it is possible to
create 2,048 host objects on an eight-node SVC clustered system. Volumes can be
mapped only to a host that is associated with the I/O Group to which the volume belongs.
Port masking
You can use a port mask to control the node target ports that a host can access, which
satisfies the following requirements:
– As part of a security policy to limit the set of WWPNs that can obtain access to any
volumes through an SVC port
– As part of a scheme to limit the number of logins with mapped volumes visible to a host
multipathing driver, such as SDD, and therefore limit the number of host objects that
are configured without resorting to switch zoning
The port mask is an optional parameter of the mkhost and chhost commands. The port
mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all
ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value
is 1111 (all ports enabled).
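The following is a minimal sketch of creating a host object with a port mask; the host name and WWPNs are example values:

# Create the host and allow it to log in only through node ports 1 and 2 (mask 0011)
mkhost -name HOST01 -hbawwpn 10000000C1234567:10000000C1234568 -mask 0011
# The mask can be changed later without re-creating the host
chhost -mask 1111 HOST01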
The SVC supports connection to the Cisco MDS family and Brocade family. For more
information, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
Layers: Software version 6.3 introduced a new property that is called layer for the
clustered system. This property is used when a copy services partnership exists between
an SVC and an IBM Storwize V7000. There are two layers: replication and storage. All SVC clustered systems are in the replication layer, and this setting cannot be changed. By default, the IBM Storwize V7000 is in the storage layer, which must be changed by using the CLI command chsystem before you use it to make any copy services partnership with the SVC (a brief CLI sketch follows this note).
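A minimal sketch, assuming the commands are run on the IBM Storwize V7000 CLI before the partnership is created:

# Change the Storwize V7000 from the storage layer to the replication layer
chsystem -layer replication
# Confirm the layer setting
lssystem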
Apply the guidelines that are described next when you use SVC Advanced Copy Services.
FlashCopy guidelines
Consider the following FlashCopy guidelines:
Identify each application that must have a FlashCopy function implemented for its volume.
FlashCopy is a relationship between volumes. Those volumes can belong to separate
storage pools and separate storage subsystems.
You can use FlashCopy for backup purposes by interacting with the Tivoli Storage
Manager Agent, or for cloning a particular environment.
Define which FlashCopy best fits your requirements: No copy, Full copy, Thin-Provisioned,
or Incremental.
Define which FlashCopy rate best fits your requirement in terms of the performance and
the amount of time to complete the FlashCopy. Table 3-4 shows the relationship of the
background copy rate value to the attempted number of grains to be split per second.
Define the grain size that you want to use. A grain is the unit of data that is represented by
a single bit in the FlashCopy bitmap table. Larger grain sizes can cause a longer
FlashCopy elapsed time and a higher space usage in the FlashCopy target volume.
Smaller grain sizes can have the opposite effect. The data structure and the source data
location can modify those effects.
In an actual environment, check the results of your FlashCopy procedure in terms of the data that is copied at every run and in terms of elapsed time, comparing them to the new SVC FlashCopy results. If necessary, adapt the grains per second and the copy rate parameters to fit your environment’s requirements. See Table 3-4.
Table 3-4 Background copy rate and grains split per second
Copy rate value    Data copied per second    256 KiB grains per second    64 KiB grains per second
1 - 10             128 KiB                   0.5                          2
11 - 20            256 KiB                   1                            4
21 - 30            512 KiB                   2                            8
31 - 40            1 MiB                     4                            16
41 - 50            2 MiB                     8                            32
51 - 60            4 MiB                     16                           64
61 - 70            8 MiB                     32                           128
71 - 80            16 MiB                    64                           256
81 - 90            32 MiB                    128                          512
91 - 100           64 MiB                    256                          1024
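The copy rate and grain size are set when the FlashCopy mapping is created, and the copy rate can be changed later. The following is a minimal sketch; the volume names, mapping name, and rates are example values:

# Create a FlashCopy mapping with a 256 KiB grain size and a background copy rate of 50
mkfcmap -name FCMAP_DB01 -source DB_VOL01 -target DB_VOL01_FC -copyrate 50 -grainsize 256
# Adjust the copy rate later if the elapsed time or the back-end load is not acceptable
chfcmap -copyrate 80 FCMAP_DB01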
Figure 3-22 contains two redundant fabrics. Part of each fabric exists at the local clustered
system and at the remote clustered system. No direct connection exists between the two
fabrics.
Technologies for extending the distance between two SVC clustered systems can be broadly
divided into the following categories:
FC extenders
SAN multiprotocol routers
Because of the more complex interactions that are involved, IBM explicitly tests products of
this class for interoperability with the SVC. For more information about the current list of
supported SAN routers in the supported hardware list, see this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
IBM has tested a number of FC extenders and SAN router technologies with the SVC. You
must plan, install, and test FC extenders and SAN router technologies with the SVC so that
the following requirements are met:
The round-trip latency between sites must not exceed 80 ms (up to 250 ms for certain hardware and software combinations). For Global Mirror, this limit allows a distance between the primary and secondary sites of up to 25,000 km (15,534.28 miles).
Table 3-5 lists the maximum supported round-trip latency for different hardware types and installed software versions.
If you use remote mirroring between systems with 80-250 ms round-trip latency, you must
meet the following additional requirements:
– All nodes used for replication must be of a supported model (see Table 3-5 on
page 122).
– There must be a Fibre Channel partnership between systems, not an IP partnership.
– All systems in the partnership must have a minimum software level of 7.4.
– The rcbuffersize setting must be set to 512 MB on each system in the partnership.
This can be accomplished by running the chsystem -rcbuffersize 512 command on
each system (note that changing this setting is disruptive to Metro Mirror and Global
Mirror operations. Use this command only before partnerships have been created
between systems or when all partnerships with the system have been stopped.).
– Two Fibre Channel ports on each node that will be used for replication must be
dedicated for replication traffic. This can be achieved using SAN zoning and port
masking.
– SAN zoning should be applied to provide separate intra-system zones for each
local-remote IO group pair that will be used for replication.
The latency of long-distance links depends on the technology that is used to implement
them. A point-to-point dark fiber-based link often provides a round-trip latency of 1 ms per
100 km (62.13 miles) or better. Other technologies provide longer round-trip latencies,
which affect the maximum supported distance.
The configuration must be tested with the expected peak workloads.
When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for
the IBM SVC intercluster heartbeat traffic. The amount of traffic depends on how many
nodes are in each of the two clustered systems.
Table 3-6 shows the amount of heartbeat traffic, in megabits per second, that is generated
by various sizes of clustered systems.
Table 3-6 Intersystem heartbeat traffic in Mbps
Clustered system 1    Clustered system 2
                      2 nodes     4 nodes     6 nodes     8 nodes
2 nodes               5           6           6           6
4 nodes               6           10          11          12
6 nodes               6           11          16          17
8 nodes               6           12          17          21
These numbers represent the total traffic between the two clustered systems when no I/O
is taking place to mirrored volumes. Half of the data is sent by one clustered system, and
half of the data is sent by the other clustered system. The traffic is divided evenly over all
available intercluster links. Therefore, if you have two redundant links, half of this traffic is
sent over each link during fault-free operation.
The bandwidth between sites must, at the least, be sized to meet the peak workload
requirements, in addition to maintaining the maximum latency that was specified
previously. You must evaluate the peak workload requirement by considering the average
write workload over a period of 1 minute or less, plus the required synchronization copy
bandwidth.
With no active synchronization copies and no write I/O disks in Metro Mirror or Global
Mirror relationships, the SVC protocols operate with the bandwidth that is indicated in
Table 3-6 on page 123. However, you can determine the true bandwidth that is required for
the link only by considering the peak write bandwidth to volumes that are participating in
Metro Mirror or Global Mirror relationships and adding it to the peak synchronization copy
bandwidth.
If the link between the sites is configured with redundancy so that it can tolerate single
failures, you must size the link so that the bandwidth and latency statements continue to
be true, even during single failure conditions.
The configuration is tested to simulate the failure of the primary site (to test the recovery
capabilities and procedures), including eventual failback to the primary site from the
secondary.
The configuration must be tested to confirm that any failover mechanisms in the
intercluster links interoperate satisfactorily with the SVC.
The FC extender must be treated as a normal link.
The bandwidth and latency measurements must be made by, or on behalf of, the client.
They are not part of the standard installation of the SVC by IBM. Make these
measurements during installation and record the measurements. Testing must be
repeated following any significant changes to the equipment that provides the intercluster
link.
Users of Global Mirror must consider how to optimize the performance of the
long-distance link, which depends on the technology that is used to implement the link. For
example, when you are transmitting FC traffic over an IP link, you might want to enable
jumbo frames to improve efficiency.
The use of Global Mirror and Metro Mirror between the same two clustered systems is
supported.
The use of Global Mirror and Metro Mirror between the SVC clustered system and IBM
Storwize systems with a minimum code level of 6.3 is supported.
Support exists for cache-disabled volumes to participate in a Global Mirror relationship;
however, this design is not a preferred practice.
The gmlinktolerance parameter of the remote copy partnership must be set to an
appropriate value. The default value is 300 seconds (5 minutes), which is appropriate for
most clients.
During SAN maintenance, the user must choose one of the following actions: reduce the application I/O workload during maintenance (so that the degraded SAN components can manage the new workload); disable the gmlinktolerance feature; increase the gmlinktolerance value (which means that application hosts might see extended response times from Global Mirror volumes); or stop the Global Mirror relationships.
If the gmlinktolerance value is increased for maintenance lasting x minutes, it must be
reset only to the normal value x minutes after the end of the maintenance activity.
If gmlinktolerance is disabled during maintenance, it must be re-enabled after the
maintenance is complete.
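The gmlinktolerance value is set at the system level. The following is a minimal sketch of adjusting it around a maintenance window; a value of 0 disables the feature, and 300 seconds is the default:

# Display the current value (gm_link_tolerance field)
lssystem
# Disable gmlinktolerance for the maintenance window
chsystem -gmlinktolerance 0
# Restore the default after maintenance is complete
chsystem -gmlinktolerance 300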
Starting with software version 7.6, you can use the chsystem command to set the maximum replication delay for the system. This value ensures that a single slow write operation does not affect the entire primary site.
You can configure this delay for all relationships or consistency groups that exist on the
system by using the maxreplicationdelay parameter on the chsystem command. This
value indicates the amount of time (in seconds) that a host write operation can be
outstanding before replication is stopped for a relationship on the system. If the system
detects a delay in replication on a particular relationship or consistency group, only that
relationship or consistency group is stopped. In systems with a large number of relationships, a single slow relationship can cause delays for the remaining relationships on the system. This setting isolates the relationship that is experiencing delays so that you can investigate the cause of these issues. When the maximum replication delay is reached,
the system generates an error message that indicates the relationship that exceeded the
maximum replication delay.
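A minimal sketch of setting the maximum replication delay that is described above; the 30-second value is an example only:

# Stop replication for a relationship if a host write is outstanding for more than 30 seconds
chsystem -maxreplicationdelay 30
# Review the setting
lssystem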
Global Mirror volumes must have their preferred nodes evenly distributed between the
nodes of the clustered systems. Each volume within an I/O Group has a preferred node
property that can be used to balance the I/O load between nodes in that group.
Figure 3-23 on page 126 shows the correct relationship between volumes in a Metro
Mirror or Global Mirror solution.
The capabilities of the storage controllers at the secondary clustered system must be
provisioned to allow for the peak application workload to the Global Mirror volumes, plus
the client-defined level of background copy, plus any other I/O being performed at the
secondary site. Otherwise, the performance of applications at the primary clustered system can be limited by the performance of the back-end storage controllers at the secondary clustered system, which reduces the amount of I/O that applications can perform to Global Mirror volumes.
A complete review must be performed before Serial Advanced Technology Attachment
(SATA) for Metro Mirror or Global Mirror secondary volumes is used. The use of a slower
disk subsystem for the secondary volumes for high-performance primary volumes can
mean that the SVC cache might not be able to buffer all the writes, and flushing cache
writes to SATA might slow I/O at the production site.
Storage controllers must be configured to support the Global Mirror workload that is
required of them. You can dedicate storage controllers to only Global Mirror volumes,
configure the controller to ensure sufficient quality of service (QoS) for the disks that are
used by Global Mirror, or ensure that physical disks are not shared between Global Mirror
volumes and other I/O, for example, by not splitting an individual RAID array.
MDisks within a Global Mirror storage pool must be similar in their characteristics, for
example, RAID level, physical disk count, and disk speed. This requirement is true of all
storage pools, but maintaining performance is important when Global Mirror is used.
When a consistent relationship is stopped, for example, by a persistent I/O error on the
intercluster link, the relationship enters the consistent_stopped state. I/O at the primary
site continues, but the updates are not mirrored to the secondary site. Restarting the
relationship begins the process of synchronizing new data to the secondary disk. While
this synchronization is in progress, the relationship is in the inconsistent_copying state.
Therefore, the Global Mirror secondary volume is not in a usable state until the copy
completes and the relationship returns to a Consistent state. For this reason, it is highly
advisable to create a FlashCopy of the secondary volume before the relationship is
restarted. When started, the FlashCopy provides a consistent copy of the data, even while
the Global Mirror relationship is copying. If the Global Mirror relationship does not reach
the Synchronized state (for example, if the intercluster link experiences further persistent
I/O errors), the FlashCopy target can be used at the secondary site for disaster recovery
purposes.
If you plan to use a Fibre Channel over IP (FCIP) intercluster link, it is important to design
and size the pipe correctly.
Example 3-2 shows a best-guess bandwidth sizing formula.
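As a rough illustration of the sizing idea (the figures are assumptions for illustration only): a peak write rate of 50 MBps to the mirrored volumes corresponds to about 400 Mbps of replication traffic; adding roughly 20% for protocol overhead and the intersystem heartbeat traffic from Table 3-6 gives a requirement in the region of 480 - 500 Mbps, before any allowance for synchronization copy bandwidth or for maintaining throughput during a single link failure.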
Because multiple data migration methods are available, choose the method that best fits your
environment, operating system platform, type of data, and application’s service-level
agreement (SLA).
Tip: Technically, almost all storage controllers provide both striping (RAID 5 or RAID 10)
and a form of caching. The real benefit is the degree to which you can stripe the data
across all MDisks in a storage pool and therefore, have the maximum number of active
spindles at one time. The caching is secondary. The SVC adds its own cache on top of the cache that midrange controllers provide (usually a couple of GB). Enterprise systems
have much larger caches.
To ensure that your storage infrastructure delivers the performance and capacity that you want, undertake a performance and capacity analysis to reveal the business requirements of your storage environment. When this analysis is done, you can use the guidelines in this chapter to
design a solution that meets the business requirements.
When you are considering performance for a system, always identify the bottleneck and,
therefore, the limiting factor of a specific system. You must also consider the workload for which you identify the limiting factor; the component that limits one workload might not be the same component that is the limiting factor for other workloads.
When you are designing a storage infrastructure with the SVC or implementing an SVC in an
existing storage infrastructure, you must consider the performance and capacity of the SAN,
disk subsystems, SVC, and the known or expected workload.
3.4.1 SAN
The SVC now has the following models:
2145-CF8
2145-CG8
2145-DH8
All of these models can connect to 2 Gbps, 4 Gbps, 8 Gbps, and 16 Gbps switches. From a
performance point of view, connecting the SVC to 8 Gbps or 16 Gbps switches is better.
Correct zoning on the SAN switch brings together security and performance. Implement a
dual HBA approach at the host to access the SVC.
Also, ensure that you configure the storage subsystem LUN-masking settings to map all
LUNs that are used by the SVC to all the SVC WWPNs in the clustered system.
The SVC is designed to handle large quantities of multiple paths from the back-end storage.
In most cases, the SVC can improve performance, especially on mid-sized to low-end disk
subsystems, older disk subsystems with slow controllers, or uncached disk systems, for the
following reasons:
The SVC can stripe across disk arrays, and it can stripe across the entire set of supported
physical disk resources.
The SVC 2145-CF8 and 2145-CG8 have a 24 GB (48 GB with the optional processor
card, 2145-CG8 only) cache. The SVC 2145-DH8 has 32 GB of cache (64 GB of cache
with a second CPU used for hardware-assisted compression acceleration for Real-time
Compression workloads).
The SVC can provide automated performance optimization of hot spots by using flash
drives and Easy Tier.
The SVC large cache and advanced cache management algorithms also allow it to improve
on the performance of many types of underlying disk technologies. The SVC capability to
manage (in the background) the destaging operations that are incurred by writes (in addition
to still supporting full data integrity) has the potential to be important in achieving good
database performance.
Depending on the size, age, and technology level of the disk storage system, the total cache
that is available in the SVC can be larger, smaller, or about the same as the cache that is
associated with the disk storage. Because hits to the cache can occur in the upper (SVC) or
the lower (disk controller) level of the overall system, the system as a whole can use the
larger amount of cache wherever it is located. Therefore, if the storage control level of the
cache has the greater capacity, expect hits to this cache to occur, in addition to hits in the
SVC cache.
Also, regardless of their relative capacities, both levels of cache tend to play an important role
in allowing sequentially organized data to flow smoothly through the system. The SVC cannot
increase the throughput potential of the underlying disks in all cases because this increase
depends on the underlying storage technology and the degree to which the workload exhibits
hotspots or sensitivity to cache size or cache algorithms.
For more information about the SVC cache partitioning capability, see IBM SAN Volume
Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this website:
http://www.redbooks.ibm.com/abstracts/redp4426.html
Assuming that no bottlenecks exist in the SAN or on the disk subsystem, you must follow
specific guidelines when you perform the following tasks:
Creating a storage pool
Creating volumes
Connecting to or configuring hosts that must receive disk space from an SVC clustered
system
For more information about performance and preferred practices for the SVC, see SAN
Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is
available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
General recommendations for Real-time Compression:
Best results are achieved if the data compression ratio stays at 25% or above. Volumes can be scanned with the built-in Comprestimator so that an informed decision can be made about which volumes to compress (see the brief CLI sketch after this list).
More concurrency within the workload gives a better result than single-threaded sequential I/O streams.
I/O is de-staged to RACE from the upper cache in 64 KiB pieces, and best results will be
achieved if the I/O size does not exceed this size.
Volumes that are used for only one purpose usually have consistent work patterns. Mixing database, virtualization, and general-purpose data within the same volume can make the workload inconsistent: such volumes might have no stable I/O size, no specific work pattern, and a below-average compression ratio, which makes them hard to investigate in a case of performance degradation. Real-time Compression development recommends not mixing data types within the same volume whenever possible.
It is best not to recompress data that is already compressed, so volumes that contain pre-compressed data should remain uncompressed volumes.
Volumes with encrypted data have a very low compression ratio and are not good candidates for compression.
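With software version 7.6, the built-in Comprestimator can be run from the CLI. A minimal sketch follows; the volume ID is an example:

# Estimate the compression savings for a single volume
analyzevdisk 0
# Estimate the savings for every volume in the system
analyzevdiskbysystem
# Review the estimated compression ratios
lsvdiskanalysis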
For more information about using IBM Tivoli Storage Productivity Center to monitor your
storage subsystem, see SAN Storage Performance Management Using Tivoli Storage
Productivity Center, SG24-7364, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247364.html
You have full management control of the SVC regardless of which method you use. IBM Tivoli
Storage Productivity Center is a robust software product with various functions (including
performance and capacity features) that must be purchased separately.
If you have a previously installed SVC cluster in your environment, it is possible that you are
using the SVC Console, which is also known as the Hardware Management Console (HMC).
If you are also using the separately purchased product that is called IBM System Storage Productivity Center (SSPC), which is no longer offered, you can log in to your SVC from only one of them at a time.
If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you
are using the SVC Console or IBM Tivoli Storage Productivity Center server because the
SVC CLI is on the cluster and accessed through Secure Shell (SSH), which can be installed
anywhere.
To access the SVC management GUI, direct a web browser to the system management IP
address.
Note: Ensure that your web browser is supported and has the appropriate settings enabled. Refer to the IBM SAN Volume Controller Knowledge Center at the following link:
https://ibm.biz/BdHnKF
Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.
For more information about TCP/IP prerequisites, see Chapter 3, “Planning and
configuration” on page 83.
Ensure that the SAN Volume Controller nodes are physically installed and Ethernet and Fibre
Channel (FC) connectivity has been correctly configured.
Before configuring the cluster, ensure that the following information is available:
License
The license indicates whether the client is permitted to use FlashCopy, Metro Mirror,
Encryption, and the Real-time Compression features. It also indicates how much capacity
the client is licensed to virtualize.
For IPv4 addressing:
– IPv4 addresses: These addresses include one address for the cluster and one address
for each node of the cluster to be used as the service address.
– IPv4 subnet mask.
– Gateway IPv4 address.
For IPv6 addressing:
– IPv6 addresses: These addresses include one address for the cluster and one address
for each node of the cluster to be used as the service address.
– IPv6 prefix.
– Gateway IPv6 address.
To assist you in starting an SVC initial configuration, Figure 4-3 shows a common flowchart
that covers all of the types of management.
In the next sections, we describe each of the steps shown in Figure 4-3.
Note: The 2145-DH8 does not provide IPv6 IP addresses for the Technician Port.
Notes: During the initial configuration, you will see certificate warnings because the 2145
certificates are self-issued. Accept these warnings because they are not harmful.
Note: If the system cannot be initialized, you are directed to the service assistant
interface. Refer to error codes in the service console status to troubleshoot the problem.
3. Figure 4-5 on page 139 shows the Welcome panel and starts the wizard that allows you to
configure a new system. Click Next to start the wizard.
4. This chapter focuses on setting up a new system, so we select the first option and then
click Next as shown in Figure 4-6.
Important: If you are adding 2145-DH8 nodes to an existing system, ensure that the
existing systems are running software version 7.3 or higher. The 2145-DH8 only
supports software version 7.3 or higher.
5. The next panel prompts you to set an IP address for the cluster. You can choose between
an IPv4 or IPv6 address. In Figure 4-7, we set an IPv4 address.
6. When you click Next, the Restarting Web Server panel opens, as shown in Figure 4-8. Wait until the progress indicator completes, and then click Next.
7. When the system initialization completes, follow the on-screen instructions (Figure 4-9): Disconnect the Ethernet cable from the Technician port and from your PC or notebook, connect the same PC or notebook to the same network as the system, and click Finish. You are then redirected to the GUI to complete the system setup. You can connect to the system IP address from any management console that is connected to the same network as the system.
8. Whether you are redirected from your personal computer or notebook or connecting to the
Management IP address of the system, the License Agreement panel opens as shown in
Figure 4-10.
9. Read the license agreement and then click the Accept arrow. The initial password setup
panel for superuser opens (Figure 4-11 on page 142). Type a new password and type it
again to confirm it. The password length is 6 - 63 characters. The password cannot begin
or end with a space. After you type the password twice, click the Log in arrow.
Note: The default password for the superuser account is passw0rd (with the number zero, not the letter o).
10.The Welcome to System Setup panel opens (Figure 4-12). Click Next to continue the
installation process.
11.Click Next. You can choose to give the cluster a new name. We used ITSO_SVC_DH8, as
shown in Figure 4-13 on page 144.
12.Click Apply and Next after you type the name of the cluster. The Licensed Functions panel opens, as shown in Figure 4-14 on page 145. Enter the total purchased capacity for your system as authorized by your license agreement (Figure 4-14 on page 145 is only an example).
13.The next step is to set the time and date, as shown in Figure 4-15. In this case, date and
time were set manually using browser settings. At this time, you cannot choose to use the
24-hour clock. You can change to the 24-hour clock after you complete the initial
configuration. We recommend that you use a Network Time Protocol (NTP) server so that
all of your SAN and storage devices have a common time stamp for troubleshooting.
14.Click Apply and Next. The Encryption panel opens so that you can select whether encryption is enabled, as shown in Figure 4-16 on page 147 (for this system configuration, we do not enable encryption features in the initial setup). Click Next to continue the initialization process.
15.The next step is to configure the Call Home settings. In the first panel, set the System Location information (Figure 4-17 on page 148); all fields are required in this step.
17.Setting up an email server is optional, but we recommend that you do so. The next panel shows how to set up the email server. Enter the IP addresses of the email server, as shown in Figure 4-19 on page 150.
Important: A valid Simple Mail Transfer Protocol (SMTP) server IP address must be
available to complete this step.
18.You can click Ping to verify whether network access exists to the email server (SMTP
server).
Note: Notification alerts and warnings are configured after the system initialization.
Refer to Chapter 11, “Operations using the GUI” on page 715.
19.Click Apply and Next. The Summary panel opens (Figure 4-20 on page 151).
20.Click Finish to complete the initial configuration, and then click Close in the message that pops up (Figure 4-21 on page 152). You are automatically redirected to the System overview panel.
21.The System Overview window appears, showing that your system configuration is complete and that you can start configuring pools and hosts, as shown in Figure 4-22.
Figure 4-23 on page 153 shows the SVC node 2145-CF8 front panel.
Note: Software version 6.1 and later code levels introduce a new method for performing
service tasks. In addition to performing service tasks from the front panel, you can service
a node through an Ethernet connection by using a web browser or the CLI. A service IP
address for each node canister is required.
For more information, see 4.2.2, “SVC 2145-CF8 and 2145-CG8 service panels” on page 152.
Note: To create a system, do not repeat these instructions on more than one node.
After you complete the steps for initiating system creation from the front panel, use the
management GUI to create the system and add additional nodes to complete system
configuration.
When you create the system, you must specify either an IPv4 or an IPv6 system address
for port 1. After the system is created, you can specify additional IP addresses for port 1
and port 2 until both ports have an IPv4 address and an IPv6 address.
2. Press and release the Up or Down button until Actions is displayed.
Important: During these steps, if a timeout occurs while you are entering the input for
the fields, you must begin again from step 2. All of the changes are lost, so ensure that
you have all of the information available before you begin again.
If the New Cluster IPv4? or New Cluster IPv6? action is displayed, move to step 5.
If the New Cluster IPv4? or New Cluster IPv6? action is not displayed, this node is
already a member of a cluster. Complete the following steps:
a. Press and release the Up or Down button until Actions is displayed.
b. Press and release the Select button to return to the Main Options menu.
c. Press and release the Up or Down button until Cluster: is displayed. The name of the
cluster to which the node belongs is displayed on line two of the panel.
In this case, you have two options:
Your first option is to delete this node from the cluster by completing the following
steps:
i. Press and release the Up or Down button until Actions is displayed.
ii. Press and release the Select button.
iii. Press and release the Up or Down button until Remove Cluster? is displayed.
iv. Press and hold the Up button.
v. Press and release the Select button.
vi. Press and release the Up or Down button until Confirm remove? is displayed.
vii. Press and release the Select button.
viii.Release the Up button, which deletes the cluster information from the node.
ix. Return to step 1 on page 154 and start again.
Your second option (if you do not want to remove this node from an existing cluster) is
to review the situation to determine the correct nodes to include in the new cluster.
5. Press and release the Select button to create the cluster.
6. Press and release the Select button again to modify the IP address.
7. Use the Up or Down navigation button to change the value of the first field of the
IP address to the value that was chosen.
8. Use the Right navigation button to move to the next field. Use the Up or Down navigation
button to change the value of this field.
9. Repeat step 7 for each of the remaining fields of the IP address.
10.When the last field of the IP address is changed, press the Select button.
11.Press the Right arrow button:
– For IPv4, IPv4 Subnet: is displayed.
– For IPv6, IPv6 Prefix: is displayed.
12.Press the Select button to enter edit mode.
13.Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were
changed. There is only a single field for IPv6 Prefix.
14.When the last field of IPv4 Subnet/IPv6 Mask is changed, press the Select button.
15.Press the Right navigation button:
– For IPv4, IPv4 Gateway: is displayed.
– For IPv6, IPv6 Gateway: is displayed.
16.Press the Select button.
17.Change the fields for the appropriate gateway in the same way that the IPv4/IPv6 address
fields were changed.
18.When the changes to all of the Gateway fields are made, press the Select button.
19.To review the settings before the cluster is created, use the Right and Left buttons. Make
any necessary changes, use the Right and Left buttons to see “Confirm Created?”, and
then press the Select button.
20.After you complete this task, the following information is displayed on the service display
panel:
– Cluster: is displayed on line one.
– A temporary, system-assigned cluster name that is based on the IP address is
displayed on line two.
If the cluster is not created, Create Failed: is displayed on line one of the service display.
Line two contains an error code. For more information about the error codes and to
identify the reason why the cluster creation failed and the corrective action to take, see the
product support website
http://www.ibm.com/storage/support/2145
When you create the cluster from the front panel with the correct IP address format, you can
finish the cluster configuration by accessing the management GUI, completing the Create
Cluster wizard, and adding other nodes to the cluster.
Important: At this time, do not repeat this procedure to add other nodes to the cluster.
To add nodes to the cluster, follow the steps that are described in Chapter 10, “Operations
using the CLI” on page 565 and Chapter 11, “Operations using the GUI” on page 715.
Note: Ensure that the SVC cluster IP address (svcclusterip) can be reached successfully
by using a ping command from the network.
4.3.2 Post-requisites
The following steps are optional for completing the SVC cluster configuration, but we strongly recommend that you complete them at some point during the SVC implementation phase. Steps 4 - 8 are also shown in a brief CLI sketch after this list.
1. Configure the SSH keys for the command-line user, as shown in 4.4, “Secure Shell
overview” on page 157.
2. Configure user authentication and authorization as shown in Chapter 11, “Operations
using the GUI” on page 715.
3. Set up event notifications and inventory reporting as shown in Chapter 11, “Operations
using the GUI” on page 715.
4. Create the storage pools.
5. Add MDisks to the storage pool.
6. Identify and create volumes.
7. Create host objects.
8. Map volumes to hosts.
9. Identify and configure the FlashCopy mappings and Metro Mirror relationship.
10.Back up configuration data as shown in Chapter 10, “Operations using the CLI” on
page 565.
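The following minimal CLI sketch illustrates steps 4 - 8; all names, WWPNs, and sizes are example values only:

mkmdiskgrp -name Pool0 -ext 1024                                   # step 4: create a storage pool
addmdisk -mdisk mdisk0:mdisk1 Pool0                                # step 5: add MDisks to the pool
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name VOL01    # step 6: create a volume
mkhost -name HOST01 -hbawwpn 10000000C1234567                      # step 7: create a host object
mkvdiskhostmap -host HOST01 VOL01                                  # step 8: map the volume to the host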
When SSH client (A) attempts to connect to SSH server (B), the connection is authenticated by the SSH password or, if you require command-line access without entering a password, by a key pair. The key pair consists of two halves: a public key and a private key. The SSH client public key is put onto SSH server (B) by some means outside of the SSH session. When SSH client (A) tries to connect, the private key on SSH client (A) authenticates against its public half on SSH server (B).
Note: After one hour, a fixed SSH interactive session times out, which means the SSH
session is automatically closed. This session timeout limit is not configurable.
You can choose between password or SSH key authentication, or you can choose both
password and SSH key authentication for the SVC CLI. We describe SSH in the following
sections.
Tip: If you choose not to create an SSH key pair, you can still access the SVC cluster by
using the SVC CLI, if you have a user password. You are authenticated through the user
name and password.
The connection is secured by using a private key and a public key pair. Securing the
connection includes the following steps:
1. A public key and a private key are generated together as a pair.
2. A public key is uploaded to the SSH server (SVC cluster).
3. A private key identifies the client. The private key is checked against the public key during
the connection. The private key must be protected.
4. Also, the SSH server must identify itself with a specific host key.
5. If the client does not have that host key yet, it is added to a list of known hosts.
The SSH client provides a secure environment from which to connect to a remote machine. It
uses the principles of public and private keys for authentication.
SSH keys are generated by the SSH client software. The SSH keys include a public key,
which is uploaded and maintained by the cluster, and a private key that is kept private to the
workstation that is running the SSH client. These keys authorize specific users to access the
administrative and service functions on the cluster. Each key pair is associated with a
user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored
on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.
To use the CLI, an SSH client must be installed on that system, the SSH key pair must be
generated on the client system, and the client’s SSH public key must be stored on the SVC
clusters.
You must install an SSH client program on the machine that you intend to use to manage SVC clusters. For this book, we use PuTTY, which is a free SSH client that provides all of the functions that are needed for an SSH connection. This software provides the SSH client function for users who are logged in to the SVC Console and who want to start the CLI to manage the SVC cluster.
4.4.1 Generating public and private SSH key pairs by using PuTTY
Complete the following steps to generate SSH keys on the SSH client system:
1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client
desktop, select Start → Programs → PuTTY → PuTTYgen.
Tip: You can find the PuTTYgen application under the installation directory of PuTTY if
it is not showing in the Windows Programs menu.
2. In the PuTTY Key Generator GUI window (Figure 4-26), complete the following steps to
generate the keys:
a. Select SSH-2 RSA.
b. Leave the number of bits in a generated key value at 2048.
c. Click Generate.
3. Move the cursor onto the blank area to generate the keys as shown in Figure 4-27.
To generate keys: The blank area is the large blank rectangle on the GUI inside the
section of the GUI labeled Key. Continue to move the mouse pointer over the blank area
until the progress bar reaches the far right. This action generates random characters to
create a unique key pair.
4. After the keys are generated, save them for later use by completing the following steps:
a. Click Save public key, as shown in Figure 4-28.
b. You are prompted for a name, for example, pubkey, and a location for the public key, for
example, C:\Support Utils\PuTTY. Click Save.
If another name and location are chosen, ensure that you maintain a record of the
name and location. You must specify the name and location of this SSH public key in
the steps that are described in 4.4.2, “Uploading the SSH public key to the SAN
Volume Controller cluster” on page 161.
Tip: The PuTTY Key Generator saves the public key with no extension, by default.
Use the string pub in naming the public key, for example, pubkey, to differentiate the
SSH public key from the SSH private key easily.
c. Click Save private key.
d. You are prompted with a warning message, as shown in Figure 4-29. Click Yes to save the private key without a passphrase.
e. When prompted, enter a name, for example, icat, and a location for the private key, for
example, C:\Support Utils\PuTTY. Click Save.
We suggest that you use the default name icat.ppk because this key was used for icat
application authentication and must have this default name in SVC clusters that are
running on versions before SVC 5.1.
Private key extension: The PuTTY Key Generator saves the private key with the
PPK extension.
4.4.2 Uploading the SSH public key to the SAN Volume Controller cluster
After you create your SSH key pair, you must upload your SSH public key onto the SVC cluster. Complete the following steps:
1. From your browser, enter https://svcclusteripaddress/.
In the GUI interface, go to the Access Management interface and select Users as shown
in Figure 4-30.
2. In the next window, as shown in Figure 4-31, select Create User to open a new window for
user creation.
3. From the window to create a user, as shown in Figure 4-32, you need to provide the
following information:
a. Select Authentication Mode as Local (for Remote users configuration, see 2.12, “User
authentication” on page 59).
b. Name (user ID) that you want to create
c. Select the access level you want to assign to user. The Security Administrator
(SecurityAdmin) is the maximum access level.
d. Type the password twice.
e. Select the location from which you want to upload the SSH Public Key file that you
created for this user. Click Create.
You completed the user creation process and uploaded the user’s SSH public key, which is paired later with the user’s private .ppk key, as described in 4.4.3, “Configuring the PuTTY
session for the CLI” on page 163. Figure 4-35 on page 165 shows the successful upload of
the SSH admin key.
The requirements for the SVC cluster setup by using the SVC cluster web interface are
complete.
Complete the following steps to configure the PuTTY session on the SSH client system:
1. From the management workstation you want to connect to SVC, select Start →
Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.
2. From the Category pane on the left in the PuTTY Configuration window (Figure 4-33), click
Session if it is not selected.
Tip: The items that you select in the Category pane affect the content that appears in
the right pane.
3. Under the “Specify the destination you want to connect to” section in the right pane, select
SSH. Under the “Close window on exit” section, select Only on clean exit, which ensures
that if any connection errors occur, they are displayed in the user’s window.
4. From the Category pane on the left, select Connection → SSH to display the PuTTY SSH
connection configuration window, as shown in Figure 4-34.
5. In the right pane, for the Preferred SSH protocol version, select 2.
6. From the Category pane on the left side of the PuTTY Configuration window, select
Connection → SSH → Auth.
7. As shown in Figure 4-35, in the “Private key file for authentication:” field under the
Authentication parameters section in the right pane, browse to or enter the fully qualified
directory path and file name of the SSH client private key file (for example, C:\Support
Utils\Putty\icat.ppk) that was created earlier.
You can skip the Connection → SSH → Auth part of the process if you created the user
only with password authentication and no SSH key.
Figure 4-35 PuTTY Configuration: Private key file location for authentication
8. From the Category pane on the left side of the PuTTY Configuration window, click
Session.
9. In the right pane, complete the following steps, as shown in Figure 4-36 on page 166:
a. Under the “Load, save, or delete a stored session” section, select Default Settings,
and then click Save.
b. For the Host name (or IP address) field, enter the IP address of the SVC cluster.
c. In the Saved Sessions field, enter a name (for example, SVC) to associate with this
session.
d. Click Save again.
You can now close the PuTTY Configuration window or leave it open to continue.
4. If this is the first time that you use the PuTTY application since you generated and
uploaded the SSH key pair, a PuTTY Security Alert window opens because the cluster’s
host key is not yet cached, as shown in Figure 4-38. Click Yes. The CLI starts.
5. As shown in Example 4-1, the private key that is used in this PuTTY session is now
authenticated against the public key that was uploaded to the SVC cluster.
You completed the required tasks to configure the CLI for SVC administration from the SVC
Console. You can close the PuTTY session.
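You can also issue individual commands non-interactively by using the plink utility that is included with PuTTY. The following line is a minimal sketch; the private key path matches the earlier examples, and the user name admin and the lssystem query are only illustrations:

   plink -i "C:\Support Utils\Putty\icat.ppk" admin@svcclusteripaddress svcinfo lssystem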
Note: You must be able to reach the SVC cluster IP address successfully by using the ping
command from the AIX workstation from which cluster access is required.
1. OpenSSL must be installed for OpenSSH to work. Complete the following steps to install
OpenSSH on the AIX client:
a. You can obtain the installation images from the following websites:
• https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
• http://sourceforge.net/projects/openssh-aix
b. Follow the instructions carefully because OpenSSL must be installed before SSH is
used.
2. Complete the following steps to generate an SSH key pair:
a. Run the cd command to browse to the /.ssh directory.
b. Run the ssh-keygen -t rsa command. The following message is displayed:
Generating public/private rsa key pair. Enter file in which to save the key
(//.ssh/id_rsa)
Note: This process generates two files with the name that you specify. If you select the
name key, the files are named key and key.pub, where key is the private key and key.pub
is the public key.
c. Pressing Enter uses the default file that is shown in parentheses. Otherwise, enter a
file name (for example, aixkey), and then press Enter. The following prompt is
displayed:
Enter a passphrase (empty for no passphrase)
d. When you use the CLI interactively, enter a passphrase because no other
authentication exists when you are connecting through the CLI. After you enter the
passphrase, press Enter. The following prompt is displayed:
Enter same passphrase again:
Enter the passphrase again. Press Enter.
e. A message is displayed indicating that the key pair was created. The private key file
has the name that was entered previously, for example, aixkey. The public key file has
the name that was entered previously with an extension of .pub, for example,
aixkey.pub.
The use of a passphrase: If you are generating an SSH key pair so that you can use
the CLI interactively, use a passphrase so that you must authenticate whenever you
connect to the cluster. You can use a passphrase-protected key for scripted usage, but
you must use the expect command or a similar tool to have the passphrase passed to
the ssh command.
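The following lines are a minimal sketch of the key generation and of a first CLI connection from the AIX host; the file name aixkey and the user name admin are examples only:

   cd /.ssh
   ssh-keygen -t rsa -f aixkey
   ssh -i /.ssh/aixkey admin@svcclusteripaddress svcinfo lssystem

Upload the aixkey.pub file to the corresponding SVC user, as described in the next paragraph, before you attempt the connection.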
The user configuration steps for uploading the key to the SVC are the same as shown in 4.4.2,
“Uploading the SSH public key to the SAN Volume Controller cluster” on page 161.
Using IPv6: To remotely access SVC clusters that are running IPv6, you must run a
supported web browser and have IPv6 configured on your local workstation.
2. Select Management IP Addresses and click port 1 of one of the nodes, as shown in
Figure 4-40.
3. In the window that is shown in Figure 4-41, complete the following steps:
a. Select Show IPv6.
b. Enter an IPv6 address in the IP Address field.
c. Enter an IPv6 prefix in the Subnet Mask/Prefix field. The Prefix field can have a value
of 0 - 127.
d. Enter an IPv6 gateway in the Gateway field.
e. Click OK.
5. The Change Management task is started on the server, as shown in Figure 4-43 on
page 172. Click Close when the task completes.
6. Test the IPv6 connectivity to the cluster by using an IPv6-capable web browser that is
supported by the SVC on your local workstation.
7. Remove the IPv4 address in the SVC GUI by accessing the same window, as shown
in Figure 4-41 on page 171. Confirm this change by clicking OK.
The ability to consolidate storage for attached open systems hosts provides the following
benefits:
Unified, easier storage management.
Increased utilization rate of the installed storage capacity.
Advanced Copy Services functions that are offered across storage systems from separate
vendors.
Only one kind of multipath driver to consider for the attached hosts.
Starting with SVC V6.4, Fibre Channel over Ethernet (FCoE) is supported on models
2145-CG8 and newer. Only 10 GbE lossless Ethernet or faster is supported.
Redundant paths to volumes can be provided for both SAN-attached and iSCSI-attached
hosts. Figure 5-1 on page 174 shows the types of attachments that are supported by SVC
release 7.4.
The SVC imposes no particular limit on the actual distance between the SVC nodes and host
servers. Therefore, a server can be attached to an edge switch in a core-edge configuration
while the SVC cluster is at the core of the fabric.
For host attachment, the SVC supports up to three inter-switch link (ISL) hops in the fabric,
which means that the server and the SVC can be separated by up to five FC links, four of which
can be 10 km (6.2 miles) long if longwave small form-factor pluggables (SFPs) are used.
The zoning capabilities of the SAN switch are used to create three distinct zones. SVC V7.6
supports 2 Gbps, 4 Gbps, 8 Gbps, or 16 Gbps FC fabrics, depending on the hardware
platform and on the switch where the SVC is connected. In an environment with a fabric that
contains switches of multiple speeds, the preferred practice is to connect the SVC and the disk
storage system to the switch that is operating at the highest speed.
The SVC nodes contain shortwave SFPs; therefore, they must be within the allowed distance,
which depends on the speed of the switch to which they attach. Consequently, the configuration
that is shown in Figure 5-2 on page 175 is supported.
Table 5-1 shows the fabric type that can be used for communicating between hosts, nodes,
and RAID storage systems. These fabric types can be used at the same time.
In Figure 5-2, the optical distance between SVC Node 1 and Host 2 is slightly over 40 km
(24.85 miles).
To avoid latencies that lead to degraded performance, we suggest that you avoid ISL hops
whenever possible. That is, in an optimal setup, the servers connect to the same SAN switch
as the SVC nodes.
Remember the following limits when you are connecting host servers to an SVC:
Up to 512 hosts per I/O Group are supported, which results in a total of 2048 hosts per
cluster.
If the same host is connected to multiple I/O Groups of a cluster, it counts as a host in
each of these groups.
A total of 2048 distinct, configured host worldwide port names (WWPNs) are supported
per I/O Group.
This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is
generated for each iSCSI name) that are associated with all of the hosts that are
associated with a single I/O Group.
Access from a server to an SVC cluster through the SAN fabric is defined by using switch
zoning.
Consider the following rules for zoning hosts with the SVC:
Homogeneous HBA port zones
Switch zones that contain HBAs must contain HBAs from similar host types and similar
HBAs in the same host. For example, AIX and Microsoft Windows hosts must be in
separate zones, and QLogic and Emulex adapters must also be in separate zones.
Optional (n+2 redundancy): With four HBA ports, zone HBA ports to SVC ports 1:2 for a
total of eight paths.
Here, we use the term HBA port to describe the SCSI initiator and SVC port to describe
the SCSI target.
Important: The maximum number of host paths per LUN must not exceed eight.
Figure 5-3 shows an overview of a configuration where servers contain two single-port HBAs
each and the configuration includes the following characteristics:
Distribute the attached hosts equally between two logical sets per I/O Group, if possible.
Connect hosts from each set to the same group of SVC ports. This “port group” includes
exactly one port from each SVC node in the I/O Group. The zoning defines the correct
connections.
The port groups are defined in the following manner:
– Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both
nodes, for example, N1/N2 of I/O Group zero.
– Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both
nodes of an I/O Group.
You can create aliases for these port groups (per I/O Group):
– Fabric A: IOGRP0_PG1 → N1_P1;N2_P1,IOGRP0_PG2 → N1_P3;N2_P3
– Fabric B: IOGRP0_PG1 → N1_P4;N2_P4,IOGRP0_PG2 → N1_P2;N2_P2
Create host zones by always using the host port WWPN and the PG1 alias for hosts in the
first host set. Always use the host port WWPN and the PG2 alias for hosts from the
second host set. If a host must be zoned to multiple I/O Groups, add the PG1 or PG2
aliases from the specific I/O Groups to the host zone.
The use of this schema provides four paths to one I/O Group for each host and helps to
maintain an equal distribution of host connections on SVC ports.
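The following commands are a minimal sketch of how the fabric A aliases and one host zone might be created on a Brocade-based fabric; the WWPNs, the host alias, and the zone and configuration names are assumptions for illustration only:

   alicreate "IOGRP0_PG1", "50:05:07:68:01:10:xx:01; 50:05:07:68:01:10:xx:02"
   alicreate "HOST1_HBA1", "21:00:00:24:ff:xx:xx:01"
   zonecreate "Z_HOST1_IOGRP0_PG1", "HOST1_HBA1; IOGRP0_PG1"
   cfgadd "PROD_CFG", "Z_HOST1_IOGRP0_PG1"
   cfgenable "PROD_CFG"

On fabrics from other switch vendors, the equivalent alias and zone objects can be created with that vendor’s zoning tools.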
When possible, use the minimum number of paths that are necessary to achieve a sufficient
level of redundancy. For the SVC environment, no more than four paths per I/O Group are
required to accomplish this layout.
All paths must be managed by the multipath driver on the host side. If we assume that a
server is connected through four ports to the SVC, each volume is seen through eight paths.
With 125 volumes mapped to this server, the multipath driver must support handling up to
1,000 active paths (8 x 125).
For more configuration and operational information about the IBM Subsystem Device Driver
(SDD), see the Multipath Subsystem Device Driver User’s Guide, S7000303, which is
available at this web site:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
For hosts that use four HBAs/ports with eight connections to an I/O Group, use the zoning
schema that is shown in Figure 5-4. You can combine this schema with the previous four-path
zoning schema.
Additionally, isolating remote replication traffic on dedicated ports is beneficial and ensures
that problems that affect the cluster-to-cluster interconnection do not adversely affect the
ports on the primary cluster and therefore affect the performance of workloads running on the
primary cluster.
We recommend the following port designations for isolating both local (node-to-node) and
remote (replication) traffic, as shown in Table 5-2 on page 180.
Important: Do not zone host or storage ports to ports that are designated for inter-node or
replication use in the 8-port, 12-port, or 16-port configurations, and in no case should
inter-node and replication traffic share the same ports. This approach minimizes buffer-to-buffer
(B2B) credit exhaustion, in which the long-distance latencies that are introduced by replication
tie up buffers that are needed by host, storage, or inter-node communications.
This recommendation provides the traffic isolation that you want and also simplifies migration
from existing configurations with only four ports, and later migrations from 8-port, 12-port, or
16-port configurations to configurations with more ports. More complicated port mapping
configurations that spread the port traffic across the adapters are supported and can be
considered. However, these approaches do not appreciably increase the availability of the
solution because the mean time between failures (MTBF) of the adapter is not significantly
less than that of the non-redundant node components.
Although alternate port mappings that spread traffic across HBAs can allow adapters to come
back online following a failure, they will not prevent a node from going offline temporarily to
reboot and attempt to isolate the failed adapter and then rejoin the cluster. Our
recommendation takes all these considerations into account with a view that the greater
complexity might lead to migration challenges in the future and the simpler approach is best.
5.3 iSCSI
The iSCSI protocol is a block-level protocol that encapsulates SCSI commands into TCP/IP
packets and therefore, uses an existing IP network instead of requiring the FC HBAs and
SAN fabric infrastructure. The iSCSI standard is defined by RFC 3720. The iSCSI
connectivity is a software feature that is provided by the SVC code.
The iSCSI-attached hosts can use a single network connection or multiple network
connections.
Restriction: Only hosts can attach to the SVC through iSCSI. The SVC back-end storage
must be SAN-attached.
Each SVC node is equipped with up to three onboard Ethernet network interface cards
(NICs), which can operate at a link speed of 10 Mbps, 100 Mbps, or 1000 Mbps. All NICs can
be used to carry iSCSI traffic. The NIC that is numbered 1 on each node is used as the primary
SVC cluster management port. For optimal performance, we advise that you use a 1 Gbps
Ethernet connection between the SVC and iSCSI-attached hosts when the SVC node’s
onboard NICs are used.
Starting with the SVC 2145-CG8, an optional 10 Gbps 2-port Ethernet adapter (Feature Code
5700) is available. The required 10 Gbps shortwave SFPs are available as Feature Code
5711. If the 10 GbE option is installed, you cannot install any internal solid-state drives
(SSDs). The 10 GbE option is used solely for iSCSI traffic.
Starting with the SVC 2145-DH8, an optional 10 Gbps 4-port Ethernet adapter (Feature Code
AH12) is available. This feature provides one I/O adapter card with four 10 Gb Ethernet ports
and SFP+ transceivers. It is used to add 10 Gb iSCSI/FCoE connectivity to the SVC Storage
Engine.
You can use the following types of iSCSI initiators in host systems:
Software initiator: Available for most operating systems, for example, AIX, Linux, and
Windows
Hardware initiator: Implemented as a network adapter with an integrated iSCSI processing
unit, which is also known as an iSCSI HBA.
For more information about the supported operating systems for iSCSI host attachment and
the supported iSCSI HBAs, see the following websites:
IBM SAN Volume Controller v7.6 Support Matrix:
http://www.ibm.com/support/docview.wss?uid=ssg1S1003658
IBM SAN Volume Controller Knowledge Center:
http://www.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp
An iSCSI target refers to a storage resource that is on an iSCSI server. It also refers to one of
potentially many instances of iSCSI nodes that are running on that server.
An iSCSI node is identified by its unique iSCSI name, which is referred to as an iSCSI qualified
name (IQN). The purpose of this name is the identification of the node only, not the
node’s address. In iSCSI, the name is separated from the address. This separation allows
multiple iSCSI nodes to use the same addresses or, as implemented in the SVC, the
same iSCSI node to use multiple addresses.
An iSCSI host in the SVC is defined by specifying its iSCSI initiator names. The following
example shows an IQN of a Windows server’s iSCSI software initiator:
iqn.1991-05.com.microsoft:itsoserver01
During the configuration of an iSCSI host in the SVC, you must specify the host’s initiator
IQNs.
An alias string can also be associated with an iSCSI node. The alias allows an organization to
associate a string with the iSCSI name. However, the alias string is not a substitute for the
iSCSI name.
A host that is accessing SVC volumes through iSCSI connectivity uses one or more Ethernet
adapters or iSCSI HBAs to connect to the Ethernet network.
Both onboard Ethernet ports of an SVC node can be configured for iSCSI. If iSCSI is used for
host attachment, we advise that you dedicate Ethernet port one for the SVC management
and port two for iSCSI use. This way, port two can be connected to a separate network
segment or virtual LAN (VLAN) for iSCSI because the SVC does not support the use of VLAN
tagging to separate management and iSCSI traffic.
Note: Ethernet link aggregation (port trunking) or “channel bonding” for the SVC nodes’
Ethernet ports is not supported for the 1 Gbps ports.
For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, you
can define two IPv4 and two IPv6 addresses or iSCSI network portals.
5.3.4 iSCSI setup of the SAN Volume Controller and host server
You must perform the following procedure when you are setting up a host server for use as an
iSCSI initiator with the SVC volumes. The specific steps vary depending on the particular host
type and operating system that you use.
To set up your host server for use as an iSCSI software-based initiator with the SVC volumes,
complete the following steps. (The CLI is used in this example.)
1. Complete the following steps to set up your SVC cluster for iSCSI:
a. Select a set of IPv4 or IPv6 addresses for the Ethernet ports on the nodes that are in
the I/O Groups that use the iSCSI volumes.
b. Configure the node Ethernet ports on each SVC node in the clustered system by
running the cfgportip command.
c. Verify that you configured the node and the clustered system’s Ethernet ports correctly
by reviewing the output of the lsportip command and lssystemip command.
d. Use the mkvdisk command to create volumes on the SVC clustered system.
e. Use the mkhost command to create a host object on the SVC. It defines the host’s
iSCSI initiator to which the volumes are to be mapped.
f. Use the mkvdiskhostmap command to map the volume to the host object in the SVC.
2. Complete the following steps to set up your host server:
a. Ensure that you configured your IP interfaces on the server.
b. Ensure that your iSCSI HBA is ready to use, or install the software for the iSCSI
software-based initiator on the server, if needed.
c. On the host server, run the configuration methods for iSCSI so that the host server
iSCSI initiator logs in to the SVC clustered system and discovers the SVC volumes.
The host then creates host devices for the volumes.
After the host devices are created, you can use them with your host applications.
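The following command sequence is a minimal sketch of the SVC-side configuration in step 1; the IP addresses, the volume size and name, the host name, and the storage pool name are examples only, and the IQN is the Windows example that is shown earlier:

   svctask cfgportip -node 1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 2
   svctask cfgportip -node 2 -ip 10.10.10.12 -mask 255.255.255.0 -gw 10.10.10.1 2
   svcinfo lsportip
   svctask mkvdisk -mdiskgrp Pool0_Site1 -iogrp 0 -size 50 -unit gb -name iscsi_vol01
   svctask mkhost -name iscsihost01 -iscsiname iqn.1991-05.com.microsoft:itsoserver01
   svctask mkvdiskhostmap -host iscsihost01 iscsi_vol01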
5.3.6 Authentication
The authentication of hosts is optional; by default, it is disabled. The user can choose to
enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves
sharing a CHAP secret between the cluster and the host. If the host does not provide the
correct secret, the SVC does not allow it to perform I/O to volumes. Also, you can
assign a CHAP secret to the cluster.
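The following commands are a minimal sketch of enabling CHAP on the CLI; the host object name iscsihost01 and the secrets are examples only:

   svctask chhost -chapsecret mYh0stS3cret iscsihost01
   svctask chsystem -chapsecret mYclu5terS3cret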
A concept that is used for handling the iSCSI IP address failover is called a clustered Ethernet
port. A clustered Ethernet port consists of one physical Ethernet port on each node in the
cluster. The clustered Ethernet port contains configuration settings that are shared by all of
these ports.
Figure 5-6 on page 186 shows an example of an iSCSI target node failover. This example
provides a simplified overview of what happens during a planned or unplanned node restart in
an SVC I/O Group. The example refers to the SVC nodes with no optional 10 GbE iSCSI
adapter installed.
The commands for the configuration of the iSCSI IP addresses were separated from the
configuration of the cluster IP addresses.
The following commands are used for managing iSCSI IP addresses:
The lsportip command lists the iSCSI IP addresses that are assigned for each port on
each node in the cluster.
The cfgportip command assigns an IP address to each node’s Ethernet port for iSCSI
I/O.
The following commands are used for managing the cluster IP addresses:
The lssystemip command returns a list of the cluster management IP addresses that are
configured for each port.
The chsystemip command modifies the IP configuration parameters for the cluster.
The parameters for remote services (SSH and web services) remain associated with the
cluster object. During an SVC code upgrade, the configuration settings for the clustered
system are applied to the node Ethernet port 1.
For iSCSI-based access, the use of redundant network connections and separating iSCSI
traffic by using a dedicated network or virtual LAN (VLAN) prevents any NIC, switch, or target
port failure from compromising the host server’s access to the volumes.
Because both onboard Ethernet ports of an SVC node can be configured for iSCSI, we
advise that you dedicate Ethernet port 1 for SVC management and port 2 for iSCSI usage. By
using this approach, port 2 can be connected to a dedicated network segment or VLAN for
iSCSI. Because the SVC does not support the use of VLAN tagging to separate management
and iSCSI traffic, you can assign the correct LAN switch port to a dedicated VLAN to separate
SVC management and iSCSI traffic.
Important: With Windows 2012, you can use the native Microsoft device drivers, but we
strongly advise that you install the IBM SDDDSM drivers. More information is available at
this website:
http://www.ibm.com/support/docview.wss?uid=ssg1S4000350
Before you attach the SVC to your host, ensure that all of the following requirements are
fulfilled:
Check all prerequisites that are provided in section 2.0 of the SDDDSM readme file.
Check the LUN limitations for your host system. Ensure that enough FC adapters are
installed in the server to handle the total number of LUNs that you want to attach.
On this page, browse to section V7.6.x, select Supported Hardware, Device Driver,
Firmware and Recommended Software Levels, and then search for Windows.
At this website, you also can find the hardware list for supported HBAs and the driver levels
for Windows. Check the supported firmware and driver level for your HBA and follow the
manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA.
Most manufacturers’ driver readme files list the instructions for the Windows registry
parameters that must be set for the HBA driver.
Also, check the documentation that is provided for the server system for the installation
guidelines of FC HBAs regarding the installation in certain PCI(e) slots, and so on.
The detailed configuration settings that you must make for the various vendors’ FC HBAs are
available in the SVC Information Center by selecting Installing → Host attachment → Fibre
Channel host attachments → Hosts running the Microsoft Windows Server operating
system.
On your Windows Server hosts, complete the following steps to change the disk I/O timeout
value to 60 in the Windows registry:
1. In Windows, click Start, and then select Run.
2. In the dialog text box, enter regedit and press Enter.
3. In the registry browsing tool, locate the
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.
4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the
value to 60, as shown in Figure 5-7 on page 189.
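Alternatively, the same registry change can be scripted. The following line is a minimal sketch that sets the value from an administrative command prompt:

   reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 60 /f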
MPIO is not installed with the Windows operating system by default. Instead, storage
vendors must package the MPIO drivers with their own device-specific modules (DSMs).
IBM SDDDSM is the IBM multipath I/O solution that is based on Microsoft MPIO technology.
It is a device-specific module that is designed to support IBM storage devices on Windows
Server 2008 (R2) and Windows Server 2012 servers.
The intention of MPIO is to achieve better integration of multipath storage with the operating
system. It also allows the use of multipathing in the SAN infrastructure during the boot
process for SAN boot hosts.
No SDDDSM support exists for Windows Server 2000 because SDDDSM requires the
STORPORT version of the HBA device drivers. Table 5-3 on page 190 lists the SDDDSM
driver levels that are supported at the time of this writing.
For more information about the levels that are available, see this website:
http://ibm.com/support/docview.wss?uid=ssg1S7001350#WindowsSDDDSM
After you download the appropriate archive (.zip file) from this URL, extract it to your local
hard disk and start setup.exe to install SDDDSM. A command prompt window opens, as
shown in Figure 5-8. Confirm the installation by entering Y.
After the setup completes, enter Y again to confirm the reboot request, as shown in
Figure 5-9.
After the reboot, the SDDDSM installation is complete. You can verify the installation
completion in Device Manager because the SDDDSM device appears (as shown in
Figure 5-10) and the SDDDSM tools are installed, as shown in Figure 5-11.
In this example, we mapped three SVC disks to the Windows Server 2008 R2 host that is
named Diomede, as shown in Example 5-1.
Complete the following steps to use the devices on your Windows Server 2008 R2 host:
1. Click Start → Run.
2. Run the diskmgmt.msc command, and then click OK. The Disk Management window
opens.
3. Select Action → Rescan Disks, as shown in Figure 5-12.
4. The SVC disks now appear in the Disk Management window, as shown in Figure 5-13 on
page 193.
After you assign the SVC disks, they are also available in Device Manager. The three
assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices
in the Device Manager, as shown in Figure 5-14.
5. To check that the disks are available, select Start → All Programs → Subsystem Device
Driver DSM, and then click Subsystem Device Driver DSM, as shown in Figure 5-15 on
page 194. The SDDDSM command-line utility appears.
Figure 5-15 Windows Server 2008 R2 Subsystem Device Driver DSM utility
6. Run the datapath query device command and press Enter. This command displays all of
the disks and the available paths, including their states, as shown in Example 5-2.
Total Devices : 3
C:\Program Files\IBM\SDDDSM>
SAN zoning: When the SAN zoning guidance is followed, one volume and a host with two
HBAs yield the following result: (number of volumes) x (number of paths per I/O Group per
HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths.
7. Right-click the disk in Disk Management and then select Online to place the disk online,
as shown in Figure 5-16.
10.Mark all of the disks that you want to initialize and then click OK, as shown in Figure 5-18
on page 196.
11.Right-click the unallocated disk space and then select New Simple Volume, as shown in
Figure 5-19.
14.Assign a drive letter and then click Next, as shown in Figure 5-21.
15.Enter a volume label and then click Next, as shown in Figure 5-22.
16.Click Finish. Repeat steps 9 - 16 for every SVC disk on your host system (Figure 5-23 on
page 198).
If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut
down all but one MSCS cluster node. Also, you must stop the applications in the resource that
access the volume to be expanded before the volume is expanded. Applications that are
running in other resources can continue to run. After the volume is expanded, start the
applications and the resource, and then restart the other nodes in the MSCS.
To expand a volume in use on a Windows Server host, you use the Windows DiskPart utility.
DiskPart was developed by Microsoft to ease the administration of storage on Windows hosts.
DiskPart is a command-line interface (CLI) that you can use to manage disks, partitions, and
volumes by using scripts or direct input on the command line. You can list disks and volumes,
select them, and after selecting them, get more detailed information, create partitions, extend
volumes, and so on. For more information about DiskPart, see this web site:
http://www.microsoft.com
For more information about expanding the partitions of a cluster-shared disk, see this web
site:
http://support.microsoft.com/kb/304736
Next, we show an example of how to expand a volume from the SVC on a Windows Server
2008 R2 host.
To list a volume size, use the lsvdisk <VDisk_name> command. This command provides the
volume size information for the Senegal_bas0001 volume before the volume is expanded.
Here, we can see that the capacity is 10 GB, and we can see the value of the vdisk_UID. To
see which vpath this volume uses on the Windows Server 2008 R2 host, we run the SDDDSM
datapath query device command on the Windows host (Figure 5-24).
To see the size of the volume on the Windows host, we use Disk Management, as shown in
Figure 5-24.
This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use
the svctask expandvdisksize command to increase the capacity on the volume. In this
example, we expand the volume by 1 GB, as shown in Example 5-3 on page 200.
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 11.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 11.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
encrypt no
To check that the volume was expanded, we use the svcinfo lsvdisk command. In
Example 5-3, we can see that the Senegal_bas0001 volume capacity was expanded to
11 GB.
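The following commands are a minimal sketch of this expansion and of the subsequent verification on the SVC CLI for the volume in this example:

   svctask expandvdisksize -size 1 -unit gb Senegal_bas0001
   svcinfo lsvdisk Senegal_bas0001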
After a disk rescan is performed in Windows, you can see the new unallocated space in
Windows Disk Management, as shown in Figure 5-25 on page 201.
This window shows that Disk1 now has 1 GB of new unallocated capacity. To make this capacity
available to the file system, use the following commands, as shown in Example 5-4:
diskpart: Starts DiskPart at a command prompt
list volume: Shows all available volumes
select volume: Selects the volume to expand
detail volume: Displays details for the selected volume, including the unallocated
capacity
extend: Extends the volume into the available unallocated space
Readonly : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
Offline : No
Bitlocker Encrypted : No
Installable : No
Volume Capacity : 11 GB
Volume Free Space : 1024 MB
DISKPART> extend
Read-only : No
Hidden : No
No Default Drive Letter: No
Shadow Copy : No
Offline : No
Bitlocker Encrypted : No
Installable : yes
After the volume is extended, the detail volume command shows no free capacity on the
volume anymore. The list volume command shows the file system size. The Disk
Management window also shows the new disk size, as shown in Figure 5-26.
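The following sequence is a compact sketch of the DiskPart commands that are described above; the volume number is an assumption that depends on your configuration:

   C:\> diskpart
   DISKPART> list volume
   DISKPART> select volume 2
   DISKPART> detail volume
   DISKPART> extend
   DISKPART> exit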
This example refers to a Windows basic disk. Dynamic disks can be expanded by expanding
the underlying SVC volume. The new space appears as unallocated space at the end of the
disk.
In this case, you do not need to use the DiskPart tool. Instead, you can use the Windows Disk
Management functions to allocate the new space. Expansion works regardless of the volume
type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded
without stopping I/O, in most cases.
Important: Never try to convert your basic disk to a dynamic disk, or vice versa, without
backing up your data. This operation is disruptive to the data because the position of the
logical block addresses (LBAs) on the disks changes.
When the host mapping is removed, perform a rescan for the disk. Disk Management on the
server removes the disk, and the vpath goes into the CLOSE state on the server. Verify
these actions by running the datapath query device command; however, the closed vpath is
removed only after the server is rebooted.
In the following examples, we show how to remove an SVC volume from a Windows server.
We show this example on a Windows Server 2008 operating system, but the steps also apply
to Windows Server 2008 R2 and Windows Server 2012.
Figure 5-24 on page 199 shows the Disk Management before removing the disk.
We now remove Disk 1. To find the correct volume information, we find the Serial/UID number
by using SDD, as shown in Example 5-5.
Example 5-5 Removing the SVC disk from the Windows server
C:\Program Files\IBM\SDDDSM>datapath query device
Total Devices : 3
Knowing the Serial/UID of the volume and that the host name is Senegal, we identify the host
mapping to remove by running the lshostvdiskmap command on the SVC. Then, we remove
the actual host mapping, as shown in Example 5-6.
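The following commands are a minimal sketch of this identification and removal on the SVC CLI, using the host name from this example; the volume name is taken from the earlier expansion example and might differ in your environment:

   svcinfo lshostvdiskmap Senegal
   svctask rmvdiskhostmap -host Senegal Senegal_bas0001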
Here, we can see that the volume is removed from the server. On the server, we then perform
a disk rescan in Disk Management, and we now see that the correct disk (Disk1) was
removed, as shown in Figure 5-27.
SDDDSM also shows us that the status for all paths to Disk1 changed to CLOSE because the
disk is not available, as shown in Example 5-7 on page 206.
Total Devices : 3
The disk (Disk1) is now removed from the server. However, to remove the SDDDSM
information about the disk, you must reboot the server at a convenient time.
We can install the PuTTY SSH client software on a Windows host by using the PuTTY
installation program. You can download PuTTY from this website:
http://www.chiark.greenend.org.uk/~sgtatham/putty/
Cygwin software features an option to install an OpenSSH client. You can download Cygwin
from this website:
http://www.cygwin.com/
In this section, we describe how to install VSS. The following operating system versions are
supported:
Windows Server 2008 with SP2 (x86 and x86_64)
Windows Server 2008 R2 with SP1
Windows Server 2012
IBM System Storage Support for Microsoft VSS (IBM VSS) is installed on the Windows host.
VSS maintains a free pool of volumes for use as a FlashCopy target and a reserved pool of
volumes. These pools are implemented as virtual host systems on the SVC.
5.6.2 System requirements for the IBM System Storage hardware provider
Ensure that your system satisfies the following requirements before you install IBM VSS and
Virtual Disk Service software on the Windows operating system:
SVC with FlashCopy enabled
IBM System Storage Support for Microsoft VSS and Virtual Disk Service (VDS) software
During the installation, you are prompted to enter information about the SVC Master Console,
including the location of the truststore file. The truststore file is generated during the
installation of the Master Console. You must copy this file to a location that is accessible to the
IBM System Storage hardware provider on the Windows server.
When the installation is complete, the installation program might prompt you to restart the
system. Complete the following steps to install the IBM System Storage hardware provider on
the Windows server:
1. Download the installation archive from the following IBM web site and extract it to a
directory on the Windows server where you want to install IBM System Storage Support
for VSS:
http://ibm.com/support/docview.wss?uid=ssg1S4000833
2. Log in to the Windows server as an administrator and browse to the directory where the
installation files were downloaded.
3. Run the installation program by double-clicking IBMVSSVDS_xx_xx_xx.exe.
4. The Welcome window opens, as shown in Figure 5-28. Click Next to continue with the
installation.
Figure 5-28 IBM System Storage Support for VSS and VDS installation: Welcome
5. Accept the license agreement in the next window. The Choose Destination Location
window opens, as shown in Figure 5-29. Click Next to accept the default directory where
the setup program installs the files, or click Change to select another directory.
Figure 5-30 IBM System Storage Support for VSS and VDS installation
7. The Enter CIM Server Details window opens. Enter the following information in the fields
(Figure 5-31 on page 212):
a. The CIM Server Address field is populated with the URL of the CIM server.
b. In the CIM User field, enter the user name that the IBM VSS software uses to access
the SVC.
c. In the CIM Password field, enter the password for the SVC user name. Click Next.
8. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to
restart the system, as shown in Figure 5-32 on page 213.
Additional information: If these settings change after installation, you can use the
ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services
software with the new settings.
If you do not have the CIM Agent server, port, or user information, contact your CIM Agent
administrator.
Provider name: 'IBM Storage Volume Shadow Copy Service Hardware Provider'
Provider type: Hardware
Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
Version: 4.10.0.1
If you can successfully perform all of these verification tasks, the IBM System Storage
Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was
successfully installed on the Windows server.
When a shadow copy is created, the IBM System Storage hardware provider selects a
volume in the free pool, assigns it to the reserved pool, and then removes it from the free
pool. This process protects the volume from being overwritten by other Volume Shadow Copy
Service users.
To successfully perform a Volume Shadow Copy Service operation, enough volumes must be
available that are mapped to the free pool. The volumes must be the same size as the source
volumes.
Use the SVC GUI or SVC CLI to complete the following steps:
1. Create a host for the free pool of volumes. You can use the default name VSS_FREE or
specify another name. Associate the host with the worldwide port name (WWPN)
5000000000000000 (15 zeros), as shown in Example 5-9.
2. Create a virtual host for the reserved pool of volumes. You can use the default name
VSS_RESERVED or specify another name. Associate the host with the WWPN
5000000000000001 (14 zeros), as shown in Example 5-10.
3. Map the logical units (volumes) to the free pool of volumes. The volumes cannot be
mapped to any other hosts. If you have volumes that are created for the free pool of
volumes, you must assign the volumes to the free pool.
4. Create host mappings between the volumes that were selected in step 3 and the
VSS_FREE host to add the volumes to the free pool. Alternatively, you can run the
ibmvcfg add command to add volumes to the free pool, as shown in Example 5-11.
5. Verify that the volumes were mapped. If you do not use the default WWPNs
5000000000000000 and 5000000000000001, you must configure the IBM System
Storage hardware provider with the WWPNs, as shown in Example 5-12.
Commands:
/h | /help | -? | /?
showcfg
list <all|free|reserved|assigned|unassigned|infc|pool|iogroup> <-l> (verbose)
add <volume serial number list> (separated by spaces if add more than one)
rem <volume serial number list> (separated by spaces if remove more than one)
del <enter the target volume serial number or UUID for SVC in order to delete
incremental flashcopy relationship> (separated by spaces)
clear <one of the configuration settings>
cleanupDependentMaps
testsnapshot <Drive letter and mount point list >
Configuration:
set user <CIMOM user name>
set password <CIMOM password>
set trace [0-7]
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set targetSVC
set backgroundCopy <0-100>
set incrementalFC <YES | NO>
set cimomTimeout <second, zero for unlimit >
set rescanOnceArr <sec> [CAUTION] Default is 0, time length [0-300].
set rescanOnceRem <sec> [CAUTION] Default is 0, time length [0-300].
set rescanRemMin <sec> [CAUTION] Default is 0, time length [0-300].
set rescanRemMax <sec> [CAUTION] Default is 45, time length [0-300].
set storageProtocol <auto, fc or iscsi>
set storagePool <storage pool name>
set allocateOption <option for dynamically allocating target volumes, standard(0)
or se(1)>
set ioGroup <io group(SVC Only) for dynamically allocated target volumes>
set vmhost <vmware web service address>
set vmusername <vmware web service user name>
set vmpassword <vmware web service login password>
set vmcredential <vmware web service session credential location>
set vmtimeout <vmware web service connection timeout>
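The following commands are a minimal sketch of steps 1, 2, and 4 above on the SVC CLI; the volume name vss_vol01 is an example only, and -force is used because the virtual WWPNs are never logged in to the fabric:

   svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
   svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
   svctask mkvdiskhostmap -host VSS_FREE vss_vol01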
ibmvcfg set username <username>
   Sets the user name that is used to access the SVC Console.
   Example: ibmvcfg set username Dan

ibmvcfg set password <password>
   Sets the password of the user name that accesses the SVC Console.
   Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>
   Specifies the IP address of the SVC on which the volumes are located when volumes are
   moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands. The IP
   address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.
   Example: set targetSVC 10.43.86.120

ibmvcfg set usingSSL
   Specifies whether to use the Secure Sockets Layer (SSL) protocol to connect to the SVC
   Console.
   Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>
   Specifies the SVC Console port number. The default value is 5999.
   Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>
   Sets the name of the server where the SVC Console is installed.
   Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>
   Specifies the namespace value that the Master Console uses. The default value is \root\ibm.
   Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>
   Specifies the WWPN of the host. The default value is 5000000000000000. Modify this value
   only if a host exists in your environment with a WWPN of 5000000000000000.
   Example: ibmvcfg set vssFreeInitiator 5000000000000000

ibmvcfg listvols all
   Lists all of the volumes, including information about the size, location, and host mappings.
   Example: ibmvcfg listvols all

ibmvcfg listvols free
   Lists the volumes that are in the free pool.
   Example: ibmvcfg listvols free

ibmvcfg listvols unassigned
   Lists the volumes that are currently not mapped to any hosts.
   Example: ibmvcfg listvols unassigned

ibmvcfg add -s ipaddress
   Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the
   IP address of the SVC where the volumes are located. The -s parameter overrides the default
   IP address that is set with the ibmvcfg set targetSVC command.
   Examples: ibmvcfg add vdisk12
   ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem -s ipaddress
   Removes one or more volumes from the free pool of volumes. Use the -s parameter to
   specify the IP address of the SVC where the volumes are located. The -s parameter overrides
   the default IP address that is set with the ibmvcfg set targetSVC command.
   Examples: ibmvcfg rem vdisk12
   ibmvcfg rem 600507680187000350000000000000BA -s 66.150.210.141
This website provides the hardware list for supported HBAs and device driver levels for Linux.
Check the supported firmware and driver level for your HBA, and follow the manufacturer’s
instructions to upgrade the firmware and driver levels for each type of HBA.
Often, the automatic update process also upgrades the system to the latest kernel level. Old
hosts that are still running SDD must turn off the automatic update of kernel levels because
certain drivers that are supplied by IBM, such as SDD, depend on a specific kernel and cease
to function on a new kernel. Similarly, HBA drivers must be compiled against specific kernels
to function optimally. By allowing automatic updates of the kernel, you risk affecting your host
systems unexpectedly.
In SLES10, the multipath drivers and tools are installed, by default. However, for RHEL5, the
user must explicitly choose the multipath components during the operating system installation
to install them. Each of the attached SVC LUNs has a special device file in the Linux /dev
directory.
Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC
allows. The following web site provides the current information about the maximum
configuration for the SVC:
http://www.ibm.com/storage/support/2145
Note: You can download example multipath.conf files from the following IBM
Subsystem Device Driver for Linux website:
http://ibm.com/support/docview.wss?uid=ssg1S4000107#DM
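The following stanza is a minimal sketch of what a device entry for the SVC (product ID 2145) in /etc/multipath.conf can look like; treat the parameter values as assumptions and use the IBM-provided example files as the authoritative reference, because the priority syntax differs between older and newer versions of the device-mapper multipath tools:

   devices {
       device {
           # The SVC presents itself with vendor IBM and product 2145
           vendor "IBM"
           product "2145"
           path_grouping_policy group_by_prio
           prio alua
           path_checker tur
           failback immediate
           no_path_retry 5
       }
   }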
4. Run the multipath -dl command to see the MPIO configuration. You see two groups with
two paths each. All paths must have the state [active][ready], and one group shows
[enabled].
5. Run the fdisk command to create a partition on the SVC volume, as shown in Example 5-17.
6. Create a file system by running the mkfs command, as shown in Example 5-18.
[root@palau ~]#
7. Create a mount point and mount the drive, as shown in Example 5-19.
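The following lines are a minimal sketch of steps 5 - 7; the multipath device name mpath0 and the mount point /svcdisk are assumptions that depend on your configuration:

   [root@palau ~]# fdisk /dev/mapper/mpath0
   [root@palau ~]# kpartx -a /dev/mapper/mpath0      # make the new partition visible to device-mapper
   [root@palau ~]# mkfs -t ext3 /dev/mapper/mpath0p1
   [root@palau ~]# mkdir /svcdisk
   [root@palau ~]# mount /dev/mapper/mpath0p1 /svcdisk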
For more information about supported HBAs for older ESX versions, see this website:
http://ibm.com/storage/support/2145
In most cases, the supported HBA device drivers are included in the ESXi server build. However,
for various newer storage adapters, you might need to load additional ESX drivers. Check the
following VMware hardware compatibility list (HCL) if you must load a custom driver for your
adapter:
http://www.vmware.com/resources/compatibility/search.php
After the HBAs are installed, load the default configuration of your FC HBAs. You must use
the same model of HBA with the same firmware in one server. Configuring Emulex and
QLogic HBAs to access the same target in one server is not supported.
operating system must be on a SAN disk and SAN-attached to the SVC. iSCSI SAN boot is
also supported by VMware but is not covered in this section.
If you are unfamiliar with the VMware environment and the advantages of storing virtual
machines and application data on a SAN, it is useful to get an overview about VMware
products before you continue.
Theoretically, you can run all of your virtual machines on one LUN. However, for performance
reasons in more complex scenarios, it can be better to load balance virtual machines over
separate HBAs, storage systems, or arrays.
For example, if you run an ESX host with several virtual machines, you can use one slower
array for Print and Active Directory Services guest operating systems that do not generate
much I/O, and a faster array for database guest operating systems.
The use of more and smaller volumes has the following advantages:
Separate I/O characteristics of the guest operating systems
More flexibility (the multipathing policy and disk shares are set per volume)
Microsoft Cluster Service requires its own volume for each cluster disk resource
For more information about designing your VMware infrastructure, see the following websites:
http://www.vmware.com/vmtn/resources/
http://www.vmware.com/resources/techresources/1059
Guidelines: ESX server hosts that use shared storage for virtual machine failover or load
balancing must be in the same zone. You can create only one VMFS datastore per volume.
For Emulex HBAs, the lpfc_linkdown_tmo and the lpfc_nodev_tmo parameters must be
set to 30 seconds.
To make these changes on your system (Example 5-20), complete the following steps:
1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing.
The file includes a section for every installed SCSI device.
3. Locate your SCSI adapters and edit the previously described parameters.
4. Repeat this process for every installed HBA.
Example 5-21 shows that the host Teddy is logged in to the SVC with two HBAs.
Then, the SCSI Controller Type must be set in VMware. By default, the ESXi server disables
the SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS
file at the same time. See Figure 5-33 on page 227.
However, in many configurations, such as high-availability configurations, the virtual machines
must share the VMFS file to share a disk.
Complete the following steps to set the SCSI Controller Type in VMware:
1. Log in to your Infrastructure Client, shut down the virtual machine, right-click it, and select
Edit settings.
2. Highlight the SCSI Controller, and select one of the following available settings, depending
on your configuration:
– None: Disks cannot be shared by other virtual machines.
– Virtual: Disks can be shared by virtual machines on the same server.
– Physical: Disks can be shared by virtual machines on any server.
Click OK to apply the setting.
3. Create your volumes on the SVC. Then, map them to the ESX hosts.
Tips: If you want to use features, such as VMotion, the volumes that own the VMFS file
must be visible to every ESX host that can host the virtual machine.
In the SVC, select Allow the virtual disks to be mapped even if they are already
mapped to a host.
The volume must have the same SCSI ID on each ESX host.
For this configuration, we created one volume and mapped it to our ESX host, as shown in
Example 5-22.
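The following commands are a minimal sketch of creating such a volume in the VMware storage pool that is used in this chapter and mapping it to two ESX hosts with the same SCSI ID; the host object names and the volume size are assumptions:

   svctask mkvdisk -mdiskgrp VMware -iogrp 0 -size 100 -unit gb -name VMW_datastore01
   svctask mkvdiskhostmap -host ESX_host1 -scsi 0 VMW_datastore01
   svctask mkvdiskhostmap -host ESX_host2 -scsi 0 -force VMW_datastore01

The -force flag corresponds to allowing the volume to be mapped even though it is already mapped to another host.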
ESX does not automatically scan for SAN changes (except when rebooting the entire ESXi
server). If you made any changes to your SVC or SAN configuration, complete the following
steps:
1. Open your VMware Infrastructure Client.
2. Select the host.
3. In the Hardware window, choose Storage Adapters.
4. Click Rescan.
Now, the created VMFS datastore appears in the Storage window, as shown in Figure 5-35.
You see the details for the highlighted datastore. Check whether all of the paths are available
and that the Path Selection is set to Round Robin.
If not all of the paths are available, check your SAN and storage configuration. After the
problem is fixed, select Refresh to perform a path rescan. The view is updated to the new
configuration.
The preferred practice is to use the Round Robin Multipath Policy for the SVC. If you need to
edit this policy, complete the following steps:
1. Highlight the datastore.
2. Click Properties.
3. Click Managed Paths.
4. Click Change.
5. Select Round Robin.
6. Click Change.
7. Click Close.
Now, your VMFS datastore is created and you can start using it for your guest operating
systems. Round Robin distributes the I/O load across all available paths. If you want to use a
fixed path, the Fixed policy setting is also supported.
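The same policy change can also be scripted on recent ESXi releases with esxcli. This is a sketch that assumes ESXi 5.x or later; replace <naa_device_id> with the device identifier that is reported for your SVC volume:
esxcli storage nmp device list
esxcli storage nmp device set --device <naa_device_id> --psp VMW_PSP_RR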
For more information about performing this task, see 5.4.5, “Changing the disk timeout on
Windows Server” on page 188.
Note: Before you perform the steps that are described here, back up your data.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name VMware
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 44.00MB
real_capacity 252.80MB
free_capacity 208.80MB
overallocation 4050
autoexpand on
warning 80
grainsize 256
se_copy yes
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 252.80MB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 44.00MB
parent_mdisk_grp_id 2
parent_mdisk_grp_name VMware
access_IO_group_count 1
last_access_time 151028033708
parent_mdisk_grp_id 2
parent_mdisk_grp_name VMware
owner_type none
owner_id
owner_name
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name VMware
type striped
mdisk_id
mdisk_name
fast_write_state not_empty
used_capacity 44.00MB
real_capacity 252.80MB
free_capacity 208.80MB
overallocation 44556
autoexpand on
warning 80
grainsize 256
se_copy yes
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 252.80MB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 44.00MB
parent_mdisk_grp_id 2
parent_mdisk_grp_name VMware
IBM_2145:ITSO SVC 3:superuser>
The VMFS volume is now extended and the new space is ready for use.
You can also configure SDDDSM to offer a web interface that provides basic information.
Before this configuration can work, you must configure the web interface. SDDSRV does not
bind to any TCP/IP port by default, but it allows port binding to be enabled or disabled
dynamically.
For all platforms except Linux, the multipath driver package includes an sddsrv.conf
template file that is named the sample_sddsrv.conf file. On all UNIX platforms except Linux,
the sample_sddsrv.conf file is in the /etc directory. On Windows platforms, the
sample_sddsrv.conf file is in the directory in which SDDDSM was installed.
Copy the sample_sddsrv.conf file to a new file that is named sddsrv.conf in the same
directory. You can then dynamically change the port binding by modifying the parameters in
the sddsrv.conf file and changing the values of enableport and loopbackbind to true.
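A minimal sketch of this procedure on a Windows host, assuming that SDDDSM is installed in C:\Program Files\IBM\SDDDSM (the installation path is an assumption; the parameter names follow the description above):
cd "C:\Program Files\IBM\SDDDSM"
copy sample_sddsrv.conf sddsrv.conf
Then edit sddsrv.conf so that it contains the following values:
enableport = true
loopbackbind = true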
For more information about SDDDSM configuration, see the IBM System Storage Multipath
Subsystem Device Driver User’s Guide, S7000303, which is available from this web site:
http://ibm.com/support/docview.wss?uid=ssg1S7000303
For more information about host attachment and storage subsystem attachment, and
troubleshooting, see the IBM SAN Volume Controller Knowledge Center at this web site:
http://www.ibm.com/support/knowledgecenter/api/redirect/svc/ic/index.jsp
For more information about specific considerations for attaching the XIV Storage System
to an SVC, see IBM XIV Storage System: Architecture, Implementation and Usage,
SG24-7659, which is available at this web site:
http://www.redbooks.ibm.com/abstracts/sg247659.html?Open
We also introduce and demonstrate the SVC support of the nondisruptive movement of
volumes between SVC I/O Groups, which is referred to as nondisruptive volume move or
multinode volume access.
For more information about the migrateexts command parameters, see the following
resources:
The SVC command-line interface help by entering the following command:
help migrateexts
The IBM System Storage SAN Volume Controller Command-Line Interface User’s Guide,
GC27-2287
When this command is run, a number of extents are migrated from the source MDisk where
the extents of the specified volume are located to a defined target MDisk that must be part of
the same storage pool.
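As a sketch, a command of the following form migrates extents of a volume from one MDisk to another MDisk in the same storage pool; the MDisk and volume names are illustrative:
svctask migrateexts -source mdisk4 -exts 64 -target mdisk5 -threads 2 -vdisk VDISK1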
If the type of the volume is image, the volume type changes to striped when the first extent is
migrated. The MDisk access mode changes from image to managed.
In this case, the extents that must be migrated are moved onto the set of MDisks that are not
being deleted. This also holds true if multiple MDisks are being removed from the storage
pool at the same time.
If a volume uses one or more extents that must be moved as a result of running the rmmdisk
command, the virtualization type for that volume is set to striped (if it was previously
sequential or image).
If the MDisk is operating in image mode, the MDisk changes to managed mode while the
extents are being migrated. Upon deletion, it changes to unmanaged mode.
Using the -force flag: If the -force flag is not used and if volumes occupy extents on one
or more of the MDisks that are specified, the command fails.
When the -force flag is used and if volumes occupy extents on one or more of the MDisks
that are specified, all extents on the MDisks are migrated to the other MDisks in the
storage pool if enough free extents exist in the storage pool. The deletion of the MDisks is
postponed until all extents are migrated, which can take time. If insufficient free extents
exist in the storage pool, the command fails.
Rule: For the migration to be acceptable, the source storage pool and the destination
storage pool must have the same extent size. Volume mirroring can also be used to
migrate a volume between storage pools. You can use this method if the extent sizes of the
two pools are not the same.
Extents are allocated to the migrating volume from the set of MDisks in the target storage
pool by using the extent allocation algorithm.
The process can be prioritized by specifying the number of threads that are used in parallel
(1 - 4) while migrating; the use of only one thread puts the least background load on the
system.
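For a migration between storage pools, the thread count is passed on the migratevdisk command. A sketch with illustrative names, using a single thread to minimize the background load:
svctask migratevdisk -mdiskgrp TargetPool -threads 1 -vdisk VDISK1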
The offline rules apply to both storage pools. Therefore, as shown in Figure 6-1, if any of the
M4, M5, M6, or M7 MDisks go offline, the V3 volume goes offline. If the M4 MDisk goes
offline, V3 and V5 go offline; however, V1, V2, V4, and V6 remain online.
If the type of the volume is image, the volume type changes to striped when the first extent is
migrated. The MDisk access mode changes from image to managed.
During the move, the volume is listed as being a member of the original storage pool. For
configuration purposes, the volume moves to the new storage pool instantaneously at the end
of the migration.
Regardless of the mode in which the volume starts, the volume is reported as being in
managed mode during the migration. Also, both of the MDisks that are involved are reported
as being in image mode during the migration. Upon completion of the command, the volume
is classified as an image mode volume.
In previous versions of the SVC code, volumes could be migrated between I/O Groups;
however, this operation required I/O operations to be quiesced on all of the volumes that
were being migrated.
The NDVM enhancements mean that access to a single volume is now possible from all nodes
in the clustered system. The feature adds the concept of access I/O Groups while maintaining
the concept of a caching I/O Group; that is, a volume can be accessed from any node in the
cluster, but a single I/O Group still controls the I/O caching.
This ability to dynamically rebalance the SVC workload is helpful in situations where the
natural growth of the environment's I/O demands forces the client and storage administrators
to expand hardware resources. With NDVM, you can instantly rebalance the workload of the
volumes onto a new set of SVC nodes (I/O Group) without needing to quiesce or interrupt
application operations and easily lower the high utilization of the original I/O Group. Although
this is now possible from an SVC perspective, we need to consider the implications from the
host's perspective.
The technical aspects of an NDVM transition, and I/O group access considerations are shown
in Figure 6-3 on page 243.
Before you move the volumes to a new I/O Group on the SVC system, ensure that the
following prerequisites are met:
The host has access to the new I/O Group node ports through SAN zoning.
The host is assigned to the new I/O Group on the SVC system level.
The host operating system and multipathing software support the NDVM feature.
For more information about supported systems, see the Supported Hardware List, Device
Driver, Firmware, and Recommended Software Levels for the SVC, which is available at:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004946
Other technical considerations include:
The caching I/O Group of a volume must be online for the volume to be accessible, even if
the volume is accessible through other I/O Groups.
A volume that is in a Metro Mirror or Global Mirror relationship cannot change its caching I/O
Group.
If a volume in a FlashCopy relationship is moved, the bitmaps are left in the original I/O
Group.
– This situation causes additional inter-node messaging to allow FlashCopy to operate.
A volume can be configured so that it can be accessed through all I/O Groups at all times.
A volume that is mapped to a host through multiple I/O Groups often does not have the
same LU number (SCSI ID) in each I/O Group.
– Reports indicate that this situation might cause problems with the newest VMware systems.
NDVM can be used to change the preferred node of a volume nondisruptively by moving the
caching I/O Group to a different I/O Group and back again.
– The preferred node change might not be detected by the multipathing driver without a
reboot.
Non-preferred nodes in any I/O Group have the same priority.
Compressed volumes can be used with NDVM as long as the target I/O Group does not
already have 200 compressed volumes in it.
– Be aware that this operation makes the target I/O Group into a compressed I/O Group.
Note: If a host already uses eight paths per volume, using NDVM increases the number of
paths per disk to more than eight, which is not supported.
In this example, we want to move one of the AIX host volumes from its existing I/O Group to
the recently added pair of SVC nodes. To perform the NDVM by using the SVC GUI, complete
the following steps:
1. Verify that the host is assigned to the source and target I/O Groups. Select Hosts from the
left menu pane (Figure 6-4) and confirm the # of I/O Groups column.
2. Right-click the host and select Properties → Mapped Volumes. Verify the volumes and
caching I/O Group ownership, as shown in Figure 6-5.
3. Now, we move lpar01_vol3 from the existing SVC I/O Group 0 to the new I/O Group 1.
From the left menu pane, select Volumes to see all of the volumes and optionally, filter the
output for the results that you want, as shown in Figure 6-6.
4. Right-click volume lpar01_vol3, and in the menu, select Move Volume to a New I/O
Group.
Note: This operation can also be performed from the Volumes by Pool or Volumes by Host
panels of the GUI.
5. The Move Volume to a New I/O Group wizard window starts (Figure 6-7 on page 246).
Click Next.
Figure 6-7 Move Volume to a New I/O Group wizard: Welcome window
6. Select I/O Group and Node → New Group (and optionally the preferred SVC node) or
leave Automatic for the default node assignment. Click Apply and Next, as shown in
Figure 6-8.
You can see the progress of the task that is displayed in the task window and the SVC CLI
command sequence that is running the svctask movevdisk and svctask addvdiskaccess
commands.
Figure 6-8 Move Volume to a New I/O Group wizard: Select New I/O Group and Preferred Node
7. The task completion window opens. Next, you need to detect the new paths on the
selected host to switch the I/O processing over to the new I/O Group. Perform the path
discovery on the host, as shown in Figure 6-9.
Figure 6-9 Move Volume to a New I/O Group wizard: Detect New Paths window
8. The SVC removes the old I/O Group access to a volume by calling the svctask
rmvdiskaccess CLI command. After the task completes, close the task window.
9. The confirmation with information about the I/O Group move is displayed on the Move
Volume to a New I/O Group wizard window. Proceed to the Summary by clicking Next.
10.Review the summary information and click Finish. The volume is successfully moved to a
new I/O Group without I/O disruption on the host side. To verify that volume is now being
cached by the new I/O Group, verify the Caching I/O Group column on the Volumes
submenu, as shown in Figure 6-10 on page 248.
Note: For SVC V6.4 and higher, the CLI command svctask chvdisk is not supported for
migrating a volume between I/O Groups. Although svctask chvdisk still modifies multiple
properties of a volume, the new SVC CLI command movevdisk is used for moving a volume
between I/O Groups.
In certain conditions, you might still want to keep the volume accessible through multiple I/O
Groups. This configuration is possible, but only a single I/O Group can provide the caching of
the I/O to the volume. To modify the access to a volume for more I/O Groups, use the SVC
CLI commands addvdiskaccess or rmvdiskaccess.
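As a sketch, the following command sequence performs the same move for the volume lpar01_vol3 from this example, moving the caching I/O Group from io_grp0 to io_grp1 and then removing the old access I/O Group after the host has discovered its new paths (the names follow this example; verify them against your configuration):
svctask movevdisk -iogrp io_grp1 lpar01_vol3
svctask addvdiskaccess -iogrp io_grp1 lpar01_vol3
(detect the new paths on the host, then remove access through the old I/O Group)
svctask rmvdiskaccess -iogrp io_grp0 lpar01_vol3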
To determine the extent allocation of MDisks and volumes, use the following commands:
To list the volume IDs and the corresponding number of extents that the volumes occupy
on the queried MDisk, use the following CLI command:
lsmdiskextent <mdiskname | mdisk_id>
To list the MDisk IDs and the corresponding number of extents that the queried volumes
occupy on the listed MDisks, use the following CLI command:
lsvdiskextent <vdiskname | vdisk_id>
To list the number of available free extents on an MDisk, use the following CLI command:
lsfreeextents <mdiskname | mdisk_id>
Important: After a migration is started, the migration cannot be stopped. The migration
runs to completion unless it is stopped or suspended by an error condition, or if the volume
that is being migrated is deleted.
If you want the ability to start, suspend, or cancel a migration or control the rate of
migration, consider the use of the volume mirroring function or migrating volumes between
storage pools.
The concept of volume extent migration is used by the Easy Tier feature and its automatic
load balancing function, which is enabled by default on each storage pool.
Automatic Load Balancing: Migrates extents within the same pool and distributes the
workload equally between MDisks.
6.3.1 Parallelism
You can perform several of the following activities in parallel.
Each system
An SVC system supports up to 32 active concurrent instances of members of the set of the
following migration tasks:
Migrate multiple extents
Migrate between storage pools
Migrate off a deleted MDisk
Migrate to image mode
The following high-level migration tasks operate by scheduling single extent migrations:
Up to 256 single extent migrations can run concurrently. This number is made up of single
extent migrations, which result from the operations previously listed.
The Migrate Multiple Extents and Migrate Between storage pools commands support a
flag with which you can specify the number of parallel “threads” to use (1 - 4). This
parameter affects the number of extents that are concurrently migrated for that migration
operation. Therefore, if the thread value is set to 4, up to four extents can be migrated
concurrently for that operation (subject to other resource constraints).
Each MDisk
The SVC supports up to four concurrent single extent migrations per MDisk. This limit does
not consider whether the MDisk is the source or the destination. If more than four single
extent migrations are scheduled for a particular MDisk, further migrations are queued,
pending the completion of one of the currently running migrations.
The migration is only suspended if any of the following conditions exist. Otherwise, the
migration is stopped:
The migration occurs between storage pools, and the migration progressed beyond the
first extent.
These migrations are always suspended rather than stopped because stopping a
migration in progress leaves a volume that is spanning storage pools, which is not a valid
configuration other than during a migration.
The migration is a Migrate to Image Mode (even if it is processing the first extent).
These migrations are always suspended rather than stopped because stopping a
migration in progress leaves the volume in an inconsistent state.
A migration is waiting for a metadata checkpoint that failed.
If a migration is stopped and if any migrations are queued while waiting for the use of the
MDisk for migration, these migrations now start. However, if a migration is suspended, the
migration continues to use resources, and so, another migration is not started.
The SVC attempts to resume the migration if the error log entry is marked as fixed by using
the CLI or the GUI. If the error condition no longer exists, the migration proceeds. The
migration might resume on a node other than the node that started the migration.
Chunks
Regardless of the extent size for the storage pool, data is migrated in units of 16 MiB. In this
description, this unit is referred to as a chunk.
During the migration, the extent can be divided into the following regions, as shown in
Figure 6-12 on page 252:
Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the
virtualization layer that is waiting for the chunk to be copied.
Reads to Region A are directed to the destination because this data was copied. Writes to
Region A are written to the source and the destination extent to maintain the integrity of
the source extent.
Reads and writes to Region C are directed to the source because this region is not yet
migrated.
The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During
this time, all writes to the chunk from higher layers in the software stack, such as cache
destages, are held back. If the back-end storage is operating with significant latency, this
operation might take time (minutes) to complete, which can have an adverse affect on the
overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is
still active after 1 minute, the migration is paused for 30 seconds. During this time, writes to
the chunk can proceed. After 30 seconds, the migration of the chunk is resumed. This
algorithm is repeated as many times as necessary to complete the migration of the chunk, as
shown in Figure 6-12 on page 252.
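As a quick check of the numbers in this description: a 16 MiB chunk that is copied with 64 synchronous reads and 64 synchronous writes implies a transfer size of 16 MiB / 64 = 256 KiB for each read and write operation.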
Figure 6-12 Migrating an extent (16 MiB chunks; not to scale)
The SVC ensures read stability during data migrations, even if the data migration is stopped
by a node reset or a system shutdown. This read stability is possible because the SVC
disallows writes on all nodes to the area that is being copied. On a failure, the extent
migration is restarted from the beginning. At the conclusion of the operation, we see the
following results:
Extents were migrated in 16 MiB chunks, one chunk at a time.
Chunks are either copied, in the process of being copied, or not yet copied.
When the extent is finished, its new location is saved.
Figure 6-13 shows the data migration and write operation relationship.
MDisk modes
The following MDisk modes are available:
Unmanaged MDisk
An MDisk is reported as unmanaged when it is not a member of any storage pool. An
unmanaged MDisk is not associated with any volumes and it has no metadata that is
stored on it. The SVC does not write to an MDisk that is in unmanaged mode except when
it attempts to change the mode of the MDisk to one of the other modes.
Image mode MDisk
Image mode provides a direct block-for-block translation from the MDisk to the volume
with no virtualization. Image mode volumes have a minimum size of one block (512 bytes)
and always occupy at least one extent. An image mode MDisk is associated with exactly
one volume.
Managed mode MDisk
Managed mode MDisks contribute extents to the pool of available extents in the storage
pool. Zero or more managed mode volumes might use these extents.
Figure: MDisk mode transitions (not in group, managed mode, image mode, and migrating to image mode) driven by add to group, remove from group, create image mode vdisk, delete image mode vdisk, and start migrate to image mode.
Image mode volumes have the special property that the last extent in the volume can be a
partial extent. Managed mode disks do not have this property.
To perform any type of migration activity on an image mode volume, the image mode disk first
must be converted into a managed mode disk. If the image mode disk has a partial last
extent, this last extent in the image mode volume must be the first extent to be migrated. This
migration is handled as a special case.
After this special migration operation occurs, the volume becomes a managed mode volume
and it is treated in the same way as any other managed mode volume. If the image mode disk
does not have a partial last extent, no special processing is performed. The image mode
volume is changed into a managed mode volume and it is treated in the same way as any
other managed mode volume.
After data is migrated off a partial extent, data cannot be migrated back onto the partial
extent.
Have one storage pool for all the image mode volumes and other storage pools for the
managed mode volumes, which use the migrate volume facility.
Be sure to verify that enough extents are available in the target storage pool.
In our example, we want to move one of the AIX host volumes from its existing I/O Group to
the recently added pair of SVC nodes. To perform the NDVM by using the SVC GUI, complete
the following steps:
1. Verify that the host is assigned to the source and target I/O Groups. Select Hosts from the
left menu pane (Figure 6-15) and confirm the # of I/O Groups column.
2. Right-click the host and select Properties → Mapped Volumes. Verify the volumes and
caching I/O Group ownership, as shown in Figure 6-16.
3. Now, we move Dat1 from the existing SVC I/O Group 0 to the new I/O Group 1. From the
left menu pane, select Volumes to see all of the volumes and optionally, filter the output
for the results that you want, as shown in Figure 6-6.
4. Right-click Dat1, and in the menu, select Migrate to another pool. As shown in
Figure 6-18 on page 257.
5. The Migrate to another pool window opens. Select the target pool that you are migrating
to and click Migrate, as shown in Figure 6-19.
6. You can see the progress of the task that is displayed in the task window and the SVC CLI
command sequence that is running the migratevdisk command.
Figure 6-20 Migrating Volume to a New I/O Group
The task completion window opens.
You can use these methods individually or together to migrate your server’s LUNs from one
storage subsystem to another storage subsystem by using the SVC as your migration tool.
The only downtime that is required for these methods is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.
Figure: Windows Server 2008 (W2k8) host attached directly to the SAN.
Figure 6-22 on page 259 shows the two LUNs (drive X and drive Y).
Figure 6-23 shows the properties of one of the DS3400 disks that uses the Subsystem
Device Driver DSM (SDDDSM). The disk appears as an FAStT Multi-Path Disk Device.
Figure: Windows Server 2008 (W2k8) host attached through the SVC I/O Group 0, with IBM or OEM storage subsystems and the Green, Red, Blue, and Black zones.
To add the SVC between the host system and the DS3400 storage subsystem, complete the
following steps:
1. Check that you installed the supported device drivers on your host system.
2. Check that your SAN environment fulfills the supported zoning configurations.
3. Shut down the host.
4. Change the LUN masking in the DS3400. Mask the LUNs to the SVC, and remove the
masking for the host.
Figure 6-25 on page 262 shows the two LUNs (win2008_lun_01 and win2008_lun_02)
with LUN IDs 2 and 3 that are remapped to the SVC Host ITSO_SVC_DH8.
Important: To avoid potential data loss, back up all the data that is stored on your
external storage before you use the wizard.
5. Log in to your SVC Console and open Pools → System Migration, as shown in
Figure 6-26.
6. Click Start New Migration, which starts a wizard, as shown in Figure 6-27.
7. Follow the Storage Migration Wizard, as shown in Figure 6-28 on page 263, and then click
Next.
8. Figure 6-29 shows the Prepare Environment for Migration information window. Click Next.
Figure 6-29 Storage Migration Wizard: Preparing the environment for migration (Step 2 of 8)
12.Mark desired MDisks (in our example mdisk0 and mdisk2) for migrating, as shown in
Figure 6-33, and then click Next.
13.Figure 6-34 shows the MDisk import process. During the import process, a storage pool is
automatically created, in our case, MigrationPool_8192. You can see that the command
that is issued by the wizard creates an image mode volume with a one-to-one mapping to
mdisk10 and mdisk12. Click Close to continue.
14.To create a host object to which we map the volume later, click Add Host, as shown in
Figure 6-35.
15.Figure 6-36 shows the empty fields that we must complete to match our host
requirements.
16.Enter the host name that you want to use for the host, add the Fibre Channel (FC) port,
and select a host type. In our case, the host name is Win_2008. Click Add Host, as shown
in Figure 6-37 on page 267.
18.Figure 6-39 shows that the host was added successfully. Click Next to continue.
Figure 6-40 Storage Migration Wizard: Volumes that are available for mapping (Step 6 of 8)
20.Mark both volumes and click Map to Host, as shown in Figure 6-41.
21.Modify the host mapping by choosing a host by using the drop-down menu, as shown in
Figure 6-42. Click Map.
22.Figure 6-43 shows the progress of the volume mapping to the host. Click Close when you
are finished.
23.After the volume-to-host mapping task is completed, the host is shown with Yes beneath
the Host Mappings column heading (Figure 6-44 on page 269). Click Next.
24.Select the storage pool that you want to use for migration, in our case, V7000_2_Test, as
shown in Figure 6-45. Click Next.
Figure 6-45 Storage Migration Wizard: Selecting a storage pool to use for migration (Step 7 of 8)
26.The window that is shown in Figure 6-47 opens. This window states that the migration has
begun. Click Finish.
27.The window that is shown in Figure 6-48 opens automatically to show the progress of the
migration. If it does not open, go back to the System Migration menu.
28.Click Volumes → Volumes by host, as shown in Figure 6-49, to see all the volumes that
are served by the new host for this migration step.
As you can see in Figure 6-50, the migrated volume is a mirrored volume with one copy on
the image mode pool and another copy in a managed mode storage pool. The administrator
can choose to leave the volume or split the initial copy from the mirror.
You can see the progress of the volume synchronization by using the running tasks indicator
(Figure 6-50).
6.6.3 Importing the migrated disks into a Windows Server 2008 host
To import the migrated disks into an online Windows Server 2008 Server host, complete the
following steps:
1. Start the Windows Server 2008 host system again, go to Disk Management of the
DS3400-allocated disks and see the new disk properties that changed to a 2145
Multi-Path Disk Device, as shown in Figure 6-51 on page 272.
3. Run the datapath query device command to check whether all paths are available as
planned in your SAN environment (Example 6-1).
3. To create an empty storage pool for migration, complete the following steps:
a. You are prompted for the pool name and extent size, as shown in Figure 6-55. After you
enter the information, click Next.
b. You are then prompted to optionally select the MDisk to include in the storage pool, as
shown in Figure 6-56 on page 276. Click Create.
4. Figure 6-57 on page 277 shows the progress status for pool migration as the system
creates a storage pool for migration. Click Close to continue.
5. From the Create Volumes panel, select the volume that you want to migrate to image
mode and select Export to Image Mode from the drop-down menu, as shown in
Figure 6-58.
6. Select the MDisk onto which you want to migrate the volume, as shown in Figure 6-59 on
page 277. Click Next.
7. Select a storage pool into which the image mode volume is placed after the migration
completes, in our case, the For Migration storage pool. Click Finish, as shown in
Figure 6-60.
8. The volume is exported to image mode and placed in the For Migration storage pool, as
shown in Figure 6-61. Click Close.
9. Browse to Pools → MDisks by Pools. Click the plus sign (+) (expand icon) to the left of the
name. Now, mdisk22 is an image mode MDisk, as shown in Figure 6-62.
10.Repeat these steps for every volume that you want to migrate to an image mode volume.
11.Delete the image mode data from SVC by using the procedure that is described in 6.6.7,
“Removing image mode data from SVC” on page 284.
In our example, we migrate the Windows Server W2K8 volume to another disk subsystem as
an image mode volume. The second storage subsystem is a V7000_ITSO2; a new LUN is
configured on the storage and mapped to the SVC system. The LUN is available to the SVC
as an unmanaged mdisk23, as shown in Figure 6-63.
To migrate the image mode volume to another image mode volume, complete the following
steps:
1. Mark the unmanaged mdisk23 and click Actions or right-click and select Import from the
list, as shown in Figure 6-64.
2. The Import Wizard window opens, which describes the process of importing the MDisk
and mapping an image mode volume to it. Select a temporary pool because you do not
want to migrate the volume into an SVC managed volume pool. Define the name of the
new volume, select the extent size from the drop-down menu, as shown in Figure 6-65 on
page 280. If Copy Services are enabled on the original storage system for this volume,
notify your target pool about it by ticking the checkbox. Click Finish.
3. The import process starts (as shown in Figure 6-66) by creating a temporary storage pool
MigrationPool_1024 (500 GiB) and an image volume. Click Close to continue.
Figure 6-66 Import of MDisk and creation of temporary storage pool MigrationPool_1024
4. As shown in Figure 6-67, an image mode mdisk15 now shows with the import controller
name and SCSI ID as its name.
5. Create a storage pool Migration_Out with the same extent size (1 GiB) as the
automatically created storage pool MigrationPool_1024 for transferring the image mode
disk. Go to Pools → MDisks by Pools, as shown in Figure 6-68 on page 281.
6. Click Create Pool to create an empty storage pool and give your new storage pool the
meaningful name Migration_Out. Click the Advanced Settings drop-down menu. Choose
1.00 GiB as the extent size for your new storage pool, as shown in Figure 6-69. Click Next
to continue.
8. Now, the empty storage pool for the image-to-image migration is created. Go to Pools →
MDisks by Pools, and then go to Volumes → Volumes by Pool, as shown in Figure 6-71.
9. In the left pane, select the storage pool of the imported disk, which is called
MigrationPool_1024. Then, mark the image disk that you want to migrate out and select
Actions. From the drop-down menu, select Export to Image Mode, as shown in
Figure 6-72.
10.Select the target MDisk mdisk24 on the new disk controller to which you want to migrate.
Click Next, as shown in Figure 6-73.
11.Select the target Migration_Out (empty) storage pool, as shown in Figure 6-74 on
page 283. Click Finish.
12.Figure 6-75 shows the progress status of the Export Volume to Image process. Click
Close to continue.
13.Figure 6-76 on page 284 shows that the MDisk location changed to the new storage pool
Migration_Out. This is done once the volume migration process finishes.
14.Repeat these steps for all image mode volumes that you want to migrate.
15.If you want to delete the data from the SVC, use the procedure that is described in 6.6.7,
“Removing image mode data from SVC” on page 284.
To remove the image mode volume from the SVC, we use the rmvdisk command.
If the command succeeds on an image mode volume, the underlying back-end storage
controller is consistent with the data that a host might previously read from the image mode
volume. That is, all fast write data was flushed to the underlying LUN. Deleting an image
mode volume causes the MDisk that is associated with the volume to be ejected from the
storage pool. The mode of the MDisk is returned to unmanaged.
Image mode volumes only: This situation applies to image mode volumes only. If you
delete a normal volume, all of the data is also deleted.
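A sketch of the removal command; the volume name is a placeholder, and -force should be used only if you accept discarding any cached data that was not yet committed:
svctask rmvdisk <vdiskname | vdisk_id>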
As shown in Example 6-1 on page 273, the SAN disks are on the SVC.
Check that you installed the supported device drivers on your host system.
3. Check your Host and select your volume. Then, right-click and select Unmap all Hosts,
as shown in Figure 6-78.
4. Verify your unmap process, as shown in Figure 6-79, and click Unmap.
5. Repeat steps 3 - 5 for every image mode volume that you want to remove from the SVC.
6. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN
masking, and add the host to the masking.
7. Power on your host system.
3. Open your Disk Management window; the disks appear, as shown in Figure 6-81 on
page 287. You might need to reactivate each disk by right-clicking it.
This example can help you to perform any of the following tasks in your environment:
Move a Linux server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC.
Perform this task first when you are introducing the SVC into your environment. This
section shows that your host downtime is only a few minutes while you remap and remask
disks by using your storage subsystem LUN management tool. For more information
about this task, see 6.7.1, “Preparing SVC to virtualize Linux disks” on page 289.
Move data between storage subsystems while your Linux server is still running and
servicing your business application.
Perform this task if you are removing a storage subsystem from your SAN environment.
You also can perform this task if you want to move the data onto LUNs that are more
appropriate for the type of data that is stored on those LUNs, taking availability,
performance, and redundancy into consideration. For more information about this task,
see 6.7.3, “Migrating image mode volumes to managed MDisks” on page 296.
Move your Linux server’s LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the Linux server.
For more information about this step, see 6.7.4, “Preparing to migrate from SVC” on
page 298.
You can use these three activities individually or together to migrate your Linux server’s LUNs
from one storage subsystem to another storage subsystem by using the SVC as your
migration tool. If you do not use all three tasks, you can introduce or remove the SVC from
your environment.
The only downtime that is required for these tasks is the time that it takes to remask and
remap the LUNs between the storage subsystems and your SVC.
Figure 6-82 Linux host attached directly to the IBM or OEM storage subsystem through the SAN (Green Zone)
Figure 6-82 shows our Linux server that is connected to our SAN infrastructure. The following
LUNs are masked directly to our Linux server from our storage subsystem:
The LUN with SCSI ID 0 has the host operating system (our host is Red Hat Enterprise
Linux V5.1). This LUN is used to boot the system directly from the storage subsystem. The
operating system identifies this LUN as /dev/mapper/VolGroup00-LogVol00.
SCSI LUN ID 0: To successfully boot a host off the SAN, you must assign the LUN as
SCSI LUN ID 0.
Example 6-9 on page 289 shows our disks that attach directly to the Linux hosts.
Our Linux server represents a typical SAN environment with a host that directly uses LUNs
that were created on a SAN storage subsystem, as shown in Figure 6-82 on page 288. The
Linux server has the following configuration:
The Linux server’s host bus adapter (HBA) cards are zoned so that they are in the Green
Zone with our storage subsystem.
The two LUNs that were defined on the storage subsystem by using LUN masking are
directly available to our Linux server.
We must create an empty storage pool for each of the disks by using the commands that are
shown in Example 6-10.
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity
used_capacity real_capacity overallocation warning easy_tier easy_tier_status
2 Linux_Pool1 online 0 0 0 512 0 0.00MB
0.00MB 0.00MB 0 0 auto inactive
3 Linux_Pool2 online 0 0 0 512 0 0.00MB
0.00MB 0.00MB 0 0 auto inactive
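The two empty pools in this listing can be created with commands of the following form (a sketch; the pool names and the 512 MB extent size follow the lsmdiskgrp output):
svctask mkmdiskgrp -name Linux_Pool1 -ext 512
svctask mkmdiskgrp -name Linux_Pool2 -ext 512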
The use of the lshbaportcandidate command on the SVC lists all of the worldwide names
(WWNs), which are not yet allocated to a host, that the SVC can see on the SAN fabric.
Example 6-11 shows the output of the nodes that it found on our SAN fabric. (If the port did
not show up, a zone configuration problem exists.)
IBM_2145:ITSO-CLS1:ITSO_admin>lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
If you do not know the WWN of your Linux server, you can review which WWNs are currently
configured on your storage subsystem for this host. Figure 6-83 shows our configured ports
on old IBM DS4700 storage subsystem.
After it is verified that SVC can see our host (Palau), we create the host entry and assign the
WWN to this entry. Example 6-12 shows these commands.
IBM_2145:ITSO-CLS1:ITSO_admin>lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive
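A sketch of the host-creation command that produces an entry like the one above; the WWPNs are the two ports that are listed for this server, so verify them against your fabric before use:
svctask mkhost -name Palau -hbawwpn 210000E08B89C1CD:210000E08B054CAA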
You can rename the storage subsystem to a more meaningful name by using the
chcontroller -name command. If you have multiple storage subsystems that connect to your
SAN fabric, renaming the storage subsystems makes it considerably easier to identify them.
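For example, a command of the following form renames a controller; the controller ID and the new name are illustrative, and you can list the current controller IDs with the lscontroller command:
svctask chcontroller -name DS4700 controller0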
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are
shown in Figure 6-84 on page 291 (which shows the disk serial number SAN_Boot_palau)
and Figure 6-85 on page 292.
Before we move the LUNs to the SVC, we must configure the host multipath configuration for
the SVC. Add the following entry to your multipath.conf file, as shown in Example 6-14, and
then add the content of Example 6-15 to the file.
# SVC
device {
        vendor "IBM"
        product "2145"
        path_grouping_policy group_by_serial
}
We are now ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as volumes.
Our Linux server has two LUNs: one LUN is our boot disk and holds the operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.
If we want to move only the LUN that holds our application and data files, we do not have to
reboot the host. The only requirement is that we unmount the file system and vary off the
volume group (VG) to ensure data integrity during the reassignment.
Because we intend to move both LUNs at the same time, we must complete the following
steps:
1. Confirm that the multipath.conf file is configured for the SVC.
2. Shut down the host.
If you are moving only the LUNs that contain the application and data, complete the
following steps instead:
a. Stop the applications that use the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are a Logical Volume Manager (LVM) volume, deactivate that VG by
using the vgchange -a n VOLUMEGROUP_NAME command.
d. If possible, also unload your HBA driver by using the rmmod DRIVER_MODULE command.
This command removes the SCSI definitions from the kernel. (We reload this module
and rediscover the disks later.) It is possible to tell the Linux SCSI subsystem to rescan
for new disks without requiring you to unload the HBA driver; however, we do not
provide those details here.
3. By using Storage Manager (our storage subsystem management tool), we can unmap and
unmask the disks from the Linux server and remap and remask the disks to the SVC.
LUN IDs: Although we are using boot from SAN, you can also map the boot disk with
any LUN to the SVC. The LUN does not have to be 0 until later when we configure the
mapping in the SVC to the host.
4. From the SVC, discover the new disks by using the detectmdisk command. The disks are
discovered and named mdiskN, where N is the next available MDisk number (starting from
0). Example 6-16 shows the commands that we used to discover our MDisks and to verify
that we have the correct MDisks.
Important: Match your discovered MDisk serial numbers (unique identifier UID on the
lsmdisk task display) with the serial number that you recorded earlier (in Figure 6-84 on
page 291 and Figure 6-85 on page 292).
5. After we verify that we have the correct MDisks, we rename them to avoid confusion in the
future when we perform other MDisk-related tasks, as shown in Example 6-17.
6. We create our image mode volumes by using the mkvdisk command and the -vtype
image option, as shown in Example 6-18 on page 294. This command virtualizes the disks
in the same layout as though they were not virtualized.
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
26 md_LinuxS online image 2 Linux_Pool1 12.0GB 0000000000000008
DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
generic_hdd
27 md_LinuxD online image 3 Linux_Pool2 5.0GB 0000000000000009
DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
generic_hdd
IBM_2145:ITSO-CLS1:ITSO_admin>
IBM_2145:ITSO-CLS1:ITSO_admin>lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name
RC_id RC_name vdisk_UID fc_map_count copy_count fast_wri
te_state se_copy_count
29 Linux_SANB 0 io_grp0 online 4
Linux_Pool1 12.0GB image
60050768018301BF280000000000002B 0 1 empty 0
30 Linux_Data 0 io_grp0 online 4
Linux_Pool2 5.0GB image
60050768018301BF280000000000002C 0 1 empty 0
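A sketch of the mkvdisk commands that produce the image mode volumes shown above; the pool, MDisk, and volume names follow the listing, and the I/O group assignment should be verified for your configuration:
svctask mkvdisk -mdiskgrp Linux_Pool1 -iogrp 0 -vtype image -mdisk md_LinuxS -name Linux_SANB
svctask mkvdisk -mdiskgrp Linux_Pool2 -iogrp 0 -vtype image -mdisk md_LinuxD -name Linux_Data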
7. Map the new image mode volumes to the host, as shown in Example 6-19.
Important: Ensure that you map the boot volume with SCSI ID 0 to your host. The host
must identify the boot volume during the boot process.
IBM_2145:ITSO-CLS1:ITSO_admin>lshostvdiskmap Linux
id name SCSI_id vdisk_id vdisk_name wwpn
vdisk_UID
0 Linux 0 29 Linux_SANB
210000E08B89C1CD 60050768018301BF280000000000002B
0 Linux 1 30 Linux_Data
210000E08B89C1CD 60050768018301BF280000000000002C
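The mappings in this output can be created with commands of the following form (a sketch; the host object name and SCSI IDs follow the lshostvdiskmap output, and the boot volume must receive SCSI ID 0):
svctask mkvdiskhostmap -host Linux -scsi 0 Linux_SANB
svctask mkvdiskhostmap -host Linux -scsi 1 Linux_Data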
FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process completes before you start your application.
8. Power on your host server and enter your FC HBA BIOS before booting the operating
system. Ensure that you change the boot configuration so that it points to the SVC.
Complete the following steps on a QLogic HBA:
a. Press Ctrl+Q to enter the HBA BIOS.
b. Open Configuration Settings.
c. Open Selectable Boot Settings.
d. Change the entry from your storage subsystem to the IBM SAN Volume Controller
2145 LUN with SCSI ID 0.
e. Exit the menu and save your changes.
9. Boot up your Linux operating system.
If you moved only the application LUN to the SVC and left your Linux server running, you
must complete only these steps to see the new volume:
a. Load your HBA driver by using the modprobe DRIVER_NAME command. If you did not (and
cannot) unload your HBA driver, you can run commands to the kernel to rescan the
SCSI bus to see the new volumes. (These details are beyond the scope of this book.)
b. Check your syslog to verify that the kernel found the new volumes. On Red Hat
Enterprise Linux, the syslog is stored in the /var/log/messages directory.
c. If your application and data are on an LVM volume, rediscover the VG and then run the
vgchange -a y VOLUME_GROUP command to activate the VG.
10.Mount your file systems by using the mount /MOUNT_POINT command, as shown in
Example 6-20. The df output shows us that all of the disks are available again.
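For reference, the host-side command sequence that brackets the LUN reassignment can be summarized as follows. This is a sketch: the qla2xxx module name matches the QLogic HBAs that are used in this example, and the volume group and mount point names are illustrative.
# Before the LUNs are remapped from the storage subsystem to the SVC
umount /data
vgchange -a n datavg
rmmod qla2xxx
# After the image mode volumes are mapped to the host from the SVC
modprobe qla2xxx
vgchange -a y datavg
mount /data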
IBM_2145:ITSO-CLS1:ITSO_admin>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
26 md_LinuxS online image 2 Linux_Pool1 12.0GB 0000000000000008
DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
generic_hdd
27 md_LinuxD online image 3 Linux_Pool2 5.0GB 0000000000000009
DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
generic_hdd
28 mdisk28 online unmanaged 8.0GB 0000000000000010
DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
generic_hdd
29 mdisk29 online unmanaged 8.0GB 0000000000000011
DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
generic_hdd
30 mdisk30 online unmanaged 8.0GB 0000000000000012
DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
generic_hdd
IBM_2145:ITSO-CLS1:ITSO_admin>lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID tier
26 md_LinuxS online image 2 Linux_Pool1 12.0GB 0000000000000008
DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
generic_hdd
27 md_LinuxD online image 3 Linux_Pool2 5.0GB 0000000000000009
DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
generic_hdd
28 Linux-md1 online unmanaged 8 MD_LinuxVD 8.0GB 0000000000000010
DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
generic_hdd
29 Linux-md2 online unmanaged 8 MD_LinuxVD 8.0GB 0000000000000011
DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
generic_hdd
30 Linux-md3 online unmanaged 8 MD_LinuxVD 8.0GB 0000000000000012
DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
generic_hdd
IBM_2145:ITSO-CLS1:ITSO_admin>lsmigrate
migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
To check the overall progress of the migration, we use the lsmigrate command, as shown in
Example 6-22. Listing the storage pool by using the lsmdiskgrp command shows that the free
capacity on the old storage pools is slowly increasing while those extents are moved to the
new storage pool.
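The migrations reported by the lsmigrate output were started with commands of the following form (a sketch; the target pool, thread count, and volume names are taken from this example):
svctask migratevdisk -mdiskgrp MD_LinuxVD -threads 4 -vdisk Linux_SANB
svctask migratevdisk -mdiskgrp MD_LinuxVD -threads 4 -vdisk Linux_Data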
After this task completes, the volumes are now spread over three MDisks, as shown in
Example 6-23.
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB
real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:ITSO_admin>lsvdiskmember Linux_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:ITSO_admin>lsvdiskmember Linux_Data
id
28
29
30
Our migration to striped volumes on another storage subsystem (DS4500) is now complete.
The original MDisks (Linux-md1, Linux-md2, and Linux-md3) can now be removed from the
SVC, and these LUNs can be removed from the storage subsystem.
If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can
remove the DS4700 storage subsystem from our SAN fabric.
You might want to perform this task for any one of the following reasons:
You purchased a new storage subsystem and you were using SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you
no longer need that host connected to the SVC.
You want to move a host, which is connected to the SVC, and its data to a site where no
SVC exists.
Changes to your environment no longer require this host to use the SVC.
We can perform other preparation tasks before we must shut down the host and reconfigure
the LUN masking and mapping. We describe these tasks in this section.
If you are moving the data to a new storage subsystem, it is assumed that the storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, which is shown in Figure 6-86 on
page 299.
Figure 6-86 Linux host attached through the SVC I/O Group 0, with IBM or OEM storage subsystems and the Green, Red, Blue, and Black zones
It is also a good idea to rename the new storage subsystem’s controller to a more useful
name, which can be done by using the chcontroller -name command, as shown in
Example 6-25.
Also, verify that controller name was changed as you wanted, as shown in Example 6-26.
2. Creating LUNs
We created two LUNs and masked the LUNs on our storage subsystem so that the SVC
can see them. Eventually, we give these two LUNs directly to the host and remove the
volumes that the host currently uses. To check that the SVC can use these two LUNs, run the
detectmdisk command, as shown in Example 6-27.
Even though the MDisks do not stay in the SVC for long, we suggest that you rename
them to more meaningful names so that they are not confused with other MDisks that are
used by other activities.
Also, we create the storage pools to hold our new MDisks, as shown in Example 6-28.
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count capacity
extent_size free_capacity virtual_capacity used_capacity real_capacity
overallocation warning easy_tier easy_tier_status
Our SVC environment is now ready for the volume migration to image mode volumes.
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
28 Linux-md1 online managed 8
MD_LinuxVD 8.0GB 0000000000000010 DS4500
600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 Linux-md2 online managed 8
MD_LinuxVD 8.0GB 0000000000000011 DS4500
600a0b80001744310000010f48776bae00000000000000000000000000000000
30 Linux-md3 online managed 8
MD_LinuxVD 8.0GB 0000000000000012 DS4500
600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdLinux_ivd1 online image 8
MD_LinuxVD 6.0GB 0000000000000013 DS4500
600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdLinux_ivd online image 8
MD_LinuxVD 12.5GB 0000000000000014 DS4500
600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:ITSO_admin> lsmigrate
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
During the migration, our Linux server is unaware that its data is being physically moved
between storage subsystems.
After the migration completes, the image mode volumes are ready to be removed from the
Linux server. Also, the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.
Our Linux server has two LUNs: one LUN is our boot disk and holds operating system file
systems, and the other LUN holds our application and data files. Moving both LUNs at one
time requires shutting down the host.
If we want to move only the LUN that holds our application and data files, we can move that
LUN without rebooting the host. The only requirement is that we unmount the file system and
vary off the VG to ensure data integrity during the reassignment.
Before you start: Moving LUNs to another storage subsystem might need another entry in
the multipath.conf file. Check with the storage subsystem vendor to identify any content
that you must add to the file. You might be able to install and modify the file in advance.
Complete the following steps to move both LUNs at the same time:
1. Confirm that your operating system is configured for the new storage.
2. Shut down the host.
If you are moving only the LUNs that contain the application and data, complete the
following steps:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG by using the vgchange -a n
VOLUMEGROUP_NAME command.
d. If you can, unload your HBA driver by using the rmmod DRIVER_MODULE command. This
command removes the SCSI definitions from the kernel. (We reload this module and
rediscover the disks later.) It is possible to tell the Linux SCSI subsystem to rescan for
new disks without requiring you to unload the HBA driver; however, we do not provide
those details here.
3. Remove the volumes from the host by using the rmvdiskhostmap command
(Example 6-30). To confirm that you removed the volumes, use the lshostvdiskmap
command, which shows that these disks are no longer mapped to the Linux server.
4. Remove the volumes from the SVC by using the rmvdisk command. This step makes
them unmanaged, as shown in Example 6-31 (a consolidated sketch of steps 3 and 4 follows
this procedure).
Cached data: When you run the rmvdisk command, the SVC first confirms that no
outstanding dirty cached data exists for the volume that is being removed. If cached
data is still uncommitted, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. How much data needs to be destaged and how busy the
I/O subsystem is determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the
command lsvdisk <VDISKNAME> and checking the fast_write_state attribute. This
attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might exist in the cache, but the data was lost.
5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC back to the Linux server.
Important: If one of the disks is used to boot your Linux server, you must ensure that
the disk is presented back to the host as SCSI ID 0 so that the FC adapter BIOS finds
that disk during its initialization.
6. Power on your host server and enter your FC HBA BIOS before you boot the OS. Ensure
that you change the boot configuration so that it points to the SVC. In our example, we
performed the following steps on a QLogic HBA:
a. Pressed Ctrl+Q to enter the HBA BIOS
b. Opened Configuration Settings
c. Opened Selectable Boot Settings
d. Changed the entry from the SVC to the storage subsystem LUN with SCSI ID 0
Important: This step is the last step that you can perform and still safely back out from
the changes so far.
Up to this point, you can reverse all of the following actions that you performed so far to
get the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the detectmdisk command to rediscover the MDisks.
Re-create the volumes with the mkvdisk command.
Remap the volumes back to the server with the mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
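Examples 6-30 and 6-31 are not reproduced here. A minimal sketch of steps 3 and 4, assuming the host object is named Linux_host (a hypothetical name) and using the volume names from this example, might be:
rmvdiskhostmap -host Linux_host Linux_SANB
rmvdiskhostmap -host Linux_host Linux_Data
lsvdisk Linux_Data
rmvdisk Linux_SANB
rmvdisk Linux_Data
The lsvdisk check is the point at which you confirm that the fast_write_state attribute is empty before you remove each volume.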
This example can help you perform any one of the following tasks in your environment:
Move your ESX server’s data LUNs (that are your VMware VMFS file systems where you
might have your VMs stored), which are directly accessed from a storage subsystem, to
virtualized disks under the control of the SVC.
Move LUNs between storage subsystems while your VMware VMs are still running.
You can perform this task to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, considering availability, performance, and
redundancy. For more information, see 6.8.3, “Migrating image mode volumes” on
page 312.
Move your VMware ESX server’s LUNs back to image mode volumes so that they can be
remapped and remasked directly back to the server.
This task starts in 6.8.4, “Preparing to migrate SVC” on page 315.
You can use these tasks individually or together to migrate your VMware ESX server’s LUNs
from one storage subsystem to another storage subsystem by using the SVC as your
migration tool. If you do not use all three of these tasks, you can introduce the SVC in your
environment or move the data between your storage subsystems. The only downtime that is
required for these tasks is the time that it takes you to remask and remap the LUNs between
the storage subsystems and your SVC.
Figure 6-87 shows our ESX server that is connected to the SAN infrastructure. Two LUNs are
masked directly to it from our storage subsystem.
Our ESX server represents a typical SAN environment with a host that directly uses LUNs
that were created on a SAN storage subsystem, as shown in Figure 6-87.
The ESX server’s HBA cards are zoned so that they are in the Green Zone with our storage
subsystem.
The two LUNs that were defined on the storage subsystem and that use LUN masking are
directly available to our ESX server.
We create an empty storage pool for these disks by using the command that is shown in
Example 6-33. Our MDG_Nile_VM storage pool holds the boot LUN and our data LUN.
First, we get the WWN for our ESX server’s HBA because many hosts are connected to our
SAN fabric and in the Blue Zone. We want to ensure that we have the correct WWN to reduce
our ESX server’s downtime.
Log in to your VMware Management Console as root, browse to Configuration, and select
Storage Adapters. The storage adapters are shown on the right side of the window that is
shown in Figure 6-88. This window displays all of the necessary information. Figure 6-88
shows our WWNs, which are 210000E08B89B8C0 and 210000E08B892BCD.
Figure 6-88 Obtain your WWN by using the VMware Management Console
Use the lshbaportcandidate command on the SVC to list all of the WWNs that are not yet
allocated to a host and that the SVC can see on the SAN fabric. Example 6-34 on page 307
shows the output of the host WWNs that it found on our SAN fabric. (If the port is not shown,
a zone configuration problem exists.)
After we verify that the SVC can see our host, we create the host entry and assign the WWN
to this entry, as shown in Example 6-35.
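Examples 6-34 and 6-35 are not reproduced here. A minimal sketch, using the host name Nile and the WWNs that are shown in Figure 6-88 (the exact options in the original example might differ), might be:
lshbaportcandidate
mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD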
When we discover these MDisks, we confirm that we have the correct serial numbers before
we create the image mode volumes.
If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN
serial numbers. Right-click your logical drive and choose Properties. Figure 6-89 and
Figure 6-90 show our serial numbers. Figure 6-89 shows disk serial number VM_W2k3.
We are ready to move the ownership of the disks to the SVC, discover them as MDisks, and
give them back to the host as volumes.
The VMs are on these LUNs. Therefore, to move these LUNs under the control of the SVC,
we do not need to reboot the entire ESX server. However, we must stop and suspend all
VMware guests that are using these LUNs.
2. Identify all of the VMware guests that are using this LUN and shut them down. One way to
identify them is to highlight the VM and open the Summary tab. The datastore that is used
is displayed under Datastore. Figure 6-93 on page 310 shows a Linux VM that is using the
datastore that is named SLES_Costa_Rica.
Figure 6-93 Identify the LUNs that are used by the VMs
3. If you have several ESX hosts, also check the other ESX hosts to ensure that no guest
operating system is running and using this datastore.
4. Repeat steps 1 - 3 for every datastore that you want to migrate.
5. After the guests are suspended, we use Storage Manager (our storage subsystem
management tool) to unmap and unmask the disks from the ESX server and to remap and
remask the disks to the SVC.
6. From the SVC, discover the new disks by using the detectmdisk command. The disks are
discovered and named as mdiskN, where N is the next available MDisk number (starting
from 0). Example 6-37 shows the commands that we used to discover our MDisks and to
verify that we have the correct MDisks.
Important: Match your discovered MDisk serial numbers (UID on the lsmdisk
command task display) with the serial number that you obtained earlier, as shown in
Figure 6-89 on page 308 and Figure 6-90 on page 309.
7. After we verify that we have the correct MDisks, we rename them to avoid confusion in the
future when we perform other MDisk-related tasks, as shown in Example 6-38.
8. We create our image mode volumes by using the mkvdisk command (Example 6-39). The
use of the -vtype image parameter ensures that it creates image mode volumes, which
means that the virtualized disks have the same layout as though they were not virtualized.
9. We can map the new image mode volumes to the host. Use the same SCSI LUN IDs as
on the storage subsystem for the mapping, as shown in Example 6-40 (a consolidated
sketch of steps 6 - 9 follows this procedure).
IBM_2145:ITSO-CLS1:ITSO_admin> lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD
60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD
60050768018301BF2800000000000029
10.By using the VMware Management Console, rescan to discover the new volume. Open
the Configuration tab, select Storage Adapters, and then click Rescan. During the
rescan, you can receive geometry errors when ESX discovers that the old disk
disappeared. Your volume appears with the new vmhba devices.
11.We are ready to restart the VMware guests again.
At this point, you migrated the VMware LUNs successfully to the SVC.
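Examples 6-37 through 6-40 are not reproduced here. A minimal sketch of steps 6 - 9, using the MDisk, pool, volume, and host names from this example (the discovered MDisk names mdisk21 and mdisk22 are assumptions), might be:
detectmdisk
chmdisk -name ESX_SLES mdisk21
chmdisk -name ESX_W2k3 mdisk22
mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_SLES -name ESX_SLES_IVD
mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD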
We also need a Green Zone for our host to use when we are ready for it to directly access the
disk after it is removed from the SVC.
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
21 ESX_SLES online image 3
MDG_Nile_VM 60.0GB 0000000000000008 DS4700
600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3
MDG_Nile_VM 70.0GB 0000000000000009 DS4700
600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4
MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
To check the overall progress of the migration, we use the lsmigrate command, as shown in
Example 6-42. Listing the storage pool with the lsmdiskgrp command shows that the free
capacity on the old storage pool is slowly increasing as those extents are moved to the new
storage pool.
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count capacity
extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation
warning
3 MDG_Nile_VM online 2 2 130.0GB
512 1.0GB 130.00GB 130.00GB 130.00GB 100
4 MDG_ESX_VD online 3 0 165.0GB
512 35.0GB 0.00MB 0.00MB 0.00MB 0
If you compare the lsmdiskgrp output after the migration (as shown in Example 6-43), you
can see that all of the virtual capacity was moved from the old storage pool (MDG_Nile_VM)
to the new storage pool (MDG_ESX_VD). The mdisk_count column shows that the capacity is
now spread over three MDisks.
The migration to the SVC is complete. You can remove the original MDisks from the SVC and
remove these LUNs from the storage subsystem.
If these LUNs are the last LUNs that were used on our storage subsystem, we can remove it
from our SAN fabric.
You might want to perform this process for any one of the following reasons:
You purchased a new storage subsystem and you were using the SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
You used the SVC to FlashCopy or Metro Mirror a volume onto another volume, and you
no longer need that host connected to the SVC.
You want to move a host, which is connected to the SVC, and its data to a site where no
SVC exists.
Changes to your environment no longer require this host to use the SVC.
We can perform other preparatory activities before we shut down the host and reconfigure the
LUN masking and mapping. This section describes those activities. In our example, we move
volumes that are on a DS4500 to image mode volumes that are on a DS4700.
If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as described in “Adding a storage
subsystem to the IBM SAN Volume Controller” on page 312 and “Make fabric zone changes”
on page 312.
Creating LUNs
On our storage subsystem, we create two LUNs and mask the LUNs so that the SVC can see
them. These two LUNs eventually are given directly to the host, and the volumes that the host
uses are removed. To check that the SVC can use them, run the detectmdisk command, as shown in
Example 6-44.
Although the MDisks do not stay in the SVC long, we suggest that you rename them to more
meaningful names so that they are not confused with other MDisks that are being used by
other activities. We also create the storage pools to hold our new MDisks, as shown in
Example 6-45.
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
4 MDG_ESX_VD online 3 2
165.0GB 512 35.0GB 130.00GB 130.00GB
130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
Our SVC environment is ready for the volume migration to image mode volumes.
IBM_2145:ITSO-CLS1:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name
UID
23 IBMESX-MD1 online managed 4
MDG_ESX_VD 55.0GB 000000000000000D DS4500
600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4
MDG_ESX_VD 55.0GB 000000000000000E DS4500
600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4
MDG_ESX_VD 55.0GB 000000000000000F DS4500
600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5
MDG_IVD_ESX 120.0GB 000000000000000A DS4700
600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5
MDG_IVD_ESX 100.0GB 000000000000000B DS4700
600a0b800026b282000041e3486e20cf00000000000000000000000000000000
During the migration, our ESX server is unaware that its data is being physically moved
between storage subsystems. We can continue to run and use the VMs that are running on
the server.
You can check the migration status by using the lsmigrate command, as shown in
Example 6-47.
After the migration completes, the image mode volumes are ready to be removed from the
ESX server and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem’s tool.
In our example, we moved the VM disks. Therefore, to remove these LUNs from the control of
the SVC, we must stop and suspend all of the VMware guests that are using this LUN.
Complete the following steps:
1. Check which SCSI LUN IDs are assigned to the migrated disks by using the
lshostvdiskmap command, as shown in Example 6-48. Compare the volume UIDs to match
each volume with its SCSI LUN ID.
IBM_2145:ITSO-CLS1:ITSO_admin> lsvdisk
id name IO_group_id IO_group_name status
mdisk_grp_id mdisk_grp_name capacity type FC_id
FC_name RC_id RC_name vdisk_UID
fc_map_count copy_count
0 vdisk_A 0 io_grp0 online
2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online
4 MDG_ESX_VD 70.0GB striped
60050768018301BF2800000000000029 0 1
30 ESX_SLES_IVD 0 io_grp0 online
4 MDG_ESX_VD 60.0GB striped
60050768018301BF280000000000002A 0 1
2. Shut down and suspend all guests that are using the LUNs. You can use the same method
that is described in “Moving VMware guest LUNs” on page 309 to identify the guests that
are using this LUN.
3. Remove the volumes from the host by using the rmvdiskhostmap command, as shown in
Example 6-49. To confirm that the volumes were removed, use the lshostvdiskmap
command, which shows that these volumes are no longer mapped to the ESX server.
4. Remove the volumes from the SVC by using the rmvdisk command, which makes the
MDisks unmanaged, as shown in Example 6-50 (a consolidated sketch of steps 3, 4, and 7
follows this procedure).
Cached data: When you run the rmvdisk command, the SVC first confirms that there is
no outstanding dirty cached data for the volume that is being removed. If uncommitted
cached data still exists, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. How much data exists to destage and how busy the I/O
subsystem is determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the
lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This
attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might exist in the cache, but the data was lost.
5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC back to the ESX server. Remember in Example 6-48 on
page 318, we recorded the SCSI LUN IDs. To map your LUNs on the storage subsystem,
use the same SCSI LUN IDs that you used in the SVC.
Important: This step is the last step that you can perform and still safely back out of
any changes made so far.
Up to this point, you can reverse all of the following actions that you performed to get
the server back online without data loss:
Remap and remask the LUNs back to SVC.
Run the detectmdisk command to rediscover the MDisks.
Re-create the volumes with the mkvdisk command.
Remap the volumes back to the server with the mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
6. By using the VMware Management Console, rescan to discover the new volume.
Figure 6-95 shows the view before the rescan. Figure 6-96 on page 320 shows the view
after the rescan. The size of the LUN changed because we moved to another LUN on
another storage subsystem.
During the rescan, you can receive geometry errors when ESX discovers that the old disk
disappeared. Your volume appears with a new vmhba address and VMware recognizes it
as our VMWARE-GUESTS disk.
We are now ready to restart the VMware guests.
7. To ensure that the MDisks are removed from the SVC, run the detectmdisk command.
The MDisks are discovered as offline and then automatically removed when the SVC
determines that no volumes are associated with these MDisks.
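Examples 6-49 and 6-50 are not reproduced here. A minimal sketch of steps 3, 4, and 7, using the host and volume names from this example, might be:
rmvdiskhostmap -host Nile ESX_SLES_IVD
rmvdiskhostmap -host Nile ESX_W2k3_IVD
rmvdisk ESX_SLES_IVD
rmvdisk ESX_W2k3_IVD
detectmdisk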
We manage those LUNs with the SVC, move them between other managed disks, and then
move them back to image mode disks so that those LUNs can then be masked and mapped
back to the AIX server directly.
By using this example, you can perform any of the following tasks in your environment:
Move an AIX server’s SAN LUNs from a storage subsystem and virtualize those same
LUNs through the SVC, which is the first task that you perform when you are introducing
the SVC into your environment.
This section shows that your host downtime is only a few minutes while you remap and
remask disks by using your storage subsystem LUN management tool. This step starts in
6.9.1, “Preparing SVC to virtualize AIX disks” on page 323.
Move data between storage subsystems while your AIX server is still running and
servicing your business application.
You can perform this task if you are removing a storage subsystem from your SAN
environment and you want to move the data onto LUNs that are more appropriate for the
type of data that is stored on those LUNs, considering availability, performance, and
redundancy. This step is described in 6.9.3, “Migrating image mode volumes to volumes”
on page 329.
Move your AIX server’s LUNs back to image mode volumes so that they can be remapped
and remasked directly back to the AIX server.
This step starts in 6.9.4, “Preparing to migrate from SVC” on page 332.
Use these tasks individually or together to migrate your AIX server’s LUNs from one storage
subsystem to another storage subsystem by using the SVC as your migration tool. If you do
not use all three tasks, you can introduce or remove the SVC from your environment.
The only downtime that is required for these activities is the time that it takes you to remask
and remap the LUNs between the storage subsystems and your SVC.
Figure 6-97 The AIX host attached to the SAN (Green Zone) with an IBM or OEM storage subsystem
Figure 6-97 also shows that our AIX server is connected to our SAN infrastructure. It has two
LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem.
The hdisk3 disk makes up the itsoaixvg LVM group, and the hdisk4 disk makes up the
itsoaixvg1 LVM group, as shown in Example 6-51 on page 323.
Our AIX server represents a typical SAN environment with a host that directly uses LUNs that
were created on a SAN storage subsystem, as shown in Figure 6-97 on page 322.
The AIX server’s HBA cards are zoned so that they are in the Green (dotted line) Zone with
our storage subsystem.
The two LUNs, hdisk3 and hdisk4, were defined on the storage subsystem. By using LUN
masking, they are directly available to our AIX server.
IBM_2145:ITSO-CLS2:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size
free_capacity virtual_capacity used_capacity real_capacity overallocation
7 aix_imgmdg online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A68D
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A7FB
ROS Level and ID............02C03951
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF401210
Device Specific.(Z5)........02C03951
Device Specific.(Z6)........06433951
Device Specific.(Z7)........07433951
Device Specific.(Z8)........20000000C932A7FB
Device Specific.(Z9)........CS3.91A1
Device Specific.(ZA)........C1D3.91A1
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U0.1-P2-I4/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9002
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I4/Q1
#lscfg -vpl fcs1
fcs1 U0.1-P2-I5/Q1 FC Adapter
Part Number.................00P4494
EC Level....................A
Serial Number...............1E3120A67B
Manufacturer................001E
Device Specific.(CC)........2765
FRU Number.................. 00P4495
Network Address.............10000000C932A800
ROS Level and ID............02C03891
Device Specific.(Z0)........2002606D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........02000909
Device Specific.(Z4)........FF401050
Device Specific.(Z5)........02C03891
Device Specific.(Z6)........06433891
Device Specific.(Z7)........07433891
Device Specific.(Z8)........20000000C932A800
Device Specific.(Z9)........CS3.82A1
Device Specific.(ZA)........C1D3.82A1
Device Specific.(ZB)........C2D3.82A1
Device Specific.(YL)........U0.1-P2-I5/Q1
PLATFORM SPECIFIC
Name: fibre-channel
Model: LP9000
Node: fibre-channel@1
Device Type: fcp
Physical Location: U0.1-P2-I5/Q1
The lshbaportcandidate command on the SVC lists all of the WWNs that are not yet
allocated to a host and that the SVC can see on the SAN fabric. Example 6-54 shows the
output of the host WWNs that it found in our SAN fabric. (If the port is not shown, a zone
configuration problem exists.)
IBM_2145:ITSO-CLS2:ITSO_admin> lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0
After we verify that the SVC can see our host (Kanaga), we create the host entry and
assign the WWN to this entry, as shown with the commands in Example 6-55.
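Example 6-55 is not reproduced here. A minimal sketch, using the host name Kanaga and the WWNs that are listed in Example 6-54, might be:
mkhost -name Kanaga -hbawwpn 10000000C932A7FB:10000000C932A800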
Names: The chcontroller command enables you to change the discovered storage
subsystem name in the SVC. In complex SANs, we suggest that you rename your
storage subsystem to a more meaningful name.
We are ready to move the ownership of the disks to the SVC, discover them as MDisks,
and give them back to the host as volumes.
Because we want to move only the LUN that holds our application and data files, we move
that LUN without rebooting the host. The only requirement is that we unmount the file system
and vary off the VG to ensure data integrity after the reassignment.
Before you start: Moving LUNs to the SVC requires that the Subsystem Device Driver
(SDD) is installed on the AIX server. You can install the SDD in advance; however, doing so
might require an outage of your host.
Complete the following steps to move both LUNs at the same time:
1. Confirm that the SDD is installed.
2. Complete the following steps to unmount and vary off the VGs:
a. Stop the applications that are using the LUNs.
b. Unmount those file systems by using the umount MOUNT_POINT command.
c. If the file systems are an LVM volume, deactivate that VG by using the varyoffvg
VOLUMEGROUP_NAME command.
Example 6-57 shows the commands that were run on Kanaga.
3. By using Storage Manager (our storage subsystem management tool), the disks can be
unmapped and unmasked from the AIX server and remapped and remasked as disks of
the SVC.
4. From the SVC, discover the new disks by using the detectmdisk command. The disks are
discovered and named mdiskN, where N is the next available MDisk number (starting from
0). Example 6-58 shows the commands that were used to discover our MDisks and to
verify that the correct MDisks are available.
Important: Match your discovered MDisk serial numbers (the UID on the lsmdisk
command task display) with the serial number that you discovered earlier, as shown in
Figure 6-98 on page 326 and Figure 6-99 on page 327).
5. After you verify that the correct MDisks are available, rename them to avoid confusion in
the future when you perform other MDisk-related tasks, as shown in Example 6-59.
IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online unmanaged
5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged
8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
6. Create the image mode volumes by using the mkvdisk command and the option -vtype
image, as shown in Example 6-60. This command virtualizes the disks in the same layout
as though they were not virtualized.
7. Map the new image mode volumes to the host, as shown in Example 6-61 (a consolidated
sketch of these steps follows this procedure).
FlashCopy: While the application is in a quiescent state, you can choose to use
FlashCopy to copy the new image volumes onto other volumes. You do not need to wait
until the FlashCopy process completes before you start your application.
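Examples 6-57 through 6-61 are not reproduced here. A minimal sketch of the unmount, discovery, renaming, image mode volume creation, and host mapping, using the MDisk, pool, and host names from this example (the discovered MDisk names mdisk24 and mdisk25, the volume names Kanaga_AIX_VD and Kanaga_AIX1_VD, and the mount point are assumptions), might be:
umount /MOUNT_POINT
varyoffvg itsoaixvg
varyoffvg itsoaixvg1
detectmdisk
chmdisk -name Kanaga_AIX mdisk24
chmdisk -name Kanaga_AIX1 mdisk25
mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX -name Kanaga_AIX_VD
mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX1 -name Kanaga_AIX1_VD
mkvdiskhostmap -host Kanaga Kanaga_AIX_VD
mkvdiskhostmap -host Kanaga Kanaga_AIX1_VD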
Complete the following steps to put the image mode volumes online:
1. Remove the old disk definitions, if you have not done so already.
2. Run the cfgmgr -vs command to rediscover the available LUNs.
3. If your application and data are on an LVM volume, rediscover the VG. Then, run the
varyonvg VOLUME_GROUP command to activate the VG.
4. Mount your file systems with the mount /MOUNT_POINT command.
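A minimal sketch of these steps, using the VG names from this example (the mount point is a placeholder), might be:
cfgmgr -vs
lsdev -Cc disk
varyonvg itsoaixvg
varyonvg itsoaixvg1
mount /MOUNT_POINT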
IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online image 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26 online unmanaged
6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27 online unmanaged
6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28 online unmanaged
6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
24 Kanaga_AIX online image 7
aix_imgmdg 5.0GB 0000000000000008 DS4700
600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image 7
aix_imgmdg 8.0GB 0000000000000009 DS4700
600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0 online managed 6
aix_vd 6.0GB 000000000000000A DS4700
600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1 online managed 6
aix_vd 6.0GB 000000000000000B DS4700
600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2 online managed 6
aix_vd 6.0GB 000000000000000C DS4700
600a0b800026b2820000439048751dc200000000000000000000000000000000
Migrating volumes
We are ready to migrate the image mode volumes onto striped volumes by using the
migratevdisk command, as shown in Example 6-22 on page 297.
While the migration is running, our AIX server is still running and we can continue accessing
the files.
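Example 6-22 is not reproduced here. A minimal sketch of the migratevdisk invocations, assuming the image mode volumes are named Kanaga_AIX_VD and Kanaga_AIX1_VD (hypothetical names) and the target pool is aix_vd, might be:
migratevdisk -vdisk Kanaga_AIX_VD -mdiskgrp aix_vd -threads 4
migratevdisk -vdisk Kanaga_AIX1_VD -mdiskgrp aix_vd -threads 4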
To check the overall progress of the migration, we use the lsmigrate command, as shown in
Example 6-63. Listing the storage pool by using the lsmdiskgrp command shows that the free
capacity on the old storage pool is slowly increasing while those extents are moved to the
new storage pool.
IBM_2145:ITSO-CLS2:ITSO_admin> lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
After this task is complete, the volumes are spread over three MDisks in the aix_vd storage
pool, as shown in Example 6-64. The old storage pool is empty.
Our migration to SVC is complete. You can remove the original MDisks from SVC and you
can remove these LUNs from the storage subsystem.
If these LUNs are the last LUNs that were used on our storage subsystem, we can remove
the storage subsystem from our SAN fabric.
You can perform this task for one of the following reasons:
You purchased a new storage subsystem and you were using SVC as a tool to migrate
from your old storage subsystem to this new storage subsystem.
You used the SVC to FlashCopy or Metro Mirror a volume onto another volume and you no
longer need that host that is connected to SVC.
You want to move a host, which is connected to SVC, and its data to a site where no SVC
exists.
Changes to your environment no longer require this host to use SVC.
Other preparatory tasks need to be performed before we shut down the host and reconfigure
the LUN masking and mapping. This section describes those tasks.
If you are moving the data to a new storage subsystem, it is assumed that this storage
subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches.
Your environment must look similar to our environment, as shown in Figure 6-100.
Create a Green Zone for our host to use when we are ready for it to access the disk directly
after it is removed from the SVC (it is assumed that you created the necessary zones). After
your zone configuration is set up correctly, SVC sees the new storage subsystem’s controller
by using the lscontroller command, as shown in Example 6-65 on page 333. It is also
useful to rename the controller to a more meaningful name by using the chcontroller -name
command.
Creating LUNs
On our storage subsystem, we created two LUNs and masked them so that the SVC can see
them. We eventually give these LUNs directly to the host and remove the volumes that the
host is using. To check that the SVC can use the LUNs, run the detectmdisk command, as
shown in Example 6-66.
In our example, we use two 10 GB LUNs that are on the DS4500 subsystem. Therefore, we
migrate back to image mode volumes and to another subsystem in one step. We
deleted the old LUNs on the DS4700 storage subsystem, which is the reason why they
appear offline here.
Although the MDisks do not stay in the SVC long, we suggest that you rename them to more
meaningful names so that they are not confused with other MDisks that are used by other
activities. Also, we create the storage pools to hold our new MDisks, as shown in
Example 6-67 on page 334.
IBM_2145:ITSO-CLS2:ITSO_admin> lsmdiskgrp
id name status mdisk_count vdisk_count
capacity extent_size free_capacity virtual_capacity used_capacity
real_capacity overallocation warning
3 KANAGA_AIXMIG online 0 0 0
512 0 0.00MB 0.00MB 0.00MB 0
0
6 aix_vd online 3 2
18.0GB 512 5.0GB 13.00GB 13.00GB
13.00GB 72 0
7 aix_imgmdg offline 2 0
13.0GB 512 13.0GB 0.00MB 0.00MB
0.00MB 0 0
Now, our SVC environment is ready for the volume migration to image mode volumes.
IBM_2145:ITSO-CLS2:ITSO_admin> lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
During the migration, our AIX server is unaware that its data is being moved physically
between storage subsystems.
After the migration is complete, the image mode volumes are ready to be removed from the
AIX server and the real LUNs can be mapped and masked directly to the host by using the
storage subsystem's tool.
Because our LUNs hold data files only and we use a unique VG, we can remap and remask
the disks without rebooting the host. The only requirement is that we unmount the file system
and vary off the VG to ensure data integrity after the reassignment.
Before you start: Moving LUNs to another storage system might need a driver other than
SDD. Check with the storage subsystem's vendor to see which driver you need. You might
be able to install this driver in advance.
3. Remove the volumes from the host by using the rmvdiskhostmap command, as shown in
Example 6-69. To confirm that you removed the volumes, use the lshostvdiskmap
command, which shows that these disks are no longer mapped to the AIX server.
4. Remove the volumes from SVC by using the rmvdisk command, which makes the MDisks
unmanaged, as shown in Example 6-70.
Cached data: When you run the rmvdisk command, SVC first confirms that there is no
outstanding dirty cached data for the volume that is being removed. If uncommitted
cached data still exists, the command fails with the following error message:
CMMVC6212E The command failed because data in the cache has not been
committed to disk
You must wait for this cached data to be committed to the underlying storage
subsystem before you can remove the volume.
The SVC automatically destages uncommitted cached data 2 minutes after the last
write activity for the volume. How much data there is to destage and how busy the I/O
subsystem is determine how long this command takes to complete.
You can check whether the volume has uncommitted data in the cache by using the
lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This
attribute has the following meanings:
empty: No modified data exists in the cache.
not_empty: Modified data might exist in the cache.
corrupt: Modified data might exist in the cache, but any modified data was lost.
IBM_2145:ITSO-CLS2:ITSO_admin> lsmdisk
id name status mode
mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#
controller_name UID
29 AIX_MIG online unmanaged
10.0GB 0000000000000010 DS4500
600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged
10.0GB 0000000000000011 DS4500
600a0b80001744310000010e4876444600000000000000000000000000000000
5. By using Storage Manager (our storage subsystem management tool), unmap and
unmask the disks from the SVC back to the AIX server.
Important: This step is the last step that you can perform and still safely back out of
any changes that you made.
Up to this point, you can reverse all of the following actions that you performed so far to
get the server back online without data loss:
Remap and remask the LUNs back to the SVC.
Run the detectmdisk command to rediscover the MDisks.
Re-create the volumes with the mkvdisk command.
Remap the volumes back to the server with the mkvdiskhostmap command.
After you start the next step, you might not be able to turn back without the risk of data
loss.
We are ready to access the LUNs from the AIX server. If all of the zoning, LUN masking, and
mapping were successful, our AIX server boots as though nothing happened. Complete the
following steps:
1. Run the cfgmgr -S command to discover the storage subsystem.
2. Use the lsdev -Cc disk command to verify the discovery of the new disk.
3. Remove the references to all of the old disks. Example 6-71 shows the removal by using
SDDPCM or MPIO (see the consolidated sketch after this procedure).
4. If your application and data are on an LVM volume, rediscover the VG. Then, run the
varyonvg VOLUME_GROUP command to activate the VG.
5. Mount your file systems by using the mount /MOUNT_POINT command.
You are ready to start your application.
6. To ensure that the MDisks are removed from the SVC, run the detectmdisk command.
The MDisks first are discovered as offline. Then, they are removed automatically after the
SVC determines that no volumes are associated with these MDisks.
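A minimal sketch of steps 1 - 5, assuming the old SVC disks appear as hdisk5 and hdisk6 (hypothetical names) and using the placeholders from the steps above, might be:
cfgmgr -S
lsdev -Cc disk
rmdev -dl hdisk5
rmdev -dl hdisk6
varyonvg VOLUME_GROUP
mount /MOUNT_POINT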
To use SVC for migration purposes only, complete the following steps:
1. Add SVC to your SAN environment.
2. Prepare SVC.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add SVC between your storage and the host.
5. Mount the LUNs or start the host again.
6. Start the migration.
7. After the migration process is complete, unmount the selected LUNs or shut down the
host.
8. Remove SVC from your SAN.
9. Mount the LUNs or start the host again.
As you can see, little downtime is required. If you prepare everything correctly, you can
reduce your downtime to a few minutes. The copy process is handled by the SVC, so host
performance is not affected while the migration progresses.
To use SVC for storage migrations, complete the steps that are described in the following
sections:
6.6.2, “Adding SVC between the host system and DS3400” on page 261
6.6.6, “Migrating volume from image mode to image mode” on page 279
6.6.7, “Removing image mode data from SVC” on page 284
Complete the following steps to gather statistics about MDisks and volumes:
1. Use secure copy (the scp command) to retrieve the dump files for analysis. For example,
issue the following command:
scp clusterip:/dumps/iostats/v_*
This command copies all the volume statistics files to the AIX host in the current directory.
2. Analyze the dump files to determine which volumes are hot. It might be helpful to
also determine which MDisks are being used heavily because you can spread the data that they
contain more evenly across all the MDisks in the storage pool by migrating the extents.
3. After you analyze the I/O statistics data, you can determine which volumes are hot. You
also need to determine the storage pool that you want to move this volume to. Either
create a new storage pool or determine an existing group that is not yet overly used.
Check the I/O statistics files that you generated and then ensure that the MDisks or
volumes in the target storage pool are used less than the MDisks or volumes in the source
storage pool.
You can use data migration or volume mirroring to migrate data between storage pools:
Data migration uses the command migratevdisk.
Volume mirroring uses the commands addvdiskcopy and rmvdiskcopy.
Note 1: The following migratevdisk migration options are valid for a cluster with
unencrypted pools:
Child pool to its parent pool
Parent pool to one of its child pools
Between child pools in the same parent pool
Between two parent pools
Note 2: The following migratevdisk migration options are valid for a cluster with
encrypted pools:
A parent pool to parent pool migration is allowed in all cases.
A parent pool to child pool migration is not allowed if the child pool has an encryption key.
A child pool to parent pool or child pool migration is not allowed if either child pool has an
encryption key.
You cannot use the data migration function to move a volume between storage pools that
have different extent sizes.
Migration commands fail if the target or source volume is offline, there is no quorum disk
defined, or the defined quorum disks are unavailable. Correct the offline or quorum disk
condition and reissue the command.
The system supports migrating volumes between child pools within the same parent pool
or migrating a volume in a child pool to its parent pool. Migration of volumes fail if source
and target child pools have different parent pools. However, you can use addvdiskcopy
and rmvdiskcopy commands to migrate volumes between child pools in different parent
pools.
When you use data migration, it is possible for the free destination extents to be consumed
by another process; for example, if a new volume is created in the destination parent pool
or if more migration commands are started. In this scenario, after all the destination
extents are allocated, the migration commands suspend and an error is logged (error ID
020005). To recover from this situation, use either of the following methods:
Add more MDisks to the target parent pool, which provides more extents in the group and
allows the migrations to be restarted. You must mark the error as fixed before you
reattempt the migration.
Migrate one or more volumes that are already created from the parent pool to another
group. This action frees up extents in the group and allows the original migrations to be
restarted.
Complete the following steps to use the migratevdisk command to migrate volumes between
storage pools:
After you determine the volume that you want to migrate and the new storage pool you want
to migrate it to, issue the following CLI command:
migratevdisk -vdisk vdiskname/ID -mdiskgrp newmdiskgrname/ID -threads 4
You can check the progress of the migration by issuing the following CLI command:
lsmigrate
After you determine the volume that you want to migrate and the new pool that you want to
migrate it to, enter the addvdiskcopy command, specifying the new pool as the target (see the
sketch below).
The copy ID of the new copy is returned. The copies now synchronize such that the data is
stored in both storage pools. You can check the progress of the synchronization by issuing the
following command:
lsvdisksyncprogress
After the synchronization is complete, remove the copy from the original storage pool to free
up extents and decrease the utilization of that pool. To remove the original copy, issue the
rmvdiskcopy command, as sketched below.
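A minimal sketch of this volume mirroring approach, assuming a hypothetical volume named myvdisk, a hypothetical target pool named newpool, and that the original copy is copy 0, might be:
addvdiskcopy -mdiskgrp newpool myvdisk
lsvdisksyncprogress myvdisk
rmvdiskcopy -copy 0 myvdisk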
Migrating data (extents) from one MDisk to another (within the same parent pool). This
method can be used to remove highly used MDisks.
Migrating volumes from one parent pool to another. This method can be used to remove
highly used parent pools. For example, you can reduce the use of a pool of MDisks. Child
pools, which receive their capacity from parent pools, cannot have extents migrated to
them.
Note that:
The source MDisk must not currently be the source MDisk for any other migrate extents
operation.
The destination MDisk must not be the destination MDisk for any other migrate extents
operation.
Migration commands fail if the target or source volume is offline, there is no quorum disk
defined, or the defined quorum disks are unavailable. Correct the offline or quorum disk
condition and reissue the command.
You can determine the use of particular MDisks by gathering input/output (I/O) statistics about
nodes, MDisks, and volumes. After you collect this data, you can analyze it to determine
which MDisks are used frequently. The procedure then takes you through querying and
migrating extents to different locations in the same parent pool. This procedure can be
completed only by using the command-line interface.
If performance monitoring tools indicate that an MDisk in the pool is being overused, you can
migrate data to other MDisks within the same parent pool.
-threads indicates the priority of the migration processing, where 1 is the lowest priority
and 4 is the highest priority.
6. Repeat the previous steps for each set of extents that you are moving.
7. You can check the progress of the migration by issuing this CLI command:
lsmigrate
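A minimal sketch of a single extent migration, assuming hypothetical MDisk names mdisk_busy and mdisk_quiet, a hypothetical volume vdisk_hot, and 64 extents to move, might be:
migrateexts -source mdisk_busy -target mdisk_quiet -exts 64 -threads 4 -vdisk vdisk_hot
lsmigrate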
During system setup in the management GUI, you can activate and enable encryption
licenses. The management GUI automatically displays any nodes that support encryption.
The license can either be automatically or manually activated and then enabled for the
system and the supported nodes. Any pools that are created after encryption is enabled are
assigned a key that can be used to encrypt and decrypt data.
However, if encryption was configured after volumes were already assigned to non-encrypted
pools, you can migrate those volumes to an encrypted pool by using child pools.
When you create a child pool after encryption is enabled, an encryption key is created for the
child pool even when the parent pool is not encrypted. You can then use volume mirroring to
migrate the volumes from the non-encrypted parent pool to the encrypted child pool.
8. On the Add Volume Copy page, select Basic for the type of copy that you are creating.
From the list of available pools, select the child pool as the target pool for the copy of the
volume (Figure 6-102 on page 345).
10.Repeat these steps to add volume copies to the encrypted child pool for the remaining
volumes in the parent pool.
11.After all the copies are synchronized in the encrypted child pool, you can delete all of the
primary copies from the parent pool. The empty parent pool must remain unused to use
encrypted volumes in the child pool.
To migrate volumes using the addvdiskcopy in the command-line interface, complete these
steps:
1. In the command-line interface, enter the following command to create a child pool.
a. mkmdiskgrp -name my_encrypted_child_pool -parentmdiskgrp mypool -encrypt yes
where my_encrypted_child_pool is the name of the new child pool and mypool is the
name of the parent pool.
2. To create mirrored copies of the volumes that are in the parent pool in the new child pool in
the CLI, enter the following command:
a. addvdiskcopy -autodelete -mdiskgrp my_encrypted_child_pool -vdisk volume1
where my_encrypted_child_pool is the name of the new child pool and volume1 is the
name of the volume that is being copied. The -autodelete option automatically deletes the
primary copy of the volume after the new copy synchronizes.
3. Repeat step 2 until all the volumes from the original parent contain mirrored copies in the
new child pool. The empty parent pool must remain unused to use encrypted volumes in
the child pool.
As an example, consider a situation in which new "additional" external storage
LUNs are now being managed by the Spectrum Virtualize V7.6 code that runs on SVC
2145-DH8 node hardware, and an encrypted pool was created from this managed
storage. Volumes that were created before Spectrum Virtualize V7.6 was implemented on our
cluster are unencrypted, but we can double-check their encryption status by customizing the
Volumes window view.
You can display the encryption status of a volume in the Volumes → Volumes window.
To do so, you must customize the attributes bar (right-click → select Encryption).
This action alters the default view to include a column that displays the encryption status by
using a "key" icon.
We can take this unencrypted volume and migrate it to an encrypted version of itself by using
the GUI's Migrate option (which runs the migratevdisk command). The selected target pool
must be encrypted for this conversion to take place.
Note: The target pool must be a “Parent Pool”. Currently it is not possible to use the
Migrate option between Parent and Child Pools.
The Migrate to Another Pool option opens the Migrate Volume Copy window.
2. Select a new encrypted pool that has the same extent size as the pool that you are
migrating from (Figure 6-105).
Figure 6-105 shows the original unencrypted volume “unencrypted volume” and the new
encrypted target pool “New_Additional_SW_enc”.
3. After you confirm the target pool, select Migrate. The following "Task completed" message
appears in the Migrate Volume Copy window (Figure 6-106).
The Volumes → Volumes view shows the migrated volume with its new encryption
characteristic (Figure on page 349).
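Because the GUI Migrate option runs the migratevdisk command, an equivalent CLI invocation (a sketch that assumes the unencrypted volume is named unencrypted_volume) might be:
migratevdisk -vdisk unencrypted_volume -mdiskgrp New_Additional_SW_enc -threads 4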
To use the SVC for migration purposes only, complete the following steps:
1. Add the SVC to your SAN environment.
2. Prepare the SVC.
3. Depending on your operating system, unmount the selected LUNs or shut down the host.
4. Add the SVC between your storage and the host.
As you can see, little downtime is required. If you prepare everything correctly, you can
reduce your downtime to a few minutes. The copy process is handled by the SVC, so host
performance is not affected while the migration progresses.
To use the SVC for storage migrations, complete the steps that are described in the following
sections:
6.6.2, “Adding SVC between the host system and DS3400” on page 261
6.6.6, “Migrating volume from image mode to image mode” on page 279
To migrate from a fully allocated volume to a thin-provisioned volume, complete the following
steps:
1. Add the target thin-provisioned copy.
2. Wait for synchronization to complete.
3. Remove the source fully allocated copy.
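A minimal sketch of these three steps, using the VD_Full volume and the MDG_DS83 pool that appear later in this example, and assuming the -rsize 2%, -autoexpand, and -grainsize 32 options and that the fully allocated copy is copy 0, might be:
addvdiskcopy -mdiskgrp MDG_DS83 -rsize 2% -autoexpand -grainsize 32 VD_Full
lsvdisksyncprogress VD_Full
rmvdiskcopy -copy 0 VD_Full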
By using this feature, clients can free managed disk space easily and make better use of their
storage without the need to purchase any other functions for the SVC.
Volume mirroring and thin-provisioned volume functions are included in the base virtualization
license. Clients with thin-provisioned storage on an existing storage system can migrate their
data under SVC management by using thin-provisioned volumes without having to allocate
more storage space.
Zero detect works only if the disk contains zeros. An uninitialized disk can contain anything,
unless the disk is formatted (for example, by using the -fmtdisk flag on the mkvdisk
command).
As shown in Figure 6-108 on page 351, a thin-provisioned volume features the following
components:
Used capacity
This term specifies the portion of real capacity that is used to store data. For
non-thin-provisioned copies, this value is the same as the volume capacity. If the volume
copy is thin-provisioned, the value increases from zero to the real capacity value as more
of the volume is written to.
Real capacity
This capacity is the real allocated space in the storage pool. In a thin-provisioned volume,
this value can differ from the total capacity.
Free capacity
This value specifies the difference between the real capacity and the used capacity. If the volume copy is configured with the -autoexpand option, the SVC automatically expands the real capacity of the volume as the used capacity grows, which maintains this free capacity buffer.
Grains
This value is the smallest unit into which the allocated space can be divided.
Metadata
This value is allocated in the real capacity, and it tracks the used capacity, real capacity,
and free capacity.
======================================================================
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
.
.
vdisk_UID 60050768018401BF280000000000000B
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
2. We add a thin-provisioned volume copy with the volume mirroring option by using the
addvdiskcopy command and the autoexpand parameter, as shown in Example 6-73.
======================================================================
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
sync_rate 50
copy_count 2
copy_id 0
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
As you can see in Example 6-74, the VD_Full has a copy_id 1 where the used_capacity is
0.41 MB, which is equal to the metadata, because only zeros exist in the disk.
The real_capacity is 323.57 MB, which is equal to the -rsize 2% value that is specified in
the addvdiskcopy command. The free capacity is 323.17 MB, which is equal to the real
capacity minus the used capacity.
If zeros are written on the disk, the thin-provisioned volume does not use space.
Example 6-74 shows that the thin-provisioned volume does not use space even when the
capacities are in sync.
======================================================================
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
3. We can split the volume mirror or remove one of the copies, which keeps the thin-provisioned copy as our valid copy, by using the splitvdiskcopy command or the rmvdiskcopy command.
If you need your copy as a thin-provisioned clone, we suggest that you use the splitvdiskcopy command, because that command generates a new volume that you can map to any server that you want.
If you are migrating from a previously fully allocated volume to a thin-provisioned volume without any effect on the server operations, we suggest that you use the rmvdiskcopy command. In this case, the original volume name is kept and it remains mapped to the same server.
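As a hedged illustration of the two alternatives, using the volume and copy IDs from this scenario (the new volume name is arbitrary):
splitvdiskcopy -copy 1 -name VD_TPV VD_Full
rmvdiskcopy -copy 0 VD_Full
The first command creates a new volume, VD_TPV, from the thin-provisioned copy; the second keeps the original volume name and host mappings, now backed only by the thin-provisioned copy.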
Example 6-75 shows the splitvdiskcopy command.
======================================================================
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk -filtervalue name=VD*
======================================================================
id 7
name VD_TPV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
IBM_2145:ITSO-CLS2:ITSO_admin>svcinfo lsvdisk 2
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32
The first part of this chapter provides a brief overview of Spectrum Virtualize volumes, the classes of volumes available, and the topologies they are associated with. It also provides an overview of the advanced customization that is available.
The second part describes how to create volumes using the GUI's Quick and Advanced volume creation menus and how to map these volumes to defined hosts.
The third part provides an introduction to the new volume manipulation commands, which are designed to facilitate the creation and administration of volumes used for HyperSwap and Enhanced Stretched Cluster topologies.
Note: Advanced host and volume administration, such as volume migration, creating
volume copies, and so on, is described in Chapter 9, “Advanced Copy Services” on
page 475.
A basic volume simply presents an area of usable storage that the host can access to perform I/O. However, additional advanced features can be used to customize the properties of a basic volume to provide capacity savings (thin provisioning and compression) and enhanced availability using mirroring. A mirrored volume is a volume with two physical copies; each volume copy belongs to a different storage pool, and each copy has the same virtual capacity as the volume.
With Spectrum Virtualize V7.5 code, we expanded the advanced features of SVC volumes to support High Availability (HA) using HyperSwap. Specific configuration requirements are needed to support this volume class, and in the V7.6 GUI we have introduced assisted configuration, using GUI wizards, that simplifies their creation. These wizards simplify creation by ensuring that only valid, site-specific configuration options are presented.
Spectrum Virtualize V7.6 code also incorporates a class of volume designed to provide
enhanced support with VMware, by supporting VMware vSphere Virtual Volumes, sometimes
referred to as VVols, which allow VMware vCenter to manage system objects like volumes
and pools.
The SVC volume presented is derived from a virtual disk created from managed virtualized storage (MDisks). Application servers access volumes, not MDisks or drives, and each of the volume's copies is created from a set of MDisk extents managed in a storage pool.
Note: A managed disk (MDisk) is a logical unit of physical storage. MDisks are either arrays (RAID) from internal storage or external physical disks that are presented as a single logical disk on the storage area network (SAN). Each MDisk is divided into a number of extents, which are numbered sequentially, from 0, from the start to the end of the MDisk. The extent size is a property of the storage pool that the MDisk is added to.
The type attribute of a volume defines the allocation of extents that make up the volume copy, as follows:
A striped volume contains a volume copy that has one extent allocated in turn from each
MDisk that is in the storage pool. This is the default option but you can also supply a list of
MDisks to use as the stripe set. (Figure 7-1).
Attention: By default, striped volume copies are striped across all MDisks in the storage
pool. If some of the MDisks are smaller than others, the extents on the smaller MDisks are
used up before the larger MDisks run out of extents. Manually specifying the stripe set in
this case might result in the volume copy not being created.
If you are unsure if there is sufficient free space to create a striped volume copy, select one
of the following options:
Check the free space on each MDisk in the storage pool using the lsfreeextents
command.
Let the system automatically create the volume copy by not supplying a specific stripe
set.
A sequential volume contains a volume copy that has extents allocated sequentially on one MDisk.
Image-mode volumes are a special type of volume that have a direct relationship with one
MDisk. They are used when you have an MDisk that contains data that you want to merge
into the clustered system.
A Basic volume is the simplest form of volume. It consists of a single volume copy, made up of extents striped across all MDisks in a storage pool. It services I/O using the readwrite cache and is classified as fully allocated, that is, the reported real capacity and virtual capacity are equal.
You can create other forms of volumes, depending on the type of topology that is configured
on your system.
With standard topology, which is single-site configuration, you can create a basic volume
or a mirrored volume.
– By using volume mirroring, a volume can have two physical copies. Each volume copy
can belong to a different pool, and each copy has the same virtual capacity as the
volume. In the management GUI, an asterisk indicates the primary copy of the mirrored
volume. The primary copy indicates the preferred volume for read requests.
With HyperSwap topology, which is three-site High Availability configuration, you can
create a basic volume or a HyperSwap volume.
– HyperSwap volumes create copies on separate sites for systems that are configured
with HyperSwap topology. Data that is written to a HyperSwap volume is automatically
sent to both copies so that either site can provide access to the volume if the other site
becomes unavailable.
With Stretched topology, which is three-site disaster resilient configuration, you can create
a basic volume or a Stretched volume.
There is also a custom volume class. This class is available across all topologies and enables additional customization, specific to the topology the volume is created in, including the method of capacity savings.
Thin-provisioned When you create a volume, you can designate it as thin-provisioned. A
thin-provisioned volume has a virtual capacity and a real capacity.
Virtual capacity is the volume storage capacity that is available to a
host. Real capacity is the storage capacity that is allocated to a
volume copy from a storage pool. In a fully allocated volume, the
virtual capacity and real capacity are the same. In a thin-provisioned
volume, however, the virtual capacity can be much larger than the real
capacity. Finally, Thin-Mirrored volumes combine the characteristics
of mirrored and thin-provisioned volumes.
Compressed This is a special type of volume where data is compressed as it is
written to disk, saving additional space. To use the compression
function, you must obtain the IBM Real-time Compression license.
V7.6 code also introduces Virtual Volumes. These are available in a system configuration that supports VMware vSphere Virtual Volumes, sometimes referred to as VVols, which allow VMware vCenter to manage system objects, such as volumes and pools. SVC administrators can create volume objects of this class and assign ownership to VMware administrators to simplify management.
Note: From V7.4 it is possible to prevent accidental deletion of volumes if they have recently performed any I/O operations. This feature, called volume protection, prevents active volumes or host mappings from being deleted inadvertently. It is controlled by a global system setting. For more information, see the "Enabling volume protection" topic in the IBM Knowledge Center:
http://www.ibm.com/support/knowledgecenter/STPVGU_7.4.0
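A minimal CLI sketch for enabling volume protection, assuming the chsystem parameters at this software level (the 60-minute window is an arbitrary example):
chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60
chsystem -vdiskprotectionenabled no
The first command blocks deletion of volumes or host mappings that have had I/O within the last 60 minutes; the second disables volume protection again.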
To start the process of creating a new volume, click the dynamic menu (function icons), open the Volumes menu, and click the Volumes option of the IBM SVC graphical user interface (Figure 7-2).
A list of existing volumes, their state, capacity and associated storage pools is displayed.
To define a new Basic volume, click the Create Volumes option on the tab header (Figure 7-3).
The Create Volumes option opens the Create Volumes menu, which displays the two potential creation methods, Quick Volume Creation and Advanced, and the available volume classes.
Note: The volume classes displayed on the Create Volumes menu depend on the topology of the system.
Volumes can be created using the Quick Volume Creation submenu or the Advanced submenu, as shown in Figure 7-4.
In the example above, the Quick Volume Creation submenu shows icons that enable the quick creation of Basic and Mirrored volumes (in standard topology), and the Advanced submenu shows a Custom icon that can be used to customize volume parameters. Custom volumes are discussed in more detail later in this section.
For a HyperSwap topology, the Create Volumes menu displays the options shown in Figure 7-5.
For a Stretched topology, the Create Volumes menu displays the options shown in Figure 7-6.
Independent of the topology of the system, the Create Volumes menu always displays a Basic volume icon in the Quick Volume Creation submenu and a Custom volume icon in the Advanced submenu.
Clicking any of the three icons in the Create Volumes window opens a drop-down window where volume details can be entered. The example (Figure 7-7) uses a Basic volume to demonstrate this.
Notes:
A Basic volume is a volume whose data is striped across all available managed disks
(MDisks) in one storage pool.
A Mirrored volume is a volume with two physical copies, where each volume copy can belong to a different storage pool.
A Custom volume, in the context of this menu, is either a Basic or Mirrored volume with customization of the default parameters.
Quick Volume Creation also provides, using the Capacity Savings parameter, the ability
to change the default provisioning of a Basic or Mirrored Volume to Thin-provisioned or
Compressed. See section.
Note: Advanced host and volume administration, such as volume migration, creating
volume copies, and so on, is described in Chapter 9, “Advanced Copy Services” on
page 475.
Note: The ability to create HyperSwap volumes using the GUI is a new feature with Spectrum Virtualize V7.6 code and significantly simplifies their creation and configuration. This simplification is enhanced by the GUI using a new command, mkvolume.
Create a Basic volume by clicking the Basic icon (Figure 7-4). This opens an additional input window where you can define the following items (a sketch of the equivalent CLI command follows this list):
Pool: The Pool in which the volume will be created (drop-down)
Quantity: Number of volumes to be created (numeric up/down)
Capacity: Size of the volume
– Units (drop-down)
Capacity Savings: (drop-down)
– None
– Thin-provisioned
– Compressed
Name: Name of the Volume (cannot start with a numeric)
I/O group
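For a Basic volume, the GUI builds a mkvdisk command from these fields. A minimal sketch with hypothetical pool, size, and name values:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -name BasicVol01
The Capacity Savings choices map to additional parameters, for example -rsize 2% -autoexpand for Thin-provisioned, with -compressed added for Compressed.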
We suggest using an appropriate naming convention for volumes to help you easily identify the associated host or group of hosts. At a minimum, the name should contain the name of the pool, or some tag that identifies the underlying storage subsystem. It can also contain the host name that the volume will be mapped to, or perhaps the content of the volume, for example, the name of the application to be installed.
Once all the characteristics of the Basic volume have been defined, it can be created by selecting one of the following options:
Create
Create and Map to Host
In this example, we chose the Create option (the volume-to-host mapping can be performed later). Once selected, you should see the confirmation window shown in Figure 7-9.
Note: The “+” icon highlighted in green can be used to create additional volumes in the
same instance of the volume creation wizard.
Success is also indicated by the state of the Basic volume being reported as formatting in the Volumes screen (Figure 7-10).
Notes:
The V7.5 release changed the default behavior of volume creation by introducing a new feature that fills a fully allocated volume with zeros as a background task, that is, Basic volumes are automatically formatted through the quick initialization process. This process makes fully allocated volumes available for use immediately.
Quick initialization requires a small amount of I/O to complete and limits the number of volumes that can be initialized at the same time. Some volume actions, such as moving, expanding, shrinking, or adding a volume copy, are disabled when the specified volume is initializing. Those actions become available after the initialization process completes.
The quick initialization process can be disabled in circumstances where it is not necessary. For example, if the volume is the target of a Copy Services function, the Copy Services operation formats the volume. The quick initialization process can also be disabled for performance testing so that measurements of the raw system capabilities can take place without waiting for the process to complete.
Note: Volume mirroring is not a true disaster recovery solution, because both copies are
accessed by the same node pair and addressable by only a single cluster, but it can
improve availability.
Note: When creating a Mirrored volume using this menu, you are not required to specify the Mirror Sync rate; it defaults to 16 MiB. This synchronization rate can be customized by using the Advanced → Custom menu.
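The synchronization rate can also be adjusted later from the CLI; a brief sketch with a hypothetical volume name and rate value:
chvdisk -syncrate 80 MirroredVol01
The current value is visible as the sync_rate attribute in the lsvdisk output.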
Figure 7-13 Quick Volume Creation - with Capacity Saving option set to Compressed
Tip: An alternative way of opening the Actions menu is to highlight (select) a volume and then use the right mouse button.
1. From the Actions menu, select the Map to Host option (Figure 7-14).
2. This opens the Map to Host window. In this window, use the Select the Host drop-down list to select a host to map the volume to (Figure 7-15).
3. Select the host from the drop-down list, tick the selection, and then click Map (Figure 7-16 on page 374).
4. The Modify Mappings window displays the command details and then a Task completed message (Figure 7-17).
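The mapping task runs a command of roughly the following form; a sketch with hypothetical host and volume names:
mkvdiskhostmap -host ESX_Host01 -scsi 0 BasicVol01
lshostvdiskmap ESX_Host01
The first command maps the volume to the host with SCSI ID 0, and the second verifies the resulting mapping.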
Work through these submenus to customize your Custom volume as desired, and then commit these changes by using Create (Figure 7-18).
Real capacity: the real physical capacity that is allocated to the volume from the storage pool. The real capacity determines the quantity of extents that are initially allocated to the volume.
Virtual capacity: the virtual capacity available to the host. The virtual capacity is the capacity of the volume that is reported to all other components (for example, FlashCopy, cache, and remote copy) and to the hosts.
2. Next, click the Volume Location subsection to define the pool in which the volume will be created. Use the drop-down list in the Pool option to choose the pool. All other options, Volume copy type, Caching I/O grp, Preferred node, and Accessible I/O groups, can be left with their default settings (Figure 7-20 on page 376).
3. Next, click the Thin Provisioning subsection to manage the real and virtual capacity, the expansion criteria, and the grain size (Figure 7-21 on page 377).
Important: If you do not use the autoexpand feature, the volume will go offline after
reaching real capacity.
The GUI also defaults to a grain size of 32 KiB, whereas 256 KiB is the default when using the CLI. The optimum grain size depends on how the volume is used. See:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003982
If you are not going to use the thin-provisioned volume as a FlashCopy® source or target volume, use 256 KiB to maximize performance.
If you are going to use the thin-provisioned volume as a FlashCopy source or target volume, specify the same grain size for the volume and for the FlashCopy function.
4. Apply all required changes and click Create to define the volume (Figure 7-22).
5. Alternatively, you can directly start the wizard for mapping this volume to the host by clicking Create and Map to Host.
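A roughly equivalent CLI sketch for such a thin-provisioned custom volume, with hypothetical names and the grain size and warning threshold discussed above:
mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -warning 80 -name ThinVol01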
Figure 7-23 Defining a volume as compressed using the Capacity Savings option
2. Open the Volume Location subsection and select, from the drop-down menu, the pool in which the compressed volume will be created (use the defaults for all other parameters).
3. Open the Compression subsection and check that Real Capacity is set to a minimum of the default value of 2% (use the defaults for all other parameters). See Figure 7-24 on page 379.
The progress of formatting and synchronization of a newly created Mirrored volume can be checked from the Running Tasks menu. This menu reports the progress of all currently running tasks, including Volume Format and Volume Synchronization (Figure 7-26 on page 381).
Figure 7-26
The Mirror Sync rate can be changed from the default setting using the Advanced option,
subsection Volume Location, of the Create Volume window. This option sets the priority of
copy synchronization progress, allowing a preferential rate to be set for more important
mirrored volumes.
The summary shows you the capacity information and the allocated space. You can click
Advanced and customize the thin-provision settings or the mirror synchronization rate. After
you create the volume, the confirmation window opens (Figure 7-27 on page 382).
The initial synchronization of thin-mirrored volumes is fast when a small amount of real and
virtual capacity is used.
In order to complete the switch to stretched topology, you need to assign site awareness to the following cluster objects:
Host(s)
Controller(s) (External Storage)
Nodes
Site awareness must be defined for each of these object classes before the new topology of
stretched can be set.
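Site awareness can also be assigned from the CLI; a minimal sketch, assuming the -site parameters at this software level and hypothetical object names:
chhost -site site1 ESX_Host01
chcontroller -site site2 controller0
chnode -site site1 node1
chsystem -topology stretched
The chsystem command is run only after every host, controller, and node has a site assigned.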
Assign the site names. All three fields must be assigned before proceeding with Next (Figure 7-29 on page 383).
Next, hosts and storage must be assigned to sites. Each host must be assigned to a site; see Figure 7-30 for an example host.
The next objects to be set with site awareness are the nodes (Figure 7-32 on page 385).
After completing the site awareness assignments for the host, controller, and node objects, you can now proceed to change the topology to stretched.
Note: The above example shows a two-node cluster, which is an unsupported stretched configuration. A more accurate representation is shown in Figure 7-1 on page 386.
A summary of the new topology configuration is displayed before the change is committed. For our two-node example, this looks like Figure 7-33.
With the topology now set to stretched, the Stretched icon is visible in the Quick Volume Creation menu. Select this icon and define the volume's attributes; note that the creation options are restricted based on the site awareness attributes of the controllers (Figure 7-35 on page 388).
The bottom-left section of the example (Figure 7-35) summarizes the volume creation activities about to be performed, that is, one volume with two copies at sites "Site1" and "Site2".
In this section we discuss the new mkvolume command and how the GUI uses this command,
when HyperSwap topology has been configured, instead of the “traditional” mkvdisk
command. The GUI continues to use mkvdisk when all other classes of volumes are created.
https://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102538
https://ibm.biz/BdHvgN
The GUI simplifies the complexity of HyperSwap volume creation, by only presenting the
volume class of HyperSwap as a Quick Volume Creation option after HyperSwap topology
has been configured.
In the following example, HyperSwap topology has been configured and the Quick Volume Creation window is being used to define a HyperSwap volume (Figure 7-37 on page 390).
The capacity and name characteristics are defined as for a Basic volume (highlighted in blue in the example), and the mirroring characteristics are defined by the Site parameters (highlighted in red).
The drop-down menus assist in creation, and the summary (bottom left of the creation window) indicates the actions that will be carried out once the Create option is selected. In the example above (Figure 7-37), a single volume is created, with volume copies in site1 and site2. This volume is in an active-active (Metro Mirror) relationship with additional resilience provided by two change volumes.
The command executed to create this volume is shown in Figure 7-38 on page 391 and can be summarized as follows:
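A minimal sketch of such a mkvolume invocation, assuming colon-separated pools (one per site) and hypothetical pool, name, and size values:
mkvolume -name My_hs_volume -pool Pool_Site1:Pool_Site2 -size 100 -unit gb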
With a single mkvolume we achieve the creation of a HyperSwap volume. Previously (using
Spectrum Virtualize release V7.5) this was only possible with careful planning and issuing
multiple commands:
mkvdisk master_vdisk
mkvdisk aux_vdisk
mkvdisk master_change_volume
mkvdisk aux_change_volume
mkrcrelationship -activeactive
chrcrelationship -masterchange
chrcrelationship -auxchange
addvdiskaccess
In addition, lsvdisk now includes "volume_id", "volume_name", and "function" fields to easily identify the individual VDisks that make up a HyperSwap volume. These views are "rolled up" in the GUI to provide views that reflect the client's view of the HyperSwap volume and its site-dependent copies, as opposed to the "low-level" VDisks and VDisk change volumes.
For example, the Volumes → Volumes view below (Figure 7-39 on page 392) shows the HyperSwap volume "My hs volume" with an expanded view, opened using "+", revealing its two volume copies, "My hs volume (London)" (the master VDisk) and "My hs volume (Hursley)" (the auxiliary VDisk); the VDisk change volumes are not shown.
Likewise, the status of the HyperSwap volume is reported at the "parent" level, that is, if one of the copies is synchronizing or offline, the "parent" HyperSwap volume reflects this state (Figure 7-40).
The individual commands are briefly discussed in the next section, but refer to the IBM Knowledge Center for full details, and the current support status, of these new commands.
mkvolume
Create a new empty volume using storage from existing storage pools. The type of volume
created is determined by the system topology and the number of storage pools specified.
Volume is always formatted (zeroed). This command can be used to create:
– Basic volume- any topology
– Mirrored volume- standard topology
– Stretched volume- stretched topology
– HyperSwap volume- hyperswap topology
rmvolume
Remove a volume. For a HyperSwap volume this includes deleting the active-active
relationship and the change volumes.
The -force parameter with rmvdisk is replaced by individual override parameters, making it clearer to the user exactly what protection they are bypassing.
mkimagevolume
Create a new image mode volume. Can be used to import a volume, preserving existing
data. Implemented as a separate command to provide greater differentiation between the
action of creating a new empty volume and creating a volume by importing data on an
existing mdisk.
addvolumecopy
Add a new copy to an existing volume. The new copy will always be synchronized from the
existing copy. For stretched and hyperswap topology systems this creates a highly
available volume. This command can be used to create:
– Mirrored volume- standard topology
– Stretched volume- stretched topology
– HyperSwap volume- hyperswap topology
rmvolumecopy
Remove a copy of a volume. Leaves the volume intact. Converts a Mirrored, Stretched or
HyperSwap volume into a basic volume. For a HyperSwap volume this includes deleting
the active-active relationship and the change volumes.
Allows a copy to be identified simply by its site.
The -force parameter with rmvdiskcopy is replaced by individual override parameters, making it clearer to the user exactly what protection they are bypassing.
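As a brief, hedged illustration of two of these commands (pool, site, and volume names are hypothetical):
addvolumecopy -pool Pool_Site2 My_volume
rmvolumecopy -site site2 My_hs_volume
The first adds a second copy to an existing volume (creating a mirrored, stretched, or HyperSwap volume depending on the topology); the second removes the copy at site2 and converts the volume back to a basic volume.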
You can map the newly created volume to the host at creation time or map it later. If you did
not click Create and Map to Host when you created the volume, follow the steps in 7.8.1,
“Mapping newly created volumes to the host using the wizard” on page 394.
7.8.1 Mapping newly created volumes to the host using the wizard
We continue to map the volume that was created in 7.3, “Creating volumes using the Quick
Volume Creation” on page 366. We assume that you followed that procedure and clicked
Continue as, for example, shown in Figure 7-14 on page 373.
2. The Modify Host Mappings window opens, and your host and the newly created volume
are already selected. Click Map Volumes to map the volume to the host (Figure 7-42).
3. The confirmation window shows the result of mapping volume task (Figure 7-43).
4. After the task completes, the wizard returns to the Volumes window. By double-clicking the
volume, you can see the host maps (Figure 7-44).
The host is now able to access the volumes and store data on them. See 7.9, “Discovering
volumes on hosts and multipathing” on page 396 for information about discovering the
volumes on the host and making additional host settings, if required.
You can also create multiple volumes in preparation for discovering them later, and customize
mappings.
We assume that you have completed all previous steps in this book so that the hosts and the IBM SVC are prepared:
Prepare your operating systems for attachment (Chapter 4, “Initial configuration” on
page 133).
Create hosts using the GUI (Chapter 4, “Initial configuration” on page 133).
Perform basic volume configuration and host mapping.
Our examples illustrate how to discover Fibre Channel and Internet Small Computer System
Interface (iSCSI) volumes on Microsoft Windows 2008 and VMware ESX 4.x hosts.
From the dynamic menu of the IBM SVC GUI, click the Hosts icon to open the Hosts menu,
and click the Hosts option (Figure 7-45).
The host details show you which volumes are currently mapped to the host, and you also
see the volume UID and the SCSI ID. In our example, four volumes with SCSI ID 0-3 are
mapped to the host.
2. Log on to your Microsoft host and click Start → All Programs → Subsystem Device
Driver DSM → Subsystem Device Driver DSM. A command-line interface opens. Type
the datapath query device command and press Enter to display IBM SVC disks that are
connected to this host (Example 7-1).
Total Devices : 4
============================================================================
Path# Adapter/Hard Disk State Mode Select Errors
0 * Scsi Port3 Bus0/Disk7 Part0 OPEN NORMAL 12 0
1 * Scsi Port2 Bus0/Disk7 Part0 OPEN NORMAL 17 0
2 * Scsi Port2 Bus0/Disk7 Part0 OPEN NORMAL 0 0
3 Scsi Port2 Bus0/Disk7 Part0 OPEN NORMAL 7355 0
4 Scsi Port2 Bus0/Disk7 Part0 OPEN NORMAL 7546 0
5 * Scsi Port3 Bus0/Disk7 Part0 OPEN NORMAL 0 0
6 Scsi Port3 Bus0/Disk7 Part0 OPEN NORMAL 7450 0
The output provides information about the mapped volumes. In our example, four disks are connected (Disk5, Disk6, Disk7, and Disk8), and eight paths to the disks are available (State indicates OPEN).
3. Open the Windows Disk Management window (Figure 7-49 on page 400) by clicking
Start → Run, and then type diskmgmt.msc, and click OK.
Windows device discovery: Usually, Windows discovers new devices, such as disks,
by itself (Plug&Play function). If you completed all the steps but do not see any disks,
click Actions → Rescan Disk in Disk Management to discover potential volumes.
In our example, three of four disks are already initialized. We will use the fourth, unknown,
1 GB disk as an example for the next initialization steps.
4. Right-click the disk in the left pane and select Online (Figure 7-50).
5. Right-click the disk again, select Initialize Disk (Figure 7-51), and click OK.
6. Right-click in the right pane and select New Simple Volume (Figure 7-52).
7. Follow the wizard and the volume is ready to use from your Windows host (Figure 7-53 on
page 402).
The basic setup is now complete. The IBM SVC is configured, and the host is prepared to access the volumes over several paths and is able to store data on the storage subsystem.
Clicking the Mapped Volumes tab shows you which volumes are currently mapped to the
host, and you also see the volume UID and the SCSI ID. In our example, there are no
mapped volumes so far (Figure 7-55).
2. Log on to your Windows 2008 host and click Start → Administrative Tools → iSCSI
Initiator to open the iSCSI Configuration tab (Figure 7-56).
3. Enter the IP address of one of the IBM SVC iSCSI ports and click Quick Connect
(Figure 7-57).
iSCSI IP addresses: The iSCSI IP addresses are different from the cluster and
canister IP addresses, and they are configured in Chapter 4, “Initial configuration” on
page 133.
Now you have completed the steps to connect the storage disk to your iSCSI host, but you
are using only a single path at the moment. To enable multipathing for iSCSI targets, more
actions are required. Complete the following steps:
1. Click Start → Run and type cmd to open a command prompt. Run the following command
and press Enter (Example 7-2):
ServerManagerCMD.exe -install Multipath-IO
Start Installation...
[Installation] Succeeded: [Multipath I/O] Multipath I/O.
<100/100>
2. Click Start → Administrative Tools → MPIO, click the Discover Multi-Paths tab, and
select the Add support for iSCSI devices check box (Figure 7-59).
4. After reboot, select Start → Administrative Tools → iSCSI Initiator to open the iSCSI
Initiator Properties window (Configuration tab). Click the Discovery tab (Figure 7-60).
5. Click Discover Portal, enter the IP address of another IBM SVC iSCSI port (Figure 7-61),
and click OK.
6. Return to the Targets tab (Figure 7-62); the new connection is listed there as Inactive.
7. Highlight the inactive port and click Connect. The Connect to Target window opens
(Figure 7-63).
8. Select the Enable Multipath check box and click OK. The status of the second port now
indicates Connected (Figure 7-64).
Repeat this step for each IBM SVC port you want to use for iSCSI traffic. You may have up
to four port paths to the system.
9. Click Devices → MPIO to ensure that the multipath policy for Windows 2008 is set to the
default, which is Round Robin with Subset (Figure 7-65), and click OK to close this view.
10.Map a volume to the iSCSI host if you have not already done so. In our example, we use a 2 GB disk.
11.Open the Windows Disk Management window (Figure 7-66 on page 410) by clicking
Start → Run, and then type diskmgmt.msc, and click OK.
12.Set the disk online, initialize it, create a file system on it, and then it is ready to use. The
detailed steps of this process are the same as described in 7.9.1, “Windows 2008 Fibre
Channel volume attachment” on page 397.
Now the storage disk is ready for use (Figure 7-67). In our example, we mapped a 2 GB disk,
from IBM SVC Generation 2, to a Windows 2008 host using iSCSI protocol.
The Host Details window shows that one volume is connected to the ESX FC host using
SCSI ID 0. The UID of the volume is also displayed.
2. Connect to your VMware ESX Server using the vSphere client, navigate to the
Configuration tab, and select Storage Adapters or Storage view (Figure 7-69).
3. Select Rescan All and click OK (Figure 7-70 on page 412) to scan for new storage
devices.
5. The Add Storage wizard opens. Click Select Disk/LUN and click Next. The IBM SVC disk
is displayed (Figure 7-72 on page 413). Select it and click Next.
6. Follow the wizard to complete the attachment of the disk. After you click Finish, the wizard
closes and you return to the storage view.
Figure 7-73 shows that the new volume is added to the configuration.
7. Highlight the new data store and click Properties to see the details of it (Figure 7-74 on
page 414).
8. Click Manage Paths to customize the multipath settings. Select Round Robin
(Figure 7-75) and click Change.
The storage disk is available and ready to use for your VMware ESX server using Fibre
Channel attachment.
The Host Details window shows that one volume is connected to the ESX iSCSI host
using SCSI ID 0. The UID of the volume is also displayed.
2. Connect to your VMware ESX Server using the vSphere Client, click the Configuration
tab (Figure 7-77), and select Storage Adapters.
3. Select iSCSI Software Adapter and click Properties. The iSCSI initiator properties
window opens. Select the Dynamic Discovery tab (Figure 7-78 on page 416) and click
Add.
4. To add a target, enter the target IP address (Figure 7-79). The target IP address is the IP
address of a node canister in the I/O group from which you are mapping the iSCSI volume.
Keep the IP port number at the default value of 3260, and click OK. The connection
between the initiator and target is established.
Repeat this step for each IBM SVC iSCSI port that you want to use for iSCSI connections.
iSCSI IP addresses: The iSCSI IP addresses are different from the cluster and
canister IP addresses. They have been configured in Chapter 4, “Initial configuration”
on page 133.
5. After you have added all the ports required, close the iSCSI Initiator properties by clicking
Close (Figure 7-78 on page 416).
You are prompted to rescan for new storage devices. Confirm the scan by clicking Yes
(Figure 7-80).
6. Go to the storage view shown in Figure 7-81 and click Add Storage.
7. The Add Storage wizard opens (Figure 7-82). Select Disk/LUN and click Next.
8. The new iSCSI logical unit number (LUN) displays. Select it and click Next (Figure 7-83 on
page 419).
11.Enter a name for the data store and click Next (Figure 7-86).
12.Select the maximum file system size and click Next (Figure 7-87).
The new iSCSI LUN is now in the process of being added. After the task completes, the
new data store is listed in the storage view (Figure 7-89).
14.Highlight the new data store and click Properties to open and review the data store
settings (Figure 7-90).
15.Click Manage Paths, select Round Robin as the multipath policy (Figure 7-91), and then
click Change.
Click Close twice to return to the storage view, and now the storage disk is available and
ready to use for your VMware ESX server using an iSCSI attachment.
We provide a basic technical overview and the benefits of each feature. For more information
about planning and configuration, see the following IBM Redbooks publications:
Easy Tier:
– Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
– IBM System Storage SAN Volume Controller Best Practices and Performance
Guidelines, SG24-7521
– IBM DS8000 Easy Tier, REDP-4667 (This concept is similar to SVC Easy Tier.)
Thin provisioning:
– Thin Provisioning in an IBM SAN or IP SAN Enterprise Environment, REDP-4265
– DS8000 Thin Provisioning, REDP-4554 (similar concept to IBM SAN Volume Controller
thin provisioning)
RtC:
– Real-time Compression in SAN Volume Controller and Storwize V7000, REDP-4859
– Implementing IBM Real-time Compression in SAN Volume Controller and IBM
Storwize V7000, TIPS1083
– Implementing IBM Easy Tier with IBM Real-time Compression, TIPS1072
Thin provisioning
Real-time Compression Software
8.1 Introduction
In modern and complex application environments, the increasing and often unpredictable
demands for storage capacity and performance lead to issues of planning and optimization of
storage resources.
All of these issues deal with data placement and relocation capabilities or data volume
reduction. Most of these challenges can be managed by having spare resources available
and by moving data, and by the use of data mobility tools or operating systems features (such
as host level mirroring) to optimize storage configurations. However, all of these corrective
actions are expensive in terms of hardware resources, labor, and service availability.
Relocating data among the physical storage resources dynamically, or effectively reducing the amount of data that is stored, transparently to the attached host systems, is becoming increasingly important.
Choosing the correct mix of drives and the correct data placement is critical to achieve
optimal performance at low cost. Maximum value can be derived by placing “hot” data with
high I/O density and low response time requirements on SSDs or flash arrays, and targeting
HDDs for “cooler” data that is accessed more sequentially and at lower rates.
Easy Tier automates the placement of data among different storage tiers and it can be
enabled for internal and external storage. This IBM Spectrum Virtualize feature boosts your
storage infrastructure performance to achieve optimal performance through a software,
server, and storage solution. Additionally, the new, no charge feature called storage pool
balancing, introduced in the V7.3 IBM Spectrum Virtualize software version, automatically
moves extents within the same storage tier, from overloaded to less loaded managed disks
(MDisks). Storage pool balancing ensures that your data is optimally placed among all disks
within storage pools.
Figure 8-1 Easy Tier in the IBM Spectrum Virtualize software stack
In general, I/O in the storage environment is monitored at a volume level, and the entire volume is always placed inside one appropriate storage tier. Determining the amount of I/O, moving part of the underlying volume to an appropriate storage tier, and reacting to workload changes are too complex for manual operation. This is where the Easy Tier feature can be used.
Easy Tier is a performance optimization function because it automatically migrates (or moves)
extents that belong to a volume between different storage tiers (Figure 8-2) or the same
storage tier. Because this migration works at the extent level, it is often referred to as
sub-LUN migration. The movement of the extents is online and unnoticed from the host’s
point of view. As a result of extent movement, the volume no longer has all its data in one tier
but rather in two or three tiers. Figure 8-2 shows the basic Easy Tier principle of operation.
You can enable Easy Tier on a volume basis. It monitors the I/O activity and latency of the
extents on all Easy Tier enabled volumes over a 24-hour period. Based on the performance
log, Easy Tier creates an extent migration plan and dynamically moves (promotes) high
activity or hot extents to a higher disk tier within the same storage pool. It also moves
(demotes) extents whose activity dropped off, or cooled, from a higher disk tier MDisk back to
a lower tier MDisk. When Easy Tier runs in a storage pool rebalance mode, it moves extents
from busy MDisks to less busy MDisks of the same type.
The individual SSDs in the storage that is managed by the SVC are combined into an array,
usually in RAID 10 or RAID 5 format. It is unlikely that RAID 6 SSD arrays are used because
of the double parity overhead, with two logical SSDs used for parity only. A LUN is created on
the array and then presented to the SVC as a normal MDisk.
As is the case for HDDs, the SSD RAID array format helps to protect against individual SSD
failures. Depending on your requirements, you can achieve more high availability protection
above the RAID level by using volume mirroring.
The internal storage configuration of flash arrays can differ depending on an array vendor. But
regardless of the methods that are used to configure flash-based storage, the flash system
maps a volume to a host, in this case, to the SVC. From the SVC perspective, a volume that is
presented from flash storage is also seen as a normal managed disk.
Starting with SVC 2145-DH8 nodes and IBM Spectrum Virtualize V7.3, up to two expansion drawers can be connected to one SVC I/O group. Each drawer can have up to 24 SSDs, and only SSD drives are supported. The SSD drives are then gathered together to form RAID arrays in the same way that RAID arrays are formed in the IBM Storwize systems.
After creation of an SSD RAID array, it appears as an MDisk but with a tier of flash, which differs from MDisks presented from external storage systems. Because the SVC does not know what kind of physical disks the presented MDisks are formed from, the default MDisk tier that the SVC assigns to each external MDisk is enterprise. It is up to the user or administrator to change the type of MDisks to flash, enterprise, or nearline (NL).
To change a tier of an MDisk in the CLI, use the chmdisk command, as in Example 8-1.
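A minimal sketch of such a chmdisk invocation, assuming the tier names accepted at this software level and hypothetical MDisk names:
chmdisk -tier ssd mdisk3
chmdisk -tier nearline mdisk9
The first command marks an MDisk as flash (SSD) tier, and the second marks an MDisk as nearline tier.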
It is also possible to change the MDisk tier from the GUI, but this only applies to external MDisks. To change the tier, go to Pools → External Storage and click the "+" sign next to the controller that owns the MDisks for which you want to change the tier. Then right-click the desired MDisk and select Modify Tier (Figure 8-3).
The new window opens with options to change the tier (Figure 8-4).
This change happens online and has no impact on hosts or availability of the volumes.
If you do not see the Tier column, right-click the blue title row and select the Tier check box, as shown in Figure 8-5.
serial-attached SCSI (SAS) disks, nearline SAS or Serial Advanced Technology Attachment
(SATA), or even SSDs or flash storage systems.
The SVC does not automatically detect the type of MDisks, except for MDisks that are formed from SSD drives in attached expansion drawers. Instead, all external MDisks are initially put into the enterprise tier by default. The administrator must then manually change the tier of the MDisks and add them to storage pools. Depending on what type of disks are gathered to form a storage pool, we distinguish two types of storage pools: single-tier and multi-tier.
MDisks that are used in a single-tier storage pool should have the same hardware
characteristics, for example, the same RAID type, RAID array size, disk type, disk revolutions
per minute (RPM), and controller performance characteristics.
Adding SSDs to the pool also means that more space is now available for new volumes or
volume expansion.
Note: Image mode and sequential volumes are not candidates for Easy Tier automatic
data placement because all extents for those types of volumes must reside on one, specific
MDisk and cannot be moved.
The Easy Tier setting can be changed on a storage pool and volume level. Depending on the
Easy Tier setting and the number of tiers in the storage pool, Easy Tier services may function
in a different way. Table 8-1 on page 434 shows possible combinations of Easy Tier settings.
Table 8-1 columns: Storage pool Easy Tier setting, Number of tiers in the storage pool, Volume copy Easy Tier setting, and Volume copy Easy Tier status.
Table notes:
1. If the volume copy is in image or sequential mode or is being migrated, the volume copy
Easy Tier status is measured instead of active.
2. When the volume copy status is inactive, no Easy Tier functions are enabled for that
volume copy.
3. When the volume copy status is measured, the Easy Tier function collects usage
statistics for the volume but automatic data placement is not active.
4. When the volume copy status is balanced, the Easy Tier function enables
performance-based pool balancing for that volume copy.
5. When the volume copy status is active, the Easy Tier function operates in automatic
data placement mode for that volume.
6. The default Easy Tier setting for a storage pool is Auto, and the default Easy Tier setting
for a volume copy is On. Therefore, Easy Tier functions, except pool performance
balancing, are disabled for storage pools with a single tier. Automatic data placement
mode is enabled by default for all striped volume copies in a storage pool with two or
more tiers.
Figure 8-8 shows the naming convention and all supported combinations of storage tiering
that are used by Easy Tier.
When enabled, Easy Tier performs the following actions between the three tiers presented in
Figure 8-8:
Promote
Moves the relevant hot extents to a higher-performing tier.
Swap
Exchanges a cold extent in an upper tier with a hot extent in a lower tier.
Warm demote:
– Prevents performance overload of a tier by demoting a warm extent to the lower tier.
– Triggered when bandwidth or IOPS exceeds a predefined threshold.
Demote or cold demote
Moves the coldest data to a lower HDD tier. Cold demote is supported between HDD tiers only.
Expanded cold demote
Demotes appropriate sequential workloads to the lowest tier to better use nearline disk
bandwidth.
Storage pool balancing:
– Redistributes extents within a tier to balance utilization across MDisks for maximum
performance.
– Moves hot extents from highly utilized MDisks to less utilized MDisks.
– Exchanges extents between highly utilized MDisks and less utilized MDisks.
Easy Tier attempts to migrate the most active volume extents up to SSD first.
A previous migration plan and any queued extents that are not yet relocated are
abandoned.
Note: Extent migration occurs only between adjacent tiers. In a three-tiered storage pool,
Easy Tier will not move extents from SSDs directly to NL-SAS and vice versa without
moving them first to SAS drives.
Dynamic data movement is not apparent to the host server and application users of the data,
other than providing improved performance. Extents are automatically migrated, as explained
in “Implementation rules” on page 439.
The statistics summary file is also created in this mode. This file can be offloaded for input to
the advisor tool. The tool produces a report on the extents that are moved to a higher tier and
a prediction of the performance improvement that can be gained if more higher-tier capacity is
available.
Options: The Easy Tier function can be turned on or off at the storage pool level and at the
volume level.
The process automatically balances existing data when new MDisks are added to an
existing pool, even if the pool contains only a single type of drive. This does not mean that the
process migrates extents from existing MDisks to achieve an even extent distribution among
all MDisks, old and new, in the storage pool. The Easy Tier rebalancing process builds its
within-tier migration plan based on performance, not on the capacity of the underlying MDisks.
Note: Storage pool balancing can be used to balance extents when mixing different size
disks of the same performance tier. For example, when adding larger capacity drives to a
pool with smaller capacity drives of the same class, storage pool balancing redistributes
the extents to take advantage of the additional performance of the new MDisks.
Implementation rules
Remember the following implementation and operational rules when you use the IBM System
Storage Easy Tier function on the SVC:
Easy Tier automatic data placement is not supported on image mode or sequential
volumes. I/O monitoring for such volumes is supported, but you cannot migrate extents on
these volumes unless you convert image or sequential volume copies to striped volumes.
Automatic data placement and extent I/O activity monitors are supported on each copy of
a mirrored volume. Easy Tier works with each copy independently of the other copy.
If possible, the SVC creates volumes or expands volumes by using extents from MDisks
from the HDD tier. However, it uses extents from MDisks from the SSD tier, if necessary.
When a volume is migrated out of a storage pool that is managed with Easy Tier, Easy Tier
automatic data placement mode is no longer active on that volume. Automatic data
placement is also turned off while a volume is being migrated, even when the migration is between
pools that both have Easy Tier automatic data placement enabled. Automatic data placement
for the volume is re-enabled when the migration is complete.
Limitations
When you use Easy Tier on the SVC, keep in mind the following limitations:
Removing an MDisk by using the -force parameter
When an MDisk is deleted from a storage pool with the -force parameter, extents in use
are migrated to MDisks in the same tier as the MDisk that is being removed, if possible. If
insufficient extents exist in that tier, extents from the other tier are used.
Migrating extents
When Easy Tier automatic data placement is enabled for a volume, you cannot use the
svctask migrateexts CLI command on that volume.
Migrating a volume to another storage pool
When the SVC migrates a volume to a new storage pool, Easy Tier automatic data
placement between the two tiers is temporarily suspended. After the volume is migrated to
its new storage pool, Easy Tier automatic data placement between the generic SSD tier
and the generic HDD tier resumes for the moved volume, if appropriate.
When the SVC migrates a volume from one storage pool to another, it attempts to migrate
each extent to an extent in the new storage pool from the same tier as the original extent.
In several cases, such as where a target tier is unavailable, the other tier is used. For
example, the generic SSD tier might be unavailable in the new storage pool.
Migrating a volume to image mode
Easy Tier automatic data placement does not support image mode. When a volume with
active Easy Tier automatic data placement mode is migrated to image mode, Easy Tier
automatic data placement mode is no longer active on that volume.
Image mode and sequential volumes cannot be candidates for automatic data placement;
however, Easy Tier supports evaluation mode for image mode volumes.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier off
easy_tier_status measured
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 1.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 1.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
encrypt no
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
owner_type none
owner_id
owner_name
encrypt no
volume_id 1
volume_name test
function
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 0
mdisk_grp_name Pool0_Site1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 1.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 1.00GB
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
encrypt no
free_capacity 1.93TB
virtual_capacity 22.00GB
used_capacity 22.00GB
real_capacity 22.00GB
overallocation 1
warning 80
easy_tier auto
easy_tier_status balanced
tier ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 4
tier_capacity 1.95TB
tier_free_capacity 1.93TB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
site_id 1
site_name ITSO_DC1
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type parent
encrypt no
owner_type none
owner_id
owner_name
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_mdisk_count 4
tier_capacity 1.95TB
tier_free_capacity 1.93TB
tier nearline
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
site_id 1
site_name ITSO_DC1
parent_mdisk_grp_id 0
parent_mdisk_grp_name Pool0_Site1
child_mdisk_grp_count 0
child_mdisk_grp_capacity 0.00MB
type parent
encrypt no
owner_type none
owner_id
owner_name
The first setting we discuss is Easy Tier acceleration. This is a system-wide setting that is
disabled by default. Turning this setting on makes Easy Tier move extents up to four times
faster than the default setting. In accelerate mode, Easy Tier can move up to 48 GiB every 5
minutes, whereas in normal mode it moves up to 12 GiB. Enabling Easy Tier acceleration is
advised only during periods of low system activity. The two most probable use cases for
acceleration are:
When adding new capacity to the pool, accelerating Easy Tier can quickly spread existing
volumes onto the new MDisks.
When migrating volumes between storage pools where the target storage pool has more tiers
than the source storage pool, acceleration lets Easy Tier quickly promote or demote extents in
the target pool.
This setting can be changed online, without any impact on host or data availability. To turn
Easy Tier acceleration mode on or off, use the chsystem command, as shown in Example 8-3.
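A minimal sketch of the chsystem invocation follows; the -easytieracceleration parameter name is an assumption here, so verify it against the CLI reference for your code level. The lssystem output that follows shows the easy_tier_acceleration value changing from off to on:
IBM_2145:ITSO_SVC_DH8:superuser>chsystem -easytieracceleration on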
space_in_mdisk_grps 3.9TB
space_allocated_to_vdisks 522.00GB
total_free_space 11.2TB
total_vdiskcopy_capacity 522.00GB
total_used_capacity 522.00GB
total_overallocation 4
total_vdisk_capacity 522.00GB
total_allocated_extent_capacity 525.00GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 7.6.0.0 (build 121.17.1510192058000)
console_IP 10.18.228.140:443
id_alias 000002007FC02102
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply [email protected]
email_contact no
email_contact_primary 1234567
email_contact_alternate
email_contact_location ff
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 7
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret 1010
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_capacity 3.90TB
tier_free_capacity 3.39TB
tier nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
easy_tier_acceleration off
has_nas_key no
layer replication
rc_buffer_size 48
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
cache_prefetch on
email_organization ff
email_machine_address ff
email_machine_city ff
email_machine_state XX
email_machine_zip 12345
email_machine_country US
total_drive_raw_capacity 0
compression_destage_mode off
local_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status
rc_auth_method chap
vdisk_protection_time 15
vdisk_protection_enabled no
product_name IBM SAN Volume Controller
odx off
max_replication_delay 0
id 000002007FC02102
name ITSO SVC DH8
location local
partnership
total_mdisk_capacity 11.7TB
space_in_mdisk_grps 3.9TB
space_allocated_to_vdisks 522.00GB
total_free_space 11.2TB
total_vdiskcopy_capacity 522.00GB
total_used_capacity 522.00GB
total_overallocation 4
total_vdisk_capacity 522.00GB
total_allocated_extent_capacity 525.00GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 7.6.0.0 (build 121.17.1510192058000)
console_IP 10.18.228.140:443
id_alias 000002007FC02102
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply [email protected]
email_contact no
email_contact_primary 1234567
email_contact_alternate
email_contact_location ff
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 7
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret 1010
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_capacity 3.90TB
tier_free_capacity 3.39TB
tier nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
easy_tier_acceleration on
has_nas_key no
layer replication
rc_buffer_size 48
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
cache_prefetch on
email_organization ff
email_machine_address ff
email_machine_city ff
email_machine_state XX
email_machine_zip 12345
email_machine_country US
total_drive_raw_capacity 0
compression_destage_mode off
local_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status
rc_auth_method chap
vdisk_protection_time 15
vdisk_protection_enabled no
The second setting is called MDisk Easy Tier load. This setting is set on a per-MDisk basis
and indicates how much load Easy Tier can put on a particular MDisk. Five different values
can be set for each MDisk: default, low, medium, high, and very high.
The system chooses the default setting based on the storage system from which the MDisk is
presented. Change the default setting to another value only when you are certain that a
particular MDisk is under-utilized and can handle more load, or that the MDisk is over-utilized
and the load should be lowered. Change this setting to very high only for SSD and flash
MDisks.
The setting can be changed online, without any impact on the hosts or data availability. To
change this setting, use the chmdisk command, as shown in Example 8-4.
drive_count 0
stripe_width 0
rebuild_areas_total
rebuild_areas_available
rebuild_areas_goal
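A minimal sketch of the chmdisk invocation that Example 8-4 refers to follows; the -easytierload parameter name and its value form are assumptions for this code level, and mdisk1 is a hypothetical MDisk name:
IBM_2145:ITSO_SVC_DH8:superuser>chmdisk -easytierload high mdisk1
A detailed lsmdisk view of the same MDisk can then be used to confirm the new setting.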
Heat data files are produced approximately once a day (that is, every 24 hours) when Easy
Tier is active on one or more storage pools; they summarize the activity per volume since the
prior heat data file was produced. On the SVC, the heat data file is located in the /dumps
directory on the configuration node and is named dpa_heat.node_name.time_stamp.data.
Any existing heat data file is erased after seven days. The file must be offloaded by the user,
and STAT must be invoked from a Windows command prompt with the file specified
as a parameter. The user can also specify the output directory. STAT creates a set of HTML
files, and the user can then open index.html in a browser to view the results.
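For illustration only, a hypothetical invocation from a Windows command prompt might look as follows; the working directory, the heat data file name, and the -o output-directory flag are assumptions, so check the STAT documentation for the exact syntax:
C:\EasyTier> STAT.exe -o C:\EasyTier\output dpa_heat.node1.151012.123456.data
STAT then writes index.html and its supporting files into the specified output directory.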
Updates to STAT for V7.3 introduced an additional reporting capability. As a result, when the
STAT tool is run on a heat map file, three additional CSV files are created and placed in the
Data_files directory.
Figure 8-10 shows the CSV files highlighted in the Data_files directory after the STAT tool is
run over an SVC heat map.
In addition to the STAT tool, IBM Spectrum Virtualize has a companion utility, the IBM STAT
Charting Utility, for creating additional graphical reports of the workload that Easy Tier
performs. The utility takes the three CSV files as input and turns them into graphs for simple
reporting.
The STAT Charting Utility can be downloaded from the IBM Support website:
http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS5251
Thin provisioning presents more storage space to the hosts or servers that are connected to
the storage system than is available on the storage system. The IBM SVC has this capability
for Fibre Channel and iSCSI provisioned volumes.
An example of thin provisioning is when a storage system contains 5000 GiB of usable
storage capacity, but the storage administrator mapped volumes of 500 GiB each to 15 hosts.
In this example, the storage administrator makes 7500 GiB of storage space visible to the
hosts, even though the storage system has only 5000 GiB of usable space, as shown in
Figure 8-14. In this case, all 15 hosts cannot immediately use the full 500 GiB that is
provisioned to them. The storage administrator must monitor the system and add storage as
needed.
You can imagine thin provisioning as the same process that airlines use when they sell more
tickets for a flight than there are physical seats, assuming that some passengers do not appear
at check-in. They do not assign actual seats at the time of sale, which avoids each client having
a claim on a specific seat number. The same concept applies to thin provisioning (the airline),
the SVC (the plane), and its volumes (the seats). The storage administrator (the airline ticketing
system) must closely monitor the allocation process and set proper thresholds.
Real capacity defines how much disk space is allocated to a volume. Virtual capacity is the
capacity of the volume that is reported to other SVC components (such as FlashCopy or
remote copy) and to the hosts. For example, you can create a volume with a real capacity of
only 100 GiB but a virtual capacity of 1 TiB. The actual space that is used by the volume on
the SVC will be 100 GiB but hosts will see a 1 TiB volume.
A directory maps the virtual address space to the real address space. The directory and the
user data share the real capacity.
A volume that is created without the autoexpand feature, and therefore has a zero
contingency capacity, goes offline when the real capacity is used and the volume must
expand.
Warning threshold: Enable the warning threshold, by using email or a Simple Network
Management Protocol (SNMP) trap, when you work with thin-provisioned volumes. You
can enable the warning threshold on the volume, and on the storage pool side, especially
when you do not use the autoexpand mode. Otherwise, the thin volume goes offline if it
runs out of space.
Autoexpand mode does not cause real capacity to grow much beyond the virtual capacity.
The real capacity can be manually expanded to more than the maximum that is required by
the current virtual capacity, and the contingency capacity is recalculated.
Space allocation
When a thin-provisioned volume is created, a small amount of the real capacity is used for
initial metadata. Write I/Os to the grains of the thin volume (that were not previously written to)
cause grains of the real capacity to be used to store metadata and user data. Write I/Os to the
grains (that were previously written to) update the grain where data was previously written.
Grain definition: The grain is defined when the volume is created and can be 32 KiB,
64 KiB, 128 KiB, or 256 KiB.
Smaller granularities can save more space, but they have larger directories. When you use
thin-provisioning with FlashCopy, specify the same grain size for the thin-provisioned volume
and FlashCopy.
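As an alternative to the GUI flow that is described next, a thin-provisioned volume can be created from the CLI with the mkvdisk command. The following line is a minimal sketch; the volume name is hypothetical, and the -rsize, -warning, and -grainsize values are examples only:
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -name thin_vol01 -mdiskgrp Pool0_Site1 -iogrp 0 -size 1 -unit tb -rsize 10% -autoexpand -warning 80% -grainsize 256
This creates a volume with 1 TB of virtual capacity, a real capacity of 10% of the virtual capacity, autoexpand enabled, a warning threshold at 80%, and a 256 KiB grain size.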
Select the volume size and name, and choose Thin-provisioned from the Capacity savings
drop-down menu. If you want to create more volumes, click the “+” sign next to the volume name.
Click the Volume Location tab and select the storage pool for the volume. If you have more than
one I/O group, you can also select the caching I/O group and the preferred node here, as shown in
Figure 8-16.
Next, go to the Thin Provisioning tab and enter the thin-provisioning parameters, such as the real
capacity, the warning threshold, and whether autoexpand is enabled or disabled, as shown in Figure 8-17.
Check your selections in the General tab and click the Create button, as shown in Figure 8-18.
Some file systems (for example, New Technology File System [NTFS]) write to the whole
volume before overwriting deleted files. Other file systems reuse space in preference to
allocating new space.
File system problems can be moderated by tools, such as “defrag,” or by managing storage by
using host Logical Volume Managers (LVMs).
The thin-provisioned volume also depends on how applications use the file system. For
example, some applications delete log files only when the file system is nearly full.
Note: Starting with V7.3 the cache subsystem architecture was redesigned. Now,
thin-provisioned volumes can benefit from lower cache functions (such as coalescing
writes or prefetching), which greatly improve performance.
Table 8-2 Maximum thin-provisioned volume virtual capacities for an extent size
Extent size in MB | Maximum volume real capacity in GB | Maximum thin-provisioned volume virtual capacity in GB
16 | 2,048 | 2,000
32 | 4,096 | 4,000
64 | 8,192 | 8,000
Table 8-3 on page 459 shows the maximum thin-provisioned volume virtual capacities for a
grain size.
Table 8-3 Maximum thin-provisioned volume virtual capacities for a grain size
Grain size in KiB | Maximum thin-provisioned volume virtual capacity in GiB
32 | 260,000
64 | 520,000
128 | 1,040,000
256 | 2,080,000
For more information and detailed performance considerations for configuring thin
provisioning, see IBM System Storage SAN Volume Controller Best Practices and
Performance Guidelines, SG24-7521.
Tip: Implementing compression in IBM Spectrum Virtualize provides the same benefits
to internal SSDs and externally virtualized storage systems.
General-purpose volumes
Most general-purpose volumes are used for highly compressible data types, such as home
directories, CAD/CAM, oil and gas geoseismic data, and log data. Storing such types of data
in compressed volumes provides immediate capacity reduction to the overall consumed
space. More space can be provided to users without any change to the environment.
Many file types can be stored on general-purpose servers. As a practical guideline, the
estimated compression ratios given here are based on actual field experience. Expected
compression ratios are 50% - 60%.
File systems that contain audio, video files, and compressed files are not good candidates for
compression. The overall capacity savings on these file types are minimal.
Databases
Database information is stored in table space files. High compression ratios are common in
database volumes. Examples of databases that can greatly benefit from RtC are IBM DB2®,
Oracle, and Microsoft SQL Server. Expected compression ratios are 50% - 80%.
Virtualized infrastructures
The proliferation of open systems virtualization in the market has increased the use of storage
space, with more virtual server images and backups kept online. The use of compression
reduces the storage requirements at the source.
Examples of virtualization solutions that can greatly benefit from RtC are VMware, Microsoft
Hyper-V, and kernel-based virtual machine (KVM). Expected compression ratios are 45% -
75%.
Tip: Virtual machines (VMs) with file systems that contain compressed files are not good
compression candidates.
At a high level, the IBM RACE component compresses data that is written into the storage
system dynamically. This compression occurs transparently, so Fibre Channel and iSCSI
connected hosts are not aware of the compression. RACE is an inline compression
technology, which means that each host write is compressed as it passes through IBM
Spectrum Virtualize to the disks. This technology has a clear benefit over other compression
technologies that are post-processing based. These technologies do not provide immediate
capacity savings; therefore, they are not a good fit for primary storage workloads, such as
databases and active data set applications.
RACE is based on the Lempel-Ziv lossless data compression algorithm and operates in a
real-time method. When a host sends a write request, the request is acknowledged by the
write cache of the system, and then staged to the storage pool. As part of its staging, the
write request passes through the compression engine and is then stored in compressed
format onto the storage pool. Therefore, writes are acknowledged immediately after they are
received by the write cache with compression occurring as part of the staging to internal or
external physical storage.
Capacity is saved when the data is written by the host because the host writes are smaller
when they are written to the storage pool.
IBM RtC is a self-tuning solution, similar to the SVC system itself: it adapts to the
workload that runs on the system at any particular moment.
Compression utilities
Compression is probably most known to users because of the widespread use of
compression utilities, such as the zip and gzip utilities. At a high level, these utilities take a file
as their input, and parse the data by using a sliding window technique. Repetitions of data are
detected within the sliding window history, most often 32 KiB. Repetitions outside of the
window cannot be referenced. Therefore, the file cannot be reduced in size unless data is
repeated when the window “slides” to the next 32 KiB slot.
Figure 8-19 shows compression that uses a sliding window, where the first two repetitions of
the string “ABCD” fall within the same compression window, and can therefore be
compressed by using the same dictionary. The third repetition of the string falls outside of this
window and therefore cannot be compressed by using the same compression dictionary as
the first two repetitions, reducing the overall achieved compression ratio (Figure 8-19).
However, drawbacks exist to this approach. An update to a chunk requires a read of the
chunk followed by a recompression of the chunk to include the update. The larger the chunk
size chosen, the heavier the I/O penalty to recompress the chunk. If a small chunk size is
chosen, the compression ratio is reduced because the repetition detection potential is
reduced.
Figure 8-20 shows an example of how the data is broken into fixed size chunks (in the
upper-left corner of the figure). It also shows how each chunk gets compressed
independently into variable length compressed chunks (in the upper-right side of the figure).
The resulting compressed chunks are stored sequentially in the compressed output.
This method enables an efficient and consistent method to index the compressed data
because the data is stored in fixed-size containers.
Location-based compression
Both compression utilities and traditional storage systems compression compress data by
finding repetitions of bytes within the chunk that is being compressed. The compression ratio
of this chunk depends on how many repetitions can be detected within the chunk. The
number of repetitions is affected by how much the bytes stored in the chunk are related to
each other. The relationship between bytes is driven by the format of the object. For example,
an office document might contain textual information, and an embedded drawing, such as this
page. Because the chunking of the file is arbitrary, it has no notion of how the data is laid out
within the document. Therefore, a compressed chunk can be a mixture of the textual
information and part of the drawing. This process yields a lower compression ratio because
the different data types mixed together cause a suboptimal dictionary of repetitions. That is,
fewer repetitions can be detected because a repetition of bytes in a text object is unlikely to be
found in a drawing.
This traditional approach to data compression is also called location-based compression. The
data repetition detection is based on the location of data within the same chunk.
This challenge was also addressed with the predecide mechanism that was introduced in
version 7.1.
Predecide mechanism
Certain data chunks have a higher compression ratio than others. Compressing some of the
chunks saves little space but still requires resources, such as CPU and memory. To avoid
spending resources on uncompressible data, and to provide the ability to use a different,
more effective (in this particular case) compression algorithm, IBM invented a predecide
mechanism that was first introduced in version 7.1.
Chunks that fall below a certain compression ratio are skipped by the compression
engine, which saves CPU time and memory processing. Chunks that are not suitable for the
main compression algorithm, but that can still be compressed well with another algorithm,
are marked and processed accordingly. The result can vary because predecide
does not check the entire block, only a sample of it.
Temporal compression
RACE offers a technology leap, which is called temporal compression, beyond location-based
compression.
When host writes arrive to RACE, they are compressed and fill fixed size chunks that are also
called compressed blocks. Multiple compressed writes can be aggregated into a single
compressed block. A dictionary of the detected repetitions is stored within the compressed
block. When applications write new data or update existing data, the data is typically sent
from the host to the storage system as a series of writes. Because these writes are likely to
originate from the same application and be from the same data type, more repetitions are
usually detected by the compression algorithm.
This type of data compression is called temporal compression because the data repetition
detection is based on the time that the data was written into the same compressed block.
Temporal compression adds the time dimension that is not available to other compression
algorithms. Temporal compression offers a higher compression ratio because the
compressed data in a block represents a more homogeneous set of input data.
Figure 8-23 shows how three writes that are sent one after the other by a host end up in different
chunks. They are compressed into different chunks because their locations in the volume are not
adjacent. This approach yields a lower compression ratio because the same data must be
compressed by using three separate dictionaries.
When the same three writes, as shown on Figure 8-24, are sent through RACE, the writes are
compressed together by using a single dictionary. This approach yields a higher compression
ratio than location-based compression.
With the dual RACE enhancement, compression performance can be boosted up to two times
for compressed workloads when compared to previous versions.
To take advantage of dual RACE, several software and hardware requirements must be met:
The software must be at level V7.4.
Only 2145-DH8 nodes are supported.
A second eight-core CPU must be installed per SVC node.
An additional 32 GB must be installed per SVC node.
At least one Coleto Creek acceleration card must be installed per SVC node. The second
acceleration card is not required.
Note: We recommend using two acceleration cards for the best performance.
When the dual RACE feature is used, the acceleration cards are shared between the RACE
components, which means that the acceleration cards are used simultaneously by both
RACE components. The rest of the resources, such as CPU cores and RAM, are evenly
divided between the RACE components. You do not need to manually enable dual RACE;
it triggers automatically when all minimal software and hardware requirements are
met. If the SVC is compression capable but the minimal requirements for dual RACE are not
met, only one RACE instance is used (as in previous versions of the code).
RACE technology is implemented into the IBM Spectrum Virtualize thin provisioning layer,
and it is an organic part of the stack. The IBM Spectrum Virtualize software stack is shown in
Figure 8-26. Compression is transparently integrated with existing system management
design. All of the IBM Spectrum Virtualize advanced features are supported on compressed
volumes. You can create, delete, migrate, map (assign), and unmap (unassign) a compressed
volume as though it were a fully allocated volume. In addition, you can use RtC with Easy Tier
on the same volumes. This compression method provides nondisruptive conversion between
compressed and decompressed volumes. This conversion provides a uniform user
experience and eliminates the need for special procedures when dealing with compressed
volumes.
When the upper cache layer destages to the RACE, the I/Os are sent to the thin-provisioning
layer. They are then sent to the RACE, and if necessary, to the original host write or writes.
The metadata that holds the index of the compressed volume is updated, if needed, and
compressed, as well.
If the RtC component does not contain the requested data, the request is forwarded to the
SVC lower-level cache.
If the lower-level cache contains the requested data, it is sent up the stack and returned to
the host without accessing the storage.
If the lower-level cache does not contain the requested data, it sends a read request to the
storage for the requested data.
In the drop-down menu, select Compression as the Capacity Savings option, as shown in
Figure 8-28.
After the copies are fully synchronized, the original volume copy will be deleted automatically.
As a result, you have compressed data on the existing volume. This process is nondisruptive,
so the data remains online and accessible by applications and users.
This capability enables clients to regain space from the storage pool, which can then be
reused for other applications.
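The same conversion can be done from the CLI by adding a compressed copy to the existing volume with the addvdiskcopy command. The following line is a minimal sketch; the volume name myvolume is hypothetical, the -rsize value is an example only, and the -autodelete option (which removes the original copy after synchronization) is an assumption for this code level, so verify it in the CLI reference:
IBM_2145:ITSO_SVC_DH8:superuser>addvdiskcopy -mdiskgrp Pool0_Site1 -rsize 2% -autoexpand -compressed -autodelete myvolume
Synchronization progress can be monitored with the lsvdisksyncprogress command.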
With the virtualization of external storage systems, the ability to compress already stored data
significantly enhances and accelerates the benefit to users. This capability allows them to see
a tremendous return on their SVC investment. On the initial purchase of an SVC with RtC,
clients can defer their purchase of new storage. When new storage must be acquired, IT
purchases less storage than would have been required without compression.
Important: The SVC reserves some of its resources, such as CPU cores and RAM
memory, after you create one compressed volume or volume copy. This reserve might
affect your system performance if you do not plan for the reserve in advance.
To create a compressed volume by using the Basic option, from the top bar under the Volumes
menu, choose Create Volumes and select Basic in the Quick Volume Creation section, as shown
in Figure 8-29.
To create a compressed volume by using the Advanced option, from the top bar under the
Volumes menu, choose Create Volumes and select Custom in the Advanced section, as shown
in Figure 8-30.
In the Volume Details section, set the Capacity and the Saving Type (Compression), and give the
volume a Name, as shown in Figure 8-31.
Set the location properties, including the Pool, in the Volume Location section, as shown in
Figure 8-32.
The Compressed section provides the ability to set or change the allocated (virtual) capacity, the
real capacity that the data uses on this volume, autoexpand, and the warning threshold
(Figure 8-33).
When the compressed volume is configured, you can directly map it to the host or map it later
on request.
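A compressed volume can also be created in a single CLI step. The following line is a minimal sketch; the volume name is hypothetical, and the -rsize value is an example only:
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -name comp_vol01 -mdiskgrp Pool0_Site1 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -compressed
The -compressed flag makes the new thin-provisioned volume a compressed volume, so no separate conversion step is needed.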
8.4.10 Comprestimator
IBM Spectrum Virtualize V7.6 introduces a utility to estimate the expected compression ratios of
existing volumes. V7.5 and V7.6 also include a number of RAS improvements and features that
help IBM Service to troubleshoot and monitor customer environments in a much better way.
The built-in Comprestimator is a command-line utility that can be used to estimate the
expected compression rate for a given volume.
Example 8-5 shows the command run against volume ID 0.
IBM_2145:ITSO_SVC_DH8:superuser>lsvdiskanalysis 0
id 0
name SQL_Data0
state estimated
started_time 151012104343
analysis_time 151012104353
capacity 300.00GB
thin_size 290.85GB
thin_savings 9.15GB
thin_savings_ratio 3.05
compressed_size 141.58GB
compression_savings 149.26GB
compression_savings_ratio 51.32
total_savings 158.42GB
total_savings_ratio 52.80
accuracy 4.97
where state is one of the following values:
idle - The volume was never estimated and is not currently scheduled.
scheduled - The volume is queued for estimation and will be processed, lowest volume
ID first.
active - The volume is being analyzed.
canceling - An abort of an active analysis was requested, but the analysis has not yet been
aborted.
estimated - The volume was analyzed, and the results show the expected savings of thin
provisioning and compression.
sparse - The volume was analyzed, but Comprestimator could not find enough non-zero
samples to establish a good estimation.
compression_savings_ratio - The compression savings ratio is the estimated amount of space
that can be saved on the storage for this specific volume, expressed as a
percentage.
analyzevdiskbysystem - Provides an option to run Comprestimator on all volumes within
the system. The analysis process is non-disruptive and should not affect the system
significantly. The analysis speed can vary because of the fullness of the volume, but it should not
take more than a couple of minutes per volume.
The analysis can be canceled by running the analyzevdiskbysystem -cancel command.
lsvdiskanalysisprogress - Shows the progress of the Comprestimator analysis, as shown in
Example 8-6.
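For illustration, a minimal sketch of how these commands are invoked from the CLI follows, run without additional parameters apart from the -cancel option noted above:
IBM_2145:ITSO_SVC_DH8:superuser>analyzevdiskbysystem
IBM_2145:ITSO_SVC_DH8:superuser>lsvdiskanalysisprogress
IBM_2145:ITSO_SVC_DH8:superuser>analyzevdiskbysystem -cancel
When the analysis of a volume completes, lsvdiskanalysis for that volume (as in Example 8-5) shows its estimated savings.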
In Chapter 10, “Operations using the CLI” on page 565, we describe how to use the
command-line interface (CLI) and Advanced Copy Services.
In Chapter 11, “Operations using the GUI” on page 715, we explain how to use the GUI and
Advanced Copy Services.
9.1 FlashCopy
By using the FlashCopy function of the IBM Spectrum Virtualize, you can perform a
point-in-time copy of one or more volumes. In this section, we describe the inner workings of
FlashCopy and provide details of its configuration and use.
You can use FlashCopy to help you solve critical and challenging business needs that require
duplication of data on your source volume. Volumes can remain online and active while you
create consistent copies of the data sets. Because the copy is performed at the block level, it
operates below the host operating system and its cache. Therefore, the copy is not apparent
to the host.
Important: Because FlashCopy operates at the block level below the host operating
system and cache, those levels do need to be flushed for consistent FlashCopies.
While the FlashCopy operation is performed, the source volume is frozen briefly to initialize
the FlashCopy bitmap and then I/O can resume. Although several FlashCopy options require
the data to be copied from the source to the target in the background, which can take time to
complete, the resulting data on the target volume is presented so that the copy appears to
complete immediately. This process is performed by using a bitmap (or bit array), which tracks
changes to the data after the FlashCopy is started, and an indirection layer, which allows data
to be read from the source volume transparently.
The business applications for FlashCopy are wide-ranging. Common use cases for
FlashCopy include, but are not limited to, the following examples:
Rapidly creating consistent backups of dynamically changing data
Rapidly creating consistent copies of production data to facilitate data movement or
migration between hosts
Rapidly creating copies of production data sets for application development and testing
Rapidly creating copies of production data sets for auditing purposes and data mining
Rapidly creating copies of production data sets for quality assurance
Regardless of your business needs, FlashCopy within the IBM Spectrum Virtualize is flexible
and offers a broad feature set, which makes it applicable to many scenarios.
After the FlashCopy is performed, the resulting image of the data can be backed up to tape,
as though it were the source system. After the copy to tape is complete, the image data is
redundant and the target volumes can be discarded. For time-limited applications, such as
these examples, “no copy” or incremental FlashCopy is used most often. The use of these
methods puts less load on your infrastructure.
When FlashCopy is used for backup purposes, the target data usually is managed as
read-only at the operating system level. This approach provides extra security by ensuring
that your target data was not modified and remains true to the source.
This approach can be used for various applications, such as recovering your production
database application after an errant batch process that caused extensive damage.
In addition to the restore option, which copies the original blocks from the target volume to
modified blocks on the source volume, the target can be used to perform a restore of
individual files. To do that, you need to make the target available on a host. We suggest that
you do not make the target available to the source host, because seeing duplicates of the same
disks causes problems for most host operating systems. Copy the files to the source through the
normal host data copy methods for your environment.
This method differs from the other migration methods, which are described later in this
chapter. Common uses for this capability are host and back-end storage hardware refreshes.
To ensure the integrity of the copy that is made, it is necessary to flush the host operating
system and application cache for any outstanding reads or writes before the FlashCopy
operation is performed. Failing to flush the host operating system and application cache
produces what is referred to as a crash consistent copy. The resulting copy requires the same
type of recovery procedure, such as log replay and file system checks, that is required
following a host crash. FlashCopies that are crash consistent often can be used following file
system and application recovery procedures.
Note: Although the best way to perform FlashCopy is to flush host cache first, certain
companies, such as Oracle, support using snapshots without it, as stated in Metalink note
604683.1.
Various operating systems and applications provide facilities to stop I/O operations and
ensure that all data is flushed from host cache. If these facilities are available, they can be
used to prepare for a FlashCopy operation. When this type of facility is unavailable, the host
cache must be flushed manually by quiescing the application and unmounting the file system
or drives.
Preferred practice: From a practical standpoint, when you have an application that is
backed by a database and you want to make a FlashCopy of that application’s data, it is
sufficient in most cases to use the write-suspend method that is available in most modern
databases, because the database maintains strict control over I/O. This approach is an
alternative to flushing data from both the application and the backing database, which is the
recommended method because it is safer. However, the write-suspend method can be used
when such facilities do not exist or when your environment is time sensitive.
FlashCopy produces an exact copy of the source volume, including any metadata that was
written by the host operating system, Logical Volume Manager (LVM), and applications.
The source volume and target volume are available (almost) immediately following the
FlashCopy operation.
The source and target volumes must be the same “virtual” size.
The source and target volumes must be on the same SVC clustered system.
The source and target volumes do not need to be in the same I/O Group or storage pool.
The storage pool extent sizes can differ between the source and target.
The source volumes can have up to 256 target volumes (Multiple Target FlashCopy).
The target volumes can be the source volumes for other FlashCopy relationships
(cascaded FlashCopy).
Consistency Groups are supported to enable FlashCopy across multiple volumes at the
same time.
Up to 255 FlashCopy Consistency Groups are supported per system.
Up to 512 FlashCopy mappings can be placed in one Consistency Group.
The target volume can be updated independently of the source volume.
Bitmaps that are governing I/O redirection (I/O indirection layer) are maintained in both
nodes of the IBM SVC I/O Group to prevent a single point of failure.
FlashCopy mapping and Consistency Groups can be automatically withdrawn after the
completion of the background copy.
Thin-provisioned FlashCopy (or Snapshot in the GUI) uses disk space only when updates
are made to the source or target data, and not for the entire capacity of a volume copy.
FlashCopy licensing is based on the virtual capacity of the source volumes.
Incremental FlashCopy copies all of the data when you first start FlashCopy and then only
the changes when you stop and start FlashCopy mapping again. Incremental FlashCopy
can substantially reduce the time that is required to re-create an independent image.
Reverse FlashCopy enables FlashCopy targets to become restore points for the source
without breaking the FlashCopy relationship and without having to wait for the original
copy operation to complete.
The maximum number of supported FlashCopy mappings is 4096 per IBM SVC system.
The size of the source and target volumes cannot be altered (increased or decreased)
while a FlashCopy mapping is defined.
A key advantage of the IBM Spectrum Virtualize Multiple Target Reverse FlashCopy function
is that the reverse FlashCopy does not destroy the original target. This allows processes that
are using the target, such as a tape backup, to continue uninterrupted.
IBM Spectrum Virtualize also provides the ability to create an optional copy of the source
volume before the reverse copy operation starts. This ability to restore back to the
original source data can be useful for diagnostic purposes.
The production disk is instantly available with the backup data. Figure 9-1 shows an example
of Reverse FlashCopy.
Regardless of whether the initial FlashCopy map (volume X → volume Y) is incremental, the
Reverse FlashCopy operation copies the modified data only.
Consistency Groups are reversed by creating a set of new reverse FlashCopy maps and
adding them to a new reverse Consistency Group. Consistency Groups cannot contain more
than one FlashCopy map with the same target volume.
IBM Tivoli Storage FlashCopy Manager provides fast application-aware backups and restores
using advanced point-in-time image technologies in the IBM Spectrum Virtualize. In addition,
it provides an optional integration with IBM Tivoli Storage Manager for the long-term storage
of snapshots. Figure 9-2 shows the integration of Tivoli Storage Manager and FlashCopy
Manager from a conceptual level.
Figure 9-2 Tivoli Storage Manager for Advanced Copy Services features
Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for
Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli
FlashCopy Manager, you can coordinate and automate host preparation steps before you
issue FlashCopy start commands to ensure that a consistent backup of the application is
made. You can put databases into hot backup mode and flush the file system cache before
starting the FlashCopy.
FlashCopy Manager also allows for easier management of on-disk backups that use
FlashCopy, and provides a simple interface to perform the “reverse” operation.
Released December 2013, IBM Tivoli FlashCopy Manager V4.1 adds support for VMware 5.5
and vSphere environments with Site Recovery Manager (SRM), with instant restore for
VMware Virtual Machine File System (VMFS) data stores. This release also integrates with
IBM Tivoli Storage Manager for Virtual Environments, and it allows backup of point-in-time
images into the Tivoli Storage Manager infrastructure for long-term storage.
The addition of VMware vSphere brings support and application awareness for FlashCopy
Manager to the following applications:
Microsoft Exchange and Microsoft SQL Server, including SQL Server 2012 Availability
Groups
IBM DB2 and Oracle databases, for use either with or without SAP environments
IBM General Parallel File System (GPFS) software snapshots for DB2 pureScale®
Other applications supported through script customizing
For more information about IBM Tivoli FlashCopy Manager, see this website:
http://www.ibm.com/software/products/en/tivostorflasmana/
Before you start a FlashCopy (regardless of the type and specified options), you must issue a
prestartfcmap or prestartfcconsistgrp, which puts the IBM SVC cache into write-through
mode and flushes the I/O that is currently bound for your volume. After FlashCopy is started,
an effective copy of a source volume to a target volume is created. The content of the source
volume is presented immediately on the target volume, and the original content of the target
volume is lost. This FlashCopy operation is also referred to as a time-zero copy (T0).
Note: Instead of using prestartfcmap or prestartfcconsistgrp, you can also use the
-prep parameter in the startfcmap or startfcconsistgrp command to prepare and start
FlashCopy in one step.
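A minimal sketch of both approaches follows; fcmap0 is a hypothetical FlashCopy mapping name:
IBM_2145:ITSO_SVC_DH8:superuser>prestartfcmap fcmap0
IBM_2145:ITSO_SVC_DH8:superuser>startfcmap fcmap0
IBM_2145:ITSO_SVC_DH8:superuser>startfcmap -prep fcmap0
The first two commands prepare and then start the mapping in separate steps; the third command combines both steps into one.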
The source and target volumes are available for use immediately after the FlashCopy
operation. The FlashCopy operation creates a bitmap that is referenced and maintained to
direct I/O requests within the source and target relationship. This bitmap is updated to reflect
the active block locations as data is copied in the background from the source to the target
and updates are made to the source.
For more information about background copy, see 9.4.5, “Grains and the FlashCopy bitmap”
on page 489. Figure 9-5 shows the redirection of the host I/O toward the source volume and
the target volume.
Important: As with any point-in-time copy technology, you are bound by operating system
and application requirements for interdependent data and the restriction to an entire
volume.
The source and target volumes must belong to the same IBM SVC system, but they do not
have to be in the same I/O Group or storage pool. FlashCopy associates a source volume to
a target volume through FlashCopy mapping.
To become members of a FlashCopy mapping, the source and target volumes must be the
same size. Volumes that are members of a FlashCopy mapping cannot have their size
increased or decreased while they are members of the FlashCopy mapping.
A FlashCopy mapping is the act of creating a relationship between a source volume and a
target volume. FlashCopy mappings can be stand-alone or a member of a Consistency
Group. You can perform the actions of preparing, starting, or stopping FlashCopy on either a
stand-alone mapping or a Consistency Group.
Figure 9-7 also shows four targets and mappings that are taken from a single source, with
their interdependencies. In this example, Target 1 is the oldest (as measured from the time
that it was started) through to Target 4, which is the newest. The ordering is important
because of how the data is copied when multiple target volumes are defined and because of
the dependency chain that results.
A write to the source volume does not cause its data to be copied to all of the targets. Instead,
it is copied to the newest target volume only (Target 4 in Figure 9-7). The older targets refer to
new targets first before referring to the source.
From the point of view of an intermediate target disk (neither the oldest nor the newest), it
treats the set of newer target volumes and the true source volume as a type of composite
source.
It treats all older volumes as a kind of target (and behaves like a source to them). If the
mapping for an intermediate target volume shows 100% progress, its target volume contains
a complete set of data. In this case, mappings treat the set of newer target volumes (up to and
including the 100% progress target) as a form of composite source. A dependency
relationship exists between a particular target and all newer targets (up to and including a
target that shows 100% progress) that share the source until all data is copied to this target
and all older targets.
For more information about Multiple Target FlashCopy, see 9.4.6, “Interaction and
dependency between multiple target FlashCopy mappings” on page 490.
When Consistency Groups are used, the FlashCopy commands are issued to the FlashCopy
Consistency Group, which performs the operation on all FlashCopy mappings that are
contained within the Consistency Group at the same time.
Figure 9-8 shows a Consistency Group that includes two FlashCopy mappings.
Dependent writes
To show why it is crucial to use Consistency Groups when a data set spans multiple volumes,
consider the following typical sequence of writes for a database update transaction:
1. A write is run to update the database log, which indicates that a database update is about
to be performed.
2. A second write is run to perform the actual update to the database.
3. A third write is run to update the database log, which indicates that the database update
completed successfully.
The database ensures the correct ordering of these writes by waiting for each step to
complete before the next step is started. However, if the database log (updates 1 and 3) and
the database (update 2) are on separate volumes, it is possible for the FlashCopy of the
database volume to occur before the FlashCopy of the database log. This sequence can
result in the target volumes seeing writes 1 and 3 but not 2 because the FlashCopy of the
database volume occurred before the write was completed.
In this case, if the database was restarted by using the backup that was made from the
FlashCopy target volumes, the database log indicates that the transaction completed
successfully. In fact, it did not complete successfully because the FlashCopy of the volume
with the database file was started (the bitmap was created) before the write completed to the
volume. Therefore, the transaction is lost and the integrity of the database is in question.
To overcome the issue of dependent writes across volumes and to create a consistent image
of the client data, a FlashCopy operation must be performed on multiple volumes as an
atomic operation. To accomplish this, IBM Spectrum Virtualize supports the concept of
Consistency Groups.
A FlashCopy Consistency Group can contain up to 512 FlashCopy mappings. The maximum
number of FlashCopy mappings that is supported per SVC system (at V7.4 and later) is 4,096.
FlashCopy commands can then be issued to the FlashCopy Consistency Group and,
therefore, simultaneously for all of the FlashCopy mappings that are defined in the
Consistency Group.
For example, when a FlashCopy start command is issued to the Consistency Group, all of
the FlashCopy mappings in the Consistency Group are started at the same time. This
simultaneous start results in a point-in-time copy that is consistent across all of the FlashCopy
mappings that are contained in the Consistency Group.
If a particular volume is the source volume for multiple FlashCopy mappings, you might want
to create separate Consistency Groups to separate each mapping of the same source
volume. Regardless of whether the source volume with multiple target volumes is in the same
Consistency Group or in separate Consistency Groups, the resulting FlashCopy produces
multiple identical copies of the source data.
Maximum configurations
Table 9-1 on page 488 lists the FlashCopy properties and maximum configurations.
FlashCopy targets per source: 256. This maximum is the number of FlashCopy mappings that
can exist with the same source volume.
FlashCopy mappings per system: 4,096. The number of mappings is no longer limited by the
number of volumes in the system, so the FlashCopy component limit applies.
FlashCopy Consistency Groups per system: 255. This maximum is an arbitrary limit that is
policed by the software.
FlashCopy volume capacity per I/O Group: 4 PiB. This maximum is a limit on the quantity of
FlashCopy mappings that are using bitmap space from this I/O Group. This maximum
configuration uses all 4 GiB of bitmap space for the I/O Group and allows no Metro Mirror or
Global Mirror bitmap space. The default is 40 TiB.
FlashCopy mappings per Consistency Group: 512. This limit exists because of the time that is
taken to prepare a Consistency Group with many mappings.
To show how the FlashCopy indirection layer works, we examine what happens when a
FlashCopy mapping is prepared and then started.
When a FlashCopy mapping is prepared and started, the following sequence is applied:
1. Flush the write cache to the source volume or volumes that are part of a Consistency
Group.
2. Put cache into write-through mode on the source volumes.
3. Discard cache for the target volumes.
4. Establish a sync point on all of the source volumes in the Consistency Group (which
creates the FlashCopy bitmap).
5. Ensure that the indirection layer governs all of the I/O to the source volumes and target
volumes.
6. Enable cache on the source volumes and target volumes.
FlashCopy provides the semantics of a point-in-time copy by using the indirection layer, which
intercepts I/O that is directed at the source or target volumes. The act of starting a FlashCopy
mapping causes this indirection layer to become active in the I/O path, which occurs
automatically across all FlashCopy mappings in the Consistency Group. The indirection layer
then determines how each I/O is to be routed, which is based on the following factors:
The volume and the logical block address (LBA) to which the I/O is addressed
Its direction (read or write)
The state of an internal data structure, the FlashCopy bitmap
The indirection layer allows the I/O to go through to the underlying volume, redirects the I/O
from the target volume to the source volume, or queues the I/O while it arranges for data to be
copied from the source volume to the target volume. To explain in more detail which action is
applied for each I/O, we first look at the FlashCopy bitmap.
The FlashCopy bitmap dictates read and write behavior for the source and target volumes.
Source reads
Reads are performed from the source volume, which is the same as for non-FlashCopy
volumes.
Source writes
A write to the source volume causes one of the following actions:
– If the grain was not yet copied to the target, the grain is copied before the actual write is
performed to the source. The bitmap is then updated to indicate that this grain was copied
to the target.
– If the grain was already copied, the write is performed to the source as usual.
Target reads
Reads are performed from the target if the grain was copied. Otherwise, the read is
performed from the source and no copy is performed.
Target writes
A write to the target volume causes one of the following actions:
– If the grain was not yet copied from the source to the target, the grain is copied from the
source to the target before the actual write is performed to the target. The bitmap is then
updated to indicate that this grain was copied to the target.
– If the entire grain is being updated on the target, the target is marked as split with the
source (if there is no I/O error during the write) and the write goes directly to the target.
– If the grain in question was already copied from the source to the target, the write goes
directly to the target.
Figure 9-9 on page 490 shows how the background copy runs while I/Os are handled
according to the indirection layer algorithm.
Target 0 is not dependent on a source because it completed copying. Target 0 has two
dependent mappings (Target 1 and Target 2).
Target 1 depends on Target 0. It remains dependent until all of Target 1 is copied. Target 2
depends on it because Target 2 is 20% copy complete. After all of Target 1 is copied, it can
then move to the idle_copied state.
Target 2 is dependent upon Target 0 and Target 1 and remains dependent until all of Target 2
is copied. No target depends on Target 2; therefore, when all of the data is copied to Target 2,
it can move to the idle_copied state.
If the grain of the next oldest mapping is not yet copied, it must be copied before the write can
proceed to preserve the contents of the next oldest mapping. The data that is written to the
next oldest mapping comes from a target or source.
If the grain in the target that is being written is not yet copied, the grain is copied from the
oldest copied grain in the mappings that are newer than the target or the source if none are
copied. After this copy is done, the write can be applied to the target.
Note: The stopping copy process can be ongoing for several mappings that share the
source at the same time. At the completion of this process, the mapping automatically
makes an asynchronous state transition to the stopped state or the idle_copied state if the
mapping was in the copying state with progress = 100%.
For example, if the mapping that is associated with Target 0 was issued a stopfcmap or
stopfcconsistgrp command, Target 0 enters the stopping state while a process copies the
data of Target 0 to Target 1. After all of the data is copied, Target 0 enters the stopped state
and Target 1 is no longer dependent upon Target 0; however, Target 1 remains dependent on
Target 2.
The following summary shows how the indirection layer handles reads and writes to a target
volume, depending on whether the grain was already copied:
Target volume, grain not yet copied:
– Read: If any newer targets exist for this source in which this grain was copied, read from
the oldest of these targets. Otherwise, read from the source.
– Write: Hold the write. Check the dependency target volumes to see whether the grain was
copied. If the grain is not copied to the next oldest target for this source, copy the grain to
the next oldest target. Then, write to the target.
Target volume, grain already copied:
– Read: Read from the target volume.
– Write: Write to the target volume.
This copy-on-write process introduces significant latency into write operations. To isolate the
active application from this additional latency, the FlashCopy indirection layer is placed
logically between upper and lower cache. Therefore, the additional latency that is introduced
by the copy-on-write process is encountered only by the internal cache operations and not by
the application.
The logical placement of the FlashCopy indirection layer is shown in Figure 9-12.
The introduction of the two-level cache provides additional performance improvements to the
FlashCopy mechanism. Because the FlashCopy layer is now above lower cache in the
software stack, it can benefit from read prefetching and coalescing writes to backend storage.
Preparing FlashCopy is also much faster because upper cache write data does not have to
go directly to backend storage, but only to the lower cache layer. Additionally, in Multiple
Target FlashCopy the target volumes of the same image share cache data. This design
differs from previous IBM Spectrum Virtualize code versions, in which each volume had its
own copy of cached data.
Example 9-1 Listing the size of a volume in bytes and creating a volume of equal size
IBM_2145:ITSO SVC DH8:superuser>lsvdisk -bytes test_image_vol_1
id 12
name test_image_vol_1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 3
mdisk_grp_name temp_migration_pool
capacity 21474836480
type image
formatted no
formatting no
mdisk_id 5
mdisk_name mdisk3
FC_id
FC_name
RC_id
RC_name
vdisk_UID 600507680283818B300000000000000E
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
RC_change no
compressed_copy_count 0
access_IO_group_count 1
last_access_time
parent_mdisk_grp_id 3
parent_mdisk_grp_name temp_migration_pool
owner_type none
owner_id
owner_name
encrypt no
volume_id 12
volume_name test_image_vol_1
function
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 3
mdisk_grp_name temp_migration_pool
type image
mdisk_id 5
mdisk_name mdisk3
fast_write_state empty
used_capacity 21474836480
real_capacity 21474836480
free_capacity 0
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status measured
tier ssd
tier_capacity 0
tier enterprise
tier_capacity 21474836480
tier nearline
tier_capacity 0
compressed_copy no
uncompressed_used_capacity 21474836480
parent_mdisk_grp_id 3
parent_mdisk_grp_name temp_migration_pool
encrypt no
Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize commands to
modify the size of the volume.
You can use an image mode volume as a FlashCopy source volume or target volume.
Overview of a FlashCopy sequence of events: The following tasks show the FlashCopy
sequence:
1. Associate the source data set with a target location (one or more source and target
volumes).
2. Create a FlashCopy mapping for each source volume to the corresponding target
volume. The target volume must be equal in size to the source volume.
3. Discontinue access to the target (application dependent).
4. Prepare (pre-trigger) the FlashCopy:
a. Flush the cache for the source.
b. Discard the cache for the target.
5. Start (trigger) the FlashCopy:
a. Pause I/O (briefly) on the source.
b. Resume I/O on the source.
c. Start I/O on the target.
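The following command-line sketch shows one way to perform this sequence for a two-volume
data set by using a Consistency Group. The volume, target, and group names are hypothetical,
the target volumes are assumed to already exist and match their sources in size, and the copy
rate of 50 is an arbitrary choice:
IBM_2145:ITSO SVC DH8:superuser>mkfcconsistgrp -name FCCG_DB
IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source DB_LOG -target DB_LOG_T1 -consistgrp FCCG_DB -copyrate 50
IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source DB_DATA -target DB_DATA_T1 -consistgrp FCCG_DB -copyrate 50
IBM_2145:ITSO SVC DH8:superuser>prestartfcconsistgrp FCCG_DB
IBM_2145:ITSO SVC DH8:superuser>startfcconsistgrp FCCG_DB
The prestartfcconsistgrp command corresponds to step 4 (prepare) and startfcconsistgrp
to step 5 (trigger); the Consistency Group must reach the prepared state before it is started.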
Flush done The FlashCopy mapping automatically moves from the preparing state to
the prepared state after all cached data for the source is flushed and all
cached data for the target is no longer valid.
Start When all of the FlashCopy mappings in a Consistency Group are in the
prepared state, the FlashCopy mappings can be started. To preserve the
cross-volume Consistency Group, the start of all of the FlashCopy
mappings in the Consistency Group must be synchronized correctly with
respect to the I/Os that are directed at the volumes. This synchronization
is achieved by using the startfcmap or startfcconsistgrp command.
Flush failed If the flush of data from the cache cannot be completed, the FlashCopy
mapping enters the stopped state.
Copy complete After all of the source data is copied to the target and no dependent
mappings exist, the state is set to copied. If the option to automatically
delete the mapping after the background copy completes is specified, the
FlashCopy mapping is deleted automatically. If this option is not
specified, the FlashCopy mapping is not deleted automatically and it can
be reactivated by preparing and starting again.
Idle_or_copied
The source and target volumes act as independent volumes even if a mapping exists between
the two. Read and write caching is enabled for the source and the target volumes.
If the mapping is incremental and the background copy is complete, the mapping records the
differences between the source and target volumes only. If the connection to both nodes in
the I/O group that the mapping is assigned to is lost, the source and target volumes are
offline.
Copying
The copy is in progress. Read and write caching is enabled on the source and the target
volumes.
Prepared
The mapping is ready to start. The target volume is online, but is not accessible. The target
volume cannot perform read or write caching. Read and write caching is failed by the Small
Computer System Interface (SCSI) front end as a hardware error. If the mapping is
incremental and a previous mapping is completed, the mapping records the differences
between the source and target volumes only. If the connection to both nodes in the I/O Group
that the mapping is assigned to is lost, the source and target volumes go offline.
Preparing
The target volume is online, but not accessible. The target volume cannot perform read or
write caching. Read and write caching is failed by the SCSI front end as a hardware error. Any
changed write data for the source volume is flushed from the cache. Any read or write data for
the target volume is discarded from the cache. If the mapping is incremental and a previous
mapping is completed, the mapping records the differences between the source and target
volumes only. If the connection to both nodes in the I/O Group that the mapping is assigned to
is lost, the source and target volumes go offline.
Performing the cache flush that is required as part of the startfcmap or startfcconsistgrp
command causes I/Os to be delayed while they are waiting for the cache flush to complete. To
overcome this problem, FlashCopy supports the prestartfcmap or prestartfcconsistgrp
command, which prepares for a FlashCopy start while still allowing I/Os to continue to the
source volume.
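As a minimal sketch of this approach for a stand-alone mapping (the mapping name fcmap_test
is hypothetical), the mapping is prepared in advance, checked, and then started after it reaches
the prepared state:
IBM_2145:ITSO SVC DH8:superuser>prestartfcmap fcmap_test
IBM_2145:ITSO SVC DH8:superuser>lsfcmap fcmap_test
IBM_2145:ITSO SVC DH8:superuser>startfcmap fcmap_test
Alternatively, the startfcmap or startfcconsistgrp command can be run with the -prep option
to prepare and start in a single step.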
In the preparing state, the FlashCopy mapping is prepared by completing the following steps:
1. Flushing any modified write data that is associated with the source volume from the cache.
Read data for the source is left in the cache.
2. Placing the cache for the source volume into write-through mode so that subsequent
writes wait until data is written to disk before the write command that is received from the
host is complete.
3. Discarding any read or write data that is associated with the target volume from the cache.
Stopped
The mapping is stopped because you issued a stop command or an I/O error occurred. The
target volume is offline and its data is lost. To access the target volume, you must restart or
delete the mapping. The source volume is accessible and the read and write cache is
enabled. If the mapping is incremental, the mapping is recording write operations to the
source volume. If the connection to both nodes in the I/O Group that the mapping is assigned
to is lost, the source and target volumes go offline.
Stopping
The mapping is copying data to another mapping.
If the background copy process is complete, the target volume is online while the stopping
copy process completes.
If the background copy process is not complete, data is discarded from the target volume
cache. The target volume is offline while the stopping copy process runs.
Suspended
The mapping started, but it did not complete. Access to the metadata is lost, which causes
the source and target volume to go offline. When access to the metadata is restored, the
mapping returns to the copying or stopping state and the source and target volumes return
online. The background copy process resumes. Any data that was not flushed and was
written to the source or target volume before the suspension is in cache until the mapping
leaves the suspended state.
Performance: The best performance is obtained when the grain size of the
thin-provisioned volume is the same as the grain size of the FlashCopy mapping.
The benefit of the use of a FlashCopy mapping with background copy enabled is that the
target volume becomes a real clone (independent from the source volume) of the FlashCopy
mapping source volume after the copy is complete. When the background copy function is not
performed, the target volume remains a valid copy of the source data only while the
FlashCopy mapping remains in place.
The background copy rate is a property of a FlashCopy mapping that is defined as a value
0 - 100. The background copy rate can be defined and changed dynamically for individual
FlashCopy mappings. A value of 0 disables the background copy.
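For example, the background copy rate of an existing mapping can be changed while the
mapping is active, and its progress can then be monitored. The mapping name in this sketch is
hypothetical and the value of 80 is an arbitrary choice:
IBM_2145:ITSO SVC DH8:superuser>chfcmap -copyrate 80 fcmap_test
IBM_2145:ITSO SVC DH8:superuser>lsfcmapprogress fcmap_test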
Table 9-5 shows the relationship of the background copy rate value to the attempted number
of grains to be copied per second.
Value      Data copied per second   256 KiB grains per second   64 KiB grains per second
11 - 20    256 KiB                  1                           4
21 - 30    512 KiB                  2                           8
31 - 40    1 MiB                    4                           16
41 - 50    2 MiB                    8                           32
51 - 60    4 MiB                    16                          64
61 - 70    8 MiB                    32                          128
71 - 80    16 MiB                   64                          256
The grains per second numbers represent the maximum number of grains that the SVC
copies per second, assuming that the bandwidth to the managed disks (MDisks) can
accommodate this rate.
If the SVC cannot achieve these copy rates because of insufficient bandwidth from the SVC
nodes to the MDisks, the background copy I/O contends for resources on an equal basis with
the I/O that is arriving from the hosts. Background copy I/O and I/O that is arriving from the
hosts may see an increase in latency. Background copy and foreground I/O continue to make
progress, and do not stop, hang, or cause the node to fail. The background copy is performed
by both nodes of the I/O Group in which the source volume is found.
However, there is a lock for each grain. The lock can be in shared or exclusive mode. For
multiple targets, a common lock is shared by the mappings that are derived from a particular
source volume. The lock is used in the following modes under the following conditions:
The lock is held in shared mode during a read from the target volume, which touches a
grain that was not copied from the source.
The lock is held in exclusive mode while a grain is being copied from the source to the
target.
If the lock is held in shared mode and another process wants to use the lock in shared mode,
this request is granted unless a process is already waiting to use the lock in exclusive mode.
If the lock is held in shared mode and it is requested to be exclusive, the requesting process
must wait until all holders of the shared lock free it.
Similarly, if the lock is held in exclusive mode, a process that is wanting to use the lock in
shared or exclusive mode must wait for it to be freed.
Node failure
Normally, two copies of the FlashCopy bitmap are maintained. One copy of the FlashCopy
bitmap is on each of the two nodes that make up the I/O Group of the source volume. When a
node fails, one copy of the bitmap for all FlashCopy mappings whose source volume is a
member of the failing node’s I/O Group becomes inaccessible. FlashCopy continues with a
single copy of the FlashCopy bitmap that is stored as non-volatile in the remaining node in the
source I/O Group. The system metadata is updated to indicate that the missing node no
longer holds a current bitmap. When the failing node recovers or a replacement node is
added to the I/O Group, the bitmap redundancy is restored.
Because the storage area network (SAN) that links the SVC nodes to each other and to the
MDisks is made up of many independent links, a subset of the nodes can be temporarily
isolated from several of the MDisks. When this situation happens, the managed disks are said
to be path offline on certain nodes.
Other nodes: Other nodes might see the managed disks as online because their
connection to the managed disks is still functioning.
When an MDisk enters the path offline state on an SVC node, all of the volumes that have
extents on the MDisk also become path offline. Again, this situation happens only on the
affected nodes. When a volume is path offline on a particular SVC node, the host access to
that volume through the node fails with the SCSI check condition indicating offline.
Table 9-6 on page 503 lists the supported combinations of FlashCopy and remote copy. In the
table, remote copy refers to Metro Mirror and Global Mirror.
Although these presets meet most FlashCopy requirements, they do not provide support for
all possible FlashCopy options. If more specialized options are required that are not
supported by the presets, the options must be performed by using CLI commands.
In this section, we describe the three preset options and their use cases.
Snapshot
This preset creates a copy-on-write point-in-time copy. The snapshot is not intended to be an
independent copy. Instead, the copy is used to maintain a view of the production data at the
time that the snapshot is created. Therefore, the snapshot holds only the data from regions of
the production volume that changed since the snapshot was created. Because the snapshot
preset uses thin provisioning, only the capacity that is required for the changes is used.
Use case
The user wants to produce a copy of a volume without affecting the availability of the volume.
The user does not anticipate many changes to be made to the source or target volume; a
significant proportion of the volumes remains unchanged.
By ensuring that only changes require a copy of data to be made, the total amount of disk
space that is required for the copy is reduced; therefore, many snapshot copies can be used
in the environment.
Snapshots are useful for providing protection against corruption or similar issues with the
validity of the data, but they do not provide protection from physical controller failures.
Snapshots can also provide a vehicle for performing repeatable testing (including “what-if”
modeling that is based on production data) without requiring a full copy of the data to be
provisioned.
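A snapshot-style mapping can be approximated from the CLI by combining a thin-provisioned
target with a copy rate of 0, as in the following sketch. The pool, volume, and mapping names,
the volume size, and the 2% real size are hypothetical choices:
IBM_2145:ITSO SVC DH8:superuser>mkvdisk -mdiskgrp Pool0 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -name VOL1_SNAP
IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source VOL1 -target VOL1_SNAP -copyrate 0 -name snap_VOL1
IBM_2145:ITSO SVC DH8:superuser>startfcmap -prep snap_VOL1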
Clone
The clone preset creates a replica of the volume, which can be changed without affecting the
original volume. After the copy completes, the mapping that was created by the preset is
automatically deleted.
Use case
Users want a copy of the volume that they can modify without affecting the original volume.
After the clone is established, there is no expectation that it is refreshed or that there is any
further need to reference the original production data again. If the source is thin-provisioned,
the target is thin-provisioned for the auto-create target.
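A clone-style mapping can be approximated from the CLI by enabling background copy and the
-autodelete option so that the mapping removes itself when the copy completes. The names
and the copy rate in this sketch are hypothetical:
IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source VOL1 -target VOL1_CLONE -copyrate 50 -autodelete -name clone_VOL1
IBM_2145:ITSO SVC DH8:superuser>startfcmap -prep clone_VOL1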
Backup
The backup preset creates a point-in-time replica of the production data. After the copy
completes, the backup view can be refreshed from the production data, with minimal copying
of data from the production volume to the backup volume.
Use case
The user wants to create a copy of the volume that can be used as a backup if the source
becomes unavailable, as in case of loss of the underlying physical controller. The user plans
to update the secondary copy periodically and does not want to suffer the overhead of
creating a new copy each time. (Incremental FlashCopy times are faster than a full copy,
which helps to reduce the window in which the new backup is not yet fully effective.) If the
source is thin-provisioned, the target is also thin-provisioned in this option for the auto-create
target.
Another use case, which the preset name does not suggest, is to create and maintain (periodically
refresh) an independent image that can be subjected to intensive I/O (for example, data
mining) without affecting the source volume’s performance.
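A backup-style mapping can be approximated from the CLI with the -incremental option so
that later restarts copy only the grains that changed since the previous copy. The names and
the copy rate in this sketch are hypothetical:
IBM_2145:ITSO SVC DH8:superuser>mkfcmap -source VOL1 -target VOL1_BKP -copyrate 50 -incremental -name bkp_VOL1
IBM_2145:ITSO SVC DH8:superuser>startfcmap -prep bkp_VOL1
After the first copy completes, running startfcmap -prep bkp_VOL1 again refreshes the
backup by copying only the changed grains.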
Volume mirroring is provided by a specific volume mirroring function in the I/O stack, and it
cannot be manipulated like a FlashCopy or other types of copy volumes. However, this feature
provides migration functionality, which can be obtained by splitting the mirrored copy from the
source or by using the “migrate to” function. Volume mirroring cannot control backend storage
mirroring or replication.
With volume mirroring, host I/O completes when both copies are written. Before V6.3, this
feature took a copy offline when it had an I/O timeout, and then resynchronized it with the
online copy after it recovered. Starting with V6.3, this feature is enhanced with a tunable
latency tolerance. This tolerance provides an option to give preference to losing the
redundancy between the two copies. This tunable timeout value is either latency or
redundancy.
The latency tuning option, which is set with the chvdisk -mirrorwritepriority latency
command, is the default. It prioritizes host I/O latency, which yields a preference to host I/O
over availability. However, you might need to give preference to redundancy in your
environment when availability is more important than I/O response time. Use the
chvdisk -mirrorwritepriority redundancy command to set the redundancy option.
Regardless of which option you choose, volume mirroring can provide extra protection for
your environment.
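For example, the current setting of an existing mirrored volume appears in the
mirror_write_priority field of the detailed lsvdisk view, and it can be changed as follows
(the volume name is hypothetical):
IBM_2145:ITSO SVC DH8:superuser>lsvdisk VOL1
IBM_2145:ITSO SVC DH8:superuser>chvdisk -mirrorwritepriority redundancy VOL1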
Migration: Although these migration methods do not disrupt access, you must take a brief
outage to install the host drivers for your SVC if they are not installed. For more
information, see the IBM SVC Host Attachment User’s Guide, SC26-7905. Ensure that you
consult the revision of the document that applies to your SVC.
With volume mirroring, you can move data to different MDisks within the same storage pool or
move data between different storage pools. Using volume mirroring instead of volume
migration is beneficial because, with volume mirroring, the storage pools do not need to have
the same extent size, as is required for volume migration.
Note: Volume mirroring does not create a second volume before you split copies. Volume
mirroring adds a second copy of the data under the same volume so the result is one
volume presented to the host with two copies of data connected to this volume. Only
splitting copies creates another volume and then both volumes have only one copy of the
data.
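The following sketch illustrates that behavior; the volume and pool names are hypothetical. A
second copy is added under the existing volume, its synchronization is monitored, and the copy
is later split off as a separate volume:
IBM_2145:ITSO SVC DH8:superuser>addvdiskcopy -mdiskgrp Pool1 VOL1
IBM_2145:ITSO SVC DH8:superuser>lsvdisksyncprogress VOL1
IBM_2145:ITSO SVC DH8:superuser>splitvdiskcopy -copy 1 -name VOL1_split VOL1
The splitvdiskcopy command should be run only after the copies are synchronized.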
Starting with V7.3 and the introduction of the new cache architecture, mirrored volume
performance has been significantly improved. Now, lower cache is beneath the volume
mirroring layer, which means that both copies have their own cache. This approach helps in
cases of copies of different types, for example, generic and compressed, because each copy
uses its independent cache and performs its own read prefetch. Destaging of the cache can
now be independent for each copy, so one copy does not affect the performance of a second
copy.
Also, because the Storwize destage algorithm is MDisk aware, it can tune or adapt the
destaging process, depending on the MDisk type and utilization, for each copy independently.
With a typical Ethernet network data flow, the data transfer can slow down over time. This
condition occurs because of the latency that is caused by waiting for the acknowledgment of
each set of packets that is sent. The next packet set cannot be sent until the previous packet
is acknowledged, as shown in Figure 9-13.
However, by using the embedded IP replication this behavior can be eliminated with the
enhanced parallelism of the data flow by using multiple virtual connections (VC) that share IP
links and addresses. The artificial intelligence engine can dynamically adjust the number of
VCs, receive window size, and packet size as appropriate to maintain optimum performance.
While the engine is waiting for one VC’s ACK, it sends more packets across other VCs. If
packets are lost from any VC, data is automatically retransmitted, as shown in Figure 9-14.
Figure 9-14 Optimized network data flow by using Bridgeworks SANSlide technology
For more information about this technology, see IBM Storwize V7000 and SANSlide
Implementation, REDP-5023.
With native IP partnership, the following Copy Services features are supported:
Metro Mirror (MM)
Referred to as synchronous replication, MM provides a consistent copy of a source virtual
disk on a target virtual disk. Data is written to the target virtual disk synchronously after it
is written to the source virtual disk so that the copy is continuously updated.
Global Mirror (GM) and GM with Change Volumes
Referred to as asynchronous replication, GM provides a consistent copy of a source
virtual disk on a target virtual disk. Data is written to the target virtual disk asynchronously
so that the copy is continuously updated. However, the copy might not contain the last few
updates if a disaster recovery operation is performed. An added extension to GM is GM
with Change Volumes. GM with Change Volumes is the preferred method for use with
native IP replication.
In the storage layer, an SVC or Storwize family system has the following characteristics and
requirements:
The system can perform MM and GM replication with other storage-layer systems.
The system can provide external storage for replication-layer systems or SVC.
The system cannot use a storage-layer system as external storage.
In the replication layer, an SVC or Storwize family system has the following characteristics
and requirements:
The system can perform MM and GM replication with other replication-layer systems or
SVC.
The system cannot provide external storage for a replication-layer system or SVC.
The system can use a storage-layer system as external storage.
A Storwize family system is in the storage layer by default, but the layer can be changed. For
example, you might want to change a Storwize V7000 to the replication layer if you want it to
virtualize Storwize V3700 systems.
Note: Before you change the system layer, the following conditions must be met:
No host object can be configured with worldwide port names (WWPNs) from a Storwize
family system.
No system partnerships can be defined.
No Storwize family system can be visible on the SAN fabric.
Use the lssystem command to check the current system layer, as shown in Example 9-2.
Example 9-2 Output from lssystem command showing the system layer
IBM_2145:ITSO SVC DH8:superuser>lssystem
id 000002007FE02102
name ITSO SVC DH8
...
lines omitted for brevity
...
easy_tier_acceleration off
has_nas_key no
layer replication
...
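On a Storwize family system, and provided that the conditions in the preceding note are met,
the layer can be changed with the chsystem command. This is a sketch only; the prompt and
system name are hypothetical:
IBM_Storwize:ITSO_V7000:superuser>chsystem -layer replication
The lssystem output can then be checked again to confirm that the layer attribute changed.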
Note: Consider the following rules for creating remote partnerships between the SVC and
Storwize Family systems:
An SVC is always in the replication layer.
By default, a Storwize family system is in the storage layer but can be changed to the
replication layer.
A system can form partnerships only with systems in the same layer.
An SVC can virtualize a Storwize family system only if the Storwize system is in the
storage layer.
Starting in V6.4, a Storwize family system in the replication layer can virtualize another
Storwize family system in the storage layer.
Note: A physical link is the physical IP link between the two sites A (local) and B
(remote). Multiple IP addresses on local SVC cluster A can be connected (through
Ethernet switches) to this physical link. Similarly, multiple IP addresses on remote SVC
cluster B can be connected (through Ethernet switches) to the same physical link. At any
point in time, only a single IP address on cluster A can form a remote copy data session
with an IP address on cluster B.
The following maximum throughput is restricted based on the use of 1 Gbps or 10 Gbps
ports:
– One 1 Gbps port might transfer up to 110 MBps
– Two 1 Gbps ports might transfer up to 220 MBps
– One 10 Gbps port might transfer up to 190 MBps
– Two 10 Gbps ports might transfer up to 280 MBps
Note: The definition of the Bandwidth setting that is used when IP partnerships are created
has changed. Previously, the bandwidth setting defaulted to 50 MBps and was the maximum
transfer rate from the primary site to the secondary site for initial syncs and resyncs of
volumes.
The Link Bandwidth setting is now configured in megabits per second (Mbps), not MBps.
You set the Link Bandwidth setting to a value that the communication link can sustain or to
what is allocated for replication. The Background Copy Rate setting is now a percentage of
the Link Bandwidth. The Background Copy Rate setting determines the available
bandwidth for the initial sync and resyncs or for GM with Change Volumes.
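For example, an IP partnership might be created with a 100 Mbps link bandwidth and a
background copy rate of 50 percent of that bandwidth. The remote cluster IP address and both
values in this sketch are hypothetical:
IBM_2145:ITSO SVC DH8:superuser>mkippartnership -type ipv4 -clusterip 10.20.30.40 -linkbandwidthmbits 100 -backgroundcopyrate 50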
When the VLAN ID is configured for the IP addresses that are used for either iSCSI host
attach or IP replication, the appropriate VLAN settings on the Ethernet network and servers
must be configured correctly in order not to experience connectivity issues. After the VLANs
are configured, changes to the VLAN settings will disrupt iSCSI and IP replication traffic to
and from the partnerships.
During the VLAN configuration for each IP address, the VLAN settings for the local and
failover ports on two nodes of an I/O Group can differ. To avoid any service disruption,
switches must be configured so the failover VLANs are configured on the local switch ports
and the failover of IP addresses from a failing node to a surviving node succeeds. If failover
VLANs are not configured on the local switch ports, there will be no paths to SVC during a
node failure and the replication will fail.
Consider the following requirements and procedures when implementing VLAN tagging:
VLAN tagging is supported for IP partnership traffic between two systems.
VLAN provides network traffic separation at the layer 2 level for Ethernet transport.
VLAN tagging by default is disabled for any IP address of a node port. You can use the
command-line interface (CLI) to optionally set the VLAN ID for port IPs on both systems in
the IP partnership.
When a VLAN ID is configured for the port IP addresses that are used in remote copy port
groups, appropriate VLAN settings on the Ethernet network must also be properly
configured to prevent connectivity issues.
Setting VLAN tags for a port is disruptive. Therefore, VLAN tagging requires that you stop
the partnership first before you configure VLAN tags. Then, restart again when the
configuration is complete.
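A minimal sketch of this procedure follows; the partner system name, node, port, IP addresses,
and VLAN ID are hypothetical. The partnership is stopped, the VLAN tag is set on the
replication IP address, and the partnership is started again:
IBM_2145:ITSO SVC DH8:superuser>chpartnership -stop ITSO_SVC_B
IBM_2145:ITSO SVC DH8:superuser>cfgportip -node node1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -vlan 100 -host no -remotecopy 1 1
IBM_2145:ITSO SVC DH8:superuser>chpartnership -start ITSO_SVC_B
The same VLAN change is made on the failover ports and on the partner system before the
partnership is restarted.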
The following terms are used to describe the IP partnership feature:
Remote copy group or remote copy port group: The following numbers group a set of IP
addresses that are connected to the same physical link. Therefore, only IP addresses that are
part of the same remote copy group can form remote copy connections with the partner
system:
– 0: Ports that are not configured for remote copy
– 1: Ports that belong to remote copy port group 1
– 2: Ports that belong to remote copy port group 2
Each IP address can be shared for iSCSI host-attach and remote copy functionality.
Therefore, the correct settings must be applied to each IP address.
IP partnership: Two SVC systems that are partnered to perform remote copy over native IP
links.
FC partnership: Two SVC systems that are partnered to perform remote copy over native FC
links.
Failover: Failure of a node within an I/O Group causes all virtual disks that are owned by this
node to fail over to the surviving node. When the configuration node of the system fails,
management IPs also fail over to an alternative node.
Failback: When the failed node rejoins the system, all IP addresses that failed over are failed
back from the surviving node to the rejoined node, and virtual disk access is restored through
this node.
IP partnership or partnership over native IP links: These terms are used to describe the IP
partnership feature.
The following steps must be completed to establish two systems in the IP partnerships:
1. The administrator configures the CHAP secret on both the systems. This step is not
mandatory and users can choose to not configure the CHAP secret.
2. The administrator configures the system IP addresses on both local and remote systems
so that they can discover each other over the network.
3. If you want to use VLANs, configure your LAN switches and Ethernet ports to use VLAN
tagging (for more information on VLAN tagging, refer to 9.6.4, “VLAN support” on
page 511).
4. The administrator configures the SVC ports on each node in both of the systems by using
the GUI or cfgportip command and completes the following steps:
a. Configure the IP addresses for remote copy data.
b. Add the IP addresses in the respective remote copy port group.
c. Define whether the host access on these ports over iSCSI is allowed.
5. The administrator establishes the partnership with the remote system from the local
system where the partnership state then transitions to the Partially_Configured_Local
state.
6. The administrator establishes the partnership from the remote system with the local
system, and if successful, the partnership state then transitions to the Fully_Configured
state, which implies that the partnerships over the IP network were successfully
established. The partnership state momentarily remains in the not_present state before
transitioning to the fully_configured state.
7. The administrator creates MM, GM, and GM with Change Volume relationships.
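The following sketch summarizes these steps from the CLI for one side of the partnership; the
node, port, IP addresses, CHAP secret, bandwidth values, volume names, and partner system
name are all hypothetical, and equivalent commands must be run on the remote system:
IBM_2145:ITSO SVC DH8:superuser>chsystem -chapsecret itso_secret
IBM_2145:ITSO SVC DH8:superuser>cfgportip -node node1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -remotecopy 1 -host no 1
IBM_2145:ITSO SVC DH8:superuser>mkippartnership -type ipv4 -clusterip 10.20.20.11 -linkbandwidthmbits 100 -backgroundcopyrate 50
IBM_2145:ITSO SVC DH8:superuser>lspartnership
IBM_2145:ITSO SVC DH8:superuser>mkrcrelationship -master VOL1 -aux VOL1_DR -cluster ITSO_SVC_B -global
The CHAP secret step is optional, and lspartnership shows the fully_configured state only
after the partnership is also created from the remote system.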
A remote copy port group ID is a numerical tag that is associated with an IP port of the SVC to
indicate which physical IP link it is connected to. Multiple SVC nodes can be connected to the
same physical long-distance link and must therefore share the same remote copy port group
ID. In scenarios where there are two physical links between the local and remote clusters, two
remote copy port group IDs must be used to designate which IP addresses are connected to
which physical link. This configuration must be done by the system administrator by using the
GUI or the cfgportip CLI command.
Note: IP ports on both partners must have been configured with identical remote copy port
group IDs for the partnership to be established correctly.
The SVC IP addresses that are connected to the same physical link are designated with
identical remote copy port groups. The SVC supports three remote copy groups: 0, 1, and 2.
The SVC IP addresses are, by default, in remote copy port group 0. Ports in port group 0 are
not considered for creating remote copy data paths between two systems. For partnerships to
be established over IP links directly, IP ports must be configured in remote copy group 1 if a
single inter-site link exists, or in remote copy groups 1 and 2 if two inter-site links exist.
You can assign one IPv4 address and one IPv6 address to each Ethernet port on the SVC
platforms. Each of these IP addresses can be shared between iSCSI host attach and the IP
partnership. The user must configure the required IP address (IPv4 or IPv6) on an Ethernet
port with a remote copy port group. The administrator might want to use IPv6 addresses for
remote copy operations and use IPv4 addresses on that same port for iSCSI host attach. This
configuration also implies that for two systems to establish an IP partnership, both systems
must have IPv6 addresses that are configured.
Administrators can choose to dedicate an Ethernet port for IP partnership only. In that case,
host access must be explicitly disabled for that IP address and any other IP address that is
configured on that Ethernet port.
Note: To establish an IP partnership, each SVC node must have only a single remote copy
port group that is configured, that is, 1 or 2. The remaining IP addresses must be in remote
copy port group 0.
Figure 9-15 Single link with only one remote copy port group that is configured in each system
As shown in Figure 9-15, two systems exist: System A and System B. A single remote
copy port group 1 is created on Node A1 on System A and on Node B2 on System B
because only a single inter-site link exists to facilitate the IP partnership traffic. (The
administrator might choose to configure the remote copy port group on Node B1 on
System B instead of Node B2.) At any time, only the IP addresses that are configured in
remote copy port group 1 on the nodes in System A and System B participate in
establishing data paths between the two systems after the IP partnerships are created. In
this configuration, no failover ports are configured on the partner node in the same I/O
Group.
This configuration has the following characteristics:
– Only one node in each system has a configured remote copy port group, and no
failover ports are configured.
– If Node A1 in System A or Node B2 in System B failed, the IP partnership stops and
enters the not_present state until the failed nodes recover.
– After the nodes recover, the IP ports fail back, the IP partnership recovers, and the
partnership state changes to the fully_configured state.
– If the inter-site system link fails, the IP partnerships transition to the not_present state.
– This configuration is not recommended because it is not resilient to node failures.
Two 2-node systems are in an IP partnership over a single inter-site link (with configured
failover ports), as shown in Figure 9-16 (configuration 2).
Figure 9-16 Only one remote copy group on each system and nodes with failover ports configured
As shown in Figure 9-16, two systems exist: System A and System B. A single remote
copy port group 1 is configured on two Ethernet ports, one each, on Node A1 and Node
A2 on System A and similarly, on Node B1 and Node B2 on System B. Although two ports
on each system are configured for remote copy port group 1, only one Ethernet port in
each system actively participates in the IP partnership process. This selection is
determined by a path configuration algorithm that is designed to choose data paths
between the two systems to optimize performance.
The other port on the partner node in the I/O Group behaves as a standby port that is
used in a node failure. If Node A1 fails in System A, the IP partnership continues servicing
replication I/O from Ethernet Port 2 because a failover port is configured on Node A2 on
Ethernet Port 2. However, discovery and path configuration logic to re-establish paths
post-failover might take time, which can cause partnerships to transition to the not_present
state for that period. The details of the particular IP port that actively participates in the IP
partnership are provided in the lsportip output (reported as used).
This configuration has the following characteristics:
– Each node in the I/O Group has the same remote copy port group that is configured.
However, only one port in that remote copy port group is active at any time at each
system.
– If Node A1 in System A or Node B2 in System B fails in its system, the rediscovery of
the IP partnerships is triggered and continues servicing the I/O from the failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
Two 4-node systems are in an IP partnership over a single inter-site link (with failover ports
that are configured), as shown in Figure 9-17 (configuration 3).
Figure 9-17 Multinode systems single inter-site link with only one remote copy port group
As shown in Figure 9-17, there are two 4-node systems: System A and System B. A single
remote copy port group 1 is configured on nodes A1, A2, A3, and A4 on System A at Site
A, and on nodes B1, B2, B3, and B4 on System B at Site B. Although four ports are
configured for remote copy group 1, only one Ethernet port in each remote copy port
group on each system actively participates in the IP partnership process. Port selection is
determined by a path configuration algorithm. The other ports play the role of standby
ports.
If Node A1 fails in System A, the IP partnership selects one of the remaining ports that is
configured with remote copy port group 1 from any of the nodes from either of the two I/O
Groups in System A. However, it might take time (generally seconds) for discovery and
path configuration logic to re-establish the paths after the failover and this process can
cause partnerships to transition to the not_present state. This result leads remote copy
relationships to stop and the administrator might need to manually verify the issues in the
event log and start the relationships or remote copy Consistency Groups, if they do not
auto recover. The details about the particular IP port that is actively participating in the IP
partnership process are provided in the lsportip view (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both I/O Groups.
However, only one port in that remote copy port group remains active and participates
in the IP partnership on each system.
– If Node A1 in System A or Node B2 in System B encountered a failure in the system,
the discovery of the IP partnerships is triggered and it continues servicing the I/O from
the failover port.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
– The bandwidth of the single link is used completely.
An eight-node system is in an IP partnership with a four-node system over a single
inter-site link, as shown in Figure 9-18 (configuration 4).
Figure 9-18 Multinode systems with single inter-site link with only one remote copy port group
As shown in Figure 9-18 on page 518, there is an eight-node system (System A in Site A)
and a four-node system (System B in Site B). A single remote copy port group 1 is
configured on nodes A1, A2, A5, and A6 on System A at Site A. Similarly, a single remote
copy port group 1 is configured on nodes B1, B2, B3, and B4 on System B.
Although there are four I/O Groups (eight nodes) in System A, any two I/O Groups at
maximum are supported to be configured for IP partnerships. If Node A1 fails in System A,
the IP partnership continues using one of the ports that is configured in the remote copy
port group from any of the nodes from either of the two I/O Groups in System A. However,
it might take time for discovery and path configuration logic to re-establish paths
post-failover and this delay might cause partnerships to transition to the not_present state.
This process can lead the remote copy relationships to stop and the administrator must
manually start them if the relationships do not auto recover. The details of which particular
IP port is actively participating in the IP partnership process is provided in the lsportip
output (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in both the I/O Groups
that are identified for participating in IP replication. However, only one port in that
remote copy port group remains active on each system and participates in IP
replication.
– If Node A1 in System A or Node B2 in System B fails in the system, the IP partnerships
trigger discovery and continue servicing the I/O from the failover ports.
– The discovery mechanism that is triggered because of failover might introduce a delay
in which the partnerships momentarily transition to the not_present state and then
recover.
– The bandwidth of the single link is used completely.
Two 2-node systems exist with two inter-site links, as shown in Figure 9-19 (configuration
5).
Figure 9-19 Dual links with two remote copy groups on each system are configured
As shown in Figure 9-19, remote copy port groups 1 and 2 are configured on the nodes in
System A and System B because two inter-site links are available. In this configuration,
the failover ports are not configured on partner nodes in the I/O Group. Instead, the ports
are maintained in different remote copy port groups on both of the nodes and they remain
active and participate in the IP partnership by using both of the links.
However, if either of the nodes in the I/O Group fails (that is, if Node A1 on System A fails),
the IP partnership continues only from the available IP port that is configured in remote
copy port group 2. Therefore, the effective bandwidth of the two links is reduced to 50%.
Only the bandwidth of a single link is available until the failure is resolved.
Figure 9-20 Multinode systems with dual inter-site links between the two systems
As shown in Figure 9-20, there are two 4-node systems: System A and System B. This
configuration is an extension of configuration 5 to a multinode multi-I/O Group
environment. As seen in this configuration, two I/O Groups exist and each node in the I/O
Group has a single port that is configured in remote copy port group 1 or 2. Although two
ports are configured in remote copy port groups 1 and 2 on each system, only one IP port
in each remote copy port group on each system actively participates in the IP partnership.
The other ports that are configured in the same remote copy port group act as standby
ports in a failure. Which port in a configured remote copy port group participates in the IP
partnership at any moment is determined by a path configuration algorithm.
In this configuration, if Node A1 fails in System A, the IP partnership traffic continues from
Node A2 (that is, remote copy port group 2) and at the same time the failover also causes
discovery in remote copy port group 1. Therefore, the IP partnership traffic continues from
Node A3 on which remote copy port group 1 is configured. The details of the particular IP
port that is actively participating in the IP partnership process is provided in the lsportip
output (reported as used).
This configuration has the following characteristics:
– Each node has the remote copy port group that is configured in the I/O Group 1 or 2.
However, only one port per system in both remote copy port groups remains active and
participates in the IP partnership.
– Only a single port per system from each configured remote copy port group
participates simultaneously in the IP partnership. Therefore, both of the links are used.
– During node failure or port failure of a node that is actively participating in the IP
partnership, the IP partnership continues from the alternative port because another
port is in the system in the same remote copy port group but in a different I/O Group.
– The pathing algorithm can start the discovery of an available port in the affected
remote copy port group in the second I/O Group and pathing is re-established, which
restores the total bandwidth, that is, both of the links are available to support the IP
partnership.
An eight-node system is in an IP partnership with a four-node system over dual inter-site
links, as shown in Figure 9-21 on page 522 (configuration 7).
Figure 9-21 Multinode systems (two I/O Groups on each system) with dual inter-site links between
the two systems
If Node A1 fails in System A, IP partnership traffic continues from Node A2 (that is, remote
copy port group 2) and the failover also causes IP partnership traffic to continue from
Node A5 on which remote copy port group 1 is configured. The details of the particular IP
port actively participating in the IP partnership process are provided in the lsportip
output (reported as used).
This configuration has the following characteristics:
– Two I/O Groups exist with nodes in those I/O Groups that are configured in two remote
copy port groups because two inter-site links are available for participating in the IP
partnership. However, only one port per system in a particular remote copy port group
remains active and participates in the IP partnership.
– One port per system from each remote copy port group participates in the IP
partnership simultaneously. Therefore, both of the links are used.
– If a node or a port on the node that is actively participating in the IP partnership fails,
the remote copy data path is established from that port because another port is
available on an alternative node in the system with the same remote copy port group.
– The path selection algorithm starts discovery of the available port in the affected
remote copy port group in the alternative I/O Groups and paths are re-established,
restoring the total bandwidth across both links.
– The remaining or all of the I/O Groups can be in remote copy partnerships with other
systems.
An example of unsupported configuration for single inter-site link is shown in Figure 9-22
(configuration 8).
Figure 9-22 Two node systems with single inter-site link and remote copy port groups are
configured
On any node, only one port at any time can participate in the IP partnership. Configuring
multiple ports in the same remote copy group on the same node is not supported.
An example of an unsupported configuration for dual inter-site link is shown in Figure 9-23
(configuration 9).
Figure 9-23 Dual links with two remote copy port groups with failover port groups are configured
In this configuration, one port on each node in System A and System B is configured in
remote copy group 1 to establish an IP partnership and to support remote copy
relationships. A dedicated inter-site link is used for IP partnership traffic and iSCSI host
attach is disabled on those ports.
The following configuration steps are used; a sample command sequence is shown after the steps:
a. Configure system IP addresses correctly so that they can be reached over the inter-site
link.
b. Qualify whether the partnerships must be created over IPv4 or IPv6 and then assign IP
addresses and open firewall ports 3260 and 3265.
c. Configure the IP ports for remote copy on both systems by using the following settings:
• Remote copy group: 1
• Host: No
• Assign IP address
d. Check that the maximum transmission unit (MTU) levels across the network meet the
requirements as set. (The default MTU is 1500 on the SVC.)
e. Establish the IP partnerships from both of the systems.
f. After the partnerships are in the fully_configured state, you can create the remote copy
relationships.
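The following minimal CLI sketch illustrates these steps for one port on one node of System A. The node name, port ID, IP addresses, and bandwidth values are examples only; verify the exact parameters for your code level in the CLI reference:
cfgportip -node node1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -host no -remotecopy 1 1
mkippartnership -type ipv4 -clusterip 10.20.20.21 -linkbandwidthmbits 1000 -backgroundcopyrate 50
The trailing 1 on the cfgportip command is the Ethernet port ID, -remotecopy 1 places the port in remote copy port group 1, and -host no keeps iSCSI host attach disabled on the dedicated port. Equivalent commands (pointing at the partner system IP address) are run on System B, and lspartnership can then be used to confirm that the partnership reaches the fully_configured state.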
An example deployment for configuration 5 with ports that are shared with host access is
shown in Figure 9-25 (configuration 11).
In this configuration, IP ports are shared by both iSCSI hosts and the IP partnership.
The following configuration steps are used; a sample command sequence for this shared-port case is shown after the steps:
a. Configure the system IP addresses correctly so that they can be reached over the
inter-site link.
b. Qualify whether the IP partnerships must be created over IPv4 or IPv6 and then assign
IP addresses and open firewall ports 3260 and 3265.
c. Configure the IP ports for remote copy on System A1 by using the following settings.
Node 1:
• Port 1, remote copy port group 1
• Host: Yes
• Assign IP address
Node 2:
• Port 4, remote copy port group 2
• Host: Yes
• Assign IP address
d. Configure the IP ports for remote copy on System B1 by using the following settings.
Node 1:
• Port 1, remote copy port group 1
• Host: Yes
• Assign IP address
Node 2:
• Port 4, remote copy port group 2
• Host: Yes
• Assign IP address
e. Check that the MTU levels across the network meet the requirements as set. (The
default MTU is 1500 on the SVC.)
f. Establish the IP partnerships from both systems.
g. After the partnerships are in the fully_configured state, you can create the remote copy
relationships.
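A similar hedged sketch for this shared-port configuration on System A1 follows; the node names, port IDs, and addresses are examples only. The differences from the dedicated-link case are that iSCSI host attach stays enabled (-host yes) and that the two ports are placed in different remote copy port groups:
cfgportip -node node1 -ip 10.10.10.11 -mask 255.255.255.0 -gw 10.10.10.1 -host yes -remotecopy 1 1
cfgportip -node node2 -ip 10.10.11.12 -mask 255.255.255.0 -gw 10.10.11.1 -host yes -remotecopy 2 4
mkippartnership -type ipv4 -clusterip 10.20.20.21 -linkbandwidthmbits 1000 -backgroundcopyrate 50
The trailing values 1 and 4 are the Ethernet port IDs that are used in this example. Equivalent commands are run on System B1 before the partnership is established from both sides.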
The IBM SVC provides a single point of control when remote copy is enabled in your network
(regardless of the disk subsystems that are used) if those disk subsystems are supported by
the SVC.
The general application of remote copy services is to maintain two real-time synchronized
copies of a volume. Often, two copies are geographically dispersed between two IBM SVC
systems, although it is possible to use MM or GM within a single system (within an I/O
Group). If the master copy fails, you can enable an auxiliary copy for I/O operation.
Tips: Intracluster MM/GM uses more resources within the system when compared to an
intercluster MM/GM relationship, where resource allocation is shared between the
systems.
Use intercluster MM/GM when possible. For mirroring volumes in the same I/O Group, it is
better to use Volume Mirroring or the FlashCopy feature.
A typical application of this function is to set up a dual-site solution that uses two IBM SVC
systems. The first site is considered the primary or production site, and the second site is
considered the backup site or failover site, which is activated when a failure at the first site is
detected.
Note: For more information about restrictions and limitations of native IP replication, see
9.6.3, “IP partnership limitations” on page 509.
IBM Spectrum Virtualize software level restrictions for multiple system mirroring:
Starting with V6.1, object names of up to 63 characters are supported. Previous levels
supported up to 15 characters only.
When V6.1 systems are partnered with V4.3 and V5.1 systems, various object names are
truncated at 15 characters when they are displayed from V4.3 and V5.1 systems.
Figure 9-27 shows four systems in a star topology, with System A at the center. System A can
be a central DR site for the three other locations.
By using a star topology, you can migrate applications by using a process, such as the one
described in the following example:
1. Suspend application at A.
2. Remove the A → B relationship.
3. Create the A → C relationship (or the B → C relationship).
4. Synchronize to system C, and ensure that A → C is established:
– A → B, A → C, A → D, B → C, B → D, and C → D
– A → B, A → C, and B → C
Figure 9-28 shows an example of a triangle topology (A → B, A → C, and B → C).
Figure 9-29 shows a fully connected mesh topology (A → B, A → C, A → D, B → D, and C → D),
in which every system has a partnership to each of the three other systems. This topology
allows volumes to be replicated between any pair of systems, for example, A → B, A → C,
and B → C.
Although systems can have up to three partnerships, volumes can be part of only one remote
copy relationship, for example, A → B.
System partnership intermix: All of the preceding topologies are valid for the intermix of
the IBM SAN Volume Controller with the Storwize V7000 if the Storwize V7000 is set to the
replication layer and running V6.3 or later.
An application that performs a high volume of database updates is designed with the concept
of dependent writes. With dependent writes, it is important to ensure that an earlier write
completed before a later write is started. Reversing the order of writes, or performing them
in a different order than the application intended, can undermine the application’s algorithms
and can lead to problems, such as detected or undetected data corruption.
The IBM Spectrum Virtualize Metro Mirror and Global Mirror implementation operates in a
manner that is designed to always keep a consistent image at the secondary site. The Global
Mirror implementation uses complex algorithms that operate to identify sets of data and
number those sets of data in sequence. The data is then applied at the secondary site in the
defined sequence.
For more information about dependent writes, see 9.4.3, “Consistency Groups” on page 486.
Figure 9-31 on page 531 shows the concept of Metro Mirror Consistency Groups. The same
applies to Global Mirror Consistency Groups.
Because the MM_Relationship 1 and 2 are part of the Consistency Group, they can be
handled as one entity. The stand-alone MM_Relationship 3 is handled separately.
Certain uses of MM/GM require the manipulation of more than one relationship. Remote
Copy Consistency Groups can group relationships so that they are manipulated in unison.
All relationships in a Consistency Group must have corresponding master and auxiliary
volumes.
All relationships in one Consistency Group must be of the same type, for example, only Metro
Mirror or only Global Mirror.
Although Consistency Groups can be used to manipulate sets of relationships that do not
need to satisfy these strict rules, this manipulation can lead to undesired side effects. The
rules behind a Consistency Group mean that certain configuration commands are prohibited.
These configuration commands are not prohibited if the relationship is not part of a
Consistency Group.
For example, consider the case of two applications that are independent, yet they are placed
into a single Consistency Group. If an error occurs, synchronization is lost and a background
copy process is required to recover synchronization. While this process is progressing,
MM/GM rejects attempts to enable access to the auxiliary volumes of either application.
If one application finishes its background copy more quickly than the other application,
MM/GM still refuses to grant access to its auxiliary volumes even though access is safe in this
case. The MM/GM policy is to refuse access to the entire Consistency Group if any part of it is
inconsistent.
Stand-alone relationships and Consistency Groups share a common configuration and state
model. All of the relationships in a non-empty Consistency Group have the same state as the
Consistency Group.
Zoning
The IBM SVC node ports on each IBM SVC system must communicate with each other to
create the partnership. Switch zoning is critical to facilitating intercluster communication.
These channels are maintained and updated as nodes and links appear and disappear from
the fabric, and are repaired to maintain operation where possible. If communication between
the IBM SVC systems is interrupted or lost, an event is logged (and the Metro Mirror and
Global Mirror relationships stop).
Alerts: You can configure the IBM SVC to raise Simple Network Management Protocol
(SNMP) traps to the enterprise monitoring system to alert on events that indicate an
interruption in internode communication occurred.
Intercluster links
All IBM SVC nodes maintain a database of other devices that are visible on the fabric. This
database is updated as devices appear and disappear.
Devices that advertise themselves as IBM SVC nodes are categorized according to the IBM
SVC system to which they belong. The IBM SVC nodes that belong to the same system
establish communication channels between themselves and begin to exchange messages to
implement clustering and the functional protocols of the IBM SVC.
Nodes that are in separate systems do not exchange messages after initial discovery is
complete, unless they are configured together to perform a remote copy relationship.
The intercluster link carries control traffic to coordinate activity between two systems. The link
is formed between one node in each system. The traffic between the designated nodes is
distributed among logins that exist between those nodes.
If the designated node fails (or all of its logins to the remote system fail), a new node is
chosen to carry control traffic. This node change causes the I/O to pause, but it does not put
the relationships in a ConsistentStopped state.
Increased distance directly affects host I/O performance because the writes are synchronous.
Use the requirements for application performance when you are selecting your Metro Mirror
auxiliary location.
Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups (FlashCopy Consistency Groups are described in
9.4, “Implementing FlashCopy” on page 484).
Because intracluster Metro Mirror operates within a single I/O Group, bitmap space must be
sufficient within the I/O Group for both sets of volumes, and licensing must be on the system.
Important: Performing Metro Mirror across I/O Groups within a system is not supported.
Two IBM SVC systems must be defined in a partnership; the partnership must be created on both
IBM SVC systems to establish a fully functional Metro Mirror partnership.
By using standard single-mode connections, the supported distance between two SVC
systems in an MM partnership is 10 km (6.2 miles), although greater distances can be
achieved by using extenders. For extended distance solutions, contact your IBM marketing
representative.
Limit: When a local fabric and a remote fabric are connected for MM purposes, the
inter-switch link (ISL) hop count between a local node and a remote node cannot exceed
seven.
Events, such as a loss of connectivity between systems, can cause mirrored writes from the
master volume and the auxiliary volume to fail. In that case, Metro Mirror suspends writes to
the auxiliary volume and allows I/O to the master volume to continue to avoid affecting the
operation of the master volumes.
Figure 9-32 shows how a write to the master volume is mirrored to the cache of the auxiliary
volume before an acknowledgment of the write is sent back to the host that issued the write.
This process ensures that the auxiliary is synchronized in real time if it is needed in a failover
situation.
However, this process also means that the application is exposed to the latency and
bandwidth limitations (if any) of the communication link between the master and auxiliary
volumes. This process might lead to unacceptable application performance, particularly when
placed under peak load. Therefore, the use of traditional Fibre Channel Metro Mirror has
distance limitations that are based on your performance requirements. The IBM SVC does
not support more than 300 km (186.4 miles).
The IBM SVC allows the resynchronization of changed data so that write failures that occur
on the master or auxiliary volumes do not require a complete resynchronization of the
relationship.
Global Mirror establishes a Global Mirror relationship between two volumes of equal size. The
volumes in a Global Mirror relationship are referred to as the master (source) volume and the
auxiliary (target) volume, which is the same as Metro Mirror.
Consistency Groups can be used to maintain data integrity for dependent writes, which is
similar to FlashCopy Consistency Groups.
Global Mirror writes data to the auxiliary volume asynchronously, which means that host
writes to the master volume provide the host with confirmation that the write is complete
before the I/O completes on the auxiliary volume.
Limit: When a local fabric and a remote fabric are connected for Global Mirror purposes,
the ISL hop count between a local node and a remote node must not exceed seven hops.
The Global Mirror function provides the same function as Metro Mirror remote copy, but over
long-distance links with higher latency without requiring the hosts to wait for the full round-trip
delay of the long distance link.
Figure 9-33 on page 537 shows that a write operation to the master volume is acknowledged
back to the host that is issuing the write before the write operation is mirrored to the cache for
the auxiliary volume.
The Global Mirror algorithms always maintain a consistent image on the auxiliary. They
achieve this consistent image by identifying sets of I/Os that are active concurrently at the
master, assigning an order to those sets, and applying those sets of I/Os in the assigned
order at the secondary. As a result, Global Mirror maintains the features of Write Ordering
and Read Stability.
The multiple I/Os within a single set are applied concurrently. The process that marshals the
sequential sets of I/Os operates at the secondary system. Therefore, the process is not
subject to the latency of the long distance link. These two elements of the protocol ensure
that the throughput of the total system can be grown by increasing system size while
maintaining consistency across a growing data set.
Global Mirror write I/O from the production IBM SVC system to a secondary IBM SVC system
requires serialization and sequence-tagging before being sent across the network to the
remote site (to maintain a write-order consistent copy of data).
To avoid affecting the production site, the IBM SVC allows more parallelism in processing and
managing Global Mirror writes on the secondary system by using the following methods:
Nodes on the secondary system store replication writes in new redundant non-volatile
cache
Cache content details are shared between nodes
Cache content details are batched together to make node-to-node latency less of an issue
Nodes intelligently apply these batches in parallel as soon as possible
Nodes internally manage and optimize Global Mirror secondary write I/O processing
In a failover scenario where the secondary site must become the master source of data,
certain updates might be missing at the secondary site. Therefore, any applications that use
this data must have an external mechanism for recovering the missing updates and
reapplying them, for example, a transaction log replay.
Global Mirror is supported over FC, FC over IP (FCIP), FC over Ethernet (FCoE), and native
IP connections. The maximum supported round-trip latency is 80 ms, which corresponds to about
4000 km (2485.5 miles) between mirrored systems. Starting with V7.4, this limit was
significantly increased for certain IBM SVC and IBM Storwize Gen2 configurations.
Figure 9-34 shows the current supported distances for Global Mirror remote copy.
The IBM SVC implements the Global Mirror relationship between a volume pair, with each
volume in the pair being managed by an IBM SVC or Storwize V7000.
The IBM SVC supports intracluster Global Mirror where both volumes belong to the same
system (and I/O Group).
The IBM SVC intercluster Global Mirror is supported if each volume belongs to a separate
IBM SVC system. An IBM SVC system can be configured for partnership with between
one and three other systems. For more information about IP partnership restrictions, see
9.6.3, “IP partnership limitations” on page 509.
Intercluster and intracluster Global Mirror can be used concurrently but not for the same
volume.
The IBM SVC does not require a control network or fabric to be installed to manage Global
Mirror. For intercluster Global Mirror, the IBM SVC maintains a control link between the
two systems. This control link is used to control the state and to coordinate the updates at
either end. The control link is implemented on top of the same FC fabric connection that
the IBM SVC uses for Global Mirror I/O.
The IBM SVC implements a configuration model that maintains the Global Mirror
configuration and state through major events, such as failover, recovery, and
resynchronization, to minimize user configuration action through these events.
The IBM SVC implements flexible resynchronization support, enabling it to resynchronize
volume pairs that experienced write I/Os to both disks and to resynchronize only those
regions that changed.
An optional feature for Global Mirror is a delay simulation to be applied on writes that are
sent to auxiliary volumes. It is useful in intracluster scenarios for testing purposes.
Colliding writes
Before V4.3.1, the Global Mirror algorithm required that only a single write is active on any
512-byte logical block address (LBA) of a volume. If a further write is received from a host
while the auxiliary write is still active (even though the master write might complete), the new
host write is delayed until the auxiliary write is complete. This restriction is needed if a series
of writes to the auxiliary must be retried (which is called reconstruction). Conceptually, the
data for reconstruction comes from the master volume.
If multiple writes are allowed to be applied to the master for a sector, only the most recent
write gets the correct data during reconstruction. If reconstruction is interrupted for any
reason, the intermediate state of the auxiliary is inconsistent.
Applications that deliver such write activity do not achieve the performance that GM is
intended to support. A volume statistic is maintained about the frequency of these collisions.
An attempt is made to allow multiple writes to a single location to be outstanding in the GM
algorithm. Master writes still need to be serialized, and the intermediate states of the master
data must be kept in a non-volatile journal while the writes are outstanding to maintain the
correct write ordering during reconstruction. Reconstruction must never overwrite data on the
auxiliary with an earlier version. The volume statistic that is monitoring colliding writes is now
limited to those writes that are not affected by this change.
The following numbers correspond to the numbers that are shown in Figure 9-35 on
page 540:
(1) The first write is performed from the host to LBA X.
(2) The completion of the write is acknowledged to the host even though the mirrored write
to the auxiliary volume is not yet complete.
(1’) and (2’) Steps occur asynchronously with the first write.
(3) The second write is performed from the host also to LBA X. If this write occurs before
(2’), the write is written to the journal file.
(4) The completion of the second write is acknowledged to the host.
Delay simulation
An optional feature for Global Mirror permits a delay simulation to be applied on writes that
are sent to auxiliary volumes. This feature allows you to test to detect colliding writes.
Therefore, you can use this feature to test an application before the full deployment of the
feature. The feature can be enabled separately for intracluster or intercluster Global
Mirror. You specify the delay setting by using the chsystem command and view the delay by
using the lssystem command. The gm_intra_cluster_delay_simulation field expresses the
amount of time that intracluster auxiliary I/Os are delayed. The
gm_inter_cluster_delay_simulation field expresses the amount of time that intercluster
auxiliary I/Os are delayed. A value of zero disables the feature.
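For example, the following commands set a 20 ms intracluster delay and a 40 ms intercluster delay and then display the resulting settings; the values are arbitrary test values, not recommendations:
chsystem -gmintradelaysimulation 20
chsystem -gminterdelaysimulation 40
lssystem
The lssystem output includes the gm_intra_cluster_delay_simulation and gm_inter_cluster_delay_simulation fields, and setting either value back to 0 disables the corresponding simulation.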
Tip: If you are experiencing repeated problems with the delay on your link, ensure that the
delay simulator was properly disabled.
In situations with low network link quality or congested or overloaded hosts, you might be
affected by multiple 1920 (congestion) errors.
Global Mirror has functionality that is designed to address the following conditions, which
might negatively affect certain Global Mirror implementations:
The estimation of the bandwidth requirements tends to be complex.
Ensuring the latency and bandwidth requirements can be met is often difficult.
Congested hosts on the source or target site can cause disruption.
Congested network links can cause disruption with only intermittent peaks.
To address these issues, Change Volumes were added as an option for Global Mirror
relationships. Change Volumes use the FlashCopy functionality, but they cannot be
manipulated as FlashCopy volumes because they are for a special purpose only. Change
Volumes replicate point-in-time images on a cycling period (the default is 300 seconds). The
replicated change rate therefore reflects only the state of the data at the point in time that
the image was taken, instead of all the updates during the period. The use of this function can provide
significant reductions in replication volume.
Figure 9-36 shows a simple Global Mirror relationship without Change Volumes.
With Global Mirror with Change Volumes, this environment looks as shown in Figure 9-37.
With Change Volumes, a FlashCopy mapping exists between the primary volume and the
primary Change Volume. The mapping is updated on the cycling period (60 seconds to one
day). The primary Change Volume is then replicated to the secondary Global Mirror volume at
the target site, which is then captured in another Change Volume on the target site. This
approach provides an always consistent image at the target site and protects your data from
being inconsistent during resynchronization.
How Change Volumes might save you replication traffic is shown in Figure 9-38 on page 542.
In Figure 9-38, you can see a number of I/Os on the source and the same number on the
target, and in the same order. Assuming that this data is the same set of data being updated
repeatedly, this approach results in wasted network traffic. The I/O can be completed much
more efficiently, as shown in Figure 9-39.
In Figure 9-39, the same data is being updated repeatedly; therefore, Change Volumes
demonstrate significant I/O transmission savings by needing to send I/O number 16 only,
which was the last I/O before the cycling period.
You can adjust the cycling period by using the chrcrelationship -cycleperiodseconds
<60 - 86400> command from the CLI. If a copy does not complete in the cycle period, the
next cycle does not start until the prior cycle completes. For this reason, the use of Change
Volumes offers the following possibilities for RPO:
If your replication completes in the cycling period, your RPO is twice the cycling period.
If your replication does not complete within the cycling period, your RPO is twice the
completion time. The next cycling period starts immediately after the prior cycling period is
finished.
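For example, to set a five-minute cycling period on a hypothetical Global Mirror with Change Volumes relationship named GM_rel1 (the relationship name is assumed for illustration), run the following command:
chrcrelationship -cycleperiodseconds 300 GM_rel1
As noted above, if a copy does not complete within this period, the next cycle does not start until the prior cycle completes.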
Carefully consider your business requirements versus the performance of Global Mirror with
Change Volumes. Global Mirror with Change Volumes increases the intercluster traffic for
more frequent cycling periods. Therefore, selecting the shortest cycle periods possible is not
always the answer. In most cases, the default must meet requirements and perform well.
Important: When you create your Global Mirror volumes with Change Volumes, ensure
that you remember to select the Change Volume on the auxiliary (target) site. Failure to do
so leaves you exposed during a resynchronization operation.
If this preferred practice is not maintained, for example, source volumes are assigned to only
one node in the I/O group, you can change the preferred node for each volume to distribute
volumes evenly between the nodes. You can also change the preferred node for volumes that
are in a remote copy relationship without affecting the host I/O to a particular volume. The
remote copy relationship type does not matter. (The remote copy relationship type can be
MM, GM, or GM with Change Volumes.) You can change the preferred node both to the
source and target volumes that are participating in the remote copy relationship.
Background copy I/O is scheduled to avoid bursts of activity that might have an adverse effect
on system behavior. An entire grain of tracks on one volume is processed at around the same
time but not as a single I/O. Double buffering is used to try to use sequential performance
within a grain. However, the next grain within the volume might not be scheduled for a while.
Multiple grains might be copied simultaneously and might be enough to satisfy the requested
rate, unless the available resources cannot sustain the requested rate.
Global Mirror paces the rate at which background copy is performed by the appropriate
relationships. Background copy occurs on relationships that are in the InconsistentCopying
state with a status of Online.
The quota of background copy (configured on the intercluster link) is divided evenly between
all nodes that are performing background copy for one of the eligible relationships. This
allocation is made irrespective of the number of disks for which the node is responsible. Each
node, in turn, divides its allocation evenly between the multiple relationships that are
performing a background copy.
Important: The background copy value is a system-wide parameter that can be changed
dynamically, but only on a per-system basis and not on a per-relationship basis. Therefore, the copy
rate of all relationships changes when this value is increased or decreased. In systems
with many remote copy relationships, increasing this value might affect overall system or
intercluster link performance. The background copy rate can be changed between 1 - 1000
MBps.
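Assuming that this note refers to the system-wide relationship bandwidth limit, the current value can be displayed and adjusted with commands similar to the following sketch (the 25 MBps value is only an illustration):
lssystem
chsystem -relationshipbandwidthlimit 25
In this sketch, the relationship_bandwidth_limit field of the lssystem output shows the current limit in MBps.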
With this technique, do not allow I/O on the master or auxiliary before the relationship is
established.
Important: Failure to perform these steps correctly can cause MM/GM to report the
relationship as consistent when it is not, therefore, creating a data loss or data integrity
exposure for hosts that access data on the auxiliary volume.
Switching copy direction: The copy direction for a Metro Mirror relationship can be
switched so that the auxiliary volume becomes the master, and the master volume
becomes the auxiliary, which is similar to the FlashCopy restore option. However, although
the FlashCopy target volume can operate in read/write mode, the target volume of the
started remote copy is always in read-only mode.
While the Metro Mirror relationship is active, the auxiliary volume is not accessible for host
application write I/O at any time. The IBM SVC allows read-only access to the auxiliary
volume when it contains a consistent image. This access allows boot-time operating system
discovery to complete without an error, so that any hosts at the secondary site can be ready
to start the applications with minimum delay, if required.
For example, many operating systems must read logical block address (LBA) zero to
configure a logical unit. Although read access is allowed at the auxiliary in practice, the data
on the auxiliary volumes cannot be read by a host because most operating systems write a
“dirty bit” to the file system when it is mounted. Because this write operation is not allowed on
the auxiliary volume, the volume cannot be mounted.
This access is provided only where consistency can be ensured. However, coherency cannot
be maintained between reads that are performed at the auxiliary and later write I/Os that are
performed at the master.
To enable access to the auxiliary volume for host operations, you must stop the Metro Mirror
relationship by specifying the -access parameter. While access to the auxiliary volume for
host operations is enabled, the host must be instructed to mount the volume before the
application can be started, or instructed to perform a recovery process.
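For example, for a hypothetical stand-alone relationship named MM_rel1 or a hypothetical Consistency Group named CG1, access to the auxiliary volumes can be enabled with the following commands:
stoprcrelationship -access MM_rel1
stoprcconsistgrp -access CG1
When the relationship or group is later restarted from the Idling state, the copy direction must be specified with the -primary parameter, as described later in this chapter.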
For example, the Metro Mirror requirement to enable the auxiliary copy for access
differentiates it from third-party mirroring software on the host, which aims to emulate a
single, reliable disk regardless of what system is accessing it. Metro Mirror retains the
property that there are two volumes in existence but it suppresses one volume while the copy
is being maintained.
The use of an auxiliary copy demands a conscious policy decision by the administrator that a
failover is required, and the tasks to be performed on the host to establish operation on the
auxiliary copy are substantial. The goal is to make this switch rapid (much faster than
recovering from a backup copy) but not seamless.
The failover process can be automated through failover management software. The IBM SVC
provides SNMP traps and programming (or scripting) for the CLI to enable this automation.
Number of MM or GM Consistency Groups per system: 256
Total volume size per I/O Group: A per-I/O Group limit of 1,024 TiB exists on the quantity of
master and auxiliary volume address spaces that can participate in Metro Mirror and Global
Mirror relationships. This maximum configuration uses all 512 MiB of bitmap space for the
I/O Group, and it allows 10 MiB of space for all remaining copy services features.
In Figure 9-40 on page 547, the MM/GM relationship diagram shows an overview of the
status that can apply to a MM/GM relationship in a connected state.
When the MM/GM relationship is created, you can specify whether the auxiliary volume is
already in sync with the master volume, and the background copy process is then skipped.
This capability is useful when MM/GM relationships are established for volumes that were
created with the format option.
Step 3:
When the background copy completes, the MM/GM relationship transitions from the
InconsistentCopying state to the ConsistentSynchronized state.
Step 4:
a. When a MM/GM relationship is stopped in the ConsistentSynchronized state, the
MM/GM relationship enters the Idling state when you specify the -access option,
which enables write I/O on the auxiliary volume.
b. When a MM/GM relationship is stopped in the ConsistentSynchronized state without
an -access parameter, the auxiliary volumes remain read-only and the state of the
relationship changes to ConsistentStopped.
c. To enable write I/O on the auxiliary volume when the MM/GM relationship is in the
ConsistentStopped state, issue the svctask stoprcrelationship command with the
-access option, and the MM/GM relationship enters the Idling state.
Step 5:
a. When a MM/GM relationship is started from the Idling state, you must specify the
-primary argument to set the copy direction. If no write I/O was performed (to the
master or auxiliary volume) while in the Idling state, the MM/GM relationship enters the
ConsistentSynchronized state.
b. If write I/O was performed to the master or auxiliary volume, the -force option must be
specified and the MM/GM relationship then enters the InconsistentCopying state while
the background copy is started. The background copy copies only the data that
changed on the primary volume while the relationship was stopped.
Stop or error
When a MM/GM relationship is stopped (intentionally or because of an error), the state
changes.
For example, the MM/GM relationships in the ConsistentSynchronized state enter the
ConsistentStopped state, and the MM/GM relationships in the InconsistentCopying state
enter the InconsistentStopped state.
If the connection is broken between the IBM SVC systems that are in a partnership, all
(intercluster) MM/GM relationships enter a Disconnected state. For more information, see
“Connected versus disconnected” on page 548.
State overview
In the following sections, we provide an overview of the various MM/GM states.
When the two systems can communicate, the systems and the relationships that span them
are described as connected. When they cannot communicate, the systems and the
relationships spanning them are described as disconnected.
In this state, both systems are left with fragmented relationships and are limited regarding the
configuration commands that can be performed. The disconnected relationships are
portrayed as having a changed state. The new states describe what is known about the
relationship and the configuration commands that are permitted.
When the systems can communicate again, the relationships are reconnected. MM/GM
automatically reconciles the two state fragments, considering any configuration or other event
that occurred while the relationship was disconnected. As a result, the relationship can return
to the state that it was in when it became disconnected or enter a new state.
Relationships that are configured between volumes in the same IBM SVC system
(intracluster) are never described as being in a disconnected state.
An auxiliary volume is described as consistent if it contains data that could have been read by
a host system from the master if power had failed at an imaginary point while I/O was in
progress, and power was later restored. This imaginary point is defined as the recovery point. The
requirements for consistency are expressed regarding activity at the master up to the
recovery point.
The auxiliary volume contains the data from all of the writes to the master for which the host
received successful completion and that data was not overwritten by a subsequent write
(before the recovery point).
For writes for which the host did not receive a successful completion (that is, it received a bad
completion or no completion at all), and the host then performed a read from the master of
that data that returned successful completion and no later write was sent (before the recovery
point), the auxiliary contains the same data as the data that was returned by the read from the
master.
From the point of view of an application, consistency means that an auxiliary volume contains
the same data as the master volume at the recovery point (the time at which the imaginary
power failure occurred).
For more information about dependent writes, see 9.4.3, “Consistency Groups” on page 486.
Because of the risk of data corruption, and in particular undetected data corruption, MM/GM
strongly enforces the concept of consistency and prohibits access to inconsistent data.
When you are deciding how to use Consistency Groups, the administrator must consider the
scope of an application’s data and consider all of the interdependent systems that
communicate and exchange information.
If two programs or systems communicate and store details as a result of the information
exchanged, either of the following actions might be required:
All of the data that is accessed by the group of systems must be placed into a single
Consistency Group.
The systems must be recovered independently (each within its own Consistency Group).
Then, each system must perform recovery with the other applications to become
consistent with them.
Consistency does not mean that the data is up-to-date. A copy can be consistent and yet
contain data that was frozen at a point in the past. Write I/O might continue to a master but
not be copied to the auxiliary. This state arises when it becomes impossible to keep data
up-to-date and maintain consistency. An example is a loss of communication between
systems when you are writing to the auxiliary.
When communication is lost for an extended period, MM/GM tracks the changes that
occurred on the master, but not the order or the details of such changes (write data). When
communication is restored, it is impossible to synchronize the auxiliary without sending write
data to the auxiliary out of order and, therefore, losing consistency.
Detailed states
In the following sections, we describe the states that are portrayed to the user for either
Consistency Groups or relationships. We also describe information that is available in each
state. The major states are designed to provide guidance about the available configuration
commands.
InconsistentStopped
InconsistentStopped is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O. A copy process must be
started to make the auxiliary consistent.
This state is entered when the relationship or Consistency Group was InconsistentCopying
and suffered a persistent error or received a stop command that caused the copy process to
stop.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
InconsistentCopying
InconsistentCopying is a connected state. In this state, the master is accessible for read and
write I/O, but the auxiliary is not accessible for read or write I/O.
In this state, a background copy process runs that copies data from the master to the auxiliary
volume.
In the absence of errors, an InconsistentCopying relationship is active, and the copy progress
increases until the copy process completes. In certain error situations, the copy progress
might freeze or even regress.
A persistent error or stop command places the relationship or Consistency Group into an
InconsistentStopped state. A start command is accepted but has no effect.
If the relationship or Consistency Group becomes disconnected, the auxiliary side transitions
to InconsistentDisconnected. The master side transitions to IdlingDisconnected.
ConsistentStopped
ConsistentStopped is a connected state. In this state, the auxiliary contains a consistent
image, but it might be out-of-date in relation to the master.
This state can arise when a relationship was in a ConsistentSynchronized state and
experiences an error that forces a Consistency Freeze. It can also arise when a relationship is
created with a CreateConsistentFlag set to TRUE.
Normally, write activity that follows an I/O error causes updates to the master and the
auxiliary is no longer synchronized. In this case, consistency must be given up for a period to
reestablish synchronization. You must use a start command with the -force option to
acknowledge this condition, and the relationship or Consistency Group transitions to
InconsistentCopying. Enter this command only after all outstanding events are repaired.
In the unusual case where the master and the auxiliary are still synchronized (perhaps
following a user stop, and no further write I/O was received), a start command takes the
relationship to ConsistentSynchronized. No -force option is required. Also, in this case, you
can enter a switch command that moves the relationship or Consistency Group to
ConsistentSynchronized and reverses the roles of the master and the auxiliary.
ConsistentSynchronized
ConsistentSynchronized is a connected state. In this state, the master volume is accessible
for read and write I/O, and the auxiliary volume is accessible for read-only I/O.
Writes that are sent to the master volume are also sent to the auxiliary volume. Either
successful completion must be received for both writes, the write must be failed to the host, or
a state must transition out of the ConsistentSynchronized state before a write is completed to
the host.
A stop command takes the relationship to the ConsistentStopped state. A stop command
with the -access parameter takes the relationship to the Idling state.
If the relationship or Consistency Group becomes disconnected, the same transitions are
made as for ConsistentStopped.
Idling
Idling is a connected state. Both master and auxiliary volumes operate in the master role.
Therefore, both master and auxiliary volumes are accessible for write I/O.
In this state, the relationship or Consistency Group accepts a start command. MM/GM
maintains a record of regions on each disk that received write I/O while they were idling. This
record is used to determine the areas that must be copied after a start command.
The start command must specify the new copy direction. A start command can cause a
loss of consistency if either volume in any relationship received write I/O, which is indicated by
the Synchronized status. If the start command leads to loss of consistency, you must specify
the -force parameter.
Also, the relationship or Consistency Group accepts a -clean option on the start command
while in this state. If the relationship or Consistency Group becomes disconnected, both sides
change their state to IdlingDisconnected.
IdlingDisconnected
IdlingDisconnected is a disconnected state. The target volumes in this half of the relationship
or Consistency Group are all in the master role and accept read or write I/O.
The priority in this state is to recover the link to restore the relationship or consistency.
No configuration activity is possible (except for deletes or stops) until the relationship
becomes connected again. At that point, the relationship transitions to a connected state. The
exact connected state that is entered depends on the state of the other half of the relationship
or Consistency Group, which depends on the following factors:
The state when it became disconnected
The write activity since it was disconnected
The configuration activity since it was disconnected
If both halves are IdlingDisconnected, the relationship becomes Idling when it is reconnected.
While IdlingDisconnected, if a write I/O is received that causes the loss of synchronization
(synchronized attribute transitions from true to false) and the relationship was not already
stopped (either through a user stop or a persistent error), an event is raised to notify you of
the condition. This same event also is raised when this condition occurs for the
ConsistentSynchronized state.
InconsistentDisconnected
InconsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the auxiliary role and do not accept read or write
I/O.
Except for deletes, no configuration activity is permitted until the relationship becomes
connected again.
When the relationship or Consistency Group becomes connected again, the relationship
becomes InconsistentCopying automatically unless either of the following conditions are true:
The relationship was InconsistentStopped when it became disconnected.
The user issued a stop command while disconnected.
ConsistentDisconnected
ConsistentDisconnected is a disconnected state. The target volumes in this half of the
relationship or Consistency Group are all in the auxiliary role and accept read I/O but not write
I/O.
In this state, the relationship or Consistency Group displays an attribute of FreezeTime, which
is the point when Consistency was frozen. When it is entered from ConsistentStopped, it
retains the time that it had in that state. When it is entered from ConsistentSynchronized, the
FreezeTime shows the last time at which the relationship or Consistency Group was known to
be consistent. This time corresponds to the time of the last successful heartbeat to the other
system.
A stop command with the -access flag set to true transitions the relationship or Consistency
Group to the IdlingDisconnected state. This state allows write I/O to be performed to the
auxiliary volume and is used as part of a DR scenario.
When the relationship or Consistency Group becomes connected again, the relationship or
Consistency Group becomes ConsistentSynchronized only if this action does not lead to a
loss of consistency. The following conditions must be true:
The relationship was ConsistentSynchronized when it became disconnected.
No writes received successful completion at the master while disconnected.
Empty
This state applies only to Consistency Groups. It is the state of a Consistency Group that has
no relationships and no other state information to show.
It is entered when a Consistency Group is first created. It is exited when the first relationship
is added to the Consistency Group, at which point the state of the relationship becomes the
state of the Consistency Group.
The remote host server is mapped to the auxiliary volume and the disk is available for I/O.
For more information about MM/GM commands, see IBM System Storage SAN Volume
Controller and IBM Storwize V7000 Command-Line Interface User’s Guide, GC27-2287.
The command set for MM/GM contains the following broad groups:
Commands to create, delete, and manipulate relationships and Consistency Groups
Commands to cause state changes
If a configuration command affects more than one system, MM/GM performs the work to
coordinate configuration activity between the systems. Certain configuration commands can
be performed only when the systems are connected and fail with no effect when they are
disconnected.
Other configuration commands are permitted even though the systems are disconnected. The
state is reconciled automatically by MM/GM when the systems become connected again.
For any command (with one exception), a single system receives the command from the
administrator. This design is significant for defining the context for a CreateRelationship
(mkrcrelationship) or CreateConsistencyGroup (mkrcconsistgrp) command, in which case the
system that receives the command is called the local system.
The exception is the command that sets systems into an MM/GM partnership: the
mkfcpartnership or mkippartnership command must be issued on both the local and the
remote system.
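As a hedged illustration with assumed system names SVC_A and SVC_B and an assumed 4000 Mbps link, the FC partnership is created from both sides as follows:
On SVC_A: mkfcpartnership -linkbandwidthmbits 4000 -backgroundcopyrate 50 SVC_B
On SVC_B: mkfcpartnership -linkbandwidthmbits 4000 -backgroundcopyrate 50 SVC_A
For an IP partnership, mkippartnership is used instead with the partner system IP address, and lspartnership on either system shows the partnership as fully_configured when both halves are defined.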
The commands in this section are described as an abstract command set and are
implemented by either of the following methods:
The CLI can be used for scripting and automation.
The GUI can be used for one-off tasks.
Note: This command is not supported on IP partnerships. Use the mkippartnership command for
IP connections.
Important: Do not set this value higher than the default without first establishing that
the higher bandwidth can be sustained without affecting the host’s performance. The
limit must never be higher than the maximum that is supported by the infrastructure
connecting the remote sites, regardless of the compression rates that you might
achieve.
-gmlinktolerance link_tolerance
This parameter specifies the maximum period that the system tolerates delay before
stopping GM relationships. Specify values 60 - 86400 seconds in increments of 10
seconds. The default value is 300. Do not change this value except under the direction of
IBM Support.
-gmmaxhostdelay max_host_delay
Specifies the maximum time delay, in milliseconds, at which the Global Mirror link
tolerance timer starts counting down. This threshold value determines the additional
impact that Global Mirror operations can add to the response times of the Global Mirror
source volumes. You can use this parameter to increase the threshold from the default
value of 5 milliseconds.
-gminterdelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intercluster copying
to an auxiliary volume) is delayed. This parameter permits you to test performance
implications before GM is deployed and a long-distance link is obtained. Specify a value of
0 - 100 milliseconds in 1-millisecond increments. The default value is 0. Use this argument
to test each intercluster GM relationship separately.
-gmintradelaysimulation link_tolerance
This parameter specifies the number of milliseconds that I/O activity (intracluster copying
to an auxiliary volume) is delayed. By using this parameter, you can test performance
implications before GM is deployed and a long-distance link is obtained. Specify a value of
0 - 100 milliseconds in 1-millisecond increments. The default value is 0. Use this argument
to test each intracluster GM relationship separately.
-maxreplicationdelay max_replication_delay
This parameter sets a maximum replication delay in seconds. The value must be a number
from 1 to 360. This feature sets the maximum number of seconds that is tolerated for a
single I/O to complete. If an I/O cannot complete within max_replication_delay, a 1920
event is reported. This is a system-wide setting. When it is set to 0, the feature is
disabled. It applies to Metro Mirror and Global Mirror relationships.
Use the chsystem command to adjust these values, as shown in the following example:
chsystem -gmlinktolerance 300
You can view all of these parameter values by using the lssystem <system_name> command.
gmlinktolerance
We focus on the gmlinktolerance parameter in particular. If poor response extends past the
specified tolerance, a 1920 event is logged and one or more GM relationships automatically
stop to protect the application hosts at the primary site. During normal operations, application
hosts experience a minimal effect from the response times because the GM feature uses
asynchronous replication.
However, if GM operations experience degraded response times from the secondary system
for an extended period, I/O operations begin to queue at the primary system. This queue
results in an extended response time to application hosts. In this situation, the
gmlinktolerance feature stops GM relationships and the application host’s response time
returns to normal. After a 1920 event occurs, the GM auxiliary volumes are no longer in the
consistent_synchronized state until you fix the cause of the event and restart your GM
relationships. For this reason, ensure that you monitor the system to track when these 1920
events occur.
You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0
(zero). However, the gmlinktolerance feature cannot protect applications from extended
response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature
under the following circumstances:
During SAN maintenance windows in which degraded performance is expected from SAN
components and application hosts can withstand extended response times from GM
volumes.
During periods when application hosts can tolerate extended response times and it is
expected that the gmlinktolerance feature might stop the GM relationships. For example,
if you test by using an I/O generator that is configured to stress the back-end storage, the
gmlinktolerance feature might detect the high latency and stop the GM relationships.
Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test
host to extended response times.
A 1920 event indicates that one or more of the SAN components cannot provide the
performance that is required by the application hosts. This situation can be temporary (for
example, a result of a maintenance activity) or permanent (for example, a result of a hardware
failure or an unexpected host I/O workload).
If 1920 events are occurring, it might be necessary to use a performance monitoring and
analysis tool, such as IBM Virtual Storage Center, to help identify and resolve the problem.
To establish a fully functional MM/GM partnership, you must issue this command on both
systems. This step is a prerequisite for creating MM/GM relationships between volumes on
the IBM SVC systems.
When the partnership is created, you can specify the bandwidth to be used by the
background copy process between the local and remote IBM SVC system. If it is not
specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is
less than or equal to the bandwidth that can be sustained by the intercluster link.
To set the background copy bandwidth optimally, ensure that you consider all three resources:
primary storage, intercluster link bandwidth, and auxiliary storage. Provision the most
restrictive of these three resources between the background copy bandwidth and the peak
foreground I/O workload.
chpartnership command
To change the bandwidth that is available for background copy in an IBM SVC system
partnership, use the chpartnership -backgroundcopyrate percentage_of_link_bandwidth
command to specify the percentage of whole link capacity to be used by background copy
process.
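For example, to limit background copy to 30 percent of the link capacity for a hypothetical partner system named SVC_B:
chpartnership -backgroundcopyrate 30 SVC_B
The percentage is applied to the link bandwidth that was declared when the partnership was created.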
The MM/GM consistency group name must be unique across all consistency groups that are
known to the systems owning this consistency group. If the consistency group involves two
systems, the systems must be in communication throughout the creation process.
The new consistency group does not contain any relationships and is in the empty state. You
can add MM/GM relationships to the group (upon creation or afterward) by using the
chrcrelationship command.
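For example, assuming a partner system named SVC_B, a hypothetical group name CG_DB1, and an existing relationship named MM_rel1, a Consistency Group can be created and the relationship added to it as follows:
mkrcconsistgrp -cluster SVC_B -name CG_DB1
chrcrelationship -consistgrp CG_DB1 MM_rel1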
Optional parameter: If you do not use the -global optional parameter, a Metro Mirror
relationship is created instead of a Global Mirror relationship.
The auxiliary volume must be equal in size to the master volume or the command fails. If both
volumes are in the same system, they must be in the same I/O Group. The master and
auxiliary volume cannot be in an existing relationship and they cannot be the targets of a
FlashCopy mapping. This command returns the new relationship (relationship_id) when
successful.
When the MM/GM relationship is created, you can add it to a Consistency Group that exists
or it can be a stand-alone MM/GM relationship if no Consistency Group is specified.
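A hedged example of creating a Global Mirror relationship between assumed volumes MASTER_VOL1 and AUX_VOL1, with SVC_B as the auxiliary system, and placing it into the Consistency Group from the previous example, follows:
mkrcrelationship -master MASTER_VOL1 -aux AUX_VOL1 -cluster SVC_B -global -consistgrp CG_DB1 -name GM_rel1
Omitting -global creates a Metro Mirror relationship, and omitting -consistgrp creates a stand-alone relationship.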
When the lsrcrelationshipcandidate command is issued, you can specify the master volume name and auxiliary system to list the candidates that comply with the prerequisites to create an MM/GM relationship. If the command is issued with no parameters, all of the volumes that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
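The following sketch shows the general form of these commands; the volume names, remote system name, and relationship name are placeholders, and the -global parameter makes the relationship Global Mirror rather than Metro Mirror:
lsrcrelationshipcandidate -master Master_Vol -aux ITSO_SVC_remote
mkrcrelationship -master Master_Vol -aux Aux_Vol -cluster ITSO_SVC_remote -global -name GM_Rel1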
When the startrcrelationship command is issued, you can set the copy direction if it is undefined, and, optionally, you can mark the auxiliary volume of the relationship as clean. The command fails if it is used as an attempt to start a relationship that is already a part of a consistency group.
You can issue this command only to a relationship that is connected. For a relationship that is
idling, this command assigns a copy direction (master and auxiliary roles) and begins the
copy process. Otherwise, this command restarts a previous copy process that was stopped
by a stop command or by an I/O error.
If the resumption of the copy process leads to a period when the relationship is inconsistent,
you must specify the -force parameter when the relationship is restarted. This situation can
arise if, for example, the relationship was stopped and then further writes were performed on
the original master of the relationship. The use of the -force parameter here is a reminder
that the data on the auxiliary becomes inconsistent while resynchronization (background
copying) takes place and, therefore, is unusable for DR purposes before the background copy
completes.
In the Idling state, you must specify the master volume to indicate the copy direction. In other
connected states, you can provide the -primary argument, but it must match the existing
setting.
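For example, an idling relationship might be restarted with the copy direction set explicitly and the -force parameter (the relationship name is a placeholder):
startrcrelationship -primary master -force GM_Rel1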
If the relationship is in an inconsistent state, any copy operation stops and does not resume
until you issue a startrcrelationship command. Write activity is no longer copied from the
master to the auxiliary volume. For a relationship in the ConsistentSynchronized state, this
command causes a Consistency Freeze.
For a consistency group that is idling, this command assigns a copy direction (master and
auxiliary roles) and begins the copy process. Otherwise, this command restarts a previous
copy process that was stopped by a stop command or by an I/O error.
If the consistency group is in an inconsistent state, any copy operation stops and does not
resume until you issue the startrcconsistgrp command. Write activity is no longer copied
from the master to the auxiliary volumes that belong to the relationships in the group. For a
consistency group in the ConsistentSynchronized state, this command causes a Consistency
Freeze.
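As a hedged sketch (the consistency group name is a placeholder), a consistency group can be stopped with read/write access enabled on the auxiliary volumes and later restarted:
stoprcconsistgrp -access CG_ITSO
startrcconsistgrp -primary master -force CG_ITSO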
If the relationship is disconnected at the time that the command is issued, the relationship is
deleted only on the system on which the command is being run. When the systems
reconnect, the relationship is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the relationship on
both systems, you can issue the rmrcrelationship command independently on both of the
systems.
A relationship cannot be deleted if it is part of a consistency group. You must first remove the
relationship from the consistency group.
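For example (the relationship name is a placeholder), a relationship can first be removed from its consistency group and then deleted:
chrelationship -noconsistgrp GM_Rel1
rmrcrelationship GM_Rel1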
If you delete an inconsistent relationship, the auxiliary volume becomes accessible even
though it is still inconsistent. This situation is the one case in which MM/GM does not inhibit
access to inconsistent data.
If the consistency group is disconnected at the time that the command is issued, the
consistency group is deleted only on the system on which the command is being run. When
the systems reconnect, the consistency group is automatically deleted on the other system.
Alternatively, if the systems are disconnected and you still want to remove the consistency
group on both systems, you can issue the rmrcconsistgrp command separately on both of
the systems.
If the consistency group is not empty, the relationships within it are removed from the
consistency group before the group is deleted. These relationships then become stand-alone
relationships. The state of these relationships is not changed by the action of removing them
from the consistency group.
Important: Remember, by reversing the roles, your current source volumes become
targets and target volumes become source volumes. Therefore, you will lose write access
to your current primary volumes.
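A role reversal of this kind is typically performed with the switchrcrelationship or switchrcconsistgrp command; in the following sketch, the names are placeholders and the auxiliary copy becomes the primary:
switchrcrelationship -primary aux GM_Rel1
switchrcconsistgrp -primary aux CG_ITSO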
In practice, the most often overlooked cause is latency. Global Mirror has a round-trip-time tolerance limit of 80 or 250 milliseconds, depending on the firmware version and the hardware model. See Figure 9-34 on page 538. A message that is sent from your source IBM SVC system to your target IBM SVC system and the accompanying acknowledgment must complete within 80 or 250 milliseconds round trip. In other words, the latency can be up to 40 or 125 milliseconds each way.
The primary component of your round-trip time is the physical distance between sites. For
every 1000 kilometers (621.4 miles), you observe a 5-millisecond delay each way. This delay
does not include the time that is added by equipment in the path. Every device adds a varying
amount of time depending on the device, but a good rule is 25 microseconds for pure
hardware devices. For software-based functions (such as compression that is implemented in
applications), the added delay tends to be much higher (usually in the millisecond plus
range.) Next, we describe an example of a physical delay.
Company A has a production site that is 1900 kilometers (1180.6 miles) away from its
recovery site. The network service provider uses a total of five devices to connect the two
sites. In addition to those devices, Company A employs a SAN FC router at each site to
provide Fibre Channel over IP (FCIP) to encapsulate the FC traffic between sites.
Now, there are seven devices and 1900 kilometers (1180.6 miles) of distance delay. In total, the devices add approximately 200 microseconds of delay each way. The distance adds 9.5 milliseconds each way, for a total of 19 milliseconds. Combined with the device latency, the minimum physical latency is 19.4 milliseconds, which is well under the 80-millisecond limit of Global Mirror. However, remember that this number is the best-case number.
The link quality and bandwidth play a large role. Your network provider likely ensures a
latency maximum on your network link; therefore, be sure to stay as far beneath the Global
Mirror round-trip-time (RTT) limit as possible. You can easily double or triple the expected
physical latency with a lower quality or lower bandwidth network link. Then, you are within the
range of exceeding the limit if high I/O occurs that exceeds the existing bandwidth capacity.
When you get a 1920 event, always check the latency first. The FCIP routing layer can
introduce latency if it is not correctly configured. If your network provider reports a much lower
latency, you might have a problem at your FCIP routing layer. Most FCIP routing devices have
built-in tools that allow you to check the RTT. When you are checking latency, remember that TCP/IP routing devices (including FCIP routers) report the round-trip time (RTT) by using standard 64-byte ping packets.
In Figure 9-41 on page 563, you can see why the effective transit time must be measured only
by using packets that are large enough to hold an FC frame, or 2148 bytes (2112 bytes of
payload and 36 bytes of header). Allow some overhead to be safe because various switch
vendors have optional features that might increase this size. After you verify your latency by
using the proper packet size, proceed with normal hardware troubleshooting.
Before we proceed, we look at the second largest component of your RTT, which is
serialization delay. Serialization delay is the amount of time that is required to move a packet
of data of a specific size across a network link of a certain bandwidth. The required time to
move a specific amount of data decreases as the data transmission rate increases.
Figure 9-41 on page 563 shows the orders of magnitude of difference between the link
bandwidths. It is easy to see how 1920 errors can arise when your bandwidth is insufficient.
Important: Never use a TCP/IP ping to measure RTT for FCIP traffic.
Figure 9-41 Effect of packet size (in bytes) versus the link size
In Figure 9-41, the amount of time in microseconds that is required to transmit a packet
across network links of varying bandwidth capacity is compared. The following packet sizes
are used:
64 bytes: The size of the common ping packet
1500 bytes: The size of the standard TCP/IP packet
2148 bytes: The size of an FC frame
Finally, your path maximum transmission unit (MTU) affects the delay that is incurred to get a
packet from one location to another location. An MTU might cause fragmentation or be too
large and cause too many retransmits when a packet is lost.
The source of a 1720 error is most often a fabric problem or a problem in the network path
between your partners. When you receive this error, check your fabric configuration for zoning
of more than one host bus adapter (HBA) port for each node per I/O Group if your fabric has
more than 64 HBA ports zoned. One port for each node per I/O Group per fabric that is
associated with the host is the recommended zoning configuration for fabrics. For those
fabrics with 64 or more host ports, this recommendation becomes a rule. Therefore, you will
see four paths to each volume that is discovered on the host because each host needs to
have at least two FC ports from separate HBA cards, each in a separate fabric. On each
fabric, each host FC port is zoned to two of the IBM SVC ports, and each IBM SVC port
comes from one IBM SVC node. This gives four paths per host volume. More than four paths
per volume are supported but not recommended.
Improper zoning can lead to SAN congestion, which can inhibit remote link communication
intermittently. Checking the zero buffer credit timer via IBM Virtual Storage Center and
comparing against your sample interval reveals potential SAN congestion. If a zero buffer
credit timer is above 2% of the total time of the sample interval, it might cause problems.
Next, always ask your network provider to check the status of the link. If the link is acceptable,
watch for repeats of this error. It is possible in a normal and functional network setup to have
occasional 1720 errors, but multiple occurrences could indicate a larger problem.
If you receive multiple 1720 errors, recheck your network connection and then check the
Storwize partnership information to verify its status and settings. Then, proceed to perform
diagnostics for every piece of equipment in the path between the two Storwize systems. It
often helps to have a diagram that shows the path of your replication from both logical and
physical configuration viewpoints.
If your investigations fail to resolve your remote copy problems, contact your IBM Support
representative for more complete analysis.
Chapter 10. CLI operations
Command prefix changes: The svctask and svcinfo command prefixes are no longer
needed when you are issuing a command. If you have existing scripts that use those
prefixes, they continue to function. You do not need to change your scripts.
When the command syntax is shown, you see certain parameters in square brackets, for
example [parameter]. These brackets indicate that the parameter is optional in most (if not
all) instances. Any information that is not in square brackets is required information. You can
view the syntax of a command by entering one of the following commands:
svcinfo -? shows a complete list of informational commands.
svctask -? shows a complete list of task commands.
svcinfo commandname -? shows the syntax of informational commands.
svctask commandname -? shows the syntax of task commands.
svcinfo commandname -filtervalue? shows the filters that you can use to reduce the
output of the informational commands.
Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask
commandname -h command.
If you review the syntax of a command by entering svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.
Tip: You can use the up and down arrow keys on your keyboard to recall commands that
were recently issued. Then, you can use the left and right, Backspace, and Delete keys to
edit commands before you resubmit them.
Using shortcuts
You can use the shortcuts command to display a list of display or execution commands. This
command produces an alphabetical list of actions that are supported. The command parameter
must be svcinfo for display commands or svctask for execution commands. The model
parameter allows for different shortcuts on different platforms, 2145 or 2076, as shown in the
following example:
cpdumps
deactivatefeature
detectmdisk
dumpallmdiskbadblocks
dumpauditlog
dumperrlog
dumpmdiskbadblocks
enablecli
expandvdisksize
finderr
includemdisk
migrateexts
migratetoimage
migratevdisk
mkarray
mkdistributedarray
mkemailserver
mkemailuser
mkfcconsistgrp
mkfcmap
mkfcpartnership
mkhost
mkimagevolume
mkippartnership
mkldapserver
mkmdiskgrp
mkmetadatavdisk
mkpartnership
mkquorumapp
mkrcconsistgrp
mkrcrelationship
mksnmpserver
mksyslogserver
mkuser
mkusergrp
mkvdisk
mkvdiskhostmap
mkvolume
movevdisk
ping
preplivedump
prestartfcconsistgrp
prestartfcmap
recoverarray
recoverarraybysystem
recovervdisk
recovervdiskbyiogrp
recovervdiskbysystem
repairsevdiskcopy
repairvdiskcopy
resetleds
rmarray
rmemailserver
rmemailuser
rmfcconsistgrp
rmfcmap
rmhost
rmhostiogrp
rmhostport
rmldapserver
rmmdisk
rmmdiskgrp
rmmetadatavdisk
rmnode
rmpartnership
rmportip
rmrcconsistgrp
rmrcrelationship
rmsnmpserver
rmsyslogserver
rmuser
rmusergrp
rmvdisk
rmvdiskaccess
rmvdiskcopy
rmvdiskhostmap
rmvolume
rmvolumecopy
sendinventoryemail
setdisktrace
setlocale
setpwdreset
setsystemtime
settimezone
settrace
shrinkvdisksize
splitvdiskcopy
startemail
startfcconsistgrp
startfcmap
startrcconsistgrp
startrcrelationship
startstats
starttrace
stopemail
stopfcconsistgrp
stopfcmap
stoprcconsistgrp
stoprcrelationship
stopsystem
stoptrace
switchrcconsistgrp
switchrcrelationship
testemail
triggerdrivedump
triggerenclosuredump
triggerlivedump
writesernum
As shown in Example 10-2, we ran an lsiogrp command. By pressing Ctrl+R and entering s, the command that we needed was recalled from history.
Filtering
To reduce the output that is displayed by a command, you can specify a number of filters,
depending on the command that you are running. To see which filters are available, enter the
command followed by the -filtervalue? flag, as shown in Example 10-3.
vdisk_UID
fc_map_count
copy_count
fast_write_state
se_copy_count
filesystem
preferred_node_id
mirror_write_priority
RC_change
compressed_copy_count
access_IO_group_count
block_size
owner_type
owner_id
owner_name
parent_mdisk_grp_id
parent_mdisk_grp_name
formatting
volume_id
volume_name
volume_function
When you know the filters, you can be more selective in generating output. Consider the
following points:
Multiple filters can be combined to create specific searches.
You can use an asterisk (*) as a wildcard when names are used.
When capacity is used, the units must also be specified by using -u b | kb | mb | gb | tb |
pb.
For example, if we run the lsvdisk command with no filters but with the -delim parameter, we
see the output that is shown in Example 10-4 on page 571.
Tip: The -delim parameter truncates the content in the window and separates data fields
with colons as opposed to wrapping text over multiple lines. This parameter is often used if
you must get reports during script execution.
If we now add a filter (mdisk_grp_name) to our lsvdisk command, we can reduce the output,
as shown in Example 10-5.
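A hedged sketch of such a filtered query follows; the pool name is a placeholder:
lsvdisk -filtervalue mdisk_grp_name=Site1_Pool -delim :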
Note: After one hour, a fixed SSH interactive session times out, which means the SSH
session is automatically closed. This session timeout limit is not configurable.
You can use the following UNIX commands to manage interactive SSH sessions:
less   Moves through the output bidirectionally, a page at a time (secure mode).
tr     Translates characters.
To determine whether current volumes on your system can be compressed for additional capacity savings, the system supports CLI commands that analyze the volumes for potential compression savings. The analyzevdisk command can be run on a single volume, and all the volumes that are on the system can be analyzed by using the analyzevdiskbysystem command. Any volumes that are created after the compression analysis completes can be evaluated individually for compression savings. Ensure that the volumes to be analyzed contain as much active data as possible, rather than being mostly empty of data. Analyzing active data increases accuracy and reduces the risk of analyzing old data that is already deleted but can still have traces on the device. These commands provide the functionality of the Comprestimator utility, which is a tool that can be downloaded to hosts to evaluate compression savings.
After the analysis completes, you can display the results using the lsvdiskanalysis
command. You can display results for all the volumes or single volumes by specifying a
volume name or identifier for individual analysis.
To analyze a single volume for compression savings, complete these steps:
– In the command-line interface, enter the following command:
analyzevdisk -vdisk_name | -vdisk_id
where -vdisk_name or -vdisk_id is either the name or the identifier of the volume that you want to analyze for compression savings.
– Analysis results can be displayed after the process completes by issuing the following
command:
lsvdiskanalysis -vdisk_name | -vdisk_id
where -vdisk_name or -vdisk_id is either the name or identifier for the volume that you
want to analyze for compression savings.
To analyze all the volumes that are currently on the system, complete these steps:
– In the command-line interface, enter the following command:
analyzevdiskbysystem
This command analyzes all the current volumes that are created on the system.
Volumes that are created during or after the analysis are not included and can be analyzed individually with the analyzevdisk command. The time that is needed to analyze all the volumes on the system depends on the number of volumes and can be estimated at about one minute per volume. For example, if a system has 50 volumes, the compression savings analysis takes approximately 50 minutes.
– To check the progress of the analysis, enter the following command:
lsvdiskanalysisprogress
This command displays the total number of volumes on the system, the total number of
volumes that are remaining to be analyzed, and the estimated time of completion.
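As a sketch of the preceding steps, the system-wide analysis and its progress and results might be checked as follows (the exact output columns depend on your code level):
analyzevdiskbysystem
lsvdiskanalysisprogress
lsvdiskanalysis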
chbanner:
You can create or change a message that is displayed when users log on to the system. The message is displayed before users log on with the management GUI, the command-line interface, or the service assistant.
To create or change the login message, you can use the chbanner command or the
management GUI. If you are using the command, you must create the message in a
supported text editor and use Secure Copy (SCP) to copy the file to the configuration node on
the system.
To change the login message from a SAN administrator workstation, complete the following
steps.
1. Use a suitable text editor to create a file that contains the text of the message.
Note: The message cannot exceed 4 Kbytes.
2. Use a Secure Copy client to copy the file to the configuration node of the system to be
configured (e.g. /tmp/loginmessage). Specify the management IP address of the system
to be configured.
3. Log on to the system to be configured.
4. In the command-line interface, type the following command to set the login message:
chbanner -file filepath
where filepath specifies the file that contains the text of the new message (e.g.
/tmp/loginmessage).
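A hedged sketch of these steps from an administrator workstation follows; the IP address and file path are placeholders:
scp loginmessage.txt superuser@9.10.11.12:/tmp/loginmessage
ssh superuser@9.10.11.12
chbanner -file /tmp/loginmessage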
chsystemcert:
Use the chsystemcert command to manage the Secure Sockets Layer (SSL) certificate that is
installed on a clustered system. You can also generate a new self-signed SSL certificate. This
command can also be used to create a certificate request to be copied from the system and
signed by a certificate authority (CA). The signed certificate that is returned by the CA can be
installed. You can also use this command to export the current SSL certificate (for example to
allow the certificate to be imported into a key server).
During system setup, an initial certificate is created to use for secure connections between
web browsers. Based on the security requirements for your system, you can create either a
new self-signed certificate or install a signed certificate that is created by a third-party
certificate authority. Self-signed certificates are generated automatically by the system and
encrypt communications between the browser and the system. Self-signed certificates can
generate web browser security warnings and might not comply with organizational security
guidelines.
mkdistributedarray:
Use the mkdistributedarray command to create a distributed array and add it to a storage
pool.
Distributed array configurations can contain 4 - 128 drives. Distributed arrays
remove the need for separate drives that are idle until a failure occurs. Instead of allocating
one or more drives as spares, the spare capacity is distributed over specific rebuild areas
across all the member drives. Data can be copied faster to the rebuild area and redundancy is
restored much more rapidly. Additionally, as the rebuild progresses, the performance of the
pool is more uniform because all of the available drives are used for every volume extent.
After the failed drive is replaced, data is copied back to the drive from the distributed spare
capacity. Unlike "hot spare" drives, read/write requests are processed on other parts of the
drive that are not being used as rebuild areas. The number of rebuild areas is based on the
width of the array. The size of the rebuild area determines how many times the distributed
array can recover failed drives without risking becoming degraded. For example, a distributed
array that uses RAID 6 can handle two concurrent drive failures. After the failed drives have
been rebuilt, the array can tolerate another two drive failures. If all of the rebuild areas are
used to recover data, the array becomes degraded on the next drive failure.
Distributed RAID 5
Distributed RAID 5 arrays stripe data over the member drives with one parity strip on every
stripe. These distributed arrays can support 4 - 128 drives. RAID 5 distributed arrays can
tolerate the failure of one member drive.
Distributed RAID 6
RAID 6 arrays stripe data over the member drives with two parity strips on every stripe.
These distributed arrays can support 6 - 128 drives. A RAID 6 distributed array can
tolerate any two concurrent member drive failures.
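A hedged sketch of creating a distributed RAID 6 array follows; the drive class ID, drive count, and pool name are placeholders, and the parameters should be verified against the command reference for your code level:
mkdistributedarray -level raid6 -driveclass 0 -drivecount 12 Pool0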
mkimagevolume:
Use the mkimagevolume command to create a new image mode volume. This command is for
high availability configurations including HyperSwap or stretched systems.
mkmetadatavdisk / rmmetadatavdisk:
To enable Virtual Volumes by using the command-line interface (CLI), a utility volume is
required to store critical metadata for Virtual Volumes. To create the utility volume on the
system, you must have either the administrator or the security administrator user role. If
possible, have a mirrored copy of the utility volume stored in a second storage pool in a
separate failure domain. Use a storage pool that is made from MDisks that are presented
from a different storage controller or a different I/O group. For a single storage pool, enter
the following command:
svctask mkmetadatavdisk -mdiskgrp mdiskgrpid
For multiple storage pools, enter the following command:
svctask mkmetadatavdisk -mdiskgrp mdiskgrpid_1:mdiskgrpid_2
To complete the Virtual Volumes implementation, see the documentation in the IBM
Spectrum Control Base Edition User Guide about Creating a VVOL-enabled service on
Spectrum Virtualize/Storwize storage systems.
Use the rmmetadatavdisk command to remove the metadata volume from a storage pool.
When -ignorevvolsexist is specified, only the metadata volume is deleted.
rmmetadatavdisk -ignorevvolsexist
mkquorumapp:
An IP-based quorum application can be used as the quorum device for the third site when no Fibre Channel connectivity to that site is available. Java applications run on hosts at the third site. However,
there are strict requirements on the IP network and slight disadvantages with using IP
quorum applications. Unlike quorum disks, all IP quorum applications must be reconfigured
and redeployed to hosts when certain aspects of the system configuration change. These
aspects include adding or removing a node from the system or when node service IP
addresses are changed.
For stable quorum resolutions, an IP network must provide the following requirements:
Connectivity from the hosts to the service IP addresses of all nodes. The network must
also deal with possible security implications of exposing the service IP addresses, as this
connectivity can also be used to access the service GUI if IP quorum is configured
incorrectly.
Port 1260 is used by IP quorum applications to communicate from the hosts to all nodes.
The maximum round-trip delay must not exceed 80 milliseconds (ms), which means 40 ms
each direction.
A minimum bandwidth of 2 megabytes per second is guaranteed for node-to-quorum
traffic.
Even with IP quorum applications at the third site, quorum disks at site one and site two are
required, as they are used to store metadata. To provide quorum resolution, use the
mkquorumapp command to generate a Java application that is copied from the system and run
on a host at a third site. The maximum number of applications that can be deployed is five.
Currently, supported Java Runtime Environments are IBM Java 7.1 and IBM Java 8.
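As a hedged sketch of the deployment flow (the IP address is a placeholder, and the generated application is typically named ip_quorum.jar and is retrieved from the dumps directory of the configuration node), the application is generated, copied to the third-site host, and started with Java:
mkquorumapp
scp superuser@9.10.11.12:/dumps/ip_quorum.jar .
java -jar ip_quorum.jar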
mkvolume / rmvolume:
Use the mkvolume command to create an empty volume from existing storage pools. This
command is for high availability configurations including HyperSwap or stretched systems.
A HyperSwap volume has one copy on each site. Before you create HyperSwap volumes,
you must configure the HyperSwap topology.
After you configure the HyperSwap topology, complete one of the following steps.
– If you are using the management GUI, use the Create Volumes wizard to create
HyperSwap volumes.
– If you are using the command-line interface, the mkvolume command creates a
volume. A HyperSwap volume is created by specifying two storage pools in
independent sites. For example:
mkvolume -size 100 -pool site1pool:site2pool
Use the rmvolume command to remove a volume and all copies and mirrors. For a
HyperSwap volume, this includes deleting the active-active relationship and the change
volumes.
rmvolumecopy:
Use the rmvolumecopy command to remove a single volume copy from a volume; the volume itself remains.
HyperSwap volumes that are part of a consistency group must be removed from that
consistency group before you can remove the last volume copy from that site.
V7.6 includes command changes and the addition of attributes and variables for several
existing commands. For more information, see the command reference or help, which is
available at this website:
https://ibm.biz/BdHnKF
To display more detailed information about a specific controller, run the command again and append the controller name or ID, for example, controller ID 7, as shown in Example 10-7.
lscontroller 7
id 7
controller_name controller7
WWNN 20000004CF2412AC
mdisk_link_count 1
max_mdisk_link_count 1
degraded no
vendor_id SEAGATE
product_id_low ST373405
product_id_high FC
product_revision 0003
ctrl_s/n 3EK0J5Y8
allow_quorum no
site_id 2
site_name DR
WWPN 22000004CF2412AC
path_count 1
max_path_count 1
WWPN 21000004CF2412AC
path_count 0
max_path_count 0
fabric_type sas_direct
Choosing a new name: The chcontroller command specifies the new name first. You
can use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new
name can be 1 - 63 characters. However, the new name cannot start with a number, dash,
or the word “controller” because this prefix is reserved for SVC assignment only.
The lsdiscoverystatus command displays the state of all discoveries in the clustered system. During discovery,
the system updates the drive and MDisk records. You must wait until the discovery finishes
and is inactive before you attempt to use the system. This command displays one of the
following results:
Active: A discovery operation is in progress at the time that the command is issued.
Inactive: No discovery operations are in progress at the time that the command is issued.
If new storage was attached and the clustered system did not detect the new storage, you
might need to run this command before the system can detect the new MDisks.
Use the detectmdisk command to scan for newly added MDisks, as shown in
Example 10-10.
To check whether any newly added MDisks were successfully detected, run the lsmdisk
command and look for new unmanaged MDisks.
If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk
subsystem and that the zones are set up correctly.
Discovery process: If you assigned many logical unit numbers (LUNs) to your SVC, the
discovery process can take time. Check several times by using the lsmdisk command to
see whether all the expected MDisks are present.
When all the disks that are allocated to the SVC are seen from the SVC system, the following
procedure is a useful way to verify the MDisks that are unmanaged and ready to be added to
the storage pool.
Alternatively, you can list all MDisks (managed or unmanaged) by running the lsmdisk
command, as shown in Example 10-12.
From this output, you can see more information, such as the status, about each MDisk.
For our current task, we are interested only in the unmanaged disks because they are
candidates for a storage pool.
Tip: The -delim parameter collapses output instead of wrapping text over multiple lines.
2. If not all of the MDisks that you expected are visible, rescan the available FC network by
entering the detectmdisk command, as shown in Example 10-13.
3. If you run the lsmdiskcandidate command again and your MDisk or MDisks are still not
visible, check that the LUNs from your subsystem were correctly assigned to the SVC and
that the appropriate zoning is in place (for example, the SVC can see the disk subsystem).
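A hedged sketch of this discovery flow follows:
detectmdisk
lsmdiskcandidate
lsmdisk -filtervalue mode=unmanaged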
To display a detailed summary for an individual MDisk, run the lsmdisk command with the name or ID of the MDisk from which you want the information, as shown in Example 10-15.
max_path_count 2
ctrl_LUN_# 0000000000000006
UID 6005076802880102c00000000000001a00000000000000000000000000000000
preferred_WWPN 5005076802102B6D
active_WWPN many
fast_write_state empty
raid_status
raid_level
redundancy
strip_size
spare_goal
spare_protection_min
balanced
tier nearline
slow_write_priority
fabric_type fc
site_id 2
site_name site2
easy_tier_load high
encrypt no
distributed no
drive_class_id
drive_count 0
stripe_width 0
rebuild_areas_total
rebuild_areas_available
rebuild_areas_goal
The chmdisk command: The chmdisk command specifies the new name first. You can
use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new name
can be 1 - 63 characters. However, the new name cannot start with a number, dash, or the
word “MDisk” because this prefix is reserved for SVC assignment only.
By running the lsmdisk command, you can see that mdisk4 is excluded, as shown in
Example 10-17.
After the necessary corrective action is taken to repair the MDisk (replace the failed disk,
repair the SAN zones, and so on), we must include the MDisk again. We issue the
includemdisk command (Example 10-18) because the SVC system does not include the
MDisk automatically.
Running the lsmdisk command again shows that mdisk4 is online again, as shown in
Example 10-19.
You can add only unmanaged MDisks to a storage pool. The addmdisk command adds the MDisk that is named mdisk6 to the storage pool that is named STGPool_Multi_Tier; a sketch of the command follows the note below.
Important: Do not add this MDisk to a storage pool if you want to create an image mode
volume from the MDisk that you are adding. When you add an MDisk to a storage pool, it
becomes managed and extent mapping is not necessarily one-to-one anymore.
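A hedged sketch of the addmdisk command that is described in the preceding paragraph:
addmdisk -mdisk mdisk6 STGPool_Multi_Tier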
Example 10-21 lsmdisk -filtervalue: MDisks in the managed disk group (MDG)
IBM_2145:ITSO_SVC_DH8:superuser>lsmdisk -filtervalue mdisk_grp_name=Site2_Pool
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
tier encrypt site_id site_name
0 mdisk0 online managed 0 Site2_Pool 500.0GB 0000000000000000 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001400000000000000000000000000000000 nearline no 2 site2
1 mdisk1 online managed 0 Site2_Pool 500.0GB 0000000000000001 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001500000000000000000000000000000000 nearline no 2 site2
2 mdisk2 online managed 0 Site2_Pool 500.0GB 0000000000000002 Site2_V7K_EXTSG03_A
6005076802880102c00000000000001600000000000000000000000000000000 nearline no 2 site2
By using a wildcard with this command, you can see all of the MDisks that are present in the
storage pools that are named Site2_Pool* (the asterisk (*) indicates a wildcard).
This section describes the operations that use MDisks and the storage pool. It also explains
the tasks that we can perform at the storage pool level.
Create a storage pool by using the mkmdiskgrp command, as shown in Example 10-22.
This command creates a storage pool that is called CompressedV7000. The extent size that is
used within this group is 256 MiB. We did not add any MDisks to the storage pool, so it is an
empty storage pool.
You can add unmanaged MDisks and create the storage pool in the same command. Use the
mkmdiskgrp command with the -mdisk parameter and enter the IDs or names of the MDisks to
add the MDisks immediately after the storage pool is created.
Before the creation of the storage pool, enter the lsmdisk command, as shown in
Example 10-23. This command lists all of the available MDisks that are seen by the SVC
system.
By using the same command (mkmdiskgrp) and knowing the MDisk IDs that we are using, we
can add multiple MDisks to the storage pool at the same time. We now add the unmanaged
MDisks to the storage pool that we created, as shown in Example 10-24 on page 585.
This command creates a storage pool that is called ITSO_Pool1. The extent size that is used
within this group is 256 MiB, and two MDisks (7 and 8) are added to the storage pool.
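A hedged reconstruction of the command that is described (corresponding to Example 10-24) follows:
mkmdiskgrp -name ITSO_Pool1 -ext 256 -mdisk 7:8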
Storage pool name: The -name and -mdisk parameters are optional. If you do not enter a
-name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the
SVC internally. If you do not enter the -mdisk parameter, an empty storage pool is created.
If you want to provide a name, you can use letters A - Z, a - z, numbers 0 - 9, and the
underscore (_). The name can be 1 - 63 characters, but it cannot start with a number or the
word “MDiskgrp” because this prefix is reserved for SVC assignment only.
By running the lsmdisk command, you now see the MDisks as managed and as part of the
ITSO_Pool1, as shown in Example 10-25.
In SVC 7.6, you can also create a child pool, which is a storage pool that is inside a parent
pool (Example 10-26 on page 586).
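A hedged sketch of creating a child pool inside ITSO_Pool1 follows; the child pool name and size are placeholders, and the parameters should be verified against the command reference:
mkmdiskgrp -parentmdiskgrp ITSO_Pool1 -size 200 -unit gb -name ITSO_Child_Pool1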
5:ITSO_Pool1:online:2:0:2.00GB:256:2.00GB:0.00MB:0.00MB:0.00MB:0:0:auto:balanced:no:0.00MB:
0.00MB:0.00MB:5:ITSO_Pool1:0:0.00MB:parent:no:none::
Changing the storage pool: The chmdiskgrp command specifies the new name first. You
can use letters A - Z, a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new
name can be 1 - 63 characters. However, the new name cannot start with a number, dash,
or the word “mdiskgrp” because this prefix is reserved for SVC assignment only.
This command removes storage pool STGPool_DS3500-2_new from the SVC system
configuration.
Removing a storage pool from the SVC system configuration: If there are MDisks
within the storage pool, you must use the -force flag to remove the storage pool from the
SVC system configuration, as shown in the following example:
rmmdiskgrp STGPool_DS3500-2_new -force
Confirm that you want to use this flag because it destroys all mapping information and data
that is held on the volumes. The mapping information and data cannot be recovered.
The rmmdisk command removes the MDisk with ID 8 from the storage pool with ID 2. The -force flag is set because volumes are using this storage pool; a sketch of the command follows the note below.
Sufficient space: The removal takes place only if there is sufficient space to migrate the volume data to other extents on MDisks that remain in the same storage pool. If sufficient space is not available, the command fails and an error message (CMMVC5860E The action failed because there were not enough extents in the managed disk group.) is displayed. After you remove the MDisk from the storage pool, changing the mode from managed to unmanaged takes time, depending on the size of the MDisk that you are removing.
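A hedged sketch of the rmmdisk command that is described in the preceding paragraphs:
rmmdisk -mdisk 8 -force 2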
Host is powered on, connected, and zoned to the SAN Volume Controller
When you create your host on the SVC, it is a preferred practice to check whether the host
bus adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By
checking, you ensure that zoning is done and that the correct WWPN is used. Run the
lshbaportcandidate command, as shown in Example 10-31.
After you know the WWPNs that are displayed, match your host (use host or SAN switch
utilities to verify) and use the mkhost command to create your host.
Name: If you do not provide the -name parameter, the SVC automatically generates the
name hostx (where x is the ID sequence number that is assigned by the SVC internally).
You can use the letters A - Z and a - z, the numbers 0 - 9, the dash (-), and the underscore
(_). The name can be 1 - 63 characters. However, the name cannot start with a number,
dash, or the word “host” because this prefix is reserved for SVC assignment only.
This command creates a host that is called Almaden that uses WWPN
21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA.
Ports: You can define 1 - 8 ports per host, or you can use the addport command for
additional ports later on, which is shown in 10.4.5, “Adding ports to a defined host” on
page 592.
In this case, you can enter the WWPN of your HBA or HBAs and use the -force flag to create
the host, regardless of whether they are connected, as shown in Example 10-33.
This command forces the creation of a host that is called Almaden that uses WWPN
210000E08B89C1CD:210000E08B054CAA.
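A hedged sketch of both variants follows; the host name and WWPNs are the ones used in this narrative:
mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA
mkhost -name Almaden -hbawwpn 210000E08B89C1CD:210000E08B054CAA -force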
The iSCSI functionality allows the host to access volumes through the SVC without being
attached to the SAN. Back-end storage and node-to-node communication still need the FC
network to communicate, but the host does not necessarily need to be connected to the SAN.
When we create a host that uses iSCSI as a communication method, iSCSI initiator software
must be installed on the host to initiate the communication between the SVC and the host.
This installation creates an iSCSI qualified name (IQN) identifier that is needed before we
create our host.
Before we start, we check our server’s IQN address (we are running Windows Server 2008 in
the example shown below). We select Start → Programs → Administrative tools, and we
select iSCSI initiator. The IQN in our example is
iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com, as shown in Figure 10-1.
We create the host by issuing the mkhost command, as shown in Example 10-34. When the
command completes successfully, we display our created host.
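A hedged sketch of the creation command (corresponding to Example 10-34) follows; the IQN is the one shown earlier, and the lshost output for the new host is shown next:
mkhost -name Baldur -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com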
IBM_2145:ITSO_SVC_DH8:superuser>lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111111111111111111111111111111111111111111111111111111111111111
iogrp_count 1
status offline
site_id
site_name
iscsi_name iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline
Important: When the host is initially configured, the default authentication method is set to
no authentication and no Challenge Handshake Authentication Protocol (CHAP) secret is
set. To set a CHAP secret for authenticating the iSCSI host with the SVC system, use the
chhost command with the chapsecret parameter. If you must display a CHAP secret for a
defined server, use the lsiscsiauth command. The lsiscsiauth command lists the
CHAP secret that is configured for authenticating an entity to the SVC system.
We now created our host definition. We map a volume to our new iSCSI server, as shown in
Example 10-35. We created the volume, as described in 10.6.1, “Creating a volume” on
page 595. In our scenario, our volume’s ID is 21 and the host name is Baldur. We map it to
our iSCSI host.
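A hedged sketch of the mapping command (corresponding to Example 10-35) follows; volume ID 21 and host Baldur are from this scenario:
mkvdiskhostmap -host Baldur 21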
Tip: FC hosts and iSCSI hosts are handled in the same way operationally after they are
created.
IBM_2145:ITSO_SVC_DH8:superuser>lshost
id name port_count iogrp_count status site_id site_name
0 Palau 2 4 online
1 Nile 2 1 online
2 Kanaga 2 1 online
3 Siam 2 2 online
4 Angola 1 4 online
Host name: The chhost command specifies the new name first. You can use letters A - Z
and a - z, numbers 0 - 9, the dash (-), and the underscore (_). The new name can be 1 - 63
characters. However, it cannot start with a number, dash, or the word “host” because this
prefix is reserved for SVC assignment only.
Hosts that require the -type parameter: If you use Hewlett-Packard UNIX (HP-UX), you
use the -type option. For more information about the hosts that require the -type
parameter, see IBM System Storage Open Software Family SAN Volume Controller: Host
Attachment Guide, SC26-7563.
The command that is shown in Example 10-37 deletes the host that is called Angola from the
SVC configuration.
Deleting a host: If any volumes are assigned to the host, you must use the -force flag, for
example, rmhost -force Angola.
If your host is connected through SAN with FC and if the WWPN is zoned to the SVC system,
issue the lshbaportcandidate command to compare with the information that you have from
the server administrator, as shown in Example 10-38.
Use host or SAN switch utilities to verify whether the WWPN matches your information. If the
WWPN matches your information, use the addhostport command to add the port to the host,
as shown in Example 10-39.
Adding multiple ports: You can add multiple ports at one time by using the colon (:) separator between WWPNs, as shown in the following example:
addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau
If the new HBA is not connected or zoned, the lshbaportcandidate command does not
display your WWPN. In this case, you can manually enter the WWPN of your HBA or HBAs
and use the -force flag to create the host, as shown in Example 10-40.
This command forces the addition of the WWPN that is named 210000E08B054CAA to the host
called Palau.
If you run the lshost command again, you can see your host with an updated port count of 3,
as shown in Example 10-41.
If your host uses iSCSI as a connection method, you must have the new iSCSI IQN ID before
you add the port. Unlike FC-attached hosts, you cannot check for available candidates with
iSCSI.
After you acquire the other iSCSI IQN, use the addhostport command, as shown in
Example 10-42.
Before you remove the WWPN, ensure that it is the correct WWPN by issuing the lshost
command, as shown in Example 10-43.
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
WWPN 210000E08B89C1CD
node_logged_in_count 2
state offline
When you know the WWPN or iSCSI IQN, use the rmhostport command to delete a host
port, as shown in Example 10-44.
This command removes the WWPN of 210000E08B89C1CD from the Palau host and the iSCSI
IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.
Removing multiple ports: You can remove multiple ports at one time by using the colon (:) separator between the port names, as shown in the following example:
rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola
Example 10-45 shows the lsportip command that lists the iSCSI IP addresses that are
assigned for each port on each node in the system.
Example 10-46 shows how the cfgportip command assigns an IP address to each node
Ethernet port for iSCSI I/O.
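A hedged sketch of assigning an iSCSI IP address to Ethernet port 1 of node 1 follows; the addresses are placeholders:
cfgportip -node 1 -ip 10.10.10.10 -mask 255.255.255.0 -gw 10.10.10.1 1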
When a volume is created, you must enter several parameters at the CLI. Mandatory and
optional parameters are available.
Creating an image mode disk: If you do not specify the -size parameter when you create
an image mode disk, the entire MDisk capacity is used.
When you are ready to create a volume, you must know the following information before you
start to create the volume:
In which storage pool the volume has its extents
From which I/O Group the volume is accessed
Which SVC node is the preferred node for the volume
Size of the volume
Name of the volume
Type of the volume
Whether this volume is managed by Easy Tier to optimize its performance
When you are ready to create your striped volume, use the mkvdisk command. In
Example 10-47, this command creates a 10 GB striped volume with volume ID 20 within the
storage pool STGPool_DS3500-2 and assigns it to the io_grp0 I/O Group. Its preferred node is
node 1.
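A hedged reconstruction of such a command (corresponding to Example 10-47) follows; the volume name is a placeholder:
mkvdisk -mdiskgrp STGPool_DS3500-2 -iogrp io_grp0 -node 1 -size 10 -unit gb -name volume_A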
To verify the results, use the lsvdisk command, as shown in Example 10-48.
copy_id 0
status online
sync yes
auto_delete no
primary yes
mdisk_grp_id 3
mdisk_grp_name Site1_Pool
type striped
mdisk_id
mdisk_name
fast_write_state not_empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status balanced
tier ssd
tier_capacity 0.00MB
tier enterprise
tier_capacity 10.00GB
tier nearline
tier_capacity 0.00MB
compressed_copy no
uncompressed_used_capacity 10.00GB
parent_mdisk_grp_id 3
parent_mdisk_grp_name Site1_Pool
encrypt no
This command creates a space-efficient 10 GB volume. The volume belongs to the storage
pool that is named Site1_Pool and is owned by I/O Group io_grp0. The real capacity
automatically expands until the volume size of 10 GB is reached. The grain size is set to 256
KB, which is the default.
Disk size: When the -rsize parameter is used, you have the following options: disk_size,
disk_size_percentage, and auto.
Specify the units for a disk_size integer by using the -unit parameter; the default is MB.
The -rsize value can be greater than, equal to, or less than the size of the volume.
The auto option creates a volume copy that uses the entire size of the MDisk. If you specify
the -rsize auto option, you must also specify the -vtype image option.
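A hedged sketch of creating such a thin-provisioned (space-efficient) volume follows; the names are placeholders:
mkvdisk -mdiskgrp Site1_Pool -iogrp io_grp0 -size 10 -unit gb -rsize 2% -autoexpand -grainsize 256 -name thin_volume_A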
You can use this command to bring a non-virtualized disk under the control of the clustered
system. After it is under the control of the clustered system, you can migrate the volume from
the single managed disk.
When the first MDisk extent is migrated, the volume is no longer an image mode volume. You
can add an image mode volume to an already populated storage pool with other types of
volumes, such as striped or sequential volumes.
Size: An image mode volume must be at least 512 bytes (the capacity cannot be 0). That
is, the minimum size that can be specified for an image mode volume must be the same as
the storage pool extent size to which it is added, with a minimum of 16 MiB.
You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The
-fmtdisk parameter cannot be used to create an image mode volume.
Capacity: If you create a mirrored volume from two image mode MDisks without specifying
a -capacity value, the capacity of the resulting volume is the smaller of the two MDisks
and the remaining space on the larger MDisk is inaccessible.
If you do not specify the -size parameter when you create an image mode disk, the entire
MDisk capacity is used.
Use the mkvdisk command to create an image mode volume, as shown in Example 10-51.
This command creates an image mode volume that is called Image_Volume_A that uses the
mdisk10 MDisk. The volume belongs to the storage pool STGPool_DS3500-1 and the volume is
owned by the io_grp0 I/O Group.
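A hedged reconstruction of the command that is described (corresponding to Example 10-51) follows:
mkvdisk -mdiskgrp STGPool_DS3500-1 -iogrp io_grp0 -vtype image -mdisk mdisk10 -name Image_Volume_A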
If we run the lsvdisk command again, the volume that is named Image_Volume_A has a
status of image, as shown in Example 10-52.
In addition, you can use volume mirroring as an alternative method of migrating volumes
between storage pools.
For example, if you have a non-mirrored volume in one storage pool and want to migrate that
volume to another storage pool, you can add a copy of the volume and specify the second
storage pool. After the copies are synchronized, you can delete the copy on the first storage
pool. The volume is copied to the second storage pool while remaining online during the copy.
To create a mirrored copy of a volume, use the addvdiskcopy command. This command adds
a copy of the chosen volume to the selected storage pool, which changes a non-mirrored
volume into a mirrored volume.
In the following scenario, we show creating a mirrored volume from one storage pool to
another storage pool.
As you can see in Example 10-53, the volume has a copy with copy_id 0.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity
1.00GB
In Example 10-54, we add the volume copy mirror by using the addvdiskcopy command.
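A hedged sketch of the command (corresponding to Example 10-54) follows; the pool and volume names match this scenario:
addvdiskcopy -mdiskgrp STGPool_DS5000-1 Volume_no_mirror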
During the synchronization process, you can see the status by using the lsvdisksyncprogress command. As shown in Example 10-55, the first time that the status is checked, the synchronization progress is at 48%, and the estimated completion time is 151026203918. The estimated completion time is displayed in the YYMMDDHHMMSS format; in our example, it is 2015, Oct-26 20:39:18. The second time that the command is run, the progress status is at 100%, and the synchronization is complete.
As you can see in Example 10-56, the new mirrored volume copy (copy_id 1) was added and
can be seen by using the lsvdisk command.
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity
1.00GB
While you are adding a volume copy mirror, you can define the new copy with different parameters from the existing volume copy. Therefore, you can add a thin-provisioned volume copy to a fully allocated volume and vice versa, which is one way to migrate a fully allocated volume to a thin-provisioned volume.
Volume copy mirror parameters: To change the parameters of a volume copy mirror, you
must delete the volume copy and redefine it with the new values.
Now, we can change the name of the volume that was mirrored from Volume_no_mirror to
Volume_mirrored, as shown in Example 10-57.
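A hedged reconstruction of the rename (corresponding to Example 10-57) follows:
chvdisk -name Volume_mirrored Volume_no_mirror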
Next, we show the usage of the addvdiskcopy command with the -autodelete flag set. The -autodelete flag specifies that the primary copy is deleted after the secondary copy is synchronized.
copy_id 0
status online
sync yes
auto_delete no
primary yes
..
compressed_copy no
uncompressed_used_capacity 2.00GB
..
In Example 10-59 we will add a compressed copy with the -autodelete flag set.
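A hedged sketch of adding a compressed copy with -autodelete set follows; the pool name is a placeholder, and the -rsize and -autoexpand parameters are typical companions of -compressed:
addvdiskcopy -mdiskgrp Pool2 -rsize 2% -autoexpand -compressed -autodelete Volume_mirrored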
Example 10-60 shows the lsvdisk output with an additional compressed volume (copy 1) and
volume copy 0 being set to auto_delete yes.
copy_id 0
status online
sync yes
auto_delete yes
primary yes
..
compressed_copy no
uncompressed_used_capacity 2.00GB
..
copy_id 1
status online
sync no
auto_delete no
primary no
..
compressed_copy yes
uncompressed_used_capacity 0.00MB
..
After copy 1 is synchronized, copy 0 is deleted. You can monitor the progress of volume copy synchronization by using the lsvdisksyncprogress command.
Note: Consider the compression best practices before adding the first compressed copy to
a system.
Example 10-61 shows the splitvdiskcopy command, which is used to split a mirrored
volume. It creates a volume that is named Volume_new from the volume that is named
Volume_mirrored.
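A hedged reconstruction of the command that is described (corresponding to Example 10-61) follows:
splitvdiskcopy -copy 1 -name Volume_new Volume_mirrored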
As you can see in Example 10-62 on page 605, the new volume that is named Volume_new
was created as an independent volume.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 2
mdisk_grp_name STGPool_DS5000-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 1.00GB
After the command that is shown in Example 10-61 on page 604 is issued, Volume_mirrored no longer has its mirrored copy, and a new independent volume is created automatically.
You can specify a new name or label. The new name can be used to reference the volume.
The I/O Group with which this volume is associated can be changed. Changing the I/O Group
with which this volume is associated requires a flush of the cache within the nodes in the
current I/O Group to ensure that all data is written to disk. I/O must be suspended at the host
level before you perform this operation.
Tips: If the volume has a mapping to any hosts, it is impossible to move the volume to an
I/O Group that does not include any of those hosts.
This operation fails if insufficient space exists to allocate bitmaps for a mirrored volume in
the target I/O Group.
If the -force parameter is used and the system is unable to destage all write data from the
cache, the contents of the volume are corrupted by the loss of the cached data.
If the -force parameter is used to move a volume that has out-of-sync copies, a full
resynchronization is required.
Base the choice between I/Os and MBs as the I/O governing throttle on the disk access profile
of the application. Database applications generally issue large numbers of I/Os, but they
transfer only a relatively small amount of data. In this case, setting an I/O governing throttle
that is based on MB per second does not achieve much. It is better to use an I/Os-per-second
throttle.
At the other extreme, a streaming video application generally issues a small amount of I/O,
but it transfers large amounts of data. In contrast to the database example, setting an I/O
governing throttle that is based on I/Os per second does not achieve much, so it is better to
use an MB per second throttle.
I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of
the lsvdisk command) does not mean that zero I/Os per second (or MB per second) can
be achieved. It means that no throttle is set.
New name first: The chvdisk command specifies the new name first. The name can
consist of letters A - Z and a - z, numbers 0 - 9, the dash (-), and the underscore (_). It can
be 1 - 63 characters. However, it cannot start with a number, dash, or the word “vdisk”
because this prefix is reserved for SVC assignment only.
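For reference, the two commands that are described next presumably take the following form (the values match the description in the text):
IBM_2145:ITSO_SVC_DH8:superuser>chvdisk -rate 20 -unitmb volume_7
IBM_2145:ITSO_SVC_DH8:superuser>chvdisk -warning 85% volume_7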
The first command changes the volume throttling of volume_7 to 20 MBps. The second
command changes the thin-provisioned volume warning to 85%. To verify the changes, issue
the lsvdisk command, as shown in Example 10-64.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 85
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB
If any remote copy, FlashCopy, or host mappings still exist for this volume, the delete fails
unless the -force flag is specified. This flag ensures the deletion of the volume and any
volume to host mappings and copy mappings.
If the volume is the subject of a “migrate to image mode” process, the delete fails unless the
-force flag is specified. This flag halts the migration and then deletes the volume.
If the command succeeds (without the -force flag) for an image mode volume, the underlying
back-end controller logical unit is consistent with the data that a host might previously read
from the image mode volume. That is, all fast write data was flushed to the underlying LUN. If
the -force flag is used, consistency is not guaranteed.
If any non-destaged data exists in the fast write cache for this volume, the deletion of the
volume fails unless the -force flag is specified. Now, any non-destaged data in the fast write
cache is deleted.
Use the rmvdisk command to delete a volume from your SVC configuration, as shown in
Example 10-65.
This command deletes the volume_A volume from the SVC configuration. If the volume is
assigned to a host, you must use the -force flag to delete the volume, as shown in
Example 10-66.
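The commands presumably resemble the following sketch:
IBM_2145:ITSO_SVC_DH8:superuser>rmvdisk volume_A
IBM_2145:ITSO_SVC_DH8:superuser>rmvdisk -force volume_A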
Use the chsystem command to set an interval for which a volume must be idle before it can
be deleted from the system (volume protection). Any command that is affected by this setting
fails if the volume has not been idle for the specified interval, even when the -force parameter
is used.
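A sketch of such a setting; the parameter names and the 60-minute interval are assumptions, based on the vdisk_protection fields that appear in the lssystem output later in this chapter:
IBM_2145:ITSO_SVC_DH8:superuser>chsystem -vdiskprotectionenabled yes -vdiskprotectiontime 60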
Assuming that your operating systems support expansion, you can use the expandvdisksize
command to increase the capacity of a volume, as shown in Example 10-67.
This command expands the volume_C volume (which was 35 GB) by another 5 GB to give it a
total size of 40 GB.
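The command presumably resembles the following sketch:
IBM_2145:ITSO_SVC_DH8:superuser>expandvdisksize -size 5 -unit gb volume_C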
To expand a thin-provisioned volume, you can use the -rsize option, as shown in
Example 10-68. This command changes the real size of the volume_B volume to a real
capacity of 55 GB. The capacity of the volume is unchanged.
used_capacity 0.41MB
real_capacity 50.02GB
free_capacity 50.02GB
overallocation 199
autoexpand on
warning 80
grainsize 32
se_copy yes
IBM_2145:ITSO_SVC_DH8:superuser>expandvdisksize -rsize 5 -unit gb volume_B
IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk volume_B
id 26
name volume_B
capacity 100.00GB
type striped
.
.
copy_id 0
status online
used_capacity 0.41MB
real_capacity 55.02GB
free_capacity 55.02GB
overallocation 181
autoexpand on
warning 80
grainsize 32
se_copy yes
Important: If a volume is expanded, its type becomes striped even if it was previously
sequential or in image mode. If there are not enough extents to expand your volume to the
specified size, you receive the following error message:
CMMVC5860E Ic_failed_vg_insufficient_virtual_extents
When the HBA on the host scans for devices that are attached to it, the HBA discovers all of
the volumes that are mapped to its FC ports. When the devices are found, each one is
allocated an identifier (SCSI LUN ID).
For example, the first disk that is found is generally SCSI LUN 1. You can control the order in
which the HBA discovers volumes by assigning the SCSI LUN ID, as required. If you do not
specify a SCSI LUN ID, the system automatically assigns the next available SCSI LUN ID,
based on any mappings that exist with that host.
By using the volume and host definition that we created in the previous sections, we assign
volumes to hosts that are ready for their use. We use the mkvdiskhostmap command, as
shown in Example 10-69.
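The mappings presumably resemble the following sketch (volume and host names as used in this scenario):
IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host Almaden volume_B
IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host Almaden volume_C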
This command displays volume_B and volume_C that are assigned to host Almaden, as shown
in Example 10-70.
Assigning a specific LUN ID to a volume: The optional -scsi scsi_num parameter can
help assign a specific LUN ID to a volume that is to be associated with a host. The default
(if nothing is specified) is to increment based on what is already assigned to the host.
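For illustration, a mapping with an explicit SCSI LUN ID takes the following form (the ID 0 is illustrative):
IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host Siam -scsi 0 volume_A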
Certain HBA device drivers stop when they find a gap in the SCSI LUN IDs, as shown in the
following examples:
Volume 1 is mapped to Host 1 with SCSI LUN ID 1.
Volume 2 is mapped to Host 1 with SCSI LUN ID 2.
Volume 3 is mapped to Host 1 with SCSI LUN ID 4.
When the device driver scans the HBA, it might stop after discovering volumes 1 and 2
because no SCSI LUN is mapped with ID 3.
It is not possible to map a volume to the same host more than once, even at different SCSI
LUN IDs (Example 10-71).
This command maps the volume that is called volume_A to the host that is called Siam.
All tasks that are required to assign a volume to an attached host are complete.
From this command, you can see that the host Siam has only one assigned volume that is
called volume_A. The SCSI LUN ID is also shown, which is the ID by which the volume is
presented to the host. If no host is specified, all defined host-to-volume mappings are
returned.
Specifying the flag before the host name: Although the -delim flag normally comes at
the end of the command string, in this case, you must specify this flag before the host
name. Otherwise, it returns the following message:
CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or
incorrect argument sequence has been detected. Ensure that the input is as per
the help.
This command unmaps the volume that is called volume_D from the host that is called Tiger.
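The command presumably resembles the following sketch:
IBM_2145:ITSO_SVC_DH8:superuser>rmvdiskhostmap -host Tiger volume_D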
As you can see from the parameters that are shown in Example 10-74, before you can
migrate your volume, you must know the name of the volume that you want to migrate and the
name of the storage pool to which you want to migrate it. To discover the names, run the
lsvdisk and lsmdiskgrp commands.
After you know these details, you can run the migratevdisk command, as shown in
Example 10-74.
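For illustration, the migration takes a form similar to the following sketch (the target pool name is illustrative):
IBM_2145:ITSO_SVC_DH8:superuser>migratevdisk -mdiskgrp STGPool_DS5000-1 -threads 4 -vdisk volume_C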
Tips: If insufficient extents are available within your target storage pool, you receive an
error message. Ensure that the source MDisk group and target MDisk group have the
same extent size.
By using the optional threads parameter, you can assign a priority to the migration process.
The default is 4, which is the highest priority setting. However, if you want the process to
take a lower priority over other types of I/O, you can specify 3, 2, or 1.
You can run the lsmigrate command at any time to see the status of the migration process,
as shown in Example 10-75.
IBM_2145:ITSO_SVC_DH8:superuser>lsmigrate
migrate_type MDisk_Group_Migration
progress 76
migrate_source_vdisk_index 27
migrate_target_mdisk_grp 2
max_thread_count 4
migrate_source_vdisk_copy_id 0
Progress: The progress is shown as percent complete. If the lsmigrate command returns
no output, the migration process has finished.
To migrate a fully managed volume to an image mode volume, the following rules apply:
- The destination MDisk must be greater than or equal to the size of the volume.
- The MDisk that is specified as the target must be in an unmanaged state.
- Regardless of the mode in which the volume starts, it is reported as being in managed mode during the migration.
- Both of the MDisks that are involved are reported as being in image mode during the migration.
- If the migration is interrupted by a system recovery or cache problem, the migration resumes after the recovery completes.
In this example, you migrate the data from volume_A onto mdisk10, and the MDisk must be put
into the STGPool_IMAGE storage pool.
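This step is typically performed with the migratetoimage command; a sketch with the names from this example:
IBM_2145:ITSO_SVC_DH8:superuser>migratetoimage -vdisk volume_A -mdisk mdisk10 -mdiskgrp STGPool_IMAGE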
You can use this command to shrink the physical capacity that is allocated to a particular
volume by the specified amount. You also can use this command to shrink the virtual capacity
of a thin-provisioned volume without altering the physical capacity that is assigned to the
volume. Use the following parameters:
For a non-thin-provisioned volume, use the -size parameter.
For a thin-provisioned volume’s real capacity, use the -rsize parameter.
For the thin-provisioned volume’s virtual capacity, use the -size parameter.
When the virtual capacity of a thin-provisioned volume is changed, the warning threshold is
automatically scaled to match. The new threshold is stored as a percentage.
The system arbitrarily reduces the capacity of the volume by removing a partial extent, one
extent, or multiple extents from those extents that are allocated to the volume. You cannot
control which extents are removed; therefore, you cannot assume that it is unused space that
is removed.
Image mode volumes cannot be reduced in size. Instead, they must first be migrated to fully
managed mode. To run the shrinkvdisksize command on a mirrored volume, all copies of
the volume must be synchronized.
Important: Consider the following guidelines when you are shrinking a disk:
- If the volume contains data, do not shrink the disk.
- Certain operating systems or file systems use the outer edge of the disk for performance reasons. This command can shrink a FlashCopy target volume to the same capacity as the source.
- Before you shrink a volume, validate that the volume is not mapped to any host objects. If the volume is mapped, data is displayed. You can determine the exact capacity of the source or master volume by issuing the svcinfo lsvdisk -bytes vdiskname command.
- Shrink the volume by the required amount by issuing the shrinkvdisksize -size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command.
Assuming that your operating system supports it, you can use the shrinkvdisksize command
to decrease the capacity of a volume, as shown in Example 10-77.
This command shrinks a volume that is called volume_D from a total size of 80 GB by 44 GB,
to a new total size of 36 GB.
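The command presumably resembles the following sketch:
IBM_2145:ITSO_SVC_DH8:superuser>shrinkvdisksize -size 44 -unit gb volume_D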
This command displays a list of all of the volume IDs that correspond to the volume copies
that use mdisk8.
To correlate the IDs that are displayed in this output to volume names, we can run the
lsvdisk command. For more information, see 10.6, “Working with volumes” on page 595.
Example 10-79 lsvdisk -filtervalue: VDisks in the managed disk group (MDG)
IBM_2145:ITSO_SVC_DH8:superuser>lsvdisk -filtervalue mdisk_grp_name=STGPool_DS3500-2 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC
_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count,fast_write_state,se_copy_count,RC_cha
nge
7,W2K3_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000008,0,1,empty,0,0,no
8,W2K3_SRV2_VOL02,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
000000000000009,0,1,empty,0,0,no
9,W2K3_SRV2_VOL03,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F1
00000000000000A,0,1,empty,0,0,no
10,W2K3_SRV2_VOL04,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000B,0,1,empty,0,0,no
11,W2K3_SRV2_VOL05,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000C,0,1,empty,0,0,no
12,W2K3_SRV2_VOL06,0,io_grp0,online,1,STGPool_DS3500-2,10.00GB,striped,,,,,6005076801AF813F
100000000000000D,0,1,empty,0,0,no
16,AIX_SRV2_VOL01,0,io_grp0,online,1,STGPool_DS3500-2,20.00GB,striped,,,,,6005076801AF813F1
000000000000011,0,1,empty,0,0,no
If you want to know more about these MDisks, you can run the lsmdisk command, as
described in 10.2, “New commands and functions” on page 573 (by using the ID that is
displayed in Example 10-80 rather than the name).
10.6.22 Showing from which storage pool a volume has its extents
Use the lsvdisk command to show to which storage pool a specific volume belongs, as
shown in Example 10-81.
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name STGPool_DS3500-1
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 2.02GB
free_capacity 2.02GB
overallocation 496
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status inactive
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 2.02GB
To learn more about these storage pools, you can run the lsmdiskgrp command, as
described in 10.3.10, “Working with a storage pool” on page 584.
This command shows the host or hosts to which the volume_B volume was mapped. Duplicate
entries are normal because multiple paths exist between the clustered system and the host. To
be sure that the operating system on the host sees the disk only once, you must install
and configure a multipath software application, such as the IBM Subsystem Device Driver (SDD).
Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the volume name in this case.
Otherwise, the command does not return any data.
This command shows which volumes are mapped to the host called Almaden.
Specifying the -delim flag: Although the optional -delim flag normally comes at the end
of the command string, you must specify this flag before the host name in this case.
Otherwise, the command does not return any data.
Instead, you must enter the command that is shown in Example 10-84 from your multipath
command prompt.
State: In Example 10-84, the state of each path is OPEN. Sometimes, the state is
CLOSED. This state does not necessarily indicate a problem because it might be a
result of the path’s processing stage.
2. Run the lshostvdiskmap command to return a list of all assigned volumes, as shown in
Example 10-85.
2,Almaden,2,28,volume_C,60050768018301BF2800000000000006
Look for the disk serial number that matches your datapath query device output. This
host was defined in our SVC as Almaden.
3. Run the lsvdiskmember vdiskname command for the MDisk or a list of the MDisks that
make up the specified volume, as shown in Example 10-86.
4. Query the MDisks with the lsmdisk mdiskID command to discover their controller and
LUN information, as shown in Example 10-87. The output displays the controller name
and the controller LUN ID to help you to track back to a LUN within the disk subsystem (if
you gave your controller a unique name, such as a serial number). See Example 10-87.
10.7 Scripting under the CLI for SAN Volume Controller task automation
Command prefix changes: The svctask and svcinfo command prefixes are no longer
necessary when a command is run. If you have existing scripts that use those prefixes,
they continue to function. You do not need to change the scripts.
The use of scripting constructs works better for the automation of regular operational jobs.
You can use available shells to develop scripts. Scripting enhances the productivity of SVC
administrators and the integration of their storage virtualization environment. You can create
your own customized scripts to automate many tasks for completion at various times and run
them through the CLI.
We suggest that you keep the scripting as simple as possible in large SAN environments
where scripting commands are used. It is harder to manage fallback, documentation, and the
verification of a successful script before execution in a large SAN environment.
In this section, we present an overview of how to automate various tasks by creating scripts
by using the SVC CLI.
(Figure: basic flow of a CLI automation script — create an SSH connection to the SVC, run the commands through scheduled or manual activation, and perform logging.)
Secure Shell Key: The use of a Secure Shell (SSH) key is optional. (You can use a user
ID and password to access the system.) However, we suggest the use of an SSH key for
security reasons. We provide a sample of its use in this section.
When you create a connection to the SVC to run a script, you must have access to a private
key that corresponds to a public key that was previously uploaded to the SVC.
The key is used to establish the SSH connection that is needed to use the CLI on the SVC. If
the SSH key pair is generated without a passphrase, you can connect without the need for
special scripting to pass in the passphrase.
On UNIX systems, you can use the ssh command to create an SSH connection with the SVC.
On Windows systems, you can use a utility that is called plink.exe (which is provided with the
PuTTY tool) to create an SSH connection with the SVC. In the following examples, we use
plink to create the SSH connection to the SVC.
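A minimal sketch of such an invocation, assuming a private key file named icat.ppk and our clustered system IP address (both illustrative):
plink -i icat.ppk [email protected] "lsvdisk -delim :"
The command output can then be parsed or redirected to a log file by the calling script.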
When you use the CLI, not all commands provide a response to determine the status of the
started command. Therefore, always create checks that can be logged for monitoring and
troubleshooting purposes.
The private key for authentication (for example, icat.ppk). This key is the private key that
you created. Set this parameter by clicking Connection → SSH → Auth, as shown in
Figure 10-4 on page 622.
The IP address of the SVC clustered system. Set this parameter by clicking Session, as
shown in Figure 10-5.
IBM provides a suite of scripting tools that are based on Perl. You can download these
scripting tools from this website:
http://www.alphaworks.ibm.com/tech/svctools
Important changes: The following changes were made since SVC 6.3:
The svcinfo lscluster command was changed to lssystem.
The svctask chcluster command was changed to chsystem, and several optional
parameters were moved to new commands. For example, to change the IP address of
the system, you can now use the chsystemip command. All of the old commands are
maintained for compatibility.
Use the lssystem command to display summary information about the clustered system, as
shown in Example 10-88.
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply [email protected]
email_contact Support team
email_contact_primary 123456789
email_contact_alternate 123456789
email_contact_location IBM
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 7
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 50
tier ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier enterprise
tier_capacity 571.00GB
tier_free_capacity 493.00GB
tier nearline
tier_capacity 0.00MB
tier_free_capacity 0.00MB
has_nas_key no
layer replication
rc_buffer_size 48
compression_active no
compression_virtual_capacity 0.00MB
compression_compressed_capacity 0.00MB
compression_uncompressed_capacity 0.00MB
cache_prefetch on
email_organization IBM
email_machine_address Street
email_machine_city City
email_machine_state CA
email_machine_zip 99999
email_machine_country CA
total_drive_raw_capacity 0
compression_destage_mode off
local_fc_port_mask 1111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask 11111111111111111111111111111111111111111111111111111111111
high_temp_mode off
topology standard
topology_status
rc_auth_method none
vdisk_protection_time 60
vdisk_protection_enabled yes
product_name IBM SAN Volume Controller
Use the lssystemstats command to display the most recent values of all node statistics
across all nodes in a clustered system, as shown in Example 10-89.
All command parameters are optional; however, you must specify at least one parameter.
Important: Changing the speed on a running system breaks I/O service to the attached
hosts. Before the fabric speed is changed, stop the I/O from the active hosts and force
these hosts to flush any cached data by unmounting volumes (for UNIX host types) or by
removing drive letters (for Windows host types). You might need to reboot specific hosts to
detect the new fabric speed.
Example 10-90 shows configuring the Network Time Protocol (NTP) IP address.
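The command presumably resembles the following sketch (the NTP server address is illustrative):
IBM_2145:ITSO_SVC_DH8:superuser>chsystem -ntpip 9.64.210.200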
For more information about how iSCSI works, see Chapter 2, “IBM SAN Volume Controller”
on page 11. In this section, we show how we configured our system for use with iSCSI.
We configured our nodes to use the primary and secondary Ethernet ports for iSCSI and to
contain the clustered system IP. When we configured our nodes to be used with iSCSI, we did
not affect our clustered system IP. The clustered system IP is changed, as described in
10.8.2, “Changing system settings” on page 625.
Important: You can have more than a one-to-one relationship between IP addresses and
physical connections. A four-to-one (4:1) relationship is possible, which consists of two IPv4
addresses plus two IPv6 addresses (four total) on one physical connection per port per
node.
Tip: When you are reconfiguring IP ports, be aware that configured iSCSI connections
must reconnect if changes are made to the IP addresses of the nodes.
Example 10-91 Setting a CHAP secret for the entire clustered system to passw0rd
IBM_2145:ITSO_SVC_DH8:superuser>chsystem -iscsiauthmethod chap -chapsecret passw0rd
In our scenario, our clustered system IP address is 9.64.210.64, which is not affected during
our configuration of the node’s IP addresses.
We start by listing our ports by using the lsportip command (not shown). We see that we
have two ports per node with which to work. Both ports can have two IP addresses that can
be used for iSCSI.
We configure the secondary port in both nodes in our I/O Group, as shown in Example 10-92.
Example 10-92 Configuring the secondary Ethernet port on both SVC nodes
IBM_2145:ITSO_SVC_DH8:superuser>cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask
255.255.255.0 2
IBM_2145:ITSO_SVC_DH8:superuser>cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask
255.255.255.0 2
While both nodes are online, each node is available to iSCSI hosts on the IP address that we
configured. iSCSI failover between nodes is enabled automatically. Therefore, if a node goes
offline for any reason, its partner node in the I/O Group becomes available on the failed
node’s port IP address. This design ensures that hosts can continue to perform I/O. The
lsportip command displays the port IP addresses that are active on each node.
Now, two active system ports are on the configuration node. If the system IP address is
changed, the open command-line shell closes during the processing of the command. You
must reconnect to the new IP address if connected through that port.
If the clustered system IP address is changed, the open command-line shell closes during the
processing of the command and you must reconnect to the new IP address. If this node
cannot rejoin the clustered system, you can start the node in service mode. In this mode, the
node can be accessed as a stand-alone node by using the service IP address.
List the IP addresses of the clustered system by issuing the lssystemip command, as shown
in Example 10-93.
Modify the IP address by running the chsystemip command. You can specify a static IP
address or have the system assign a dynamic IP address, as shown in Example 10-94.
Important: If you specify a new system IP address, the existing communication with the
system through the CLI is broken and the PuTTY application automatically closes. You
must relaunch the PuTTY application and point to the new IP address, but your SSH key
still works.
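For illustration, a static change takes a form similar to the following sketch (the addresses are illustrative):
IBM_2145:ITSO_SVC_DH8:superuser>chsystemip -clusterip 9.64.210.65 -gw 9.64.210.1 -mask 255.255.255.0 -port 1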
List the IP service addresses of the clustered system by running the lsserviceip command.
The required tasks to change the IP addresses of the clustered system are complete.
This command checks whether the specified IP address is accessible from the node on which
the command is run using the specified IP address.
Use this command to ping from any port on any node as long as you are logged on to the
service assistant on that node.
Example 10-95 shows an invocation example of the ping command from 10.18.228.140 to
10.18.228.142.
Tip: If you changed the time zone, you must clear the event log dump directory before you
can view the event log through the web application.
2. To find the time zone code that is associated with your time zone, enter the lstimezones
command, as shown in Example 10-97. A truncated list is provided for this example. If this
setting is correct (for example, 522 UTC), go to Step 4. If the setting is incorrect, continue
with Step 3.
3. Set the time zone by running the settimezone command, as shown in Example 10-98.
4. Set the system time by running the setsystemtime command, as shown in Example 10-99.
The clustered system time zone and time are now set.
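For illustration, the two settings take a form similar to the following sketch (the time zone code 522 comes from the previous step; the MMDDHHmmYYYY time format and the value shown are assumptions):
IBM_2145:ITSO_SVC_DH8:superuser>settimezone -timezone 522
IBM_2145:ITSO_SVC_DH8:superuser>setsystemtime -time 061014302015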
Specify the interval (1 - 60) in minutes. This command starts statistics collection and gathers
data at 5-minute intervals.
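The command presumably resembles the following sketch:
IBM_2145:ITSO_SVC_DH8:superuser>startstats -interval 5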
Statistics collection: To verify that the statistics collection is set, display the system
properties again, as shown in Example 10-101.
V6.3: Starting with V6.3, the command svctask stopstats is deprecated. You can no
longer disable statistics collection.
When input power is restored to the uninterruptible power supply units, they start to recharge.
However, the SVC does not permit any I/O activity to be performed to the volumes until the
uninterruptible power supply units are charged enough to enable all of the data on the SVC
nodes to be destaged in a subsequent unexpected power loss. Recharging the uninterruptible
power supply can take up to two hours.
Shutting down the clustered system before input power is removed to the uninterruptible
power supply units prevents the battery power from being drained. It also makes it possible for
I/O activity to be resumed when input power is restored.
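The shutdown itself is issued with the stopsystem command; a minimal sketch:
IBM_2145:ITSO_SVC_DH8:superuser>stopsystem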
This command shuts down the SVC clustered system. All data is flushed to disk before the
power is removed. You lose administrative contact with your system and the PuTTY
application automatically closes.
2. You are presented with the following message:
Warning: Are you sure that you want to continue with the shut down?
Ensure that you stopped all FlashCopy mappings, Metro Mirror (remote copy)
relationships, data migration operations, and forced deletions before you continue. Enter y
in response to this message to run the command. Entering anything other than y or Y results
in the command not running. In either case, no feedback is displayed.
Important: Before a clustered system is shut down, ensure that all I/O operations are
stopped that are destined for this system because you lose all access to all volumes
that are provided by this system. Failure to do so can result in failed I/O operations
being reported to the host operating systems.
Begin the process of quiescing all I/O to the system by stopping the applications on the
hosts that are using the volumes that are provided by the clustered system.
We completed the tasks that are required to shut down the system. To shut down the
uninterruptible power supply units, press the power button on the front panel of each
uninterruptible power supply unit.
Restarting the system: To restart the clustered system, you must first restart the
uninterruptible power supply units by pressing the power button on their front panels. Then,
press the power-on button on the service panel of one of the nodes within the system. After
the node is fully booted (for example, displaying Cluster: on line 1 and the cluster name
on line 2 of the panel), you can start the other nodes in the same way.
As soon as all of the nodes are fully booted, you can reestablish administrative contact by
using PuTTY, and your system is fully operational again.
10.9 Nodes
In this section, we describe the tasks that can be performed at an individual node level.
Tip: The -delim parameter truncates the content in the window and separates data fields
with colons (:) as opposed to wrapping text over multiple lines.
service_IP_address 10.18.228.101
service_gateway 10.18.228.1
service_subnet_mask 255.255.255.0
service_IP_address_6
service_gateway_6
service_prefix_6
To have a fully functional SVC system, you must add a second node to the configuration. To
add a node to a clustered system, complete the following steps to gather the necessary
information:
1. Before you can add a node, you must know which unconfigured nodes are available as
candidates. Issue the lsnodecandidate command, as shown in Example 10-105.
2. You must specify to which I/O Group you are adding the node. If you enter the lsnode
command, you can identify the I/O Group ID of the group to which you are adding your
node, as shown in Example 10-106.
Tip: The node that you want to add must have a separate uninterruptible power supply
unit serial number from the uninterruptible power supply unit on the first node.
IBM_2145:ITSO_SVC_DH8:superuser>lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS
_unique_id,hardware,iscsi_name,iscsi_alias,panel_name,enclosure_id,canister_id,
enclosure_serial_number
4,SVC1N3,1000739007,50050768010037E5,online,1,io_grp1,no,10000000000037E5,8G4,i
qn.1986-03.com.ibm:2145.itsosvc1.svc1n3,,104643,,,
3. Now that you know the available nodes, use the addnode command to add the node to the
SVC clustered system configuration, as shown in Example 10-107.
This command adds the candidate node with the wwnodename of 50050768010037E5 to
the I/O Group called io_grp1.
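The command presumably resembles the following sketch:
IBM_2145:ITSO_SVC_DH8:superuser>addnode -wwnodename 50050768010037E5 -iogrp io_grp1 -name SVC1N3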
The -wwnodename parameter (50050768010037E5) was used. However, you can also use the
-panelname parameter (104643) instead, as shown in Example 10-108. If you are standing
in front of the node, it is easier to read the panel name than it is to get the worldwide node
name (WWNN).
The optional -name parameter (SVC1N3) also was used. If you do not provide the -name
parameter, the SVC automatically generates the name nodex (where x is the ID sequence
number that is assigned internally by the SVC).
Name: If you want to provide a name, you can use letters A - Z and a - z, numbers 0 - 9,
the dash (-), and the underscore (_). The name can be 1 - 63 characters. However, the
name cannot start with a number, dash, or the word “node” because this prefix is
reserved for SVC assignment only.
4. If the addnode command returns no information, check that your second node is powered
on and that the zones are correctly defined; preexisting system configuration data might also
be stored in the node. If you are sure that this node is not part of another active SVC system,
you can use the service panel to delete the existing system information. After this action is
complete, reissue the lsnodecandidate command and you see that the node is listed.
Name: The chnode command specifies the new name first. You can use letters A - Z and
a - z, numbers 0 - 9, the dash (-), and the underscore (_). The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or the word “node”
because this prefix is reserved for SVC assignment only.
Because SVC1N2 also was the configuration node, the SVC transfers the configuration node
responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session
cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses
communication and closes automatically.
We must restart the PuTTY application to establish a secure session with the new
configuration node.
Important: If this node is the last node in an I/O Group and volumes are still assigned to
the I/O Group, the node is not deleted from the clustered system.
If this node is the last node in the system and the I/O Group has no remaining volumes, the
clustered system is destroyed and all virtualization information is lost. Any data that is still
required must be backed up or migrated before the system is destroyed.
Use the stopsystem -node command to shut down a single node, as shown in
Example 10-111.
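The command presumably resembles the following sketch:
IBM_2145:ITSO_SVC_DH8:superuser>stopsystem -node SVC1N3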
This command shuts down node SVC1N3 in a graceful manner. When this node is shut down,
the other node in the I/O Group destages the contents of its cache and enters write-through
mode until the node is powered up and rejoins the clustered system.
Important: You do not need to stop FlashCopy mappings, remote copy relationships, and
data migration operations. The other node handles these activities, but be aware that the
system has a single point of failure now.
If this node is the last node in an I/O Group, all access to the volumes in the I/O Group is lost.
Verify that you want to shut down this node before this command is run. You must specify the
-force flag.
By reissuing the lsnode command (as shown in Example 10-112 on page 635), we can see
that the node is now offline.
Restart: To restart the node manually, press the power-on button that is on the service
panel of the node.
We completed the tasks that are required to view, add, delete, rename, and shut down a node
within an SVC environment.
In our example, the SVC predefines five I/O Groups. In a four-node clustered system (similar
to our example), only two I/O Groups are in use. The other I/O Groups (io_grp2 and io_grp3)
are for a six-node or eight-node clustered system.
The recovery I/O Group is a temporary home for volumes when all nodes in the I/O Group
that normally owns them experience multiple failures. By using this design, the volumes can
be moved to the recovery I/O Group and then into a working I/O Group. While temporarily
assigned to the recovery I/O Group, I/O access is not possible.
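The rename itself is done with the chiogrp command; a minimal sketch (the new name is illustrative):
IBM_2145:ITSO_SVC_DH8:superuser>chiogrp -name io_grpA io_grp1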
To see whether the renaming was successful, run the lsiogrp command again to see the
change.
Use the rmhostiogrp command to unmap a specific host from a specific I/O Group, as shown in
Example 10-116.
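A minimal sketch, assuming that host Almaden is removed from I/O Group 1 (the host and I/O Group are illustrative):
IBM_2145:ITSO_SVC_DH8:superuser>rmhostiogrp -iogrp 1 Almaden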
To list all of the host objects that are mapped to the specified I/O Group, use the lsiogrphost
command, as shown in Example 10-118.
Example 10-120 shows a simple example of creating a user. User John is added to the user
group Monitor with the password m0nitor.
Example 10-120 mkuser creates a user called John with password m0nitor
IBM_2145:ITSO_SVC_DH8:superuser>mkuser -name John -usergrp Monitor -password m0nitor
User, id [6], successfully created
Local users are users that are not authenticated by a remote authentication server. Remote
users are users that are authenticated by a remote central registry server.
The user groups include a defined authority role, as listed in Table 10-3.
Copy operator: Controls all of the copy functionality of the cluster. Allowed commands: all
display commands and the following commands: prestartfcconsistgrp, startfcconsistgrp,
stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap,
startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp,
startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and
chpartnership. In addition, all commands that are allowed by the Monitor role.
Monitor: Needs view access only. Allowed commands: all display commands and the
following commands: finderr, dumperrlog, dumpinternallog, chcurrentuser, ping, and
svcconfig backup.
As of SVC 6.3, you can connect to the clustered system by using the same user name with
which you log in to an SVC GUI.
To view the user roles on your system, use the lsusergrp command, as shown in
Example 10-121.
To view the defined users and the user groups to which they belong, use the lsuser
command, as shown in Example 10-122.
By using the chuser command, you can modify a user. You can rename a user, assign a new
password (if you are logged on with administrative privileges), and move a user from one user
group to another user group. However, be aware that a member can be a member of only one
group at a time.
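For illustration, moving the user John to another group takes the following form (the target group is illustrative):
IBM_2145:ITSO_SVC_DH8:superuser>chuser -usergrp CopyOperator John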
The SVC console performs actions by issuing Common Information Model (CIM) commands
to the CIM object manager (CIMOM), which then runs the CLI programs.
Actions that are performed by using the native GUI and the SVC Console are recorded in the
audit log.
The audit log contains approximately 1 MB of data, which can contain about 6,000
average-length commands. When this log is full, the system copies it to a new file in the
/dumps/audit directory on the configuration node and resets the in-memory audit log.
To display entries from the audit log, use the catauditlog -first 5 command to return a list
of five in-memory audit log entries, as shown in Example 10-123.
If you must dump the contents of the in-memory audit log to a file on the current configuration
node, use the dumpauditlog command. This command does not provide any feedback; it
provides the prompt only. To obtain a list of the audit log dumps, use the lsdumps command,
as shown in Example 10-124.
Scenario description
We use the scenario that is described in this section in both the CLI section and the GUI
section. In this scenario, we want to FlashCopy the following volumes:
DB_Source: Database files
Log_Source: Database log files
App_Source: Application files
In our scenario, the application files are independent of the database; therefore, we create a
single FlashCopy mapping for App_Source. We make two FlashCopy targets for DB_Source
and Log_Source and, therefore, two Consistency Groups. The scenario is shown in
Figure 10-6 on page 643.
In Example 10-125, the FCCG1 and FCCG2 Consistency Groups are created to hold the
FlashCopy maps of DB and Log. This step is important for FlashCopy on database
applications because it helps to maintain data integrity during FlashCopy.
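The groups are presumably created with commands of the following form:
IBM_2145:ITSO_SVC3:superuser>mkfcconsistgrp -name FCCG1
IBM_2145:ITSO_SVC3:superuser>mkfcconsistgrp -name FCCG2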
In Example 10-126, we checked the status of the Consistency Groups. Each Consistency
Group has a status of empty.
If you want to change the name of a Consistency Group, you can use the chfcconsistgrp
command. Type chfcconsistgrp -h for help with this command.
When this command is run, a FlashCopy mapping logical object is created. This mapping
persists until it is deleted. The mapping specifies the source and destination volumes. The
destination must be identical in size to the source or the mapping fails. Issue the lsvdisk
-bytes command to find the exact size of the source volume for which you want to create a
target disk of the same size.
In a single mapping, source and destination cannot be on the same volume. A mapping is
triggered at the point in time when the copy is required. The mapping can optionally be given
a name and assigned to a Consistency Group. These groups of mappings can be triggered at
the same time, which enables multiple volumes to be copied at the same time and creates a
consistent copy of multiple disks. A consistent copy of multiple disks is required for database
products in which the database and log files are on separate disks.
If no Consistency Group is defined, the mapping is assigned to the default group 0, which is a
special group that cannot be started as a whole. Mappings in this group can be started only
on an individual basis.
The background copy rate specifies the priority that must be given to completing the copy. If 0
is specified, the copy does not proceed in the background. The default is 50.
Tip: You can use a parameter to delete FlashCopy mappings automatically after the
background copy is completed (when the mapping gets to the idle_or_copied state). Use
the following command:
mkfcmap -autodelete
This option does not delete a mapping that is in a cascade with dependent mappings
because such a mapping cannot reach the idle_or_copied state.
Example 10-127 shows the creation of the first FlashCopy mapping for DB_Source,
Log_Source, and App_Source.
Example 10-127 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source DB_Source -target DB_Target1 -name DB_Map1
-consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created
Example 10-128 shows the command to create a second FlashCopy mapping for volume
DB_Source and volume Log_Source.
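The commands presumably resemble the following sketch (target and mapping names as used later in this scenario):
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2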
Example 10-129 shows the result of these FlashCopy mappings. The status of the mapping is
idle_or_copied.
If you want to change the FlashCopy mapping, you can use the chfcmap command. Enter
chfcmap -h to get help with this command.
When the prestartfcmap command is run, the mapping enters the Preparing state. After the
preparation is complete, it changes to the Prepared state. At this point, the mapping is ready
for triggering. Preparing and the subsequent triggering are performed on a Consistency
Group basis.
Only mappings that belong to Consistency Group 0 can be prepared on their own because
Consistency Group 0 is a special group that contains the FlashCopy mappings that do not
belong to any Consistency Group. A FlashCopy must be prepared before it can be triggered.
In our scenario, App_Map1 is not in a Consistency Group. In Example 10-130, we show how to
start the preparation for App_Map1.
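The preparation command presumably resembles the following sketch:
IBM_2145:ITSO_SVC3:superuser>prestartfcmap App_Map1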
IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0
autodelete off
clean_progress 0
clean_rate 50
incremental off
difference 0
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
Another option is to add the -prep parameter to the startfcmap command, which prepares
the mapping and then starts the FlashCopy.
In Example 10-130 on page 646, we also show how to check the status of the current
FlashCopy mapping. The status of App_Map1 is prepared.
When you assign several mappings to a FlashCopy Consistency Group, you must issue only
a single prepare command for the whole group to prepare all of the mappings at one time.
Example 10-131 shows how we prepare the Consistency Groups for DB and Log and check
the result. After the command runs all of the FlashCopy maps that we have, all of the maps
and Consistency Groups are in the prepared status. Now, we are ready to start the
FlashCopy.
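The preparation commands presumably resemble the following sketch:
IBM_2145:ITSO_SVC3:superuser>prestartfcconsistgrp FCCG1
IBM_2145:ITSO_SVC3:superuser>prestartfcconsistgrp FCCG2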
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 prepared
2 FCCG2 prepared
When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy
proceeds depends on the background copy rate attribute of the mapping. If the mapping is set
to 0 (NOCOPY), only data that is then updated on the source is copied to the destination. We
suggest that you use this scenario as a backup copy while the mapping exists in the Copying
state. If the copy is stopped, the destination is unusable.
If you want a duplicate copy of the source at the destination, set the background copy rate
greater than 0. By setting this rate, the system copies all of the data (even unchanged data) to
the destination and eventually reaches the idle_or_copied state. After this data is copied, you
can delete the mapping and have a usable point-in-time copy of the source at the destination.
In Example 10-132, App_Map1 changes to the copying status after the FlashCopy is started.
rc_controlled no
Alternatively, you can also query the copy progress by using the lsfcmap command. As
shown in Example 10-134, DB_Map1 returns information that the background copy is 23%
completed and Log_Map1 returns information that the background copy is 41% completed.
DB_Map2 returns information that the background copy is 5% completed and Log_Map2 returns
information that the background copy is 4% completed.
When the background copy completes, the FlashCopy mapping enters the idle_or_copied
state. When all of the FlashCopy mappings in a Consistency Group enter this status, the
Consistency Group is at the idle_or_copied status.
When in this state, the FlashCopy mapping can be deleted and the target disk can be used
independently if, for example, another target disk is to be used for the next FlashCopy of the
particular source volume.
Tip: If you want to stop a mapping or group in a Multiple Target FlashCopy environment,
consider whether you want to keep any of the dependent mappings. If you do not want to
keep these mappings, run the stop command with the -force parameter. This command
stops all of the dependent maps and negates the need for the stopping copy process to
run.
When a FlashCopy mapping is stopped, the target volume becomes invalid. The target
volume is set offline by the SVC. The FlashCopy mapping must be prepared again or
retriggered to bring the target volume online again.
Important: Stop a FlashCopy mapping only when the data on the target volume is not in
use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and it is set offline by the SVC if the mapping
is in the copying state and progress=100.
Example 10-135 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 changed
to idle_or_copied.
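The stop command presumably resembles the following sketch:
IBM_2145:ITSO_SVC3:superuser>stopfcmap App_Map1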
IBM_2145:ITSO_SVC3:superuser>lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 10
target_vdisk_name App_Target1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 110929113407
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
Important: Stop a FlashCopy mapping only when the data on the target volume is not in
use or when you want to modify the FlashCopy Consistency Group. When a Consistency
Group is stopped, the target volume might become invalid and be set offline by the SVC,
depending on the state of the mapping.
As shown in Example 10-136, we stop the FCCG1 and FCCG2 Consistency Groups. The status
of the two Consistency Groups changed to stopped. Most of the FlashCopy mapping
relationships now have the status of stopped. As you can see, several of them completed the
copy operation and are now in a status of idle_or_copied.
IBM_2145:ITSO_SVC3:superuser>stopfcconsistgrp FCCG2
IBM_2145:ITSO_SVC3:superuser>lsfcconsistgrp
id name status
1 FCCG1 idle_or_copied
2 FCCG2 idle_or_copied
IBM_2145:ITSO_SVC3:superuser>lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_
name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,res
toring,start_time,rc_controlled
0,DB_Map1,3,DB_Source,4,DB_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,110929113806,
no
1,Log_Map1,6,Log_Source,7,Log_Target1,1,FCCG1,idle_or_copied,100,50,100,off,,,no,1109291138
06,no
2,App_Map1,9,App_Source,10,App_Target1,,,idle_or_copied,100,50,100,off,,,no,110929113407,no
3,DB_Map2,3,DB_Source,5,DB_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,110929113806,
no
4,Log_Map2,6,Log_Source,8,Log_Target2,2,FCCG2,idle_or_copied,100,50,100,off,,,no,1109291138
06,no
the command fails unless the -force flag is specified. If the mapping is active (copying), it
must first be stopped before it can be deleted.
Deleting a mapping deletes only the logical relationship between the two volumes. However,
when issued on an active FlashCopy mapping that uses the -force flag, the delete renders
the data on the FlashCopy mapping target volume as inconsistent.
Tip: If you want to use the target volume as a normal volume, monitor the background copy
progress until it is complete (100% copied) and, then, delete the FlashCopy mapping.
Another option is to set the -autodelete option when the FlashCopy mapping is created.
If you also want to delete all of the mappings in the Consistency Group, first delete the
mappings and then delete the Consistency Group.
As shown in Example 10-138, we delete all of the maps and Consistency Groups and then
check the result.
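The cleanup presumably resembles the following sketch (one mapping and one group are shown; the remaining maps and FCCG2 are deleted in the same way):
IBM_2145:ITSO_SVC3:superuser>rmfcmap DB_Map1
IBM_2145:ITSO_SVC3:superuser>rmfcconsistgrp FCCG1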
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE00000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 1
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 4629
autoexpand on
warning 80
grainsize 32
se_copy yes
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 221.17MB
2. Define a FlashCopy mapping in which the non-thin-provisioned volume is the source and
the thin-provisioned volume is the target. Specify a copy rate as high as possible and
activate the -autodelete option for the mapping, as shown in Example 10-140.
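Based on the lsfcmap output that follows, the mapping was presumably created with a command of this form:
IBM_2145:ITSO_SVC3:superuser>mkfcmap -source App_Source -target App_Source_SE -name MigrtoThinProv -copyrate 100 -autodelete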
IBM_2145:ITSO_SVC3:superuser>lsfcmap 0
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
3. Run the prestartfcmap command and the lsfcmap MigrtoThinProv command, as shown
in Example 10-141.
IBM_2145:ITSO_SVC3:superuser>prestartfcmap MigrtoThinProv
IBM_2145:ITSO_SVC3:superuser>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 0
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
IBM_2145:ITSO_SVC3:superuser>lsfcmap MigrtoThinProv
id 0
name MigrtoThinProv
source_vdisk_id 9
source_vdisk_name App_Source
target_vdisk_id 11
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 67
copy_rate 100
start_time 110929135848
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no
rc_controlled no
IBM_2145:ITSO_SVC3:superuser>lsfcmapprogress MigrtoThinProv
CMMVC5804E The action failed because an object that was specified in the command does
not exist.
IBM_2145:ITSO_SVC3:superuser>
Because the mapping was created with the -autodelete option, it was removed automatically when the background copy completed, which is why the lsfcmapprogress command now reports that the object does not exist. An independent copy of the source volume (App_Source) was created, and the migration is complete, as shown in Example 10-145.
IBM_2145:ITSO_SVC3:superuser>lsvdisk App_Source
id 9
name App_Source
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018281BEE000000000000009
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
se_copy_count 0
filesystem
mirror_write_priority latency
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name Multi_Tier_Pool
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
se_copy no
easy_tier on
easy_tier_status active
tier generic_ssd
tier_capacity 0.00MB
tier generic_hdd
tier_capacity 10.00GB
Real size: Regardless of the real size that you defined for the target thin-provisioned volume, the real size grows to at least the capacity of the source volume.
To migrate a thin-provisioned volume to a fully allocated volume, you can use the same procedure, with the thin-provisioned volume as the source and a fully allocated volume as the target.
In Example 10-146, FCMAP_1 is the forward FlashCopy mapping and FCMAP_rev_1 is the reverse FlashCopy mapping. We also have a cascaded mapping, FCMAP_2, whose source is FCMAP_1’s target volume and whose target is a separate volume that is named Volume_FC_T1.
In our example, we started FCMAP_1 and, later, FCMAP_2 after the environment was created.
If you attempt to start the reverse mapping without the -restore option, the command fails with the following error:
CMMVC6298E The command failed because a target VDisk has dependent FlashCopy mappings.
When a reverse FlashCopy mapping is started, you must use the -restore option to indicate that you want to overwrite the data on the source disk of the forward mapping.
IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
idle_or_copied 0 50 100 off 1 FCMAP_rev_1
no no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no
IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1 copying 0
50 100 off 1 FCMAP_rev_1 no
no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
idle_or_copied 0 50 100 off 0 FCMAP_1
no no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 4 50 100 off
no 110929143739 no
IBM_2145:ITSO_SVC3:superuser>lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id
group_name status progress copy_rate clean_progress incremental partner_FC_id
partner_FC_name restoring start_time rc_controlled
0 FCMAP_1 3 Volume_FC_S 4 Volume_FC_T_S1
copying 43 100 56 off 1 FCMAP_rev_1 no
110929151911 no
1 FCMAP_rev_1 4 Volume_FC_T_S1 3 Volume_FC_S
copying 56 100 43 off 0 FCMAP_1 yes
110929152030 no
2 FCMAP_2 4 Volume_FC_T_S1 5 Volume_FC_T1
copying 37 100 100 off no
110929151926 no
As you can see in Example 10-146 on page 657, FCMAP_rev_1 shows a restoring value of yes while the FlashCopy mapping is copying. After the copy finishes, the restoring field changes to no.
For example, if we have four volumes in a cascade (A → B → C → D), and the map A → B is 100% complete, the use of the stopfcmap -split mapAB command results in mapAB becoming idle_or_copied and the remaining cascade becoming B → C → D.
Without the -split option, volume A remains at the head of the cascade (A → C → D).
Consider the following sequence of steps:
1. The user takes a backup that uses the mapping A → B. A is the production volume and B
is a backup.
2. At a later point, the user experiences corruption on A and so reverses the mapping to
B → A.
3. The user then takes another backup from the production disk A, which results in the
cascade B → A → C.
Stopping A → B without the -split option results in the cascade B → C. The backup disk B is
now at the head of this cascade.
When the user next wants to take a backup to B, the user can still start mapping A → B (by
using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C →
A).
Stopping A → B with the -split option results in the cascade A → C. This action does not
result in the same problem because the production disk A is at the head of the cascade
instead of the backup disk B.
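As a quick reference for the two stop variants that are described above (a sketch; mapAB is the name used in the text for the A → B mapping):
stopfcmap mapAB (in the A → B → C → D scenario, volume A remains at the head: A → C → D)
stopfcmap -split mapAB (mapAB becomes idle_or_copied and the remaining cascade is B → C → D)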
Intercluster example: The example in this section is for intercluster operations only.
If you want to set up intracluster operations, we highlight the parts of the following
procedure that you do not need to perform.
In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC
system ITSO_SVC_DH8 primary site and the SVC system ITSO_SVC4 at the secondary site.
Table 10-4 shows the details of the volumes.
Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri volumes, a
CG_WIN2K3_MM Consistency Group is created to handle Metro Mirror relationships for them.
Because application files are independent of the database in this scenario, a stand-alone
Metro Mirror relationship is created for the MM_App_Pri volume. Figure 10-7 shows the Metro
Mirror setup.
Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform
the next step; instead, go to 10.13.3, “Creating a Metro Mirror Consistency Group” on
page 664.
Pre-verification
To verify that both systems can communicate with each other, use the
lspartnershipcandidate command.
IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate
id configured name
000002006AC03A42 no ITSO_SVC_DH8
0000020060A06FB8 no ITSO_SVC3
00000200A0C006B2 no ITSO-Storwize-V7000-2
000002006BE04FC4 no ITSO_SVC_DH8
Example 10-148 on page 661 shows the output of the lspartnership and lssystem commands before the partnership is set up. We show this output so that you can compare it with the output after the partnership and the Metro Mirror relationships are configured.
As of SVC 6.3, you can create a partnership between the SVC system and the IBM Storwize
V7000 system. Be aware that to create this partnership, you must change the layer
parameter on the IBM Storwize V7000 system. It must be changed from storage to
replication with the chsystem command.
This parameter cannot be changed on the SVC system. It is fixed to the value of appliance,
as shown in Example 10-148 on page 661.
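On the IBM Storwize V7000 system, the layer might be changed with a command of this general form (a sketch; it assumes that no partnerships or remote copy relationships exist on the Storwize V7000 when the layer is changed):
chsystem -layer replication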
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
IBM_2145:ITSO_SVC_DH8:superuser>lssystem
id 000002006BE04FC4
name ITSO_SVC_DH8
location local
partnership
bandwidth
total_mdisk_capacity 766.5GB
space_in_mdisk_grps 766.5GB
space_allocated_to_vdisks 0.00MB
total_free_space 766.5GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 1.50GB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.81:443
id_alias 000002006BE04FC4
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method chap
iscsi_chap_secret passw0rd
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 766.50GB
has_nas_key no
layer appliance
IBM_2145:ITSO_SVC4:superuser>lssystem
id 0000020061C06FCA
name ITSO_SVC4
location local
partnership
bandwidth
total_mdisk_capacity 768.0GB
space_in_mdisk_grps 0
space_allocated_to_vdisks 0.00MB
total_free_space 768.0GB
total_vdiskcopy_capacity 0.00MB
total_used_capacity 0.00MB
total_overallocation 0
total_vdisk_capacity 0.00MB
total_allocated_extent_capacity 0.00MB
statistics_status on
statistics_frequency 15
cluster_locale en_US
time_zone 520 US/Pacific
code_level 6.3.0.0 (build 54.0.1109090000)
console_IP 10.18.228.84:443
id_alias 0000020061C06FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
gm_max_host_delay 5
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_contact2
email_contact2_primary
email_contact2_alternate
email_state stopped
inventory_mail_interval 0
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
has_nas_key no
layer appliance
To check the status of the newly created partnership, run the lspartnership command. Note that the new partnership is only partially configured. It remains partially configured until the mkpartnership command is also issued from the other system, as shown in Example 10-150.
Example 10-149 Creating the partnership from ITSO_SVC_DH8 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 50 ITSO_SVC4
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
After the partnership is created, verify that the partnership is fully configured on both systems
by reissuing the lspartnership command.
Example 10-150 Creating the partnership from ITSO_SVC4 to ITSO_SVC_DH8 and verifying it
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 50 ITSO_SVC_DH8
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC_DH8 remote fully_configured 50
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id
aux_cluster_name primary state relationship_count copy_type cycling_mode
0 CG_W2K3_MM 000002006BE04FC4 ITSO_SVC_DH8 0000020061C06FCA ITSO_SVC4
empty 0 empty_group none
To verify the new Metro Mirror relationships, list them with the lsrcrelationship command.
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id
aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state
bg_copy_priority progress copy_type cycling_mode
0 MMREL1 000002006BE04FC4 ITSO_SVC_DH8 0 MM_DB_Pri 0000020061C06FCA
ITSO_SVC4 0 MM_DB_Sec master 0 CG_W2K3_MM
inconsistent_stopped 50 0 metro none
3 MMREL2 000002006BE04FC4 ITSO_SVC_DH8 3 MM_Log_Pri
0000020061C06FCA ITSO_SVC4 3 MM_Log_Sec master 0
CG_W2K3_MM inconsistent_stopped 50 0 metro none
The state of MMREL3 is consistent_stopped. MMREL3 is in this state because it was created with
the -sync option. The -sync option indicates that the secondary (auxiliary) volume is
synchronized with the primary (master) volume. Initial background synchronization is skipped
when this option is used, even though the volumes are not synchronized in this scenario.
To demonstrate the case in which the master and auxiliary volumes are already synchronized before the relationship is set up, we created the relationship for MM_App_Pri and MM_App_Sec (MMREL3) by using the -sync option.
Tip: Use the -sync option only when the target (auxiliary) volume already contains an exact copy of the data on the source (master) volume. When this option is used, no initial background copy is performed between the primary volume and the secondary volume.
MMREL2 and MMREL1 are in the inconsistent_stopped state because they were not created with
the -sync option. Therefore, their auxiliary volumes must be synchronized with their primary
volumes.
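For reference, a stand-alone Metro Mirror relationship that skips the initial background copy might be created with a command of this general form (a sketch that reuses the volume and system names of this scenario):
mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -cluster ITSO_SVC4 -sync -name MMREL3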
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship 2
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
When Metro Mirror is implemented, the goal is to reach a consistent and synchronized state
that can provide redundancy for a data set if a failure occurs that affects the production site.
In this section, we show how to stop and start stand-alone Metro Mirror relationships and
Consistency Groups.
Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Metro Mirror Consistency Groups or relationships change their state.
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL2
id 3
name MMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 3
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 3
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state inconsistent_copying
bg_copy_priority 50
progress 82
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
When all Metro Mirror relationships complete the background copy, the Consistency Group
enters the consistent_synchronized state, as shown in Example 10-157.
Example 10-158 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary
IBM_2145:ITSO_SVC_DH8:superuser>stoprcrelationship -access MMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
If we want to enable access (write I/O) to the secondary volume later, we reissue the
stoprcconsistgrp command and specify the -access flag. The Consistency Group changes
to the idling state, as shown in Example 10-160.
Example 10-160 Stopping a Metro Mirror Consistency Group and enabling access to the secondary
IBM_2145:ITSO_SVC_DH8:superuser>stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
If any updates were performed on the master or the auxiliary volume in any of the Metro
Mirror relationships in the Consistency Group, the consistency is compromised. Therefore, we
must use the -force flag to start a relationship. If the -force flag is not used, the command
fails.
In Example 10-162, we change the copy direction by specifying the auxiliary volumes to
become the primaries.
Example 10-162 Restarting a Metro Mirror relationship while changing the copy direction
IBM_2145:ITSO_SVC_DH8:superuser>startrcconsistgrp -force -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
In Example 10-163, we change the copy direction for the stand-alone Metro Mirror
relationship by specifying the auxiliary volume to become the primary volume.
Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from the primary to the secondary because all of the I/O is
inhibited to that volume when it becomes the secondary. Therefore, careful planning is
required before the switchrcrelationship command is used.
Example 10-163 Switching the copy direction for a Metro Mirror Consistency Group
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC_DH8:superuser>switchrcrelationship -primary aux MMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship MMREL3
id 2
name MMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
10.13.16 Switching the copy direction for a Metro Mirror Consistency Group
When a Metro Mirror Consistency Group is in the Consistent synchronized state, we can change the copy direction for the Consistency Group by using the switchrcconsistgrp command and specifying which copy is to become the primary. If the specified copy is already the primary when you issue this command, the command has no effect.
In Example 10-164, we change the copy direction for the Metro Mirror Consistency Group by
specifying the auxiliary volume to become the primary volume.
Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all of the I/O is inhibited when
that volume becomes the secondary. Therefore, careful planning is required before the
switchrcconsistgrp command is used.
Example 10-164 Switching the copy direction for a Metro Mirror Consistency Group
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
IBM_2145:ITSO_SVC_DH8:superuser>switchrcconsistgrp -primary aux CG_W2K3_MM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name MMREL1
RC_rel_id 3
RC_rel_name MMREL2
In this section, we describe how to configure the SVC system partnership for each
configuration.
Important: To have a supported and working configuration, all SVC systems must be at
level 5.1 or higher.
In our scenarios, we configure the SVC partnership by referring to the clustered systems as
A, B, C, and D, as shown in the following examples:
ITSO_SVC_DH8 = A
ITSO_SVC1 = B
ITSO_SVC3 = C
ITSO_SVC4 = D
Example 10-165 shows the systems that are available for a partnership, listed by running the lspartnershipcandidate command on each system.
IBM_2145:ITSO_SVC_DH8:superuser>lspartnershipcandidate
id configured name
0000020061C06FCA no ITSO_SVC4
000002006BE04FC4 no ITSO_SVC_DH8
0000020060A06FB8 no ITSO_SVC3
IBM_2145:ITSO_SVC3:superuser>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC_DH8
0000020061C06FCA no ITSO_SVC4
000002006AC03A42 no ITSO_SVC_DH8
IBM_2145:ITSO_SVC4:superuser>lspartnershipcandidate
id configured name
000002006BE04FC4 no ITSO_SVC_DH8
0000020060A06FB8 no ITSO_SVC3
000002006AC03A42 no ITSO_SVC_DH8
Example 10-166 shows the sequence of mkpartnership commands that are run to create a
star configuration.
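The lspartnership outputs that follow are consistent with a command sequence of this general form (a sketch; ITSO_SVC_DH8 acts as the hub of the star, and the 50 MBps bandwidth matches the outputs):
From ITSO_SVC_DH8 (the hub):
mkpartnership -bandwidth 50 ITSO_SVC1
mkpartnership -bandwidth 50 ITSO_SVC3
mkpartnership -bandwidth 50 ITSO_SVC4
From each of ITSO_SVC1, ITSO_SVC3, and ITSO_SVC4:
mkpartnership -bandwidth 50 ITSO_SVC_DH8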
From ITSO_SVC_DH8
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
000002006AC03A42 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote fully_configured 50
0000020061C06FCA ITSO_SVC4 remote fully_configured 50
From ITSO_SVC_DH8
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC_DH8 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
From ITSO_SVC3
IBM_2145:ITSO_SVC3:superuser>lspartnership
id name location partnership bandwidth
0000020060A06FB8 ITSO_SVC3 local
000002006BE04FC4 ITSO_SVC_DH8 remote fully_configured 50
From ITSO_SVC4
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC_DH8 remote fully_configured 50
After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.
Triangle configuration
Figure 10-9 shows the triangle configuration.
Example 10-167 shows the sequence of mkpartnership commands that are run to create a
triangle configuration.
After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.
Example 10-168 on page 678 shows the sequence of mkpartnership commands that are run
to create a fully connected configuration.
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006AC03A42 ITSO_SVC_DH8 local
000002006BE04FC4 ITSO_SVC1 remote fully_configured 50
0000020060A06FB8 ITSO_SVC3 remote partially_configured_local 50
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 50
After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.
Daisy-chain configuration
Figure 10-11 shows the daisy-chain configuration.
Example 10-169 shows the sequence of mkpartnership commands that are run to create a
daisy-chain configuration.
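Using the A, B, C, and D labels that were defined earlier (the letters stand for the system names and are not literal arguments), the daisy chain might be built with a sequence of this general form (a sketch):
From A: mkpartnership -bandwidth 50 B
From B: mkpartnership -bandwidth 50 A and mkpartnership -bandwidth 50 C
From C: mkpartnership -bandwidth 50 B and mkpartnership -bandwidth 50 D
From D: mkpartnership -bandwidth 50 C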
After the SVC partnership is configured, you can configure any rcrelationship or
rcconsistgrp that you need. Ensure that a single volume is only in one relationship.
Intercluster example: This example is for an intercluster Global Mirror operation only. If
you want to set up an intracluster operation, we highlight the steps in the following
procedure that you do not need to perform.
Table 10-5 Details of the volumes for the Global Mirror relationship scenario
Content of volume Volumes at primary site Volumes at secondary site
5. Create the Global Mirror relationship for GM_App_Pri that uses the following settings:
– Master: GM_App_Pri
– Auxiliary: GM_App_Sec
– Auxiliary SVC system: ITSO_SVC4
– Name: GMREL3
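A sketch of the command that this step implies (the -global option makes it a Global Mirror relationship; the -sync option is inferred from the later description of GMREL3 and is an assumption here):
mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO_SVC4 -global -sync -name GMREL3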
Intracluster Global Mirror: If you are creating an intracluster Global Mirror, do not perform
the next step. Instead, go to 10.14.3, “Changing link tolerance and system delay
simulation” on page 683.
Pre-verification
To verify that both clustered systems can communicate with each other, use the
lspartnershipcandidate command. Example 10-170 confirms that our clustered systems can communicate: ITSO_SVC4 is an eligible candidate for an SVC system partnership with ITSO_SVC_DH8, and vice versa.
In Example 10-171, we show the output of the lspartnership command before we set up the
SVC systems’ partnership for Global Mirror. We show this output for comparison after we set
up the SVC partnership.
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
Example 10-172 Creating the partnership from ITSO_SVC_DH8 to ITSO_SVC4 and verifying it
IBM_2145:ITSO_SVC_DH8:superuser>mkpartnership -bandwidth 100 ITSO_SVC4
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
0000020061C06FCA ITSO_SVC4 remote partially_configured_local 100
In Example 10-173, we create the partnership from ITSO_SVC4 back to ITSO_SVC_DH8 and
specify a 100 MBps bandwidth to use for the background copy. After the partnership is
created, verify that the partnership is fully configured by reissuing the lspartnership
command.
Example 10-173 Creating the partnership from ITSO_SVC4 to ITSO_SVC_DH8 and verifying it
IBM_2145:ITSO_SVC4:superuser>mkpartnership -bandwidth 100 ITSO_SVC_DH8
IBM_2145:ITSO_SVC4:superuser>lspartnership
id name location partnership bandwidth
0000020061C06FCA ITSO_SVC4 local
000002006BE04FC4 ITSO_SVC_DH8 remote fully_configured 100
IBM_2145:ITSO_SVC_DH8:superuser>lspartnership
id name location partnership bandwidth
000002006BE04FC4 ITSO_SVC_DH8 local
0000020061C06FCA ITSO_SVC4 remote fully_configured 100
Important: We strongly suggest that you use the default value. If the link is overloaded for
a period (which affects host I/O at the primary site), the relationships are stopped to protect
those hosts.
To check the current settings for the delay simulation, use the following command:
lssystem
In Example 10-174, we show the modification of the delay simulation value and a change of
the Global Mirror link tolerance parameters. We also show the changed values of the Global
Mirror link tolerance and delay simulation parameters.
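Both parameters are system-wide settings that are changed with the chsystem command; a sketch of the general form (the numeric values are placeholders only, not recommendations):
chsystem -gmlinktolerance 300
chsystem -gminterdelaysimulation 20
chsystem -gmintradelaysimulation 40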
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
auth_service_type tip
relationship_bandwidth_limit 25
tier generic_ssd
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_capacity 766.50GB
tier_free_capacity 736.50GB
has_nas_key no
layer appliance
We use the lsvdisk command to list all of the volumes in the ITSO_SVC_DH8 system. Then, we
use the lsrcrelationshipcandidate command to show the possible candidate volumes for
GM_DB_Pri in ITSO_SVC4.
After checking all of these conditions, we use the mkrcrelationship command to create the
Global Mirror relationship. To verify the new Global Mirror relationships, we list them by using
the lsrcrelationship command.
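A Global Mirror relationship that is placed directly into the Consistency Group might be created with a command of this general form (a sketch based on the names that appear in the lsrcrelationship output below):
mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO_SVC4 -global -consistgrp CG_W2K3_GM -name GMREL1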
The status of GMREL3 is consistent_stopped because it was created with the -sync option. The
-sync option indicates that the secondary (auxiliary) volume is synchronized with the primary
(master) volume. The initial background synchronization is skipped when this option is used.
GMREL1 and GMREL2 are in the inconsistent_stopped state because they were not created with
the -sync option. Therefore, their auxiliary volumes must be synchronized with their primary
volumes.
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_
name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority
:progress:copy_type:cycling_mode
0:GMREL1:000002006BE04FC4:ITSO_SVC_DH8:0:GM_DB_Pri:0000020061C06FCA:ITSO_SVC4:0:GM_DB_Sec:master:0:CG_W2K3_
GM:inconsistent_copying:50:73:global:none
1:GMREL2:000002006BE04FC4:ITSO_SVC_DH8:1:GM_DBLog_Pri:0000020061C06FCA:ITSO_SVC4:1:GM_DBLog_Sec:master:0:CG
_W2K3_GM:inconsistent_copying:50:75:global:none
2:GMREL3:000002006BE04FC4:ITSO_SVC_DH8:2:GM_App_Pri:0000020061C06FCA:ITSO_SVC4:2:GM_App_Sec:master:::consis
tent_stopped:50:100:global:none
When Global Mirror is implemented, the goal is to reach a consistent and synchronized state
that can provide redundancy if a hardware failure occurs that affects the SAN at the
production site.
In this section, we show how to start the stand-alone Global Mirror relationships and the
Consistency Group.
aux_change_vdisk_id
aux_change_vdisk_name
Upon the completion of the background copy, the CG_W2K3_GM Global Mirror Consistency
Group enters the Consistent synchronized state.
Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification
when Global Mirror Consistency Groups or relationships change state.
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 38
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 76
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
When all of the Global Mirror relationships complete the background copy, the Consistency
Group enters the Consistent synchronized state, as shown in Example 10-181.
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
First, we show how to stop and restart the stand-alone Global Mirror relationships and the
Consistency Group.
aux_change_vdisk_id
aux_change_vdisk_name
Example 10-183 Stopping a Global Mirror Consistency Group without specifying the -access
parameter
IBM_2145:ITSO_SVC_DH8:superuser>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
If we want to enable access (write I/O) for the secondary volume later, we can reissue the
stoprcconsistgrp command and specify the -access parameter. The Consistency Group
changes to the Idling state, as shown in Example 10-184.
RC_rel_name GMREL2
If any updates were performed on the master volume or the auxiliary volume, consistency is
compromised. Therefore, we must issue the -force parameter to restart the relationship, as
shown in Example 10-185. If the -force parameter is not used, the command fails.
Example 10-185 Restarting a Global Mirror relationship after updates in the Idling state
IBM_2145:ITSO_SVC_DH8:superuser>startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
If any updates were performed on the master volume or the auxiliary volume in any of the
Global Mirror relationships in the Consistency Group, consistency is compromised. Therefore,
we must issue the -force parameter to start the relationship. If the -force parameter is not
used, the command fails.
In Example 10-186, we restart the Consistency Group and change the copy direction by
specifying the auxiliary volumes to become the primaries.
Example 10-186 Restarting a Global Mirror relationship while changing the copy direction
IBM_2145:ITSO_SVC_DH8:superuser>startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
If the volume that is specified as the primary already is a primary when this command is run,
the command has no effect.
In Example 10-187, we change the copy direction for the stand-alone Global Mirror
relationship and specify the auxiliary volume to become the primary.
Important: When the copy direction is switched, it is crucial that no I/O is outstanding to
the volume that changes from primary to secondary because all I/O is inhibited to that
volume when it becomes the secondary. Therefore, careful planning is required before the
switchrcrelationship command is used.
Example 10-187 Switching the copy direction for a Global Mirror relationship
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
master_change_vdisk_id
master_change_vdisk_name
aux_change_vdisk_id
aux_change_vdisk_name
10.14.18 Switching the copy direction for a Global Mirror Consistency Group
When a Global Mirror Consistency Group is in the Consistent synchronized state, we can change the copy direction for the Consistency Group by using the switchrcconsistgrp command and specifying which copy is to become the primary. If the specified copy is already the primary when this command is run, the command has no effect.
In Example 10-188, we change the copy direction for the Global Mirror Consistency Group
and specify the auxiliary to become the primary.
Important: When the copy direction is switched, it is crucial that no I/O is outstanding to the volume that changes from primary to secondary because all I/O to that volume is inhibited when it becomes the secondary. Therefore, careful planning is required before the switchrcconsistgrp command is used.
Example 10-188 Switching the copy direction for a Global Mirror Consistency Group
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC_DH8:superuser>switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
When Global Mirror operates in cycling mode, changes are tracked and, where needed, copied to intermediate Change Volumes. Changes are transmitted to the secondary site periodically. The secondary volume therefore lags further behind the primary volume than with non-cycling Global Mirror, and more data must be recovered if a failover occurs. However, because the data transfer can be smoothed over a longer period, lower bandwidth is required to provide an effective solution.
A Global Mirror relationship consists of two volumes: primary and secondary. With SVC 6.3,
each of these volumes can be associated to a Change Volume. Change Volumes are used to
record changes to the remote copy volume. A FlashCopy relationship exists between the
remote copy volume and the Change Volume. This relationship cannot be manipulated as a
normal FlashCopy relationship. Most commands fail by design because this relationship is an
internal relationship.
Cycling mode transmits a series of FlashCopy images from the primary to the secondary, and it is enabled by using the chrcrelationship -cyclingmode multi command (or chrcconsistgrp -cyclingmode multi for a Consistency Group).
The primary Change Volume stores changes to be sent to the secondary volume and the
secondary Change Volume is used to maintain a consistent image at the secondary volume.
Every x seconds, the primary FlashCopy mapping is started automatically, where x is the
cycling period and is configurable. Data is then copied to the secondary volume from the
primary Change Volume. The secondary FlashCopy mapping is started if resynchronization is
needed. Therefore, a consistent copy is always at the secondary volume. The cycling period
is configurable and the default value is 300 seconds.
The recovery point objective (RPO) depends on how long the FlashCopy takes to complete. If
the FlashCopy completes within the cycling time, the maximum RPO = 2 x the cycling time;
otherwise, the RPO = 2 x the copy completion time.
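As a quick worked example that uses the default values from this section: with a 300-second cycling period and a FlashCopy that completes within that period, the maximum RPO is 2 x 300 = 600 seconds; if the copy instead takes 450 seconds to complete, the maximum RPO is 2 x 450 = 900 seconds.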
You can estimate the current RPO by using the new freeze_time rcrelationship property. It is
the time of the last consistent image that is present at the secondary. Figure 10-13 shows the
cycling mode with Change Volumes.
You must have a Change Volume for both the primary and secondary volumes.
You cannot manipulate it like a normal FlashCopy mapping.
In this section, we show how to change the cycling mode of the stand-alone Global Mirror
relationship (GMREL3) and the Consistency Group CG_W2K3_GM Global Mirror relationships
(GMREL1 and GMREL2).
We assume that the source and target volumes were created and that the ISLs and zoning
are in place to enable the SVC systems to communicate. We also assume that the Global
Mirror relationship was established.
Complete the following steps to change the Global Mirror to cycling mode with Change
Volumes:
1. Create thin-provisioned Change Volumes for the primary and secondary volumes at both
sites.
2. Stop the stand-alone relationship GMREL3 to change the cycling mode at the primary site.
3. Set the cycling mode on the stand-alone relationship GMREL3 at the primary site.
4. Set the Change Volume on the master volume relationship GMREL3 at the primary site.
5. Set the Change Volume on the auxiliary volume relationship GMREL3 at the secondary site.
6. Start the stand-alone relationship GMREL3 in cycling mode at the primary site.
7. Stop the Consistency Group CG_W2K3_GM to change the cycling mode at the primary site.
8. Set the cycling mode on the Consistency Group at the primary site.
9. Set the Change Volume on the master volume relationship GMREL1 of the Consistency
Group CG_W2K3_GM at the primary site.
10.Set the Change Volume on the auxiliary volume relationship GMREL1 at the secondary site.
11.Set the Change Volume on the master volume relationship GMREL2 of the Consistency
Group CG_W2K3_GM at the primary site.
12.Set the Change Volume on the auxiliary volume relationship GMREL2 at the secondary site.
13.Start the Consistency Group CG_W2K3_GM in the cycling mode at the primary site.
Example 10-189 Creating the thin-provisioned volumes for Global Mirror cycling mode
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb
-rsize 20% -autoexpand -grainsize 32 -name GM_DB_Pri_CHANGE_VOL
Virtual Disk, id [3], successfully created
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb
-rsize 20% -autoexpand -grainsize 32 -name GM_DBLog_Pri_CHANGE_VOL
Virtual Disk, id [4], successfully created
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -iogrp 0 -mdiskgrp 0 -size 10 -unit gb
-rsize 20% -autoexpand -grainsize 32 -name GM_App_Pri_CHANGE_VOL
Virtual Disk, id [5], successfully created
IBM_2145:ITSO_SVC_DH8:superuser>stoprcrelationship GMREL3
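The cycling mode and the Change Volumes (steps 3 through 5) are then set with chrcrelationship commands of this general form (a sketch; the auxiliary Change Volume is set from the secondary system, and the lsrcrelationship outputs that follow show these fields being populated):
IBM_2145:ITSO_SVC_DH8:superuser>chrcrelationship -cyclingmode multi GMREL3
IBM_2145:ITSO_SVC_DH8:superuser>chrcrelationship -masterchange GM_App_Pri_CHANGE_VOL GMREL3
IBM_2145:ITSO_SVC4:superuser>chrcrelationship -auxchange GM_App_Sec_CHANGE_VOL GMREL3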
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/37/20
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL3
id 2
name GMREL3
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 2
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 2
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_copying
bg_copy_priority 50
progress 100
freeze_time 2011/10/04/20/42/25
status online
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 5
master_change_vdisk_name GM_App_Pri_CHANGE_VOL
aux_change_vdisk_id 5
aux_change_vdisk_name GM_App_Sec_CHANGE_VOL
Example 10-195 Stopping the Consistency Group to change the cycling mode
IBM_2145:ITSO_SVC_DH8:superuser>stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode none
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
Example 10-196 Setting the Global Mirror cycling mode on the Consistency Group
IBM_2145:ITSO_SVC_DH8:superuser>chrcconsistgrp -cyclingmode multi CG_W2K3_GM
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
10.14.28 Setting the Change Volume on the master volume relationships of the
Consistency Group
In Example 10-197 on page 702, we change both relationships of the Consistency Group to add the Change Volumes on the primary volumes. The lsrcrelationship output that follows shows the names of the master Change Volumes.
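A sketch of the two commands that Example 10-197 implies, using the Change Volume names that appear in the output:
IBM_2145:ITSO_SVC_DH8:superuser>chrcrelationship -masterchange GM_DB_Pri_CHANGE_VOL GMREL1
IBM_2145:ITSO_SVC_DH8:superuser>chrcrelationship -masterchange GM_DBLog_Pri_CHANGE_VOL GMREL2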
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL1
id 0
name GMREL1
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 0
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 0
aux_vdisk_name GM_DB_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
IBM_2145:ITSO_SVC_DH8:superuser>
IBM_2145:ITSO_SVC_DH8:superuser>lsrcrelationship GMREL2
id 1
name GMREL2
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
master_vdisk_id 1
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
aux_vdisk_id 1
aux_vdisk_name GM_DBLog_Sec
primary aux
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 4
master_change_vdisk_name GM_DBLog_Pri_CHANGE_VOL
aux_change_vdisk_id
aux_change_vdisk_name
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
master_change_vdisk_id 3
master_change_vdisk_name GM_DB_Pri_CHANGE_VOL
aux_change_vdisk_id 3
aux_change_vdisk_name GM_DB_Sec_CHANGE_VOL
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/02/33
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
IBM_2145:ITSO_SVC_DH8:superuser>lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006BE04FC4
master_cluster_name ITSO_SVC_DH8
aux_cluster_id 0000020061C06FCA
aux_cluster_name ITSO_SVC4
primary aux
state consistent_copying
relationship_count 2
freeze_time 2011/10/04/21/07/42
status
sync
copy_type global
cycle_period_seconds 300
cycling_mode multi
RC_rel_id 0
RC_rel_name GMREL1
RC_rel_id 1
RC_rel_name GMREL2
Important: The support for migration from 7.1.x.x to 7.6.x.x is limited. Check with your
service representative for the recommended steps.
For more information about the recommended concurrent upgrade paths from all previous
versions of software to the latest release in each codestream, see this website:
http://www.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707
After the software file is uploaded to the system (to the /home/admin/upgrade directory), you can select the software and apply it to the system by using the management GUI or the applysoftware CLI command. When a new code level is applied, it is automatically installed on all of the nodes within the system.
The underlying command-line tool runs the sw_preinstall script. This script checks the
validity of the upgrade file and whether it can be applied over the current level. If the upgrade
file is unsuitable, the sw_preinstall script deletes the files to prevent the buildup of invalid
files on the system.
Before you upgrade the SVC software, ensure that all I/O paths between all hosts and the SAN are working. Otherwise, the applications might experience I/O failures during the software upgrade. You can verify the paths by using the Subsystem Device Driver (SDD) datapath query commands. Example 10-200 shows the output.
Write-through mode: During a software upgrade, periods occur when not all of the nodes
in the system are operational. As a result, the cache operates in write-through mode.
Write-through mode affects the throughput, latency, and bandwidth aspects of
performance.
Verify that your uninterruptible power supply unit configuration is also set up correctly (even if
your system is running without problems). Specifically, ensure that the following conditions
are true:
Your uninterruptible power supply units are all getting their power from an external source
and that they are not daisy chained. Ensure that each uninterruptible power supply unit is
not supplying power to another node’s uninterruptible power supply unit.
The power cable and the serial cable, which come from each node, go back to the same
uninterruptible power supply unit. If the cables are crossed and go back to separate
uninterruptible power supply units, another node might also be shut down mistakenly
during the upgrade while one node is shut down.
Important: Do not share the SVC uninterruptible power supply unit with any other devices.
You must also ensure that all I/O paths are working for each host that runs I/O operations to
the SAN during the software upgrade. You can check the I/O paths by using the datapath
query commands.
You do not need to check for hosts that have no active I/O operations to the SAN during the
software upgrade.
Upgrade procedure
To upgrade the SVC system software, complete the following steps:
1. Before the upgrade is started, you must back up the configuration and save the backup
configuration file in a safe place.
2. Before you start to transfer the software code to the clustered system, clear the previously
uploaded upgrade files in the /home/admin/upgrade SVC system directory, as shown in
Example 10-201.
3. Save the data collection for support diagnosis if you experience problems, as shown in
Example 10-202.
4. List the dump that was generated by the previous command, as shown in
Example 10-203.
5. Save the generated dump in a safe place by using the pscp command, as shown in
Figure 10-14.
Note: The pscp command does not work if you did not upload your PuTTY SSH private
key or if you are not using the user ID and password with the PuTTY pageant agent, as
shown in Figure 10-14.
6. Upload the new software package by using PuTTY Secure Copy. Enter the pscp -load
command, as shown in Example 10-205.
7. Upload the SVC Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the
command, as shown in Example 10-206.
8. Verify that the packages were successfully delivered through the PuTTY command-line
application by entering the lsdumps command, as shown in Example 10-207.
9. Now that the packages are uploaded, install the SVC Software Upgrade Test Utility, as
shown in Example 10-208.
IBM_2145:ITSO_SVC_DH8:superuser>applysoftware -file
IBM2145_INSTALL_upgradetest_12.31
CMMVC6227I The package installed successfully.
10. Using the svcupgradetest command, test the upgrade for known issues that might prevent a software upgrade from completing successfully, as shown in Example 10-209.
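The following minimal sketch summarizes steps 1 through 10 and the start of the upgrade itself from the CLI. The main package file name, the PuTTY session name, the cluster IP address, and the use of svc_snap for the support data collection are illustrative assumptions; substitute the values for your environment:
IBM_2145:ITSO_SVC_DH8:superuser>svcconfig backup
IBM_2145:ITSO_SVC_DH8:superuser>svc_snap
IBM_2145:ITSO_SVC_DH8:superuser>lsdumps
(from the management workstation)
pscp -load ITSO_SVC_DH8 IBM2145_INSTALL_upgradetest_12.31 superuser@cluster_ip:/home/admin/upgrade/
pscp -load ITSO_SVC_DH8 IBM2145_INSTALL_7.6.0.0 superuser@cluster_ip:/home/admin/upgrade/
IBM_2145:ITSO_SVC_DH8:superuser>applysoftware -file IBM2145_INSTALL_upgradetest_12.31
IBM_2145:ITSO_SVC_DH8:superuser>svcupgradetest -v 7.6.0.0
IBM_2145:ITSO_SVC_DH8:superuser>applysoftware -file IBM2145_INSTALL_7.6.0.0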
While the upgrade runs, you can check the status, as shown in Example 10-211.
The new code is distributed and applied to each node in the SVC system:
– During the upgrade, you can issue the lsupdate command to see the status of the
upgrade.
– To verify that the upgrade was successful, you can run the lssystem and lsnodevpd
commands, as shown in Example 10-212. (We truncated the lssystem and lsnodevpd
information for this example.)
IBM_2145:ITSO_SVC_DH8:superuser>lsnodevpd 1
id 1
...
...
To generate a new log before you analyze unfixed errors, run the dumperrlog command, as
shown in Example 10-213 on page 711.
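A minimal sketch of the command, where the file name prefix is an illustrative assumption:
IBM_2145:ITSO_SVC_DH8:superuser>dumperrlog -prefix errlog_analysis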
Important: You must use the sainfo and satask command sets under the direction of IBM
Support. The incorrect use of these commands can lead to unexpected results.
For more information about how to troubleshoot and collect data from the SVC, see SAN
Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is
available at this website:
http://www.redbooks.ibm.com/abstracts/sg247521.html
Use the lsfabric command regularly to obtain a complete picture of the devices that are
connected and visible from the SVC cluster through the SAN. The lsfabric command
generates a report that displays the FC connectivity between nodes, controllers, and hosts.
For more information about the lsfabric command, see the V7.6 Command-Line Interface
User’s Guide for IBM System Storage SAN Volume Controller and Storwize V7000,
GC27-2287.
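A minimal sketch of the command; the delimiter is only a readability preference, and the output can be filtered further as described in the CLI guide:
IBM_2145:ITSO_SVC_DH8:superuser>lsfabric -delim :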
The recover system procedure is an extremely sensitive procedure that is to be used only as a last resort. The procedure should not be used by the client or an IBM service support representative (SSR) without direct guidance from IBM Remote Technical Support. Initiating the T3 recover system procedure while the system is in a specific state can result in the loss of the XML configuration backup files.
For further information about the T3 recover system procedure, see the IBM SAN Volume Controller documentation:
https://ibm.biz/BdHnKF
The information is divided into normal operations and advanced operations. We explain the basic configuration procedures that are required to get your IBM Spectrum Virtualize environment running as quickly as possible by using the GUI.
Multiple users can be logged in to the GUI at any time. However, no locking mechanism
exists, so be aware that if two users change the same object at the same time, the last action
that is entered from the GUI is the one that takes effect.
Important: Data entries that are made through the GUI are case-sensitive.
In later sections of this chapter, we expect users to be able to navigate to this panel without
explanation of the procedure each time.
The System panel highlights the following areas: errors and warnings, running jobs, the performance meter, the health indicator, the capacity overview, and the dynamic menu.
Dynamic menu
From any page in the IBM Spectrum Virtualize GUI, you can always access the dynamic
menu. The IBM Spectrum Virtualize GUI dynamic menu is on the left side of the GUI window.
To browse by using this menu, hover the mouse pointer over the various icons and choose a
page that you want to display, as shown in Figure 11-2 on page 717.
Figure 11-2 The dynamic menu in the left column of IBM SAN Volume Controller GUI
Starting with IBM Spectrum Virtualize V7.6, the dynamic function of the left menu is disabled by default and the icons are a static size. To enable the dynamic appearance, navigate to Settings → GUI Preferences → Navigation, as shown in Figure 11-3. Select the checkbox to enable animation (the dynamic function) and click Save. For the changes to take effect, log in again or refresh the GUI cache from the General panel in GUI Preferences.
The IBM Spectrum Virtualize dynamic menu consists of multiple panels, independent of the underlying hardware (SVC or IBM Storwize family). These panels group common configuration and administration objects and present individual administrative objects to the IBM Spectrum Virtualize GUI users, as shown in Figure 11-4.
Figure 11-4 IBM Spectrum Virtualize GUI panel managing SVC model DH8
Welcome banner
IBM Spectrum Virtualize V7.6 and higher allows administrators to configure a welcome banner: a text message that appears in the GUI login screen and at the CLI login prompt. The welcome message is useful when you need to notify users of important information about the system, for example, security warnings or a location description.
The welcome banner (or login message) can be enabled from the GUI or the CLI. To define such a message, use the commands that are outlined in Example 11-1. Copy the text file that contains the welcome message to your configuration node and enable it in the CLI. In our case, the content of the file is located in /tmp/banner.
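As a sketch only, assuming the chbanner command that was introduced with the V7.6 CLI (verify the exact command name and parameters against the CLI reference for your code level), the banner file can then be activated as follows:
IBM_2145:ITSO_SVC_DH8:superuser>svctask chbanner -file /tmp/banner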
The result of the preceding action is illustrated in Figure 11-5 on page 719. It shows the login message in the GUI and in the CLI login prompt window.
Suggested tasks
After a successful login, IBM Spectrum Virtualize opens a pop-up window with suggested tasks, notifying administrators that several key SVC functions are not yet configured. You cannot miss or overlook this window. However, you can close the pop-up window and perform the tasks at any time.
In this case, the SVC GUI warns you that so far no volume is mapped to the host or that no
host is defined yet. You can directly perform the task from this window or cancel it and
execute the procedure later at any convenient time. Other suggested tasks that typically
appear after the initial configuration of the SVC are to create a volume and configure a
storage pool, for example.
The dynamic IBM Spectrum Virtualize menu contains the following panels (Figure 11-4 on
page 718):
Monitoring
Pools
Volumes
Hosts
Copy Services
Access
Settings
If non-critical issues exist for your system nodes, external storage controllers, or remote
partnerships, a new status area opens next to the Health Status widget, as shown in
Figure 11-9.
You can fix the error by clicking Status Alerts to direct you to the Events panel fix procedures.
If a critical system connectivity error exists, the Health Status bar turns red and alerts the
system administrator for immediate action, as shown in Figure 11-10.
The following information is displayed in this storage allocation indicator window. To view all of
the information, you must use the up and down arrow keys:
Allocated capacity
Virtual capacity
Compression ratio
Important: Since IBM Spectrum Virtualize version 7.4, the capacity units use the binary prefixes that are defined by the International Electrotechnical Commission (IEC). The prefixes represent multiplication by powers of 1024 with the symbols GiB (gibibyte), TiB (tebibyte), and PiB (pebibyte).
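For example, 1 GiB = 1024 × 1024 × 1024 bytes = 1,073,741,824 bytes, whereas a decimal gigabyte (GB) is 1,000,000,000 bytes. The same capacity therefore appears as a number that is roughly 7% smaller when it is expressed in GiB rather than in GB.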
Volume migration
MDisk removal
Image mode migration
Extent migration
FlashCopy
Metro Mirror
Global Mirror
Volume formatting
Space-efficient copy repair
Volume copy verification
Volume copy synchronization
Estimated time for the task completion
By clicking within the square (as shown in Figure 11-7 on page 720), this area provides
detailed information about running and recently completed tasks, as shown in Figure 11-13.
Performance meter
In the middle of the notification area is a performance meter that shows three measured read and write parameters: bandwidth, IOPS, and latency. See Figure 11-14 for details.
Overview window
Since IBM Spectrum Virtualize V7.4, the welcome window of the GUI changed from the
well-known former Overview panel to the new System panel, as shown in Figure 11-1 on
page 716. Clicking Overview (Figure 11-15 on page 723) in the upper-right corner of the
System panel opens a modified Overview panel with options that are similar to previous
versions of the software.
The following content of the chapter helps you to understand the structure of the panel and
how to navigate to various system components to manage them more efficiently and quickly.
Table filtering
On most pages, a Filter option (magnifying glass icon) is available on the upper-left side of the
window. Use this option if the list of object entries is too long.
2. Enter the text string that you want to filter and press Enter.
3. By using this function, you can filter your table that is based on column names. In our
example, a volume list is displayed that contains the names that include DS somewhere in
the name. DS is highlighted in amber, as shown in Figure 11-17. The search option is not
case-sensitive.
4. Remove this filtered view by clicking the reset filter icon, as shown in Figure 11-18.
Filtering: This filtering option is available in most menu options of the GUI.
Table information
In the table view, you can add or remove the information in the tables on most pages.
For example, on the Volumes page, complete the following steps to add a column to our table:
1. Right-click any column headers of the table or select the icon in the left corner of the table
header. A list of all of the available columns appears, as shown in Figure 11-19 on
page 724.
2. Select the column that you want to add (or remove) from this table. In our example, we
added the volume ID column and sorted the content by ID, as shown on the left in
Figure 11-20.
3. You can repeat this process several times to create custom tables to meet your
requirements.
4. You can always return to the default table view by selecting Restore Default View in the
column selection menu, as shown in Figure 11-21 on page 725.
Sorting: By clicking a column, you can sort a table that is based on that column in
ascending or descending order.
11.1.3 Help
To access online help, move the mouse pointer over the question mark (?) icon in the
upper-right corner of any panel and select the context-based help topic, as shown in
Figure 11-23 on page 726. Depending on the panel you are working with, the help displays its
context item.
By clicking Information Center, you are directed to the public IBM Knowledge Center, which
provides all of the information about the SVC systems, as shown in Figure 11-24.
As of V7.4, the option that was formerly called System Details is integrated into the device
overview on the general System panel, which is available after logging in or when clicking the
option System from the Monitoring menu. For more information, see “Overview window” on
page 722.
When you click a specific component of a node, a pop-up window indicates the details of the
disk drives in the unit. By right-clicking and selecting Properties, you see detailed technical
parameters, such as capacity, interface, rotation speed, and the drive status (online or offline).
Figure 11-27 Component details
In an environment with multiple SVC systems, you can easily direct the onsite personnel or
technician to the correct device by enabling the identification LED on the front panel. Click
Identify in the pop-up window that is shown in Figure 11-26. Then, wait for confirmation from
the technician that the device in the data center was correctly identified.
Alternatively, you can use the SVC command-line interface (CLI) to get the same results.
Type the following commands in this sequence:
1. Type svctask chnode -identify yes 1 (or just type chnode -identify yes 1).
2. Type svctask chnode -identify no 1 (or just type chnode -identify no 1).
Each system that is shown in the Dynamic system view in the middle of a System panel can
be rotated by 180° to see its rear side. Click the rotation arrow in the lower-right corner of the
device, as illustrated in Figure 11-29.
The output is shown in Figure 11-31. By using this menu, you can also power off the machine
(without an option for remote start), remove the node or enclosure from the system, or list all
of the volumes that are associated with the system, for example.
The Properties option now also indicates whether encryption is supported, that is, whether the prerequisite hardware (additional CPU and compression cards) is installed in the system and the encryption licenses are enabled.
In addition, from the System panel, you can see an overview of important status information
and the parameters of the Fibre Channel (FC) ports (Figure 11-32).
By choosing Fibre Channel Ports, you can see the list and status of the available FC ports
with their worldwide port names (WWPNs), as shown in Figure 11-33.
11.2.3 Events
The Events option, which you select from the Monitoring menu, tracks all informational,
warning, and error messages that occur in the system. You can apply various filters to sort
them or export them to an external comma-separated values (CSV) file. A CSV file can be
created from the information that is shown here. Figure 11-34 on page 732 provides an
example of records in the SVC Event log.
11.2.4 Performance
The Performance panel reports the general system statistics that relate to processor (CPU)
utilization, host and internal interfaces, volumes, and MDisks. You can switch between MBps
or IOPS or even drill down in the statistics to the node level. This capability might be useful
when you compare the performance of each node in the system if problems exist after a node
failover occurs. See Figure 11-35.
The performance statistics in the GUI show, by default, the latest five minutes of data. To see
details of each sample, click the graph and select the time stamp, as shown in Figure 11-36
on page 733.
The charts that are shown in Figure 11-36 represent five minutes of the data stream. For in-depth storage monitoring and performance statistics with historical data about your SVC system, use IBM Spectrum Control (formerly IBM Tivoli Storage Productivity Center for Disk and IBM Virtual Storage Center).
For more information about a specific controller and MDisks, click the plus sign (+) that is to
the left of the controller icon and name.
2. Enter the new name that you want to assign to the controller and then click Rename, as
shown in Figure 11-39.
Controller name: The name can consist of the letters A - Z and a - z, the numbers
0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or underscore.
3. A task is started to change the name of this storage system. When it completes, you can
close this window. The new name of your controller is displayed on the External Storage
panel.
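The same rename can be performed from the CLI; a minimal sketch, where the controller ID and the new name are illustrative:
IBM_2145:ITSO_SVC_DH8:superuser>chcontroller -name ITSO_DS8870 0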
3. When the task completes, click Close to see the newly detected MDisks.
You can add new columns to the table, as described in “Table information” on page 724.
To retrieve more information about a specific storage pool, select any storage pool in the left
column. The upper-right corner of the panel, which is shown in Figure 11-42, contains the
following information about this pool:
Status
Number of MDisks
Number of volume copies
Whether Easy Tier is active on this pool
Site assignment
Volume allocation
Capacity
Change the view by selecting MDisks by Pools. Select the pool with which you want to work
and click the plus sign (+), which expands the information. This panel displays the MDisks
that are present in this storage pool, as shown in Figure 11-43.
From this window, you can also directly assign discovered storage (detected MDisks) to the appropriate storage pools. Use the icons below the list of controllers and click the Assign button.
Storage pool name: You can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The name can be 1 - 63 characters. The name is
case-sensitive. The name cannot start with a number or the pattern “MDiskgrp”
because this prefix is reserved for SVC internal assignment only.
b. Optional: Change the icon that is associated with this storage pool, as shown in
Figure 11-45.
c. In addition, you can specify the following information and then click Next:
• Extent Size under the Advanced Settings section. The default is 1 GiB.
• Encryption settings (Advanced settings). The default depends on whether your SVC is encryption-enabled.
Important: If you enabled software encryption on your SVC, each newly defined storage pool has encryption enabled by default. If you intentionally do not want the pool to be encrypted, clear the checkbox during pool definition.
3. In the next window (as shown in Figure 11-46), complete the following steps to specify the
MDisks that you want to associate with the new storage pool:
a. Select the MDisks that you want to add to this storage pool.
Tip: To add multiple MDisks, press and hold the Ctrl key and click selected items.
4. Close the task completion window. In the Storage Pools panel (as shown in Figure 11-47),
the new storage pool is displayed.
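The same pool can be created from the CLI. The following minimal sketch assumes illustrative pool and MDisk names; the extent size is specified in MB, so 1024 corresponds to the 1 GiB default:
IBM_2145:ITSO_SVC_DH8:superuser>mkmdiskgrp -name ITSO_Pool1 -ext 1024 -mdisk mdisk4:mdisk5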
2. Enter the new name that you want to assign to the storage pool and click Rename, as
shown in Figure 11-49.
Storage pool name: The name can consist of the letters A - Z and a - z, the numbers
0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, dash, or underscore.
2. After you click the Delete Pool option, the command is executed immediately; there is no confirmation prompt. However, starting with IBM Spectrum Virtualize V7.4, it is not possible to delete a pool with assigned and active MDisks. In that case, the option is unavailable in the menu, as indicated in Figure 11-51. First remove the obsolete MDisks from the pool.
Important: IBM Spectrum Virtualize does not allow you to directly delete a pool that contains active volumes with recent I/O activity.
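For reference, the following minimal CLI sketch shows the equivalent rename and delete actions; the pool names are illustrative, and the pool must be empty before rmmdiskgrp succeeds without force options:
IBM_2145:ITSO_SVC_DH8:superuser>chmdiskgrp -name ITSO_Pool1_new ITSO_Pool1
IBM_2145:ITSO_SVC_DH8:superuser>rmmdiskgrp ITSO_Pool1_new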
To retrieve more information about a specific MDisk, complete the following steps:
1. From the expanded view of a storage pool in the MDisks panel, select an MDisk.
2. Click Properties, as shown in Figure 11-52 on page 741.
3. For the selected MDisk, a basic overview is displayed that shows its important parameters,
as shown in Figure 11-53. For additional technical details, enlarge the section View more
details.
4. Click the Dependent Volumes tab to display information about the volumes that are on
this MDisk, as shown in Figure 11-54. For more information about the volume panel, see
11.8, “Working with volumes” on page 759.
3. In the Rename MDisk window (Figure 11-56), enter the new name that you want to assign
to the MDisk and click Rename.
MDisk name: The name can consist of the letters A - Z and a - z, the numbers 0 - 9,
the dash (-), and the underscore (_) character. The name can be 1 - 63 characters.
Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers
(LUNs) from your subsystem are correctly assigned to SVC. Also, check that correct
zoning is in place. For example, ensure that SVC can see the disk subsystem.
Site awareness: Do not assign sites to the SVC nodes and external storage controllers
in a standard, normal topology. Site awareness is intended primarily for SVC Stretched
Clusters or HyperSwap. If any MDisks or controllers appear offline after detection,
remove the site assignment from the SVC node or controller and discover storage.
1. From the MDisks by Pools panel, select the unmanaged MDisk that you want to add to a
storage pool.
2. Click Actions → Assign, as shown in Figure 11-59.
Alternative: You can also access the Assign action by right-clicking an MDisk.
3. From the Add MDisk to Pool window, select to which pool you want to add this MDisk, and
then, click Assign, as shown in Figure 11-60.
4. Before assigning an MDisk to a specific pool, you can select the storage tier of the disk being assigned and indicate whether the MDisk is externally encrypted. If it is, SVC-based encryption is disabled for that MDisk, even if it is assigned to a pool that is encrypted by the SVC. See Figure 11-61 on page 745 for details.
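The same assignment can be made from the CLI; a minimal sketch with illustrative MDisk and pool names (a -tier parameter can also be supplied, as described in the CLI reference):
IBM_2145:ITSO_SVC_DH8:superuser>addmdisk -mdisk mdisk7 ITSO_Pool1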
Alternative: You can also access the Remove action by right-clicking an assigned
MDisk.
If volumes are using the MDisks that you are removing from the storage pool, you must
confirm that the volume data will be migrated to other MDisks in the pool.
3. Click Yes, as shown in Figure 11-63.
When the migration is complete, the MDisk status changes to Unmanaged. Ensure that
the MDisk remains accessible to the system until its status becomes Unmanaged. This
process might take time. If you disconnect the MDisk before its status becomes
Unmanaged, all of the volumes in the pool go offline until the MDisk is reconnected.
An error message is displayed (as shown in Figure 11-64) if insufficient space exists to
migrate the volume data to other extents on other MDisks in that storage pool.
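A minimal CLI sketch of the same removal; the names are illustrative, and the -force flag causes the extents that are in use to be migrated to the remaining MDisks in the pool:
IBM_2145:ITSO_SVC_DH8:superuser>rmmdisk -mdisk mdisk7 -force ITSO_Pool1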
11.6 Migration
For more information about data migration, see Chapter 6, “Data migration” on page 237.
Host configuration: For more information about connecting hosts to the SVC in a SAN
environment, see Chapter 5, “Host configuration” on page 173.
A host system is a computer that is connected to the SVC through an FC interface, Fibre Channel over Ethernet (FCoE), or an Internet Protocol (IP) network.
A host object is a logical object in SVC that represents a list of worldwide port names
(WWPNs) and a list of Internet Small Computer System Interface (iSCSI) names that identify
the interfaces that the host system uses to communicate with the SVC. iSCSI names can be
iSCSI-qualified names (IQN) or extended unique identifiers (EUI).
A typical configuration has one host object for each host system that is attached to SVC. If a
cluster of hosts accesses the same storage, you can add host bus adapter (HBA) ports from
several hosts to one host object to simplify a configuration. A host object can have both
WWPNs and iSCSI names.
The following methods can be used to visualize and manage your SVC host objects from the
SVC GUI Hosts menu selection:
Use the Hosts panel, as shown in Figure 11-65.
Use the Volumes by Hosts panel, as shown in Figure 11-68 on page 748.
Important: Several actions on the hosts are specific to the Ports by Host panel or the Host
Mapping panel, but all of these actions and others are accessible from the Hosts panel. For
this reason, all actions on hosts are run from the All Hosts panel.
You can add information (new columns) to the table in the Hosts panel, as shown in
Figure 11-19 on page 724. For more information, see “Table information” on page 724.
To retrieve more information about a specific host, complete the following steps:
1. In the table, select a host.
2. Click Actions → Properties, as shown in Figure 11-69.
Alternative: You can also access the Properties action by right-clicking a host.
3. You are presented with information for a host in the Overview window, as shown in
Figure 11-70 on page 749.
Show Details option: To obtain more information about the hosts, select Show Details
(Figure 11-70).
4. On the Mapped Volumes tab (Figure 11-71), you can see the volumes that are mapped to
this host.
5. The Port Definitions tab, as shown in Figure 11-72 on page 750, displays attachment
information, such as the WWPNs that are defined for this host or the iSCSI IQN that is
defined for this host.
When you finish viewing the details, click Close to return to the previous window.
Note: The FCoE hosts are listed under the FC Hosts Add menu in the SVC GUI. Click Fibre Channel Hosts to access the FCoE host options. (See Figure 11-74 on page 751.)
3. Select Fibre Channel Host from the two available connection types, as shown in Figure 11-74 on page 751. We recommend expanding the Advanced menu to show more details.
4. In the Add Host window (Figure 11-75 on page 752), enter a name for your host in the
Host Name field.
Host name: If you do not provide a name, the SVC automatically generates the name
hostx (where x is the ID sequence number that is assigned by the SVC internally). If you
provide a name, use letters A - Z and a - z, numbers 0 - 9, or the underscore (_)
character. The host name can be 1 - 63 characters.
5. In the Fibre Channel Ports section, use the drop-down list box to select the WWPNs that correspond to your HBA or HBAs, and then click the Plus icon to add more ports.
Deleting an FC port: If you added the wrong FC port, you can delete it from the list by
clicking the Minus button.
If your WWPNs do not display, click Rescan to rediscover the available WWPNs that are
new since the last scan.
WWPN still not displayed: In certain cases, your WWPNs still might not display, even
though you are sure that your adapter is functioning (for example, you see the WWPN
in the switch name server) and your SAN zones are set up correctly. To correct this
situation, enter the WWPN of your HBA or HBAs into the drop-down list box and click
Add Port to List. It is displayed as unverified.
6. If you need to modify the I/O Group or Host Type, you must select Advanced in the
Advanced Settings section to access these Advanced Settings, as shown in Figure 11-75
on page 752. Perform the following tasks:
– Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
– Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these hosts, select HP/UX (to support
more than eight LUNs for HP/UX machines) or TPGS for Sun hosts that use MPxIO.
7. Click Add Host, as shown in Figure 11-75. After you return to the Hosts panel
(Figure 11-76 on page 752), you can see the newly added FC host.
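The same FC host object can also be created from the CLI; a minimal sketch, where the host name and WWPNs are illustrative assumptions:
IBM_2145:ITSO_SVC_DH8:superuser>mkhost -name ITSO_W2K12_FC -fcwwpn 2100000E1E30A0E8:2100000E1E30A0E9 -type generic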
iSCSI-attached hosts
To create a host that uses the iSCSI connection type, complete the following steps:
1. To go to the Hosts panel from the SVC System panel on Figure 11-1 on page 716, move
the mouse pointer over the Hosts selection and click Hosts.
2. Click Add Host, as shown in Figure 11-73 on page 750, and select iSCSI Host.
3. In the Add Host window (as shown in Figure 11-77), enter a name for your host in the Host
Name field.
Host name: If you do not provide a name, the SVC automatically generates the name
hostx (where x is the ID sequence number that is assigned by the SVC internally). If you
want to provide a name, you can use the letters A - Z and a - z, the numbers 0 - 9, and
the underscore (_) character. The host name can be 1 - 63 characters.
4. In the iSCSI ports section, enter the iSCSI initiator name or IQN as an iSCSI port and then click the Plus icon. This IQN is obtained from the server and generally serves the same purpose as the WWPN. Repeat this step to add more ports.
Deleting an iSCSI port: If you add the wrong iSCSI port, you can delete it from the list
by clicking the Minus icon.
If needed, select Use CHAP authentication (all ports), as shown in Figure 11-77, and
enter the Challenge Handshake Authentication Protocol (CHAP) secret.
The CHAP secret is the authentication method that is used to restrict access for other
iSCSI hosts to use the same connection. You can set the CHAP for the whole system
under the system’s properties or for each host definition. The CHAP must be identical on
the server and the system or host definition.
5. If you need to modify the I/O Group or Host Type, you must select the Advanced option in
the Advanced Settings section to access these settings, as shown in Figure 11-75 on
page 752. Perform the following tasks:
– Select one or more I/O Groups from which the host can access volumes. By default, all
I/O Groups are selected.
– Select the Host Type. The default type is Generic. Use Generic for all hosts, unless you
use Hewlett-Packard UNIX (HP-UX) or Sun. For these types, select HP/UX (to support
more than eight LUNs for HP/UX machines) or TPGS for Sun hosts that are using
MPxIO.
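A minimal CLI sketch of the same action; the host name and IQN are illustrative assumptions, and CHAP authentication can be configured separately, as described above:
IBM_2145:ITSO_SVC_DH8:superuser>mkhost -name ITSO_ESX_iSCSI -iscsiname iqn.1998-01.com.vmware:esxhost01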
Alternatives: Two other methods can be used to rename a host. You can right-click a
host and select Rename, or you can use the method that is described in Figure 11-79
on page 754.
3. In the Rename Host window, enter the new name that you want to assign and click
Rename, as shown in Figure 11-79.
2. The Remove Host window opens, as shown in Figure 11-81. In the Verify the number of
hosts that you are deleting field, enter the number of hosts that you want to remove. This
verification was added to help you avoid deleting the wrong hosts inadvertently.
If volumes are still associated with the host and you are sure that you want to delete the host even though these volumes will no longer be accessible, select Remove the hosts even if volumes are mapped to them. These volumes will no longer be accessible to the hosts.
3. Click Remove to complete the process, as shown in Figure 11-81.
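The CLI equivalent is a single command; a minimal sketch with an illustrative host name, where -force removes the host even if volume mappings still exist, so use it with the same care as the GUI option:
IBM_2145:ITSO_SVC_DH8:superuser>rmhost -force ITSO_W2K12_FC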
Tip: You can also right-click a host and select Modify Volume Mappings.
3. In the Modify Host Mappings window (Figure 11-83), select the volume or volumes that
you want to map to this host and move each volume to the table on the right by clicking the
two greater than symbols (>>). If you must remove the volumes, select the volume and
click the two less than symbols (<<).
4. In the table on the right, you can edit the SCSI ID by selecting a mapping that is
highlighted in yellow, which indicates a new mapping. Click Edit SCSI ID (Figure 11-84 on
page 757).
When you attempt to map a volume that is already mapped to another host, a warning pop-up window appears, prompting for confirmation (Figure 11-85). Volumes that are mapped to multiple hosts are intended for clustered or fault-tolerant systems, for example.
Changing a SCSI ID: You can change the SCSI ID only on new mappings. To edit a
mapping SCSI ID, you must unmap the volume and re-create the map to the volume.
5. In the Edit SCSI ID window, change SCSI ID and then click OK, as shown in Figure 11-86
on page 758.
6. After you add all of the volumes that you want to map to this host, click Map Volumes or
Apply to create the host mapping relationships.
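From the CLI, the same mapping is created with mkvdiskhostmap; a minimal sketch with illustrative host, volume, and SCSI ID values:
IBM_2145:ITSO_SVC_DH8:superuser>mkvdiskhostmap -host ITSO_W2K12_FC -scsi 0 ITSO_Vol01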
Tip: You can also right-click a host and select Modify Volume Mappings.
2. From the Unmap from Host window (Figure 11-88), enter the number of mappings that you
want to remove in the “Verify the number of mappings that this operation affects” field. This
verification helps you to avoid deleting the wrong hosts unintentionally.
3. Click Unmap to remove the host mapping or mappings. You are returned to the Hosts
panel.
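The CLI equivalent for removing a single mapping; a minimal sketch with illustrative names:
IBM_2145:ITSO_SVC_DH8:superuser>rmvdiskhostmap -host ITSO_W2K12_FC ITSO_Vol01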
You can visualize and manage your volumes by using the following methods:
Use the Volumes panel, as shown in Figure 11-89.
Important: Several actions on the volumes are specific to the Volumes by Pool panel or to the Volumes by Host panel. However, all of these actions and others are accessible from the Volumes panel. We run all of the actions in the following sections from the Volumes panel.
You can add information (new columns) to the table in the Volumes panel, as shown in
Figure 11-19 on page 724. For more information, see “Table information” on page 724.
To retrieve more information about a specific volume, complete the following steps:
1. In the table, select a volume and click Actions → Properties, as shown in Figure 11-92.
Tip: You can also access the Properties action by right-clicking a volume name.
The Overview tab shows basic information about a volume, as shown in Figure 11-93.
You can see more parameters of the volume by expanding the View more details section (Figure 11-94 on page 762).
2. When you finish viewing the details, click Close to return to the Volumes panel.
3. Select one of the following presets, as shown in Figure 11-96 on page 763:
– Basic: Create volumes that use a fully allocated (thick) amount of capacity from the
selected storage pool.
– Mirror: Create volumes with two physical copies that provide data protection. Each
copy can belong to a separate storage pool to protect data from storage failures.
– Custom: Provides an advanced menu with additional options for volume definition:
• Thin Provision: Create volumes whose capacity is virtual (seen by the host), but that
use only the real capacity that is written by the host application. The virtual capacity
of a thin-provisioned volume often is larger than its real capacity.
Changing the preset: For our example, we chose the Basic preset. However, whichever preset you choose, you can reconsider your decision later by customizing the volume when you click the Advanced option.
4. After selecting a preset (in our example, Basic), you must select the storage pool on which the data is striped from the drop-down menu, as shown in Figure 11-96.
Figure 11-96 Creating a volume: Select preset and the storage pool
5. After you select the storage pool, the window is updated automatically. You must enter the
following information, as shown in Figure 11-97 on page 764:
– Enter a volume quantity. You can create multiple volumes at the same time by using an
automatic sequential numbering suffix.
– Enter a name if you want to create a single volume, or a naming prefix if you want to
create multiple volumes.
Volume name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the underscore (_) character. The volume name can be 1 - 63 characters.
– Enter the size of the volume that you want to create and select the capacity unit of
measurement (bytes, KiB, MiB, GiB, or TiB) from the list.
Naming: When you create more than one volume, the wizard does not prompt you for a name for each volume that is created. Instead, the name that you use here becomes the prefix, and a number (starting at zero) is appended to this prefix as each volume is created. You can modify the starting suffix to any whole non-negative number that you prefer. Modifying the ending value increases or decreases the number of volumes that are created accordingly.
Volume location: Specify whether mirroring is desired and select the respective pool. Choose a caching I/O Group and then select a preferred node. You can leave the default values for SVC auto-balance. After you select a caching I/O Group, you can also add more I/O Groups as Accessible I/O Groups.
Thin Provisioning: You can set the following options after you activate thin provisioning
(Figure 11-101):
– Real Capacity: Enter the real size that you want to allocate. This size is the percentage
of the virtual capacity or a specific number in GiB of the disk space that is allocated.
– Automatically Expand: Select to allow the real disk size to grow, as required.
– Warning Threshold: Enter a percentage of the virtual volume capacity for a threshold
warning. A warning message is generated when the used disk capacity on the
space-efficient copy first exceeds the specified threshold.
– Thin-Provisioned Grain Size: Select the grain size: 32 KiB, 64 KiB, 128 KiB, or
256 KiB. Smaller grain sizes save space. Larger grain sizes produce better
performance. Try to match the FlashCopy grain size if the volume is used for
FlashCopy.
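As a minimal CLI sketch, the following command creates a thin-provisioned volume with these options; the pool, volume name, and sizes are illustrative assumptions:
IBM_2145:ITSO_SVC_DH8:superuser>mkvdisk -mdiskgrp ITSO_Pool1 -iogrp 0 -size 100 -unit gb -rsize 2% -autoexpand -warning 80% -grainsize 256 -name ITSO_thin_Vol01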
Important: Compressed and uncompressed volumes must not be mixed within the
same pool.
For more information about the Real-time Compression feature, see Real-time
Compression in SAN Volume Controller and Storwize V7000, REDP-4859, and
Implementing IBM Real-time Compression in SAN Volume Controller and IBM Storwize
V7000, TIPS1083.
General: You can format the volume before use and set the caching mode (Enabled, Read cache only, or Disabled), as illustrated in Figure 11-97 on page 764.
– OpenVMS only: Enter the user-defined identifier (UDID) for OpenVMS. You must complete this field only for an OpenVMS system.
You can choose to create only the volumes by clicking Create, or you can create and map the
volumes by selecting Create and Map to Host. If you select to create only the volumes, you
are returned to the Volumes panel. You see that your volumes were created but not mapped,
as shown in Figure 11-104. You can map them later.
If you want to create and map the volumes on the volume creation window, click Continue after the task finishes and another window opens. In the Modify Host Mappings window, select the I/O Group and host to which you want to map these volumes by using the drop-down menu (as shown in Figure 11-105), and you are automatically directed to the host mapping table.
Figure 11-105 Modify Host Mappings: Select the host to which to map your volumes
In the Modify Host Mappings window, verify the mapping. If you want to modify the mapping,
select the volume or volumes that you want to map to a host and move each of them to the
table on the right by clicking the two greater than symbols (>>), as shown in Figure 11-106. If
you must remove the mappings, click the two less than symbols (<<).
After you add all of the volumes that you want to map to this host, click Map Volumes or
Apply to create the host mapping relationships and finalize the creation of the volumes. You
return to the main Volumes panel. You can see that your volumes were created and mapped,
as shown in Figure 11-107.
Tip: Two other ways are available to rename a volume. You can right-click a volume and
select Rename, or you can use the method that is described in Figure 11-377 on
page 902.
3. In the Rename Volume window, enter the new name that you want to assign to the volume
and click Rename, as shown in Figure 11-109.
Volume name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The volume name can be 1 - 63 characters.
3. The Delete Volume window opens, as shown in Figure 11-111. In the “Verify the number of
volumes that you are deleting” field, enter a value for the number of volumes that you want
to remove. This verification helps you to avoid deleting the wrong volumes.
Important: Deleting a volume is a destructive action for any user data on that volume. A volume cannot be deleted if the SVC detected any I/O activity on the volume during the defined recent time interval.
If you still have a volume that is associated with a host that is used with FlashCopy or
remote copy, and you want to delete the volume, select Delete the volume even if it has
host mappings or is used in FlashCopy mappings or remote-copy relationships.
Then, click Delete, as shown in Figure 11-111.
Note: You also can delete a mirror copy of a mirrored volume. For information about
deleting a mirrored copy, see 11.8.11, “Deleting volume copy” on page 780.
Important: Before you delete a host mapping, ensure that the host is no longer using the
volume. Unmapping a volume from a host does not destroy the volume contents.
Unmapping a volume has the same effect as powering off the computer without first
performing a clean shutdown; therefore, the data on the volume might end up in an
inconsistent state. Also, any running application that was using the disk receives I/O errors
and might not recover until a forced application or server reboot.
Tip: You can also right-click a volume and select View Mapped Hosts.
3. In the Properties window, click the Host Maps tab, as shown in Figure 11-113.
7. Click Unmap to remove the host mapping or mappings. You are returned to the Host Maps
window. Click Refresh to verify the results of the unmapping action, as shown in
Figure 11-115.
Tip: You can also right-click a volume and select Unmap All Hosts.
3. In the “Verify the number of mappings that this operation affects” field in the Unmap from
Hosts window (Figure 11-117), enter the number of host objects that you want to remove.
This verification helps you to avoid deleting the wrong host objects.
4. Click Unmap to remove the host mapping or mappings. You are returned to the All
Volumes panel.
Although shrinking a volume is an easy task by using SVC, ensure that your operating system
supports shrinking (natively or by using third-party tools) before you use this function.
In addition, the preferred practice is to always have a consistent backup before you attempt to
shrink a volume.
Important: For thin-provisioned or compressed volumes, the use of this method to shrink a volume shrinks its virtual capacity. For more information about shrinking the real capacity, see “Shrink or expand real capacity of thin-provisioned or compressed volume” on page 776.
Assuming that your operating system supports it, perform the following steps to shrink a
volume:
1. Perform any necessary steps on your host to ensure that you are not using the space that
you are about to remove.
2. In the volume table, select the volume that you want to shrink. Click Actions → Shrink, as
shown in Figure 11-118.
3. The Shrink Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 11-119 on page 774.
You can enter how much you want to shrink the volume by using the Shrink by field or you
can directly enter the final size that you want to use for the volume by using the Final size
field. The other field is computed automatically. For example, if you have a 10 GiB volume
and you want it to become 6 GiB, you can specify 4 GiB in the Shrink by field or you can
directly specify 6 GiB in the Final size field, as shown in Figure 11-119 on page 774.
4. When you are finished, click Shrink and the changes are visible on your host.
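The CLI equivalent is shown in the following minimal sketch, which shrinks an illustrative volume by 4 GiB:
IBM_2145:ITSO_SVC_DH8:superuser>shrinkvdisksize -size 4 -unit gb ITSO_Vol01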
Dynamic expansion of a volume is supported only when the volume is in use by one of the
following operating systems:
AIX 5L™ V5.2 and higher
Windows Server 2008 and Windows Server 2012 for basic and dynamic disks
Windows Server 2003 for basic disks, and Windows Server 2003 with the Microsoft hot fix (Q327020) for dynamic disks (although this version is now out of vendor support)
Important: For thin-provisioned volumes, the use of this method expands the virtual capacity.
If your operating system supports expanding a volume, complete the following steps:
1. In the table, select the volume that you want to expand.
2. Click Actions → Expand, as shown in Figure 11-120 on page 775.
3. The Expand Volume - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 11-121 on page 776.
You can enter how much you want to enlarge the volume by using the Expand by field, or
you can directly enter the final size that you want to use for the volume by using the Final
size field. The other field is computed automatically.
For example, if you have a 9 GiB volume and you want it to become 20 GiB, you can
specify 11 GiB in the Expand by field or you can directly specify 20 GiB in the Final size
field, as shown in Figure 11-121 on page 776. The maximum final size shows 499 GiB for
the volume.
4. When you are finished, click Expand, and verify the changes on your host if the volume is already mapped.
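The CLI equivalent is shown in the following minimal sketch, which expands an illustrative volume by 11 GiB:
IBM_2145:ITSO_SVC_DH8:superuser>expandvdisksize -size 11 -unit gb ITSO_Vol01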
To shrink or expand the real capacity of a thin-provisioned or compressed volume use the
same procedures described in sections 11.8.7, “Shrinking a volume” on page 773 and 11.8.8,
“Expanding a volume” on page 775.
Tip: You can also right-click a volume and select Migrate to Another Pool.
3. The Migrate Volume Copy window opens, as shown in Figure 11-123. Select the storage
pool to which you want to reassign the volume. You are presented with a list of only the
storage pools with the same extent size.
4. When you finish making your selections, click Migrate to begin the migration process.
5. You can check the progress of the migration by using the Running Tasks status area, as
shown in Figure 11-124 on page 777.
To expand this area, click the icon, and then click Migration. Figure 11-125 shows a
detailed view of the running tasks.
When the migration is finished, the volume is part of the new pool.
Important: A migrated volume inherits the parameters of the target pool, such as the extent size, encryption, and Easy Tier settings. Consider the requirements for the volume before migration.
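The same migration can be started from the CLI; a minimal sketch with illustrative volume and pool names, followed by a command that reports the migration progress:
IBM_2145:ITSO_SVC_DH8:superuser>migratevdisk -vdisk ITSO_Vol01 -mdiskgrp ITSO_Pool2
IBM_2145:ITSO_SVC_DH8:superuser>lsmigrate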
Tip: You can also create a mirrored volume by selecting the Mirror preset during the
volume creation, as shown in Figure 11-96 on page 763.
You can use a volume copy for any operation for which you can use a volume. It is not
apparent to higher-level operations, such as Metro Mirror, Global Mirror, or FlashCopy.
Creating a volume copy from an existing volume is not restricted to the same storage pool;
therefore, this method is ideal to use to protect your data from a disk system or an array
failure. If one copy of the mirror fails, it provides continuous data access to the other copy.
When the failed copy is repaired, the copies automatically resynchronize.
You can also use a volume mirror as an alternative migration tool, where you can synchronize the mirror before splitting off the original side of the mirror. The volume stays online, and it can be used normally while the data is being synchronized. The copies can also have different structures (striped, image, sequential, or space-efficient) and different extent sizes.
To create a mirror copy from within a volumes panel, complete the following steps:
1. In the table, select the volume to which you want to add a mirrored copy.
2. Click Actions → Add Volume Copy, as shown in Figure 11-126.
Tip: You can also right-click a volume and select Add Volume Copy.
3. The Add Volume Copy - volumename window (where volumename is the volume that you
selected in the previous step) opens, as shown in Figure 11-127.
Important: A volume copy inherits the parameters of the target pool, such as the extent size, encryption, and Easy Tier settings. Consider the requirements for the volume before you define a volume copy.
5. You can check the migration by using the Running Tasks menu, as shown in
Figure 11-128. To expand this Status Area, click the icon and click Volume
Synchronization.
6. When the synchronization is finished, the volume is part of the new pool, as shown in
Figure 11-129.
Primary copy: As shown in Figure 11-129, the primary copy is identified with an
asterisk (*). In this example, Copy 0 is the primary version (copy) and Copy 1* is the
mirrored copy.
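A minimal CLI sketch of the same action, with illustrative names, followed by a command that monitors the copy synchronization:
IBM_2145:ITSO_SVC_DH8:superuser>addvdiskcopy -mdiskgrp ITSO_Pool2 ITSO_Vol01
IBM_2145:ITSO_SVC_DH8:superuser>lsvdisksyncprogress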
Tip: You can also right-click a volume and select Delete this Copy.
2. The Warning window opens, as shown in Figure 11-131. Click Yes to confirm.
If the volume that you intend to delete is a primary copy and the secondary copy is not yet
synchronized, the attempt fails and you must wait until the synchronization completes.
Tip: You can also right-click a volume and select Create Volume From This Copy.
2. The Duplicate Volume window opens, as shown in Figure 11-133. In this window, enter a
name for the new volume.
Volume name: If you do not provide a name, IBM Spectrum Virtualize automatically generates the name vdiskx (where x is the ID sequence number that is assigned by the SVC internally). If you want to provide a name, you can use the letters A - Z and a - z, the numbers 0 - 9, and the underscore (_) character. The volume name can be 1 - 63 characters.
This new volume is now being formatted and will be available to be mapped to a host. It is
assigned to the same pool as the source copy of the duplication process (Figure 11-134).
Important: After you split a volume mirror, you cannot resynchronize or recombine them.
You must create a new volume copy.
2. The Validate Volume Copies window opens, as shown in Figure 11-136. In this window,
select one of the following options:
– Generate Event of Differences: Use this option if you want to verify only that the
mirrored volume copies are identical. If a difference is found, the command stops and
logs an error that includes the logical block address (LBA) and the length of the first
difference. Starting at a separate LBA each time, you can use this option to count the
number of differences on a volume.
– Overwrite Differences: Use this option to overwrite the content from the primary
volume copy to the other volume copy. The command corrects any differing sectors by
copying the sectors from the primary copy to the copies that are compared. Upon
completion, the command process logs an event, which indicates the number of
differences that were corrected. Use this option if you are sure that the primary volume
copy data is correct or that your host applications can handle incorrect data.
– Return Media Error to Host: Use this option to convert sectors on all volume copies
that contain different contents into virtual medium errors. Upon completion, the
command logs an event, which indicates the number of differences that were found,
the number of differences that were converted into medium errors, and the number of
differences that were not converted. Use this option if you are unsure what data is
correct, and you do not want an incorrect version of the data to be used.
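On the CLI, these three choices correspond to the options of the repairvdiskcopy command; a minimal sketch with an illustrative volume name (run only one option at a time against a volume):
IBM_2145:ITSO_SVC_DH8:superuser>repairvdiskcopy -validate ITSO_Vol01
IBM_2145:ITSO_SVC_DH8:superuser>repairvdiskcopy -resync ITSO_Vol01
IBM_2145:ITSO_SVC_DH8:superuser>repairvdiskcopy -medium ITSO_Vol01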
Copy Services: For more information about the functionality of Copy Services in IBM
Spectrum Virtualize environment, see Chapter 9, “Advanced Copy Services” on page 475.
In this section, we describe the tasks that you can perform at a FlashCopy level. The following
methods can be used to visualize and manage your FlashCopy:
Use the SVC Overview panel. Move the mouse pointer over Copy Services in the dynamic
menu and click FlashCopy, as shown in Figure 11-137.
In its basic mode, the IBM FlashCopy function copies the contents of a source volume to a
target volume. Any data that existed on the target volume is lost and that data is replaced
by the copied data.
Use the Consistency Groups panel, as shown in Figure 11-138. A Consistency Group is a
container for mappings. You can add many mappings to a Consistency Group.
Use the FlashCopy Mappings panel, as shown in Figure 11-139. A FlashCopy mapping
defines the relationship between a source volume and a target volume.
2. Select the volume for which you want to create the FlashCopy relationship, as shown in
Figure 11-141 on page 786.
Depending on whether you created the target volumes for your FlashCopy mappings or you
want SVC to create the target volumes for you, the following options are available:
If you created the target volumes, see “Using existing target volumes” on page 786.
If you want the SVC to create the target volumes for you, see “Creating target volumes” on
page 790.
2. The Create FlashCopy Mapping window opens (Figure 11-143 on page 787). In this
window, you must create the relationship between the source volume (the disk that is
copied) and the target volume (the disk that receives the copy). A mapping can be created
between any two volumes inside an SVC clustered system. Select a source volume and a
target volume for your FlashCopy mapping, and then click Add. If you must create other
copies, repeat this step.
Important: The source volume and the target volume must be of equal size. Therefore,
only targets of the same size are shown in the list for a source volume.
Volumes: The volumes do not have to be in the same I/O Group or storage pool.
3. Click Next after you create all of the relationships that you need, as shown in
Figure 11-144.
4. In the next window, select one FlashCopy preset. The GUI provides the following presets
to simplify common FlashCopy operations, as shown in Figure 11-145 on page 788:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates a replica of the source volume on a target volume. The copy can be
changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
source and target volumes.
For each preset, you can customize various advanced options. You can access these
settings by clicking Advanced Settings.
5. The advanced setting options are shown in Figure 11-146.
If you prefer not to customize these settings, go directly to step 6 on page 789.
You can customize the following advanced setting options, as shown in Figure 11-146:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.
Or, if you do not want to include this FlashCopy mapping in a Consistency Group, select
No, do not add the mappings to a consistency group.
Click Finish, as shown in Figure 11-148.
7. Check the result of this FlashCopy mapping. For each FlashCopy mapping relationship
that was created, a mapping name is automatically generated that starts with fcmapX,
where X is the next available number. If needed, you can rename these mappings, as
shown in Figure 11-149. For more information, see 11.9.11, “Renaming FlashCopy
mapping” on page 805.
Target volume naming: If the target volume does not exist, the target volume is
created. The target volume name is based on its source volume and a generated
number at the end, for example, source_volume_name_XX, where XX is a number that
was generated dynamically.
2. In the Create FlashCopy Mapping window (Figure 11-151 on page 791), you must select
one FlashCopy preset. The GUI provides the following presets to simplify common
FlashCopy operations:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates an exact replica of the source volume on a target volume. The copy can
be changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
the source and target volumes.
For each preset, you can customize various advanced options. To access these settings,
click Advanced Settings. The advanced setting options are shown in Figure 11-152.
If you prefer not to customize these advanced settings, go directly to step 3 on page 792.
You can customize the advanced setting options that are shown in Figure 11-152:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which can affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.
Click Finish.
Figure 11-153 Selecting the option to add the mappings to a Consistency Group
4. Check the result of this FlashCopy mapping, as shown in Figure 11-154. For each
FlashCopy mapping relationship that is created, a mapping name is automatically
generated that starts with fcmapX where X is the next available number. If necessary, you
can rename these mappings, as shown in Figure 11-154. For more information, see
11.9.11, “Renaming FlashCopy mapping” on page 805.
Tip: You can start FlashCopy from the SVC GUI. However, the use of the SVC GUI might
be impractical if you plan to handle many FlashCopy mappings or Consistency Groups
periodically, or at varying times. In these cases, creating a script by using the CLI might be
more convenient.
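For example, the following shell script is a minimal sketch of that approach. It assumes a hypothetical system alias (ITSO_SVC3) and hypothetical volume names, and it requires that the target volumes already exist and match their source volumes in size:
#!/bin/sh
# Create and start an incremental FlashCopy mapping for each listed volume
SVC=superuser@ITSO_SVC3
for VOL in DB_VOL1 DB_VOL2 DB_VOL3; do
  ssh $SVC "svctask mkfcmap -name map_$VOL -source $VOL -target ${VOL}_bkp -copyrate 50 -incremental"
  ssh $SVC "svctask startfcmap -prep map_$VOL"
done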
3. A volume is created as a target volume for this snapshot in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column or in the Running Tasks status area, as shown in Figure 11-156.
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click FlashCopy.
2. Select the volume that you want to clone.
3. Click Actions → Create Clone, as shown in Figure 11-157.
4. A volume is created as a target volume for this clone in the same pool as the source
volume. The FlashCopy mapping is created and started. You can check the FlashCopy
progress in the Progress column or in the Running Tasks Status column. After the
FlashCopy clone is created, the mapping is removed and the new cloned volume becomes
available, as shown in Figure 11-158.
4. A volume is created as a target volume for this backup in the same pool as the source
volume. The FlashCopy mapping is created and started.
You can check the FlashCopy progress in the Progress column, as shown in
Figure 11-160, or in the Running Tasks Status column.
2. Click Create Consistency Group and enter the FlashCopy Consistency Group name that
you want to use and click Create (Figure 11-162).
Consistency Group name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The volume name can be 1 - 63 characters.
2. Select in which Consistency Group (Figure 11-164) you want to create the FlashCopy
mapping. If you prefer not to create a FlashCopy mapping in a Consistency Group, select
Not in a Group.
3. If you select a new Consistency Group, click Actions → Create FlashCopy Mapping, as
shown in Figure 11-165.
4. If you did not select a Consistency Group, click Create FlashCopy Mapping, as shown in
Figure 11-166.
5. The Create FlashCopy Mapping window opens, as shown in Figure 11-167. In this
window, you must create the relationships between the source volumes (the volumes that
are copied) and the target volumes (the volumes that receive the copy). A mapping can be
created between any two volumes in a clustered system.
Important: The source volume and the target volume must be of equal size.
Tip: The volumes do not have to be in the same I/O Group or storage pool.
6. Select a volume in the Source Volume column by using the drop-down list. Then, select a
volume in the Target Volume column by using the drop-down list. Click Add, as shown in
Figure 11-167. Repeat this step to create other relationships.
To remove a relationship that was created, click the delete icon.
Important: The source and target volumes must be of equal size. Therefore, only the
targets with the appropriate size are shown for a source volume.
7. Click Next after all of the relationships that you want to create are shown (Figure 11-168).
Figure 11-168 Create FlashCopy Mapping with the relationships that were created
8. In the next window, you must select one FlashCopy preset. The GUI provides the following
presets to simplify common FlashCopy operations, as shown in Figure 11-169:
– Snapshot: Creates a copy-on-write point-in-time copy.
– Clone: Creates an exact replica of the source volume on a target volume. The copy can
be changed without affecting the original volume.
– Backup: Creates a FlashCopy mapping that can be used to recover data or objects if
the system experiences data loss. These backups can be copied multiple times from
the source and target volumes.
Whichever preset you select, you can customize various advanced options. To access
these settings, click Advanced Settings.
If you prefer not to customize these settings, go directly to step 9.
You can customize the following advanced setting options, as shown in Figure 11-170:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
– Incremental: This option copies only the parts of the source or target volumes that
changed since the last copy. Incremental copies reduce the completion time of the
copy operation.
Incremental copies: Even if the type of the FlashCopy mapping is incremental, the
first copy process copies all of the data from the source volume to the target volume.
9. If you do not want to create these FlashCopy mappings from a Consistency Group (step 3
on page 798), you must confirm your choice by selecting No, do not add the mappings
to a consistency group, as shown in Figure 11-171 on page 801.
10.Click Finish.
11.Check the result of this FlashCopy mapping in the Consistency Groups window, as shown
in Figure 11-172.
For each FlashCopy mapping relationship that you created, a mapping name is
automatically generated that starts with fcmapX where X is an available number. If
necessary, you can rename these mappings. For more information, see 11.9.11,
“Renaming FlashCopy mapping” on page 805.
Tip: You can start FlashCopy from the SVC GUI. However, if you plan to handle many
FlashCopy mappings or Consistency Groups periodically, or at varying times, creating a
script by using the operating system shell CLI might be more convenient.
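As an illustration of that approach, the following hedged CLI sketch creates a Consistency Group, creates mappings directly in it, and starts the whole group at one consistent point in time. The object names are hypothetical, and the target volumes must already exist and match their sources in size:
# Create the FlashCopy Consistency Group
svctask mkfcconsistgrp -name FCCG_APP1
# Create the mappings as members of the group
svctask mkfcmap -source APP1_VOL1 -target APP1_VOL1_tgt -consistgrp FCCG_APP1
svctask mkfcmap -source APP1_VOL2 -target APP1_VOL2_tgt -consistgrp FCCG_APP1
# Prepare and start every mapping in the group together
svctask startfcconsistgrp -prep FCCG_APP1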
Tip: You can also right-click a FlashCopy mapping and select Show Related Volumes.
In the Related Volumes window (Figure 11-174), you can see the related mapping for a
volume. If you click one of these volumes, you can see its properties. For more information
about volume properties, see 11.8.1, “Volume information” on page 760.
Tip: You can also right-click a FlashCopy mapping and select Move to Consistency
Group.
4. In the Move FlashCopy Mapping to Consistency Group window, select the Consistency
Group for this FlashCopy mapping by using the drop-down list (Figure 11-176).
Tip: You can also right-click a FlashCopy mapping and select Remove from
Consistency Group.
In the Remove FlashCopy Mapping from Consistency Group window, click Remove, as
shown in Figure 11-178.
Tip: You can also right-click a FlashCopy mapping and select Edit Properties.
4. In the Edit FlashCopy Mapping window, you can modify the following parameters for a
selected FlashCopy mapping, as shown in Figure 11-180 on page 805:
– Background Copy Rate: This option determines the priority that is given to the copy
process. A faster rate increases the priority of the process, which might affect the
performance of other operations.
– Cleaning Rate: This option minimizes the amount of time that a mapping is in the
stopping state. If the mapping is not complete, the target volume is offline while the
mapping is stopping.
Tip: You can also right-click a FlashCopy mapping and select Rename Mapping.
4. In the Rename FlashCopy Mapping window, enter the new name that you want to assign
to the FlashCopy mapping and click Rename, as shown in Figure 11-182 on page 806.
FlashCopy mapping name: You can use the letters A - Z and a - z, the numbers 0 - 9,
and the underscore (_) character. The FlashCopy mapping name can be 1 - 63
characters.
3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 11-184 on page 807.
Consistency Group name: The name can consist of the letters A - Z and a - z, the
numbers 0 - 9, the dash (-), and the underscore (_) character. The name can be 1 - 63
characters. However, the name cannot start with a number, a dash, or an underscore.
The new Consistency Group name is displayed in the Consistency Group panel.
Tip: You can also right-click a FlashCopy mapping and select Delete Mapping.
4. The Delete FlashCopy Mapping window opens, as shown in Figure 11-186. In the “Verify
the number of FlashCopy mappings that you are deleting” field, you must enter the
number of volumes that you want to remove. This verification was added to help avoid
deleting the wrong mappings.
If you still have target volumes that are inconsistent with the source volumes and you want
to delete these FlashCopy mappings, select Delete the FlashCopy mapping even when
the data on the target volume is inconsistent, or if the target volume has other
dependencies.
Click Delete, as shown in Figure 11-186.
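The same deletion can be performed on the CLI with the rmfcmap command. This is a hedged sketch with a hypothetical mapping name; the -force parameter is needed only when the target volume is not yet consistent with its source:
# Delete a FlashCopy mapping whose target is already consistent
svctask rmfcmap fcmap0
# Force the deletion even though the target volume is inconsistent
svctask rmfcmap -force fcmap0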
Important: Deleting a Consistency Group does not delete the FlashCopy mappings.
Tip: You can also right-click a FlashCopy mapping and select Start.
4. You can check the FlashCopy progress in the Progress column of the table or in the
Running Tasks status area. After the task completes, the FlashCopy mapping status is in a
Copied state, as shown in Figure 11-190.
Important: Stop a FlashCopy copy process only when the data on the target volume is
useless, or if you want to modify the FlashCopy mapping. When a FlashCopy mapping is
stopped, the target volume becomes invalid and it is set offline by SVC.
In this section, we describe the tasks that you can perform at a remote copy level.
The following panels are used to visualize and manage your remote copies:
The Remote Copy panel, as shown in Figure 11-193.
By using the Metro Mirror and Global Mirror Copy Services features, you can set up a
relationship between two volumes so that updates that are made by an application to one
volume are mirrored on the other volume. The volumes can be in the same SVC clustered
system or on two separate SVC systems.
To access the Remote Copy panel, move the mouse pointer over the Copy Services
selection and click Remote Copy.
Important: All SVC clustered systems must be at level 5.1 or higher. A system can be
partnered with up to three remote systems. No more than four systems can be in the same
connected set. Only one IP partnership is supported.
Intra-cluster Metro Mirror: If you are creating an intra-cluster Metro Mirror, do not perform
this next step to create the SVC clustered system Metro Mirror partnership. Instead, go to
11.10.4, “Creating stand-alone remote copy relationships” on page 817.
To create an FC partnership between the SVC systems by using the GUI, complete the
following steps:
1. From the SVC System panel, move the mouse pointer over Copy Services in the dynamic
menu and click Partnerships. The Partnerships panel opens, as shown in Figure 11-200
on page 814.
2. Click Create Partnership to create a partnership with another SVC system, as shown in
Figure 11-200.
3. In the Create Partnership window (Figure 11-201), complete the following information:
– Select the partnership type, either Fibre Channel or IP.
– Select an available partner system from the drop-down list, as shown in Figure 11-202.
If no candidate is available, the following error message is displayed:
This system does not have any candidates.
– Enter a link bandwidth (Mbps) that is used by the background copy process between
the systems in the partnership. Set this value so that it is less than or equal to the
bandwidth that can be sustained by the communication link between the systems. The
link must sustain any host requests and the rate of the background copy.
– Enter the background copy rate.
– Click OK to confirm the partnership relationship.
4. As shown in Figure 11-202, our partnership is in the Partially Configured state because
this work was performed only on one side of the partnership so far.
To fully configure the partnership between both systems, perform the same steps on the
other SVC system in the partnership. For simplicity and brevity, we show only the two most
significant windows when the partnership is fully configured.
5. Start the SVC GUI for ITSO SVC 5 and select ITSO SVC 3 for the system partnership. Specify the available bandwidth for the background copy (200 Mbps) and then click OK.
Now that both sides of the SVC system partnership are defined, the resulting windows
(which are shown in Figure 11-203 and Figure 11-204 on page 815) confirm that our
remote system partnership is now in the Fully Configured state. (Figure 11-202 shows the
remote system ITSO SVC 5 from the local system ITSO SVC 3.)
Figure 11-204 shows the remote system ITSO SVC 3 from the local system ITSO SVC 5.
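The same Fibre Channel partnership can be created from the CLI. The following sketch is an assumption-based example with a hypothetical remote system name; the command must be run on both systems before the partnership reaches the Fully Configured state:
# On the local system, create the FC partnership with the remote system
svctask mkfcpartnership -linkbandwidthmbits 200 -backgroundcopyrate 50 ITSO_SVC5
# Verify the partnership state (Partially Configured until the remote side is defined)
svcinfo lspartnership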
To create an IP partnership between SVC systems by using the GUI, complete the following
steps:
1. From the SVC System panel, move the mouse pointer over Copy Services and click
Partnerships. For type, select IP, as shown in Figure 11-205.
To fully configure the partnership between both systems, we must perform the same steps
on the other SVC system in the partnership. For simplicity and brevity, we only show the
two most significant windows when the partnership is fully configured.
4. Start the SVC GUI for ITSO SVC 5 and select ITSO SVC 3 for the system partnership. Specify the available bandwidth for the background copy (100 Mbps), and then click OK.
Now that both sides of the SVC system partnership are defined, the resulting windows (as
shown in Figure 11-207 and Figure 11-208 on page 817) confirm that our remote system
partnership is now in the Fully Configured state. Figure 11-207 shows the remote system
ITSO SVC 5 from the local system ITSO SVC 3.
Figure 11-208 on page 817 shows the remote system ITSO SVC 3 from the local system
ITSO SVC 5.
Note: The definition of the bandwidth setting that is used when an IP partnership is created has changed. Previously, the bandwidth setting defaulted to 50 MBps, and it was the maximum transfer rate from the primary site to the secondary site for initial volume synchronization or resynchronization. The link bandwidth setting is now configured in Mbps rather than MBps, and you set it to a value that the communication link can sustain or to the value that is allocated for replication. The background copy rate setting is now a percentage of the link bandwidth, and it determines the bandwidth that is available for initial synchronization and resynchronization or for Global Mirror with Change Volumes.
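On the CLI, these two settings correspond to the -linkbandwidthmbits and -backgroundcopyrate parameters of the partnership commands. The following IP partnership command is a hedged sketch; the address and values are examples only:
# Create an IP partnership; the link bandwidth is specified in Mbps and the
# background copy rate is a percentage of that link bandwidth
svctask mkippartnership -type ipv4 -clusterip 10.10.10.20 -linkbandwidthmbits 100 -backgroundcopyrate 50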
3. In the Create Relationship window, select one of the following types of relationships that
you want to create (as shown in Figure 11-210 on page 818):
– Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can be on the same system or on another
system.
– Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
However, the copy might not contain the last few updates if a disaster recovery
operation is performed.
– Global Mirror with Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume. Changes can
then be copied to the remote system asynchronously. The FlashCopy relationship
exists between the remote copy volume and the Change Volume.
The FlashCopy mapping with the Change Volume is for internal use. The user cannot manipulate it as they can a normal FlashCopy mapping; most svctask *fcmap commands fail against it.
Figure 11-210 Select the type of relationship that you want to create
Click Next.
5. In the next window, select the location of the auxiliary volumes, as shown in
Figure 11-212:
– On this system, which means that the volumes are local.
– On another system, which means that you select the remote system from the
drop-down list.
After you make a selection, click Next.
6. In the New Relationship window that is shown in Figure 11-213, you can create
relationships. Select a master volume in the Master drop-down list. Then, select an
auxiliary volume in the Auxiliary drop-down list for this master and click Add. If needed,
repeat this step to create other relationships.
Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are shown in the list box for a specific source
volume.
Figure 11-214 Create the relationships between the master and auxiliary volumes
After all of the relationships that you want to create are shown, click Next.
8. Specify whether the volumes are synchronized, as shown in Figure 11-215. Then, click
Next.
9. In the last window, select whether you want to start to copy the data, as shown in
Figure 11-216. Click Finish.
10.Figure 11-217 shows that the task to create the relationship is complete.
The relationships are visible in the Remote Copy panel. If you selected to copy the data,
you can see that the status is Consistent Copying. You can check the copying progress in
the Running Tasks status area.
After the copy is finished, the relationship status changes to Consistent synchronized.
Figure 11-218 on page 822 shows the Consistent Synchronized status.
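The relationship types that are described in this procedure map to options of the mkrcrelationship CLI command. The following hedged sketch uses hypothetical volume, change volume, and system names; verify the parameters against the CLI reference for your code level:
# Metro Mirror: synchronous copy to a remote system (add -sync if the volumes are already identical)
svctask mkrcrelationship -master MM_VOL1 -aux MM_VOL1 -cluster ITSO_SVC5
# Global Mirror: asynchronous copy
svctask mkrcrelationship -master GM_VOL1 -aux GM_VOL1 -cluster ITSO_SVC5 -global
# Global Mirror with Change Volumes: cycling mode plus a change volume on the master side
svctask mkrcrelationship -master CV_VOL1 -aux CV_VOL1 -cluster ITSO_SVC5 -global -cyclingmode multi
svctask chrcrelationship -masterchange CV_VOL1_chg rcrel2
# Start the copy for a relationship
svctask startrcrelationship rcrel0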
3. Enter a name for the Consistency Group, and then, click Next, as shown in Figure 11-220.
Consistency Group name: If you do not provide a name, the SVC automatically
generates the name rccstgrpX, where X is the ID sequence number that is assigned by
the SVC internally. You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The Consistency Group name can be 1 - 15 characters. No
blanks are allowed.
4. In the next window, select where the auxiliary volumes are located, as shown in
Figure 11-221:
– On this system, which means that the volumes are local
– On another system, which means that you select the remote system in the drop-down
list
After you make a selection, click Next.
5. Select whether you want to add relationships to this group, as shown in Figure 11-222.
The following options are available:
– If you select Yes, click Next to continue the wizard and go to step 6.
– If you select No, click Finish to create an empty Consistency Group that can be used
later.
6. Select one of the following types of relationships to create, as shown in Figure 11-223 on
page 825:
– Metro Mirror
This type of remote copy creates a synchronous copy of data from a primary volume to
a secondary volume. A secondary volume can be on the same system or on another
system.
– Global Mirror
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated,
but the copy might not contain the last few updates if a disaster recovery operation is
performed.
– Global Mirror with Change Volumes
This type provides a consistent copy of a source volume on a target volume. Data is
written to the target volume asynchronously so that the copy is continuously updated.
Change Volumes are used to record changes to the remote copy volume.
Changes can then be copied to the remote system asynchronously. The FlashCopy
relationship exists between the remote copy volume and the Change Volume.
FlashCopy mapping with Change Volumes is for internal use. The user cannot
manipulate this type of mapping like a normal FlashCopy mapping.
Most svctask *fcmap commands fail.
Click Next.
Figure 11-223 Select the type of relationship that you want to create
7. As shown in Figure 11-224 on page 825, you can optionally select existing relationships to
add to the group. Click Next.
Note: To select multiple relationships, hold down Ctrl and click the entries that you want
to include.
8. In the window that is shown in Figure 11-225, you can create relationships. Select a
volume in the Master drop-down list. Then, select a volume in the Auxiliary drop-down list
for this master. Click Add, as shown in Figure 11-225. Repeat this step to create other
relationships, if needed.
Important: The master and auxiliary volumes must be of equal size. Therefore, only
the targets with the appropriate size are displayed for a specific source volume.
To remove a relationship that was created, click the delete icon (Figure 11-225). After all of the relationships that you want to create are displayed, click Next.
Figure 11-225 Create relationships between the master and auxiliary volumes
9. Specify whether the volumes are already synchronized. Then, click Next, as shown in
Figure 11-226.
10.In the last window, select whether you want to start to copy the data. Then, click Finish, as
shown in Figure 11-227.
11.The relationships are visible in the Remote Copy panel. If you selected to copy the data,
you can see that the status of the relationships is Inconsistent copying. You can check the
copying progress in the Running Tasks status area, as shown in Figure 11-228 on
page 827.
Figure 11-228 Consistency Group created with relationship in copying and synchronized status
After the copies are completed, the relationships and the Consistency Group change to the
Consistent Synchronized status.
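A hedged CLI equivalent of this procedure is shown next, with hypothetical names: a Consistency Group is created across the partnership, existing relationships are moved into it, and the group is started as one unit:
# Create a remote copy Consistency Group that spans both systems
svctask mkrcconsistgrp -name CG_APP1 -cluster ITSO_SVC5
# Add existing relationships to the group
svctask chrcrelationship -consistgrp CG_APP1 rcrel0
svctask chrcrelationship -consistgrp CG_APP1 rcrel1
# Start all relationships in the group together
svctask startrcconsistgrp CG_APP1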
3. Enter the new name that you want to assign to the Consistency Group and click Rename,
as shown in Figure 11-230 on page 828.
Consistency Group name: The Consistency Group name can consist of the letters
A - Z and a - z, the numbers 0 - 9, the dash (-), and the underscore (_) character. The
name can be 1 - 15 characters. However, the name cannot start with a number, dash,
or an underscore character. No blanks are allowed.
The new Consistency Group name is displayed on the Remote Copy panel.
Tip: You can also right-click a remote copy relationship and select Rename.
3. In the Rename Relationship window, enter the new name that you want to assign to the
FlashCopy mapping and click Rename, as shown in Figure 11-232 on page 829.
Remote copy relationship name: You can use the letters A - Z and a - z, the numbers
0 - 9, and the underscore (_) character. The remote copy name can be 1 - 15
characters. No blanks are allowed.
Tip: You can also right-click a remote copy relationship and select Add to Consistency
Group.
5. In the Add Relationship to Consistency Group window, select the Consistency Group for
this remote copy relationship by using the drop-down list, as shown in Figure 11-234 on
page 830. Click Add to Consistency Group to confirm your changes.
Tip: You can also right-click a remote copy relationship and select Remove from
Consistency Group.
5. In the Remove Relationship From Consistency Group window, click Remove, as shown in
Figure 11-236 on page 831.
Tip: You can also right-click a relationship and select Start from the list.
5. After the task is complete, the remote copy relationship status has a Consistent
Synchronized state, as shown in Figure 11-238.
3. Click Actions → Start (Figure 11-240) to start the remote copy Consistency Group.
4. You can check the remote copy Consistency Group progress, as shown in Figure 11-241.
5. After the task completes, the Consistency Group and all of its relationships are in a
Consistent Synchronized state, as shown in Figure 11-242 on page 833.
Important: When the copy direction is switched, no outstanding I/O can exist to the
volume that changes from primary to secondary because all I/O is inhibited to that volume
when it becomes the secondary. Therefore, careful planning is required before you switch
the copy direction for a remote copy relationship.
5. The Warning window that is shown in Figure 11-244 opens. A confirmation is needed to
switch the remote copy relationship direction. The remote copy is switched from the
master volume to the auxiliary volume. Click Yes.
Figure 11-245 on page 834 shows the command-line output for this task.
The copy direction is now switched, as shown in Figure 11-246. The auxiliary volume is
now accessible and shown as the primary volume. Also, the auxiliary volume is now
synchronized to the master volume.
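The direction switch is also available on the CLI through the switchrcrelationship command (or switchrcconsistgrp for a group). This hedged sketch uses a hypothetical relationship name:
# Make the auxiliary volume the primary, which reverses the copy direction
svctask switchrcrelationship -primary aux rcrel0
# Switch back to the master volume later
svctask switchrcrelationship -primary master rcrel0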
Important: When the copy direction is switched, it is crucial that no outstanding I/O exists
to the volume that changes from primary to secondary because all of the I/O is inhibited to
that volume when it becomes the secondary. Therefore, careful planning is required before
you switch the copy direction for a Consistency Group.
4. The warning window that is shown in Figure 11-248 opens. A confirmation is needed to
switch the Consistency Group direction. In the example that is shown in Figure 11-248, the
Consistency Group is switched from the master group to the auxiliary group. Click Yes.
The remote copy direction is now switched, as shown in Figure 11-249 on page 836. The
auxiliary volume is now accessible and shown as a primary volume.
Tip: You can also right-click a relationship and select Stop from the list.
5. The Stop Remote Copy Relationship window opens, as shown in Figure 11-251 on
page 837. To allow secondary read/write access, select Allow secondary read/write
access. Then, click Stop Relationship.
6. Figure 11-252 shows the command-line text for the stop remote copy relationship.
The new relationship status can be checked, as shown in Figure 11-253 on page 838. The
relationship is now Consistent Stopped.
Tip: You can also right-click a relationship and select Stop from the list.
4. The Stop Remote Copy Consistency Group window opens, as shown in Figure 11-255 on
page 839. To allow secondary read/write access, select Allow secondary read/write
access. Then, click Stop Consistency Group.
The new relationship status can be checked, as shown in Figure 11-256. The relationship
is now Consistent Stopped.
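On the CLI, the same stop operations are performed with the stoprcrelationship and stoprcconsistgrp commands; the -access parameter corresponds to the Allow secondary read/write access option. A hedged sketch with hypothetical names follows:
# Stop a single relationship and enable read/write access to its secondary volume
svctask stoprcrelationship -access rcrel0
# Stop a whole Consistency Group and enable secondary access
svctask stoprcconsistgrp -access CG_APP1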
Multiple remote copy mappings: To select multiple remote copy mappings, hold down
Ctrl and click the entries that you want.
Tip: You can also right-click a remote copy mapping and select Delete.
4. The Delete Relationship window opens (Figure 11-258). In the “Verify the number of
relationships that you are deleting” field, enter the number of volumes that you want to
remove. This verification was added to help to avoid deleting the wrong relationships.
Click Delete, as shown in Figure 11-258.
Important: Deleting a Consistency Group does not delete its remote copy mappings.
4. The warning window that is shown in Figure 11-260 opens. Click Yes.
On the System status panel (beneath the SVC nodes), you can view the global storage
usage, as shown in Figure 11-262. By using this method, you can monitor the physical
capacity and the allocated capacity of your SVC system. You can change between the
Allocation view and the Compression view to see the capacity usage and space savings of
the Real-time Compression feature, as shown in Figure 11-263.
You can click an individual node, or right-click it, as shown in Figure 11-265, to open the list of actions.
If you click Properties, you see the view that is shown in Figure 11-266 on page 844.
Under View in the list of actions, you can see information about the Fibre Channel Ports, as
shown in Figure 11-267.
Naming rules
When you choose a name for an object, the following rules apply:
Names must begin with a letter.
Important: Do not start names by using an underscore (_) character even though it is
possible. The use of the underscore as the first character of a name is a reserved
naming convention that is used by the system configuration restore process.
Valid characters are uppercase letters (A - Z), lowercase letters (a - z), digits (0 - 9), the
underscore (_) character, a period (.), a hyphen (-), and a space.
Names must not begin or end with a space.
Object names must be unique within the object type. For example, you can have a volume
named ABC and an MDisk called ABC, but you cannot have two volumes called ABC.
The default object name is valid (object prefix with an integer).
Objects can be renamed to their current names.
To rename the system from the System panel, complete the following steps:
1. Click Actions in the upper-left corner of the SVC System panel, as shown in
Figure 11-270.
2. From the panel, select Rename System, as shown in Figure 11-271 on page 846.
3. The panel opens, as shown in Figure 11-272. Specify a new name for the system and click
Rename.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The clustered system name can be 1 - 63 characters.
4. The Warning window opens, as shown in Figure 11-273 on page 847. If you are using the
iSCSI protocol, changing the system name or the iSCSI Qualified Name (IQN) also
changes the IQN of all of the nodes in the system. Changing the system name or the IQN
might require the reconfiguration of all iSCSI-attached hosts. This reconfiguration might be
required because the IQN for each node is generated by using the system and node
names.
5. Click Yes.
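The same rename is available on the CLI through the chsystem command. The following single line is a hedged example with a hypothetical name; the same iSCSI IQN considerations apply:
# Rename the clustered system
svctask chsystem -name ITSO_SVC_PROD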
Three site objects are automatically defined by the SVC and numbered 1, 2, and 3. The SVC
creates the corresponding default names, site1, site2, and site3, for each of the site
objects. Site1 and site2 are the two sites that make up the two halves of the stretched system, and site3 is the site that holds the quorum disk. You can rename the sites to describe your data center locations.
3. The Rename Sites panel with the site information opens, as shown in Figure 11-275.
4. Enter the appropriate site information. Figure 11-276 shows the updated Rename Sites
panel. Click Rename.
3. Enter the new name of the node and click Rename (Figure 11-278).
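Sites and nodes can also be renamed from the CLI. The following sketch is hedged: the site IDs, site names, and node names are examples only, and the exact command names should be verified for your code level:
# Rename the three site objects of a stretched system
svctask chsite -name ITSO_DC1 1
svctask chsite -name ITSO_DC2 2
svctask chsite -name ITSO_Quorum 3
# Rename a node
svctask chnode -name ITSO_SVC_NODE1 node1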
If you remove the main power while the system is still running, the uninterruptible power
supply units or internal batteries detect the loss of power and instruct the nodes to shut down.
This shutdown can take several minutes to complete. Although the uninterruptible power
supply units or internal batteries have sufficient power to perform the shutdown, you
unnecessarily drain a unit’s batteries.
When power is restored, the SVC nodes start. However, one of the first checks that is
performed by the SVC node is to ensure that the batteries have sufficient power to survive
another power failure, which enables the node to perform a clean shutdown.
(You do not want the batteries to run out of power before the node’s shutdown activities complete.) If the batteries are not charged sufficiently, the node does not start.
It can take up to 3 hours to charge the batteries sufficiently for a node to start.
Important: When a node shuts down because of a power loss, the node dumps the cache
to an internal hard disk drive so that the cached data can be retrieved when the system
starts. With 2145-8F2/8G4 nodes, the cache is 8 GiB. With 2145-CF8/CG8 nodes, the
cache is 24 GiB. With 2145-DH8 nodes, the cache is up to 64 GiB. Therefore, this process
can take several minutes to dump to the internal drive.
The SVC uninterruptible power supply units or internal batteries are designed to survive at
least two power failures in a short time. After that time, the nodes do not start until the
batteries have sufficient power to survive another immediate power failure.
During maintenance activities, if the uninterruptible power supply units or batteries detect
power and then detect a loss of power multiple times (the nodes start and shut down more
than once in a short time), you might discover that you unknowingly drained the batteries. You
must wait until they are charged sufficiently before the nodes start.
Important: Before a system is shut down, quiesce all I/O operations that are directed to
this system because you lose access to all of the volumes that are serviced by this
clustered system. Failure to quiesce all I/O operations might result in failed I/O operations
that are reported to your host operating systems.
You do not need to quiesce all I/O operations if you are shutting down only one SVC node.
Begin the process of quiescing all I/O activity to the system by stopping the applications on
your hosts that are using the volumes that are provided by the system.
If you are unsure which hosts are using the volumes that are provided by the SVC system,
follow the procedure that is described in 10.6.23, “Showing the host to which the volume is
mapped” on page 617, and repeat this procedure for all volumes.
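From the CLI, the host mappings of a volume can be listed with the lsvdiskhostmap command, which is a quick way to identify the hosts that must be quiesced. The volume name in this hedged example is hypothetical:
# List all hosts to which the volume VOL01 is mapped
svcinfo lsvdiskhostmap VOL01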
From the System status panel, complete the following steps to shut down your system:
1. Click Actions, as shown in Figure 11-279 on page 851. Select Power Off System.
3. Ensure that you stopped all FlashCopy mappings, remote copy relationships, data
migration operations, and forced deletions before you continue.
Important: Pay special attention when encryption is enabled on any storage pool. A USB drive that contains the stored encryption keys must be inserted; otherwise, the data is not readable after the restart.
You completed the required tasks to shut down the system. You can now shut down the
uninterruptible power supply units by pressing the power buttons on their front panels. The
internal batteries of the 2145-DH8 nodes will shut down automatically with the nodes.
Note: Since IBM Spectrum Virtualize V7.6, you can no longer power off a single node from the management GUI. Even if you select a single node and choose Power Off System from the context menu, the whole cluster is powered off.
3. Follow the process that is described in 11.17.8, “Upgrading IBM Spectrum Virtualize
software” on page 896.
To update the internal drives, select Pools → Internal Storage in the management GUI.
To update specific drives, select the drive or drives and select Actions → Update. Click
Browse and select the directory where you downloaded the firmware update file. Click
Upgrade. Depending on the number of drives and the size of the system, drive updates
can take up to 10 hours to complete.
6. To monitor the progress of the update, click the Running Tasks icon on the bottom center
of the management GUI window and then click Drive Update Operations. You can also
use the Monitoring → Events panel to view any completion or error messages that relate
to the update.
In our lab environment, io_grp0 is built from 2145-DH8 nodes and io_grp1 consists of previous-model 2145-CF8 nodes. This configuration is typical when you are upgrading your data center storage virtualization infrastructure to a newer SVC platform.
To see the I/O Group details, move the mouse pointer over Actions and click Properties. The
Properties are shown in Figure 11-285. Alternatively, hover the mouse pointer over the I/O
Group name and right-click to open a menu and navigate to Properties.
Control enclosures
Expansion enclosures
Internal capacity
3. The following tasks are available for this node (Figure 11-287).
5. The System window (Figure 11-289) shows how to obtain additional information about
certain hardware parts.
2. Hover the mouse cursor over the empty gray frame and click it. The Action panel for the system opens, as shown in Figure 11-293. Click Add Nodes.
3. In the Add Nodes window (Figure 11-294), you see the available nodes, which are in
candidate mode and able to join the cluster.
4. Select the available nodes to be added and click Next. You are prompted to enable encryption on the selected nodes; encryption licenses must be installed in the system. See Figure 11-295.
5. Click Next, and the summary of actions is displayed, as shown in Figure 11-296 on page 860.
Click Finish, and the SVC system adds the nodes to the cluster.
Important: When a node is added to a system, it displays a state of “Adding” and a yellow
warning triangle with an exclamation point. The process to add a node to the system can
take up to 30 minutes, particularly if the software version of the node changes. The added
nodes are updated to the code version of the running cluster.
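A hedged CLI alternative for adding a node is shown next; the panel name, node name, and I/O Group are examples only:
# List the nodes that are in candidate state
svcinfo lsnodecandidate
# Add a candidate node to io_grp1 by using its panel name
svctask addnode -panelname 75ACXR0 -iogrp io_grp1 -name ITSO_SVC_NODE3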
Figure 11-297 Remove a node from the SVC clustered system action
By default, the cache is flushed before the node is deleted to prevent data loss if a failure
occurs on the other node in the I/O Group.
In certain circumstances, such as when the system is degraded, you can take the
specified node offline immediately without flushing the cache or ensuring that data loss
does not occur. Select Bypass check for volumes that will go offline, and remove the
node immediately without flushing its cache.
3. Click Yes to confirm the removal of the node. See the System Details panel to verify a
node removal, as shown in Figure 11-299.
Figure 11-299 System Details panel with one SVC node removed
If this node is the last node in the system, the warning message differs, as shown in
Figure 11-300 on page 862. Before you delete the last node in the system, ensure that you
want to destroy the system. The user interface and any open CLI sessions are lost.
Figure 11-300 Warning window for the removal of the last node in the cluster
After you click OK, the node is a candidate to be added back into this system or into
another system.
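Node removal is also available on the CLI. This hedged sketch uses a hypothetical node name; add -force only if you accept the risk of taking volumes offline without the cache being flushed:
# Remove a node from the clustered system
svctask rmnode ITSO_SVC_NODE3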
11.15 Troubleshooting
The events that are detected by the system are saved in a system event log. When an entry is
made in this event log, the condition is analyzed and classified to help you diagnose
problems.
To access this panel from the SVC System panel, move the mouse pointer over Monitoring in the dynamic menu and select Events.
The list of system events opens with the highest-priority event indicated and information
about how long ago the event occurred. Click Close to return to the Recommended Actions
panel.
Note: If an event is reported, you must select the event and run a fix procedure.
Tip: You can also click Run Fix at the top of the panel (Figure 11-302) to solve the most
critical event.
3. The Directed Maintenance Procedure window opens, as shown in Figure 11-303. Follow
the steps in the wizard to fix the event.
Sequence of steps: We do not describe all of the possible steps here because the
steps that are involved depend on the specific event. The process is always interactive
and you are guided through the entire process.
To access this panel from the SVC System panel that is shown in Figure 11-1 on page 716,
move the mouse pointer over the Monitoring selection in the dynamic menu and click Events.
Then, in the upper-left corner of the panel, select Recommended actions, Unfixed messages
and alerts, or Show all.
Certain alerts have a four-digit error code and a fix procedure that helps you fix the problem.
Other alerts also require an action, but they do not have a fix procedure. Messages are fixed
when you acknowledge reading them.
Filtering events
You can filter events in various ways. Filtering can be based on event status, as described in
“Basic filtering”, or over a period, as described in “Time filtering” on page 865. You can also
search the event log for a specific text string by using table filtering, as described in “Overview
window” on page 722.
Certain events require a specific number of occurrences in 25 hours before they are displayed
as unfixed. If they do not reach this threshold in 25 hours, they are flagged as expired.
Monitoring events are beneath the coalesce threshold and are transient.
You can also sort events by time or error code. When you sort by error code, the most serious
events (those events with the lowest numbers) are displayed first.
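The same event log can be queried from the CLI with the lseventlog command. The parameters in this hedged sketch are examples only; check the CLI reference for the full set of filters:
# Show unfixed events, ordered with the most serious error codes first
svcinfo lseventlog -fixed no -order severity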
Basic filtering
You can filter the Event log display in one of the following ways by using the drop-down menu
in the upper-left corner of the panel (Figure 11-305 on page 865):
Display the unfixed alerts and messages that require your attention: Select Recommended Actions.
Display all unfixed alerts and messages: Select Unfixed Messages and Alerts.
Display all alerts, messages, monitoring, and expired events: Select Show All, which includes the events that are under the threshold.
Time filtering
You can use the following methods to perform time filtering:
Select a start date and time, and an end date and time frame filter. Complete the following
steps to use this method:
a. Click Actions → Filter by Date, as shown in Figure 11-306.
Tip: You can also access the Filter by Date action by right-clicking an event.
b. The Date/Time Filter window opens, as shown in Figure 11-307. From this window,
select a start date and time and an end date and time.
c. Click Filter and Close. Your panel is now filtered based on the time frame.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 11-308 on page 866.
Select an event and show the entries within a certain period of this event.
To use this time frame filter, complete the following steps:
a. In the table, select an event.
b. Click Actions → Show entries within. Select minutes, hours, or days, and select a
value, as shown in Figure 11-309.
Figure 11-309 Show entries within a certain amount of time after this event
Tip: You can also access the Show entries within action by right-clicking an event.
c. Now, your window is filtered based on the time frame, as shown in Figure 11-310.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 11-311 on page 867.
Tip: To select multiple events, hold down Ctrl and click the entries that you want to
select.
Tip: You can also access the Mark as Fixed action by right-clicking an event.
3. The Warning window that is shown in Figure 11-313 opens. Click Yes.
3. You can view the file by using Notepad or another program, as shown in Figure 11-315.
2. The Warning window that is shown in Figure 11-317 opens. From this window, you must
confirm that you want to clear all entries from the error log.
3. Click Yes.
The Download Support Package window opens, as shown in Figure 11-320 on page 870.
The duration varies: Depending on your choice, this action can take several minutes
to complete.
From this window, select the following types of logs that you want to download:
– Standard logs
These logs contain the most recent logs that were collected for the system. These logs
are the most commonly used by support to diagnose and solve problems.
– Standard logs plus one existing statesave
These logs contain the standard logs for the system and the most recent statesaves
from any of the nodes in the system. Statesaves are also known as dumps or
livedumps.
– Standard logs plus most recent statesave from each node
These logs contain the standard logs for the system and the most recent statesaves
from each node in the system. Statesaves are also known as dumps or livedumps.
– Standard logs plus new statesaves
These logs generate new statesaves (livedumps) for all the nodes in the system and
package the statesaves with the most recent logs.
2. Click Download, as shown in Figure 11-320.
3. Select where you want to save the logs, as shown in Figure 11-321.
2. In the detailed view, select the node from which you want to download the logs by using
the drop-down menu that is in the upper-left corner of the panel, as shown in
Figure 11-323.
3. Select the package or packages that you want to download, as shown in Figure 11-324 on
page 872.
Tip: To select multiple packages, hold down Ctrl and click the entries that you want to
include.
Tip: You can also delete packages by clicking Delete in the Actions menu.
Maximum logging level: The maximum logging level can have a significant effect on the
performance of the CIMOM interface.
To change the CIMOM logging level to high, medium, or low, use the drop-down menu in the
upper-right corner of the panel, as shown in Figure 11-326.
Each user account has a name, role, and password assigned to it, which differs from the
Secure Shell (SSH) key-based role approach that is used by the CLI. Starting with version
6.3, you can access the CLI with a password and no SSH key.
Note: Use the default superuser account only for initial configuration and emergency access. Change its default password (passw0rd). Always define individual accounts for the users.
The role-based security feature organizes the SVC administrative functions into groups,
which are known as roles, so that permissions to run the various functions can be granted
differently to the separate administrative users. Table 11-1 on page 874 lists the four major
roles and one special role.
Copy Operator: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership. Intended for users that control all copy functionality of the cluster.
Service: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime. Intended for users that perform service maintenance and other hardware tasks on the cluster.
Monitor: All svcinfo commands and the following svctask commands: finderr, dumperrlog, dumpinternallog, chcurrentuser, and the svcconfig command: backup. Intended for users that need view access only.
VASA Provider: All commands that are related to virtual volumes (VVOLs) that are used by VMware vSphere. Intended for users and system accounts that are needed to manage virtual volumes and VVOLs used by VMware vSphere and managed by IBM Spectrum Virtualize.
The superuser account is a built-in account that has the Security Admin role permissions.
You cannot change the permissions of, or delete, this account; you can only change its
password. You can also change this password manually on the front panels of the clustered
system nodes.
An audit log tracks actions that are issued through the management GUI or CLI. For more
information, see 11.16.9, “Audit log information” on page 884.
User name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The user name can be 1 - 256 characters.
The following types of authentication are available in the Authentication Mode section:
– Local
The authentication method is on the system. Users must be part of a user group that
authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 11-1 on page 874) to which you want this user to belong.
– Remote
Remote authentication allows users of the SVC clustered system to authenticate to the
system by using the external authentication service. The external authentication
service can be IBM Tivoli Integrated Portal or a supported Lightweight Directory
Access Protocol (LDAP) service. Ensure that the remote authentication service is
supported by the SVC clustered system. For more information about remote user
authentication, see 2.12, “User authentication” on page 59.
The following types of local credentials can be configured in the Local Credentials section,
depending on your needs:
– Password authentication
The password authenticates users to the management GUI. Enter the password in the
Password field. Verify the password.
Password: The password can be 6 - 64 characters and it cannot begin or end with a
space.
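For reference, users with local credentials can also be created from the CLI. The following commands are a minimal sketch; the user names, password, and key file name are hypothetical, and the exact parameters should be verified against the CLI reference for your code level:
svctask mkuser -name itso_admin -usergrp Administrator -password Passw0rd1
svctask mkuser -name itso_monitor -usergrp Monitor -keyfile /tmp/itso_monitor.pub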
Tip: You can also change user properties by right-clicking a user and selecting
Properties from the list.
5. From the User Properties window, you can change the authentication mode and the local
credentials. For the authentication mode, choose the following type of authentication:
– Local
The authentication method is on the system. Users must be part of a user group that
authorizes them to specific sets of operations.
If you select this type of authentication, use the drop-down list to select the user group
(Table 11-1 on page 874) of which you want the user to be part.
– Remote
Remote authentication allows users of the SVC clustered system to authenticate to the
system by using the external authentication service. The external authentication
service can be IBM Tivoli Integrated Portal or a supported LDAP service. Ensure that
the remote authentication service is supported by the SVC clustered system.
For the local credentials, the following types of local credentials can be configured in this
section, depending on your needs:
– Password authentication: The password authenticates users to the management GUI.
You must enter the password in the Password field. Verify the password.
Password: The password can be 6 - 64 characters and it cannot begin or end with a
space.
– SSH public/private key authentication: The SSH key authenticates users to the CLI.
Use Browse to locate and upload the SSH public key.
6. To confirm the changes, click OK (Figure 11-331 on page 877).
Important: To remove the password for a specific user, the SSH public key must be
defined. Otherwise, this action is not available.
Tip: You can also remove the password by right-clicking a user and selecting Remove
Password.
4. The Warning window that is shown in Figure 11-333 on page 878 opens. Click Yes.
Important: To remove the SSH public key for a specific user, the password must be
defined. Otherwise, this action is not available.
Tip: You can also remove the SSH public key by right-clicking a user and selecting
Remove SSH Key.
4. The Warning window that is shown in Figure 11-335 opens. Click Yes.
Important: To select multiple users to delete, hold down Ctrl and click the entries that
you want to delete.
Tip: You can also delete a user by right-clicking the user and selecting Delete.
Group name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The group name can be 1 - 63 characters.
4. The User Group Properties window opens (Figure 11-341 on page 882).
From this window, you can change the role. You must select one of the following roles:
Monitor, Copy Operator, Service, Administrator, Security Administrator, or VASA Provider.
For more information about these roles, see Table 11-1 on page 874.
5. To confirm the changes, click OK, as shown in Figure 11-341.
– If you have users in this group, the Delete User Group window opens, as shown in
Figure 11-344. The users of this group are moved to the Monitor user group.
To view the audit log, from the SVC System panel, move the pointer over the Access selection
on the dynamic menu and click Audit Log, as shown in Figure 11-345.
Time filtering
The following methods are available to perform time filtering on the audit log:
Select a start date and time and an end date and time.
To use this time frame filter, complete the following steps:
a. Click Actions → Filter by Date, as shown in Figure 11-346.
Tip: You can also access the Filter by Date action by right-clicking an entry.
b. The Date/Time Filter window opens (Figure 11-347). From this window, select a start
date and time and an end date and time.
c. Click Filter and Close. The audit log panel is now filtered based on the selected time frame.
To disable this time frame filter, click Actions → Reset Date Filter, as shown in
Figure 11-348.
Select an entry and show the entries within a certain period of this event.
To use this time frame filter, complete the following steps:
a. In the table, select an entry.
b. Click Actions → Show entries within. Select minutes, hours, or days. Then, select a
value, as shown in Figure 11-349.
Tip: You can also access the Show entries within action by right-clicking an entry.
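The audit log can also be viewed from the CLI. The following command is a minimal sketch (the number of entries shown is an arbitrary example); verify the syntax against the CLI reference for your code level:
svcinfo catauditlog -first 5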
11.17 Configuration
In this section, we describe how to configure various properties of the SVC system.
Important: If you change the name of the system after iSCSI is configured, you might
need to reconfigure the iSCSI hosts.
To change the system name, click the system name and specify the new name.
System name: You can use the letters A - Z and a - z, the numbers 0 - 9, and the
underscore (_) character. The name can be 1 - 63 characters.
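For reference, the system name can also be changed from the CLI by using the chsystem command. A minimal sketch with a hypothetical system name follows:
svctask chsystem -name ITSO_SVC_PROD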
Notifications are normally sent immediately after an event is raised. However, events can
occur because of service actions that are performed. If a recommended service action is
active, notifications about these events are sent only if the events are still unfixed when the
service action completes.
4. A wizard opens, as shown in Figure 11-353. In the Email Event Notifications System
Location window, you must first define the system location information (Company name,
Street address, City, State or province, Postal code, and Country or region). Click Next
after you provide this information.
5. In the Contact Details window, you must enter contact information to enable IBM Support
personnel to contact the person in your organization to assist with problem resolution
(Contact name, Email address, Telephone (primary), Telephone (alternate), and Machine
location). Ensure that all contact information is valid and click Next, as shown in
Figure 11-354 on page 889.
6. In the Email Event Notifications Email Servers window (Figure 11-355), configure at least
one email server that is used by your site. Enter a valid IP address and a server port for
each server that is added. Ensure that the email servers are valid. Use Ping to verify that
your email server is accessible. If the destination is not reachable, the system does not
allow you to finish the configuration; you must enter a correct and accessible server.
7. The last window displays a summary of your Email Event Notifications settings. Click
Finish to complete the setup and close the wizard. More information is added to the panel,
as shown in Figure 11-356. You can edit or disable email notification from this window.
8. After the initial configuration is complete, you can edit all of these settings and also
define exactly which messages are reported through call home. Click Edit in the Email
window and commit the changes (Figure 11-357 on page 891).
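For reference, call home email notification can also be configured from the CLI. The following commands are a minimal sketch; the IP address, contact details, phone number, and email addresses are hypothetical, and the parameter set should be verified against the CLI reference for your code level:
svctask mkemailserver -ip 192.168.1.25 -port 25
svctask chemail -reply [email protected] -contact "John Doe" -primary 555-0100 -location "Server room 1"
svctask mkemailuser -address [email protected] -error on -warning on -info off
svctask startemail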
You can configure an SNMP server to receive various informational, error, or warning
notifications by entering the following information (Figure 11-358 on page 892):
IP Address
The address for the SNMP server.
Server Port
The remote port number for the SNMP server. The remote port number must be a value of
1 - 65535.
Community
The SNMP community is the name of the group to which devices and management
stations that run SNMP belong.
Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine any corrective
action.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
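An SNMP server can also be added from the CLI. The following command is a minimal sketch; the IP address, port, and community name are hypothetical, and the parameters should be verified against the CLI reference for your code level:
svctask mksnmpserver -ip 192.168.1.30 -port 162 -community public -error on -warning on -info off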
Syslog notifications
The syslog protocol is a standard protocol for forwarding log messages from a sender to a
receiver on an IP network. The IP network can be IPv4 or IPv6. The system can send syslog
messages that notify personnel about an event.
You can configure a syslog server to receive log messages from various systems and store
them in a central repository by entering the following information (Figure 11-359 on
page 893):
IP Address
The IP address for the syslog server.
Facility
The facility determines the format for the syslog messages. The facility can be used to
determine the source of the message.
Message Format
The message format depends on the facility. The system can transmit syslog messages in
the following formats:
– The concise message format provides standard detail about the event.
– The expanded format provides more details about the event.
Event Notifications:
Consider the following points about event notifications:
– Select Error if you want the user to receive messages about problems, such as
hardware failures, that must be resolved immediately.
– Select Warning if you want the user to receive messages about problems and
unexpected conditions. Investigate the cause immediately to determine whether any
corrective action is necessary.
– Select Info if you want the user to receive messages about expected events. No action
is required for these events.
The syslog messages can be sent in concise message format or expanded message format.
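A syslog server can also be added from the CLI. The following command is a minimal sketch; the IP address and facility value are hypothetical:
svctask mksyslogserver -ip 192.168.1.40 -facility 0 -error on -warning on -info off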
• If you are using a Network Time Protocol (NTP) server, select Set NTP Server IP
Address and then enter the IP address of the NTP server, as shown in
Figure 11-362.
4. Click Save.
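The NTP server can also be set from the CLI by using the chsystem command. A minimal sketch with a hypothetical IP address follows:
svctask chsystem -ntpip 192.168.1.10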
Licensing
Complete the following steps to configure the licensing settings:
1. From the SVC Settings panel, move the pointer over Settings and click System.
2. In the left column, select License Functions, as shown in Figure 11-363.
3. In the Select Your License section, you can choose between the following licensing
options for your SVC system:
– Standard Edition: Select the number of terabytes that are available for your license for
virtualization and for Copy Services functions for this license option.
– Entry Edition: This type of licensing is based on the number of the physical disks that
you are virtualizing and whether you selected to license the FlashCopy function, the
Metro Mirror and Global Mirror function, or both.
4. Set the licensing options for the SVC for the following elements:
– Virtualization Limit
Enter the capacity of the storage that will be virtualized by this system.
– FlashCopy Limit
Enter the capacity that is available for FlashCopy mappings.
Important: The Used capacity for FlashCopy mapping is the sum of all of the
volumes that are the source volumes of a FlashCopy mapping.
Important: The Used capacity for Global Mirror and Metro Mirror is the sum of the
capacities of all of the volumes that are in a Metro Mirror or Global Mirror
relationship; both master volumes and auxiliary volumes are included.
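For reference, the licensed capacities can also be set from the CLI by using the chlicense command. The following commands are a minimal sketch with hypothetical capacities in terabytes; verify the parameters against the CLI reference for your code level:
svctask chlicense -virtualization 100
svctask chlicense -flash 20 -remote 20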
The format for the software upgrade package name ends in four positive integers that are
separated by dots. For example, a software upgrade package might have the name that is
shown in the following example:
IBM_2145_INSTALL_7.6.0.0
Important: Before you attempt any SVC code update, read and understand the SVC
concurrent compatibility and code cross-reference matrix. For more information, see the
following website and click Latest IBM SAN Volume Controller code:
http://www.ibm.com/support/docview.wss?uid=ssg1S1001707
During the upgrade, each node in your SVC clustered system is automatically shut down and
restarted by the upgrade process. Because each node in an I/O Group provides an
alternative path to volumes, use the Subsystem Device Driver (SDD) to ensure that all I/O
paths between all hosts and SANs work.
If you do not perform this check, certain hosts might lose connectivity to their volumes and
experience I/O errors when the SVC node that provides that access is shut down during the
upgrade process. You can check the I/O paths by using SDD datapath query commands.
You can use the svcupgradetest utility to check for known issues that might cause problems
during an IBM Spectrum Virtualize software upgrade.
The software upgrade test utility can be downloaded in advance of the upgrade process, or it
can be downloaded and run directly during the software upgrade, as guided by the upgrade
wizard.
You can run the utility multiple times on the same SVC system to perform a readiness
check in preparation for a software upgrade. We strongly advise that you download and run
the utility a final time immediately before you apply the upgrade to ensure that you are
using the most recent release, in case a new version became available since you originally
downloaded it.
The installation and use of this utility is non-disruptive and the utility does not require the
restart of any SVC node; therefore, host I/O is not interrupted. The utility is only installed on
the current configuration node.
System administrators must continue to check whether the version of code that they plan to
install is the latest version. You can obtain the latest information at this website:
https://ibm.biz/BdE8Pe
This utility is intended to supplement rather than duplicate the existing tests that are
performed by the SVC upgrade procedure (for example, checking for unfixed errors in the
error log).
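After the utility is installed, it can be run from the CLI. The following invocation is a minimal sketch in which the target code level is a hypothetical example:
svcupgradetest -v 7.6.0.0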
Upgrade procedure
To upgrade the IBM Spectrum Virtualize software from version 7.4 to version 7.6, complete
the following steps:
1. Log in with your administrator user ID and password. The SVC management home page
opens. Click Settings → System → Update System.
2. The window that is shown in Figure 11-364 opens.
From the window that is shown in Figure 11-364, you can select the following options:
– Check for updates: Use this option to check, on the IBM website, whether an SVC
software version is available that is newer than the version that you installed on your
SVC. You need an Internet connection to perform this check.
– Update: Use this option to start the software upgrade process.
3. Click Update to start the upgrade process. The window that is shown in Figure 11-365
opens.
From the Upgrade Package window, download the upgrade test utility from the IBM
website. Click the folder icons and choose both packages, as outlined in Figure 11-366.
The new code level that is detected from the package appears.
4. When you click Update, the selection window opens. Choose either to update the system
automatically or manually. The differences are explained:
– Updating the system automatically
During the automatic update process, each node in a system is updated one at a time,
and the new code is staged on the nodes. While each node restarts, degradation in the
maximum I/O rate that can be sustained by the system can occur. After all the nodes in
the system are successfully restarted with the new code level, the new level is
automatically committed.
During an automatic code update, each node of a working pair is updated sequentially.
The node that is being updated is temporarily unavailable and all I/O operations to that
node fail. As a result, the I/O error counts increase and the failed I/O operations are
directed to the partner node of the working pair. Applications do not see any I/O
failures. When new nodes are added to the system, the update package is
automatically downloaded to the new nodes from the SVC system.
The update can normally be done concurrently with typical user I/O operations.
However, performance might be affected. If any restrictions apply to the operations that
can be done during the update, these restrictions are documented on the product
website that you use to download the update packages. During the update procedure,
most configuration commands are not available.
– Updating the system manually
During an automatic update procedure, the system updates each of the nodes
systematically. The automatic method is the preferred procedure for updating the code
on nodes; however, to provide more flexibility in the update process, you can also
update each node manually.
During this manual procedure, you prepare the update, remove a node from the
system, update the code on the node, and return the node to the system. You repeat
this process for the remaining nodes until the last node is removed from the system.
Every node must be updated to the same code level. You cannot interrupt the update
and switch to installing a different level.
After all of the nodes are updated, you must confirm the update to complete the
process. The confirmation restarts each node in order and takes about 30 minutes to
complete.
We selected an Automatic update (Figure 11-367).
5. When you click Finish, the IBM Spectrum Virtualize software upgrade starts. The window
that is shown in Figure 11-368 opens. The system starts with the upload of the test utility
and the SVC system firmware.
6. After a while, the system automatically starts to run the update test utility.
7. If the system detects an issue or an error, the GUI guides you through it. Click Read
more, as shown in Figure 11-370. These issues are typically caused by a configuration
that is not recommended, and we advise that you fix them before you proceed with the
upgrade.
Figure 11-370 Issues that are detected by the update test utility
If you decide that issues or warnings are marginal and do not affect the upgrade process,
confirm the resumption of the upgrade as shown in Figure 11-371.
8. The Update Test Utility Results panel opens and describes the results, as shown in
Figure 11-372.
9. In our case, we received a warning because we did not enable email notification. So, we
can click Close and proceed with the update. As shown in Figure 11-373, we click
Resume.
11.When the update for the first node completes, the system is paused for approximately 30
minutes to ensure that all paths are reestablished to the newly updated node
(Figure 11-375 on page 902).
12.SVC updates each node in sequence, with the active configuration node updated last.
When the configuration node is updated, a failover occurs and you temporarily lose access
to the web console (it goes offline). Click Yes to reestablish the web session, as shown in
Figure 11-376.
get assigned disk capacity directly from the SVC instead of from an ESXi datastore. This
approach allows storage administrators to control the appropriate usage of storage capacity
and to enable enhanced storage virtualization features directly for the virtual machine (such
as replication, thin provisioning, compression, and encryption).
VVols management is enabled on the SVC in the System section, as shown in Figure 11.18.1 on
page 903.
The NTP server must be configured before VVols management is enabled. We highly
recommend that you use the same NTP server for ESXi and for the SVC.
A quick-start guide to VVols, Quick-start Guide to Configuring VMware Virtual Volumes for
Systems Powered by IBM Spectrum Virtualize, REDP-5321, is available at:
https://www.redbooks.ibm.com/Redbooks.nsf/RedpieceAbstracts/redp5321.html?Open
An IBM Redbooks publication that covers VVols in more depth is in development and will be
released on the IBM Redbooks website shortly.
11.18.1 Resources
IBM Spectrum Virtualize V7.6 introduces advanced management and allocation of system
resources. You can limit the amount of system memory that is dedicated to various SVC
operations, as shown in
Navigation
This option enables or disables the animated dynamic menu of the GUI. You can either have
static icons of a fixed size on the left side, or icons that dynamically change their size
when you hover the mouse cursor over them. To enable the animated dynamic icons, select the
check box, as shown in Figure 11-380 on page 904.
Login Message
The login message is displayed to anyone who logs in to a GUI or CLI session. It can be
defined and enabled either from the GUI or from the CLI after the text file with the message
content is loaded to the system (Figure ).
Details about how to define the login message from the CLI and how it appears after it is
enabled are provided in “Welcome banner” on page 718.
General settings
Complete the following steps to configure general GUI preferences:
1. From the SVC Settings window, move the pointer over Settings and click GUI Preferences
(Figure 11-382 on page 905).
– Clear Customization
This option deletes all GUI preferences that are stored in the browser and restores the
default preferences.
– Knowledge Center
You can change the URL of IBM Spectrum Virtualize Knowledge Center.
– The accessibility option enables Low graphic mode when the system is connected
through a slower network.
– Advanced pool settings allow you to select the extent size during storage pool
creation.
– Default logout time, in minutes, after inactivity in the established session.
To ensure that the performance levels that you want for your system are maintained, monitor
performance periodically. Monitoring provides visibility into problems that exist or are
developing so that they can be addressed in a timely manner.
Performance considerations
When you are designing the IBM Spectrum Virtualize infrastructure or maintaining an existing
infrastructure, you must consider many factors in terms of their potential effect on
performance. These factors include, but are not limited to, dissimilar workloads that compete
for the same resources, overloaded resources, insufficient available resources, poorly
performing resources, and similar performance constraints.
Remember the following high-level rules when you are designing your storage area network
(SAN) and IBM Spectrum Virtualize layout:
Host-to-SVC inter-switch link (ISL) oversubscription
This area is the most significant I/O load across ISLs. The recommendation is to maintain
a maximum of 7-to-1 oversubscription. A higher ratio is possible, but it tends to lead to I/O
bottlenecks. This suggestion also assumes a core-edge design, where the hosts are on
the edges and the SVC is the core.
Storage-to-SVC ISL oversubscription
This area is the second most significant I/O load across ISLs. The maximum
oversubscription is 7-to-1. A higher ratio is not supported. Again, this suggestion assumes
a multiple-switch SAN fabric design.
Node-to-node ISL oversubscription
This area is the least significant load of the three possible oversubscription bottlenecks. In
standard setups, this load can be ignored. Although this area is not entirely negligible, it
does not contribute significantly to the ISL load. However, node-to-node ISL
oversubscription is mentioned here in relation to the split-cluster capability that was made
available with version 6.3. When the system is running in this manner, the number of ISL
links becomes more important. As with the storage-to-SVC ISL oversubscription, this load
also requires a maximum of 7-to-1 oversubscription. Exercise caution and careful planning
when you determine the number of ISLs to implement. If you need assistance, we
recommend that you contact your IBM representative and request technical assistance.
ISL trunking/port channeling
For the best performance and availability, we highly recommend that you use ISL trunking
or port channeling. Independent ISL links can easily become overloaded and turn into
performance bottlenecks. Bonded or trunked ISLs automatically share load and provide
better redundancy in a failure.
Rules and guidelines are no substitution for monitoring performance. Monitoring performance
can provide a validation that design expectations are met and identify opportunities for
improvement.
Performance scales nearly linearly as nodes are added to the cluster, until performance
eventually becomes limited by the attached components. Also, although virtualization
provides significant flexibility in terms of the components that are used, it does not diminish
the need to design the system around those components so that it can deliver the level
of performance that you want.
The key item for planning is your SAN layout. Switch vendors have slightly different planning
requirements, but the end goal is that you always want to maximize the bandwidth that is
available to the SVC ports. The SVC is one of the few devices that can drive ports to their
limits on average, so it is imperative that you put significant thought into planning the SAN
layout.
Performance monitoring
In this section, we highlight several performance monitoring techniques.
The statistics files (Volume, MDisk, and Node) are saved at the end of the sampling interval
and a maximum of 16 files (each) are stored before they are overlaid in a rotating log fashion.
This design provides statistics for the most recent 80-minute period if the default 5-minute
sampling interval is used. IBM Spectrum Virtualize supports user-defined sampling intervals
of 1 - 60 minutes.
The maximum space that is required for a performance statistics file is 1,153,482 bytes. Up to
128 (16 per each of the three types across eight nodes) different files can exist across eight
SVC nodes. This design makes the total space requirement a maximum of 147,645,694 bytes
for all performance statistics from all nodes in an SVC cluster.
Note: Remember this maximum of 147,645,694 bytes for all performance statistics from all
nodes in an SVC cluster when you are in time-critical situations. The required size is not
otherwise important because SVC node hardware can map the space.
You can define the sampling interval by using the startstats -interval 2 command to
collect statistics at 2-minute intervals. For more information, see 10.8.8, “Starting statistics
collection” on page 630.
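For example, the following command (a minimal sketch) sets the collection interval to 2 minutes:
svctask startstats -interval 2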
Collection intervals: Although more frequent collection intervals provide a more detailed
view of what happens within IBM Spectrum Virtualize and SVC, they shorten the amount of
time that the historical data is available on the IBM Spectrum Virtualize. For example,
instead of an 80-minute period of data with the default five-minute interval, if you adjust to
2-minute intervals, you have a 32-minute period instead.
Since software version 5.1.0, cluster-level statistics are no longer supported. Instead, use the
per node statistics that are collected. The sampling of the internal performance counters is
coordinated across the cluster so that when a sample is taken, all nodes sample their internal
counters at the same time. It is important to collect all files from all nodes for a complete
analysis. Tools, such as Tivoli Storage Productivity Center, perform this intensive data
collection for you.
The node_frontpanel_id is that of the node on which the statistics were collected. The date is in
the form <yymmdd> and the time is in the form <hhmmss>. The following example shows an
MDisk statistics file name:
Nm_stats_113986_141031_214932
Example A-1 shows typical MDisk, volume, node, and disk drive statistics file names.
Tip: The performance statistics files can be copied from the SVC nodes to a local drive on
your workstation by using the pscp.exe (included with PuTTY) from an MS-DOS command
line, as shown in this example:
C:\Program Files\PuTTY>pscp -unsafe -load ITSO_SVC3
[email protected]:/dumps/iostats/* c:\statsfiles
Use the -load parameter to specify the session that is defined in PuTTY.
qperf
qperf is an unofficial (no-charge and unsupported) collection of awk scripts. qperf was made
available for download from IBM Techdocs. It was written by Christian Karpp. qperf is
designed to provide a quick performance overview by using the command-line interface (CLI)
and a UNIX Korn shell. (It can also be used with Cygwin.)
svcmon
svcmon is no longer available.
The performance statistics files are in .xml format. They can be manipulated by using various
tools and techniques. Figure A-1 on page 912 shows an example of the type of chart that you
can produce by using the IBM Spectrum Virtualize performance statistics.
Each node collects various performance statistics, mostly at 5-second intervals, and the
statistics are available from the config node in a clustered environment. This information
can help you determine the performance effect of a specific node. As with system statistics,
node statistics help you to evaluate whether the node is operating within normal performance
metrics.
The lsnodestats command provides performance statistics for the nodes that are part of a
clustered system, as shown in Example A-2 (the output is truncated and shows only part of
the available statistics). You can also specify a node name in the command to limit the output
for a specific node.
The previous example shows statistics for the two node members of cluster ITSO_SVC3. For
each node, the following columns are displayed:
stat_name: Provides the name of the statistic field
stat_current: The current value of the statistic field
stat_peak: The peak value of the statistic field in the last 5 minutes
stat_peak_time: The time that the peak occurred
In contrast, the lssystemstats command lists the same set of statistics that is listed
with the lsnodestats command, but representing all of the nodes in the cluster. The values for
these statistics are calculated from the node statistics values in the following way:
Bandwidth: Sum of bandwidth of all nodes
Latency: Average latency for the cluster, which is calculated by using data from the whole
cluster, not an average of the single node values
IOPS: Total IOPS of all nodes
CPU percentage: Average CPU percentage of all nodes
Table A-1 has a brief description of each of the statistics that are presented by the
lssystemstats and lsnodestats commands.
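The following invocations are a minimal sketch of how these commands are typically issued from the CLI; the node name is hypothetical and the output shown earlier in Example A-2 is truncated:
svcinfo lssystemstats
svcinfo lsnodestats node1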
The real-time statistics are available from the IBM Spectrum Virtualize GUI. Click
Monitoring → Performance (as shown in Figure A-2) to open the Performance Monitoring
window.
As shown in Figure A-3 on page 917, the Performance monitoring window is divided into the
following sections that provide utilization views for the following resources:
CPU Utilization: The CPU utilization graph shows the current percentage of CPU usage
and peaks in utilization. It can also display compression CPU usage for systems with
compressed volumes.
Volumes: Shows four metrics on the overall volume utilization graphics:
– Read
– Write
– Read latency
– Write latency
Interfaces: The Interfaces graph displays data points for Fibre Channel (FC), iSCSI,
serial-attached SCSI (SAS), and IP Remote Copy interfaces. You can use this information
to help determine connectivity issues that might impact performance.
– Fibre Channel
– iSCSI
– SAS
– IP Remote Copy
MDisks: Also shows four metrics on the overall MDisks graphics:
– Read
– Write
– Read latency
– Write latency
You can use these metrics to help determine the overall performance health of the volumes
and MDisks on your system. Consistent unexpected results can indicate errors in
configuration, system faults, or connectivity issues.
You can also select to view performance statistics for each of the available nodes of the
system, as shown in Figure A-4.
You can also change the metric between MBps or IOPS, as shown in Figure A-5.
On any of these views, you can select any point with your cursor to know the exact value and
when it occurred. When you place your cursor over the timeline, it becomes a dotted line with
the various values gathered, as shown in Figure A-6 on page 918.
For each of the resources, various values exist that you can view by selecting the value. For
example, as shown in Figure A-7, the four available fields are selected for the MDisks view:
Read, Write, Read latency, and Write latency. In our example, Read is not selected.
Performance data collection and Tivoli Storage Productivity Center for Disk
Although you can obtain performance statistics in standard .xml files, the use of .xml files is a
less practical and more complicated method to analyze the IBM Spectrum Virtualize
performance statistics. Tivoli Storage Productivity Center for Disk is the supported IBM tool to
collect and analyze SVC performance statistics.
Tivoli Storage Productivity Center for Disk is installed separately on a dedicated system and it
is not part of the IBM Spectrum Virtualize bundle.
For more information about the use of Tivoli Storage Productivity Center to monitor your
storage subsystem, see SAN Storage Performance Management Using Tivoli Storage
Productivity Center, SG24-7364, which is available at this website:
http://www.redbooks.ibm.com/abstracts/sg247364.html?Open
SVC port quality statistics: Tivoli Storage Productivity Center for Disk Version 4.2.1
supports the SVC port quality statistics that are provided in SVC versions 4.3 and later.
Monitoring these metrics and the performance metrics can help you to maintain a stable
SAN environment.
Figure A-8 SAN Volume Controller 2145-DH8 with Fibre Channel host interface adapters
The port mask is an optional parameter of the mkhost and chhost commands. The port mask
is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports
enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111
(all ports enabled).
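The following command is a minimal sketch of creating a host with a port mask, under the assumption that the -mask parameter of mkhost carries the port mask; the host name and WWPN are hypothetical, so verify the parameter names against the CLI reference for your code level:
svctask mkhost -name ESX_Host1 -fcwwpn 2100000E1E30A1B2 -mask 0011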
Setting up Fibre Channel port masks is particularly useful when you have more than four
Fibre Channel ports on any node in the system, as it saves setting up a large number of SAN
zones.
Fibre Channel IO ports are logical ports, which can exist on Fibre Channel platform ports or
on FCoE platform ports.
There are two Fibre Channel port masks on a system. The local port mask controls
connectivity to other nodes in the same system, and the partner port mask controls
connectivity to nodes in remote, partnered systems. By default, all ports are enabled for both
local and partner connectivity.
The port masks apply to all nodes on a system; a different port mask cannot be set on nodes
in the same system. You do not have to have the same port mask on partnered systems.
Note: If all devices are zoned correctly during configuration, the use of port masking can
be optional.
The Fibre Channel paths correspond to the following bit numbers and worldwide port name (WWPN) formats:
HBA 1:
Fibre Channel port 1 = bit 1 = 50:07:68:01:4x:xx:xx
Fibre Channel port 2 = bit 2 = 50:07:68:01:3x:xx:xx
Fibre Channel port 3 = bit 3 = 50:07:68:01:1x:xx:xx
Fibre Channel port 4 = bit 4 = 50:07:68:01:2x:xx:xx
HBA 2:
Fibre Channel port 5 = bit 5 = 50:07:68:01:5x:xx:xx
Fibre Channel port 6 = bit 6 = 50:07:68:01:6x:xx:xx
Fibre Channel port 7 = bit 7 = 50:07:68:01:7x:xx:xx
Fibre Channel port 8 = bit 8 = 50:07:68:01:8x:xx:xx
Before you run the port masking procedure, verify the current status of the port masks. All
bits should be set to 1 (all ports enabled), as highlighted in Example A-4.
Example A-4
IBM_2076 superuser>lssystem
# non-relevant output lines removed for clarity #
local_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
partner_fc_port_mask
1111111111111111111111111111111111111111111111111111111111111111
Assuming that you need to isolate two ports for local node-to-node communication and two
ports for remote partnership communication, use the following procedure:
1. Identify the ports that you want to use for local node communication (ports “:7x” and “:8x”):
– Port “:7x” (WWPN 50:07:68:01:7x:xx:xx), intended for local node communication
– Port “:8x” (WWPN 50:07:68:01:8x:xx:xx), intended for local node communication
2. Identify the ports that you want to use for partnership communication (ports “:5x” and “:6x”):
– Port “:5x” (WWPN 50:07:68:01:5x:xx:xx), intended for partnership communication
– Port “:6x” (WWPN 50:07:68:01:6x:xx:xx), intended for partnership communication
3. Set the port mask for local node communication, and then the port mask for partnership
communication, by issuing the chsystem commands that are sketched in the following example:
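The following commands are a minimal sketch, under the assumption that the chsystem -localfcportmask and -partnerfcportmask parameters are used on your code level; the masks correspond to ports 7 and 8 for local node communication and ports 5 and 6 for partnership communication:
svctask chsystem -localfcportmask 11000000
svctask chsystem -partnerfcportmask 00110000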
5. After you run the two commands to set the port masks for local node communication and
partnership communication, verify the result with the lssystem command. The
local_fc_port_mask value now ends with 11000000 and the partner_fc_port_mask value ends
with 00110000, as highlighted in Example A-5.
Example A-5
IBM_2076 superuser>lssystem
# non-relevant output lines removed for clarity #
local_fc_port_mask
0000000000000000000000000000000000000000000000000000000011000000
partner_fc_port_mask
0000000000000000000000000000000000000000000000000000000000110000
7. This concludes the steps for port masking: two ports for local node communication and two
ports for remote partnership communication.
Appendix B. Terminology
In this appendix, we define the IBM System Storage SAN Volume Controller (SVC) terms that
are commonly used in this book.
To see the complete set of terms that relate to the SAN Volume Controller, see the Glossary
section of the IBM SAN Volume Controller Knowledge Center, which is available at this
website:
http://www.ibm.com/support/knowledgecenter/STPVGU/landing/SVC_welcome.html
Array
An ordered collection, or group, of physical devices (disk drive modules) that are used to
define logical volumes or devices. An array is a group of drives designated to be managed
with a Redundant Array of Independent Disks (RAID).
Asymmetric virtualization
Asymmetric virtualization is a virtualization technique in which the virtualization engine is
outside the data path and performs a metadata-style service. The metadata server contains
all the mapping and locking tables, and the storage devices contain only data. See also
“Symmetric virtualization” on page 936.
Asynchronous replication
Asynchronous replication is a type of replication in which control is given back to the
application as soon as the write operation is made to the source volume. Later, the write
operation is made to the target volume. See also “Synchronous replication” on page 936.
Back end
See “Front end and back end” on page 929.
Call home
Call home is a communication link that is established between a product and a service
provider. The product can use this link to call IBM or another service provider when the
product requires service. With access to the machine, service personnel can perform service
tasks, such as viewing error and problem logs or initiating trace and dump retrievals.
Canister
A canister is a single processing unit within a storage system.
Capacity licensing
Capacity licensing is a licensing model that licenses features with a price-per-terabyte model.
Licensed features are FlashCopy, Metro Mirror, Global Mirror, and virtualization. See also
“FlashCopy” on page 928, “Metro Mirror” on page 932, and “Virtualization” on page 937.
Chain
A set of enclosures that are attached to provide redundant access to the drives inside the
enclosures. Each control enclosure can have one or more chains.
Channel extender
A channel extender is a device that is used for long-distance communication that connects
other storage area network (SAN) fabric components. Generally, channel extenders can
involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or
another long-distance communication protocol.
Child pool
Administrators can use child pools to control capacity allocation for volumes that are used for
specific purposes. Instead of being created directly from managed disks (MDisks), child pools
are created from existing capacity that is allocated to a parent pool. As with parent pools,
volumes can be created that specifically use the capacity that is allocated to the child pool.
Child pools are similar to parent pools and have similar properties. Child pools can be used
for volume copy operations. Also, see “Parent pool” on page 932.
Cold extent
A cold extent is an extent of a volume that does not get any performance benefit if it is moved
from a hard disk drive (HDD) to a Flash disk. A cold extent also refers to an extent that needs
to be migrated onto an HDD if it is on a Flash disk drive.
Compression
Compression is a function that removes repetitive characters, spaces, strings of characters,
or binary data from the data that is being processed and replaces characters with control
characters. Compression reduces the amount of storage space that is required for data.
Compression accelerator
A compression accelerator is hardware onto which the work of compression is offloaded from
the microprocessor.
Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide
configuration and service functions over the network interface. This node is termed the
configuration node. This configuration node manages the data that describes the
clustered-system configuration and provides a focal point for configuration commands. If the
configuration node fails, another node in the cluster transparently assumes that role.
Consistency Group
A Consistency Group is a group of copy relationships between virtual volumes or data sets
that are maintained with the same time reference so that all copies are consistent in time. A
Consistency Group can be managed as a single entity.
Container
A container is a software object that holds or organizes other software objects or entities.
Contingency capacity
For thin-provisioned volumes that are configured to automatically expand, the unused real
capacity that is maintained. For thin-provisioned volumes that are not configured to
automatically expand, the difference between the used capacity and the new real capacity.
Copied state
Copied is a FlashCopy state that indicates that a copy was triggered after the copy
relationship was created. The Copied state indicates that the copy process is complete and
the target disk has no further dependency on the source disk. The time of the last trigger
event is normally displayed with this status.
Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN
provides all of the connectivity of the redundant SAN, but without the 100% redundancy. SVC
nodes are typically connected to a “redundant SAN” that is made up of two counterpart SANs.
A counterpart SAN is often called a SAN fabric.
Cross-volume consistency
A consistency group property that guarantees consistency between volumes when an
application issues dependent write operations that span multiple volumes.
Data consistency
Data consistency is a characteristic of the data at the target site where the dependent write
order is maintained to guarantee the recoverability of applications.
Data migration
Data migration is the movement of data from one physical location to another physical
location without the disruption of application I/O operations.
Discovery
The automatic detection of a network topology change, for example, new and deleted nodes
or links.
Disk tier
MDisks (logical unit numbers (LUNs)) that are presented to the SVC cluster likely have
different performance attributes because of the type of disk or RAID array on which they are
installed. The MDisks can be on 15 K RPM Fibre Channel (FC) or serial-attached SCSI (SAS)
disk, Nearline SAS, or Serial Advanced Technology Attachment (SATA), or even Flash Disks.
Therefore, a storage tier attribute is assigned to each MDisk and the default is generic_hdd.
SVC 6.1 introduced a new disk tier attribute for Flash Disk, which is known as generic_ssd.
Distributed RAID
An alternative RAID scheme where the number of drives that are used to store the array can
be greater than the equivalent, typical RAID scheme. The same data stripes are distributed
across a greater number of drives, which increases the opportunity for parallel I/O and
improves the overall performance of the array.
Easy Tier
Easy Tier is a volume performance function within the SVC that provides automatic data
placement of a volume’s extents in a multitiered storage pool. The pool normally contains a
mix of Flash Disks and HDDs. Easy Tier measures host I/O activity on the volume’s extents
and migrates hot extents onto the Flash Disks to ensure the maximum performance.
Encryption deadlock
The inability to access encryption keys to decrypt data. See also encryption recovery key.
Evaluation mode
The evaluation mode is an Easy Tier operating mode in which the host activity on all the
volume extents in a pool are “measured” only. No automatic extent migration is performed.
Event (error)
An event is an occurrence of significance to a task or system. Events can include the
completion or failure of an operation, user action, or a change in the state of a process.
Before SVC V6.1, this situation was known as an error.
Event code
An event code is a value that is used to identify an event condition to a user. This value might
map to one or more event IDs or to values that are presented on the service panel. This value
is used to report error conditions to IBM and to provide an entry point into the service guide.
Event ID
An event ID is a value that is used to identify a unique error condition that was detected by the
SVC. An event ID is used internally in the cluster to identify the error.
Excluded condition
The excluded condition is a status condition. It describes an MDisk that the SVC decided is
no longer sufficiently reliable to be managed by the cluster. The user must issue a command
to include the MDisk in the cluster-managed storage.
Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between
MDisks and volumes. The size of an extent can range from 16 MB to 8 GB.
External storage
External storage refers to managed disks (MDisks) that are SCSI logical units that are
presented by storage systems that are attached to and managed by the clustered system.
Failback
Failback is the restoration of an appliance to its initial configuration after the detection and
repair of a failed network or component.
Failover
Failover is an automatic operation that switches to a redundant or standby system or node in
the event of a software, hardware, or network interruption. See also “Failback”.
Field-replaceable units
Field-replaceable units (FRUs) are individual parts that are replaced entirely when any one of
the unit’s components fails. They are held as spares by the IBM service organization.
FlashCopy
FlashCopy refers to a point-in-time copy where a virtual copy of a volume is created. The
target volume maintains the contents of the volume at the point in time when the copy was
established. Any subsequent write operations to the source volume are not reflected on the
target volume.
FlashCopy mapping
A FlashCopy mapping is a continuous space on a direct-access storage volume, which is
occupied by or reserved for a particular data set, data space, or file.
FlashCopy relationship
See “FlashCopy mapping” on page 928.
FlashCopy service
FlashCopy service is a copy service that duplicates the contents of a source volume on a
target volume. In the process, the original contents of the target volume are lost. See also
“Point-in-time copy” on page 933.
Flash drive
A data storage device that uses solid-state memory to store persistent data.
Flash module
A modular hardware unit containing flash memory, one or more flash controllers, and
associated electronics.
Global Mirror
Global Mirror is a method of asynchronous replication that maintains data consistency across
multiple volumes within or across multiple systems. Global Mirror is generally used where
distances between the source site and target site cause increased latency beyond what the
application can accept.
Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KiB or
256 KiB) in the SVC. A grain is also the unit to extend the real size of a thin-provisioned
volume (32 KiB, 64 KiB, 128 KiB, or 256 KiB).
Hop
One segment of a transmission path between adjacent nodes in a routed network.
Host ID
A host ID is a numeric identifier that is assigned to a group of host FC ports or Internet Small
Computer System Interface (iSCSI) host names for LUN mapping. For each host ID, SCSI IDs
are mapped to volumes separately. The intent is to have a one-to-one relationship between
hosts and host IDs, although this relationship cannot be policed.
Host mapping
Host mapping refers to the process of controlling which hosts have access to specific
volumes within a cluster. (Host mapping is equivalent to LUN masking.) Before SVC V6.1, this
process was known as VDisk-to-host mapping.
Hot extent
A hot extent is a frequently accessed volume extent that gets a performance benefit if it is
moved from an HDD onto a Flash Disk.
HyperSwap
Pertaining to a function that provides continuous, transparent availability against storage
errors and site failures, and is based on synchronous replication.
Image mode
Image mode is an access mode that establishes a one-to-one mapping of extents in the
storage pool (existing LUN or (image mode) MDisk) with the extents in the volume.
Image volume
An image volume is a volume in which a direct block-for-block translation exists from the
managed disk (MDisk) to the volume.
I/O Group
Each pair of SVC cluster nodes is known as an input/output (I/O) Group. An I/O Group has a
set of volumes that are associated with it that are presented to host systems. Each SVC node
is associated with exactly one I/O Group. The nodes in an I/O Group provide a failover and
failback function for each other.
Internal storage
Internal storage refers to an array of managed disks (MDisks) and drives that are held in
enclosures and in nodes that are part of the SVC cluster.
I/O group
A collection of volumes and node relationships that present a common interface to host
systems. Each pair of nodes is known as an input/output (I/O) group.
Latency
The time interval between the initiation of a send operation by a source task and the
completion of the matching receive operation by the target task. More generally, latency is the
time between a task initiating data transfer and the time that transfer is recognized as
complete at the data destination.
Licensed capacity
The amount of capacity on a storage system that a user is entitled to configure.
License key
An alphanumeric code that activates a licensed function on a product.
Local fabric
The local fabric is composed of SAN components (switches, cables, and so on) that connect
the components (nodes, hosts, and switches) of the local cluster together.
Machine signature
A string of characters that identifies a system. A machine signature might be required to
obtain a license key.
Metro Mirror
Metro Mirror is a method of synchronous replication that maintains data consistency across
multiple volumes within the system. Metro Mirror is generally used when the write latency that
is caused by the distance between the source site and target site is acceptable to application
performance.
Mirrored volume
A mirrored volume is a single virtual volume that has two physical volume copies. The primary
physical copy is known within the SVC as copy 0 and the secondary copy is known within the
SVC as copy 1.
Node
An SVC node is a hardware entity that provides virtualization, cache, and copy services for
the cluster. The SVC nodes are deployed in pairs that are called I/O Groups. One node in a
clustered system is designated as the configuration node.
Node canister
A node canister is a hardware unit that includes the node hardware, fabric and service
interfaces, and serial-attached SCSI (SAS) expansion ports.
Node rescue
The process by which a node that has no valid software installed on its hard disk drive can
copy software from another node connected to the same Fibre Channel fabric.
Oversubscription
Oversubscription refers to the ratio of the sum of the traffic on the initiator N-port connections
to the traffic on the most heavily loaded ISLs, where more than one connection is used
between these switches. Oversubscription assumes a symmetrical network, and a specific
workload that is applied equally from all initiators and sent equally to all targets. A
symmetrical network means that all the initiators are connected at the same level, and all the
controllers are connected at the same level.
Parent pool
Parent pools receive their capacity from MDisks. All MDisks in a pool are split into extents of
the same size. Volumes are created from the extents that are available in the pool. You can
add MDisks to a pool at any time either to increase the number of extents that are available
for new volume copies or to expand existing volume copies. The system automatically
balances volume extents between the MDisks to provide the best performance to the
volumes.
Partnership
In Metro Mirror or Global Mirror operations, the relationship between two clustered systems.
In a clustered-system partnership, one system is defined as the local system and the other
system as the remote system.
Point-in-time copy
A point-in-time copy is the instantaneous copy that the FlashCopy service makes of the
source volume. See also “FlashCopy service” on page 929.
Preparing phase
Before you start the FlashCopy process, you must prepare a FlashCopy mapping. The
preparing phase flushes a volume’s data from cache in preparation for the FlashCopy
operation.
Primary volume
In a stand-alone Metro Mirror or Global Mirror relationship, the target of write operations
issued by the host application.
Private fabric
Configure one SAN per fabric so that it is dedicated for node-to-node communication. This
SAN is referred to as a private SAN.
Public fabric
Configure one SAN per fabric so that it is dedicated for host attachment, storage system
attachment, and remote copy operations. This SAN is referred to as a public SAN. You can
configure the public SAN to allow SVC node-to-node communication also. You can optionally
use the -localportfcmask parameter of the chsystem command to constrain the node-to-node
communication to use only the private SAN.
Quorum disk
A disk that contains a reserved area that is used exclusively for system management. The
quorum disk is accessed when it is necessary to determine which half of the clustered system
continues to read and write data. Quorum disks can either be MDisks or drives.
Quorum index
The quorum index is the pointer that indicates the order that is used to resolve a tie. Nodes
attempt to lock the first quorum disk (index 0), followed by the next disk (index 1), and finally
the last disk (index 2). The tie is broken by the node that locks them first.
RACE engine
The RACE engine compresses data on volumes in real time with minimal impact on
performance. See “Compression” on page 925 or “Real-time Compression” on page 933.
Real capacity
Real capacity is the amount of storage that is allocated to a volume copy from a storage pool.
Real-time Compression
Real-time Compression is an IBM integrated software function for storage space efficiency.
The RACE engine compresses data on volumes in real time with minimal impact on
performance.
RAID 0
RAID 0 is a data striping technique that is used across an array; it provides no data
protection.
RAID 1
RAID 1 is a mirroring technique that is used on a storage array in which two or more identical
copies of data are maintained on separate mirrored disks.
RAID 10
RAID 10 is a combination of a RAID 0 stripe that is mirrored (RAID 1). Therefore, two identical
copies of striped data exist; no parity exists.
RAID 5
RAID 5 is an array that has a data stripe, which includes a single logical parity drive. The
parity check data is distributed across all the disks of the array.
RAID 6
RAID 6 is a RAID level that has two logical parity drives per stripe, which are calculated with
different algorithms. Therefore, this level can continue to process read and write requests to
all of the array’s virtual disks in the presence of two concurrent disk failures.
Rebuild area
Reserved capacity that is distributed across all drives in a redundant array of drives. If a drive
in the array fails, the lost array data is systematically restored into the reserved capacity,
returning redundancy to the array. The duration of the restoration process is minimized
because all drive members simultaneously participate in restoring the data. See also
distributed RAID.
Relationship
In Metro Mirror or Global Mirror, a relationship is the association between a master volume
and an auxiliary volume. These volumes also have the attributes of a primary or secondary
volume.
Reliability, availability, and serviceability (RAS)
Reliability is the degree to which the hardware remains free of faults. Availability is the ability
of the system to continue operating despite predicted or experienced faults. Serviceability is
how efficiently and nondisruptively broken hardware can be fixed.
Remote fabric
The remote fabric is composed of SAN components (switches, cables, and so on) that
connect the components (nodes, hosts, and switches) of the remote cluster together.
Significant distances can exist between the components in the local cluster and those
components in the remote cluster.
Secondary volume
Pertaining to remote copy, the volume in a relationship that contains a copy of data written by
the host application to the primary volume. See also relationship.
Snapshot
A snapshot is an image backup type that consists of a point-in-time view of a volume.
Space efficient
See “Thin provisioning” on page 937.
Spare
An extra storage component, such as a drive or tape, that is predesignated for use as a
replacement for a failed component.
Spare goal
The optimal number of spares that are needed to protect the drives in the array from failures.
The system logs a warning event when the number of spares that protect the array drops
below this number.
Stand-alone relationship
In FlashCopy, Metro Mirror, and Global Mirror, relationships that do not belong to a
consistency group and that have a null consistency-group attribute.
Statesave
Binary data collection that is used in problem determination.
Stretched system
A stretched system is an extended high availability (HA) method that is supported by SVC to
enable I/O operations to continue after the loss of half of the system. A stretched system is
also sometimes referred to as a split system. One half of the system and I/O Group is usually
in a geographically distant location from the other, often 10 kilometers (6.2 miles) or more. A
third site is required to host a storage system that provides a quorum disk.
Striped
Pertaining to a volume that is created from multiple managed disks (MDisks) that are in the
storage pool. Extents are allocated on the MDisks in the order specified.
Symmetric virtualization
Symmetric virtualization is a virtualization technique in which the physical storage, in the form
of a Redundant Array of Independent Disks (RAID), is split into smaller chunks of storage
known as extents. These extents are then concatenated, by using various policies, to make
volumes. See also “Asymmetric virtualization” on page 924.
Synchronous replication
Synchronous replication is a type of replication in which the application write operation is
made to both the source volume and target volume before control is given back to the
application. See also “Asynchronous replication” on page 924.
Thin-provisioned volume
A thin-provisioned volume is a volume that allocates storage when data is written to it.
Thin provisioning
Thin provisioning refers to the ability to define storage, usually a storage pool or volume, with
a “logical” capacity size that is larger than the actual physical capacity that is assigned to that
pool or volume. Therefore, a thin-provisioned volume is a volume with a virtual capacity that
differs from its real capacity. Before SVC V6.1, this thin-provisioned volume was known as
space efficient.
T10 DIF
T10 DIF is a “Data Integrity Field” extension to SCSI to allow for end-to-end protection of data
from host application to physical media.
Virtualization
In the storage industry, virtualization is a concept in which a pool of storage is created that
contains several storage systems. Storage systems from various vendors can be used. The
pool can be split into volumes that are visible to the host systems that use them. See also
“Capacity licensing” on page 924.
Virtualized storage
Virtualized storage is physical storage that has virtualization techniques applied to it by a
virtualization engine.
Volume
A volume is an SVC logical device that appears to host systems that are attached to the SAN
as a SCSI disk. Each volume is associated with exactly one I/O Group. A volume has a
preferred node within the I/O Group. Before SVC 6.1, this volume was known as a VDisk or
virtual disk.
Volume copy
A volume copy is a physical copy of the data that is stored on a volume. Mirrored volumes
have two copies. Non-mirrored volumes have one copy.
Volume protection
To prevent active volumes or host mappings from inadvertent deletion, the system supports a
global setting that prevents these objects from being deleted if the system detects that they
have recent I/O activity. When you delete a volume, the system checks to verify whether it is
part of a host mapping, FlashCopy mapping, or remote-copy relationship. In these cases, the
system fails to delete the volume, unless the -force parameter is specified. Using the -force
parameter can lead to unintentional deletions of volumes that are still active. Active means
that the system detected recent I/O activity to the volume from any host.
Write-through mode
Write-through mode is a process in which data is written to a storage device at the same time
that the data is cached.
We also explain the term Enhanced Stretched Cluster (ESC) and the HyperSwap technology
that was introduced in IBM Spectrum Virtualize V7.5, and how they differ from each other.
Technical details and implementation guidelines are not presented in this appendix because
they are described in separate publications. For more information, consult the topic “Technical
guidelines and Publications” on page 964, which contains references for implementation in
VMware environments, implementation of an SVC Stretched Cluster with AIX virtualized or
clustered environments, and Storwize V7000 HyperSwap implementation.
To address the different high availability needs that customers have, the stretched system
configuration was introduced, where each node of an I/O Group is physically located at a
different site. When implemented with mirroring technologies, such as volume mirroring or
Copy Services, these configurations can be used to maintain access to data on the system in
the event of power failures or site-wide outages.
Stretched clusters are considered high availability solutions because both sites run production
workloads (there is no standby location). Combined with redundancy in the application and
infrastructure layers, they can provide sufficient protection for data that requires availability
and resiliency.
When SAN Volume Controller was first introduced, the maximum supported distance between
nodes within an I/O Group was 100 meters. With the evolution of the code and the introduction
of new features, SVC V5.1 added support for the Stretched Cluster configuration, where nodes
within an I/O Group can be separated by a distance of up to 10 km in specific configurations.
V6.3 began supporting Stretched Cluster configurations in which nodes can be separated by a
distance of up to 300 km, in specific configurations that use Fibre Channel switch Inter-Switch
Links (ISLs) between the locations.
V7.2 introduced the Enhanced Stretched Cluster feature, which further improved stretched
cluster configurations by introducing the site awareness concept for nodes and external
storage, and the Disaster Recovery (DR) feature that allows you to manage rolling disaster
scenarios effectively.
In Spectrum Virtualize V7.5, the site awareness concept was extended to hosts, allowing more
efficient host I/O traffic through the SAN and easier host path management.
Spectrum Virtualize V7.6 introduces a new feature for stretched systems, the IP Quorum
application. By using an IP-based quorum application as the quorum device for the third site,
no Fibre Channel connectivity is required; the quorum application runs as a Java application
on a host at the third site. However, there are strict requirements on the IP network, and there
are some disadvantages of using IP quorum applications. Unlike quorum disks, all IP quorum
applications must be reconfigured and redeployed to hosts when certain aspects of the system
configuration change.
IP Quorum details can be found in the IBM SAN Volume Controller Knowledge Center:
https://ibm.biz/BdHnKF
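As a minimal illustration of the deployment flow, the IP quorum application is generated on the
cluster and then started on a host at the third site. The host path and the output details shown
here are placeholders; verify the exact commands and file location (the application is typically
written to the /dumps directory as ip_quorum.jar) in the Knowledge Center for your code level.

# On the cluster CLI: generate the IP quorum Java application
svctask mkquorumapp

# Copy /dumps/ip_quorum.jar to the host at the third site, then start it there
# (a supported Java runtime is required on that host)
java -jar /opt/ip_quorum/ip_quorum.jar

After the application connects, the lsquorum command should list it as a quorum device.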
Table C-1 provides an overview of the features that are supported by the SVC stretched
cluster at each code level.
Feature / code level   5.1  6.2  6.3  6.4  7.1  7.2  7.3  7.4  7.5  7.6
Enhanced mode           N    N    N    N    N    Y    Y    Y    Y    Y
IP quorum device        N    N    N    N    N    N    N    N    N    Y
a. Available only on version 6.4.1 (6.4.0 is not supported)
With a stretched cluster, the two nodes in an I/O Group are separated by distance between
two locations, and a copy of the volume is stored at each location. This configuration means
that you can lose either the SAN or power at one location and access to the disks remains
available at the alternative location. Using this configuration requires clustering software at
the application and server layers to fail over to a server at the alternative location and resume
access to the disks. The SAN Volume Controller keeps both copies of the storage in
synchronization, and the cache is mirrored between both nodes. Therefore, the loss of one
location causes no disruption to the alternative location.
As with any clustering solution, avoiding a split-brain situation (where the nodes can no longer
communicate with each other) requires a tie-break mechanism, and SAN Volume Controller is
no exception. The SAN Volume Controller implements its tie-break mechanism through
quorum disks, selecting three quorum disks from the managed disks that are attached to the
cluster. Usually the management of the quorum disks is transparent to the SAN Volume
Controller users. In an Enhanced Stretched Cluster configuration, the placement of the
quorum disks is handled automatically, and if needed you can assign them manually to ensure
that the active quorum disk is in a third location. This placement ensures the survival of one
location if a failure occurs at the other location.
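The following minimal CLI sketch shows how the quorum disk candidates can be listed and
how the active quorum disk can be moved to an MDisk at the third site. The MDisk names, the
controller names, and the abbreviated output are illustrative placeholders; check the exact
chquorum syntax in the CLI reference for your code level.

IBM_2145:ITSO_SVC:superuser> svcinfo lsquorum
quorum_index status id name    controller_name active object_type
0            online 3  mdisk3  DS_SITE1        no     mdisk
1            online 7  mdisk7  V7K_SITE2       no     mdisk
2            online 12 mdisk12 QUORUM_SITE3    yes    mdisk

IBM_2145:ITSO_SVC:superuser> svctask chquorum -active -mdisk mdisk12 2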
Starting in V6.3, there were significant enhancements for a Stretched System in the following
configurations:
No inter-switch link (ISL) configuration:
– Passive Wavelength Division Multiplexing (WDM) devices can be used between both
sites.
– Each SVC node should be attached directly to both the local and the remote failure
domain (local and remote sites).
– No ISLs can be used between the SVC nodes (similar to the version 5.1 supported
configuration).
– The distance can be extended up to 40 km (24.8 miles).
ISL configuration:
– ISLs are allowed between the SVC nodes (not allowed with releases earlier than V6.3).
– Each SVC node should be attached only to local Fibre Channel switches, with ISLs
configured between failure domains for node-to-node traffic.
– Use of two separate SANs must be considered (Private and Public SANs).
– The maximum distance is similar to Metro Mirror (MM) distances (up to 300 km).
– The physical requirements are similar to MM requirements.
– ISL distance extension is allowed with active and passive WDM devices.
– Failure domain 3 (quorum site) must be either Fibre Channel or FCIP attached, and the
response time to the quorum disk cannot exceed 80 ms.
FCIP configuration:
– FCIP links are used between failure domains (FCIP support was introduced in version
6.4).
– You must have at least two FCIP tunnels between the failure domains.
– Use of two separate SANs must be considered (Private and Public SANs).
– Failure domain 3 (quorum site) must be either Fibre Channel or FCIP attached but the
response time to the quorum disk cannot exceed 80 ms.
– A guaranteed minimum bandwidth of 2 MBps is required for node-to-quorum traffic.
– No more than one ISL hop is supported for connectivity between failure domains.
Topology
The topology parameter defines how the SVC cluster is designed and which functions and
features are available in the system. The topology can be set to one of the following values:
Standard (default):
There are two applications of the standard topology. In the first, the SVC nodes are
deployed in the same physical location and are not stretched (usually when remote high
availability is not required). In the second, the SVC nodes of the same I/O group are
stretched across different locations (sites). When the SVC nodes are physically stretched
but the topology is still set to standard, the configuration is called a standard stretched
cluster, which means that the enhanced features are not enabled. For more details about
the comparison of the standard and enhanced stretched clusters, see “Standard and
Enhanced Stretched Cluster comparison” on page 951.
Stretched (enhanced):
In a stretched topology configuration, all enhanced features are enabled and each site is
defined as an independent failure domain. This physical separation means that if one site
experiences a failure, the other site can continue to operate without disruption. You must
also configure a third site to host a quorum device that provides an automatic tie-break in
the event of a potential link failure between the two main sites. The main sites can be in the
same room, in different rooms of the same data center, in buildings on the same campus,
or in buildings in different cities. Different kinds of sites protect against different types of
failures.
HyperSwap:
Introduced in Spectrum Virtualize V7.5, the HyperSwap topology places each I/O Group
(node pair) at a different site (failure domain). This topology, combined with Remote Copy
services, provides redundancy at a different level, where a volume can be active on two
different I/O Groups.
Note: IBM Storwize HyperSwap requires additional license features to enable Remote
Copy Services. For more details, refer to IBM Storwize V7000, Spectrum Virtualize,
HyperSwap, and VMware implementation, SG24-8317.
The topology parameter can be checked in the GUI by clicking System → Action →
Properties, as shown in Figure 11-383 on page 943.
You can also use the lssystem command to show the topology parameter, as shown in
Example 11-4.
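A minimal CLI sketch of this check follows. The system name and prompt are placeholders,
and the output is abbreviated; the topology field shows standard, stretched, or hyperswap
depending on the configuration.

IBM_2145:ITSO_SVC:superuser> svcinfo lssystem
...
topology stretched
...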
The components that comprise an ESC configuration must span three independent failure
domains. Two failure domains contain SVC nodes and the storage controllers that contain
customer data. The third failure domain contains a storage controller where the active quorum
disk is located.
V7.2 introduced the site awareness concept for nodes and controllers. V7.5 extended the site
awareness concept to hosts.
Site awareness can be used only when topology is set to stretched.
The default names for the sites are site1, site2, and site3. Sites 1 and 2 are where the two
halves of the ESC are located. Site 3 is the optional third site for a quorum tie-breaker
disk. You can set a name for each site if you prefer.
A site field is added to nodes, controllers, and hosts. The nodes and controllers must have
their sites assigned before you set the topology to stretched. Nodes and hosts can be
assigned only to site 1 or site 2. Controllers can be assigned to any of the three sites.
The site property for a controller adjusts the I/O routing and error reporting for connectivity
between nodes and the associated MDisks. These changes are effective for any MDisk
controller that has a site defined, even if the DR feature is disabled.
The site property for a host adjusts the I/O routing and error reporting for connectivity
between hosts and the nodes in the same site. These changes are effective only at SAN
login time, meaning any changes potentially will require a host reboot or FC HBA rescan,
depending on the operating system used.
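The following minimal CLI sketch outlines a typical sequence for naming the sites, assigning
them to the objects, and then enabling the stretched topology. All object and site names are
placeholders, and the exact parameter syntax should be verified against the CLI reference for
your code level.

# Optionally rename the default sites
svctask chsite -name DC_A 1
svctask chsite -name DC_B 2
svctask chsite -name QUORUM 3

# Assign a site to each node, external storage controller, and host
svctask chnode -site DC_A node1
svctask chnode -site DC_B node2
svctask chcontroller -site DC_A controller0
svctask chcontroller -site DC_B controller1
svctask chcontroller -site QUORUM controller2
svctask chhost -site DC_A ESX_host_A

# After all nodes and controllers have a site assigned, enable the enhanced features
svctask chsystem -topology stretched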
You can also use the lssite command to show the site names, as shown in Example 11-5.
As discussed in previous topics of this book, a stretched cluster is configured with half of the
nodes at each site and a quorum device at a third location. If an outage occurs at either site,
the nodes at the other site access the quorum device and continue operation without any
intervention. If connectivity between the two sites is lost, whichever nodes access the quorum
device first continue operation. For disaster recovery purposes, a user might want to enable
access to the storage at the site that lost the race to the quorum device.
Use the satask overridequorum command to enable access to the storage at the secondary
site. This feature is only available if the system was configured by assigning sites to nodes
and storage controllers, and changing the system topology to stretched.
If the user executed disaster recovery on one site and then powered up the remaining, failed
site (which contained the configuration node at the time of the power down, that is, the
disaster), the cluster would assert itself as designed. To avoid this situation, the user should
perform the following steps (a command sketch follows this list):
Remove the connectivity of the nodes at the site that is experiencing the outage.
Power up or recover those nodes.
Run the satask leavecluster -force or svctask rmnode command for all of the nodes in the
cluster.
Bring the nodes into candidate state, and then connect them to the site on which the
disaster recovery feature was executed.
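The following sketch shows where these commands are typically issued; all names are
placeholders. The satask commands are run against the service interface (service assistant
or service IP) of a node rather than the cluster CLI. Verify the full procedure in the Knowledge
Center before you use it, because overriding the quorum while the other site is still running as
a cluster can lead to data corruption.

# At the surviving site, on the service interface of one node: allow that site to continue
satask overridequorum

# At the recovered site, on each node that still holds the old cluster state
satask leavecluster -force

# From the cluster CLI at the surviving site: remove the stale node objects
svctask rmnode node2

# When the recovered nodes appear as candidates, add them back to the cluster
svctask addnode -panelname <panel_name> -iogrp io_grp0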
Volume Mirroring
Volume mirroring is a feature of Spectrum Virtualize software that can be used without
additional licensing and is independent of stretched cluster configurations.
The ESC configuration benefits from the volume mirroring function, which allows the creation
of one volume with two copies of MDisk extents. If the two copies are placed in different
storage pools (and on different controllers), volume mirroring eliminates the impact to volume
availability if one or more MDisks (or a controller) fails. The resynchronization between both
copies is incremental and is started automatically. A mirrored volume has the same functions
and behavior as a standard volume.
All operations that can be run on non-mirrored volumes can also be run on mirrored volumes.
These operations include migration and expand or shrink operations.
As with non-mirrored volumes, each mirrored volume is owned by the preferred node within
the I/O group. Therefore, the mirrored volume goes offline if the I/O group goes offline.
The Spectrum Virtualize volume mirroring function implements a read algorithm in which one
copy is designated as the primary (copy 0) for all read operations. Spectrum Virtualize reads
the data from the primary copy and does not automatically distribute the read requests across
both copies. The first copy that is created becomes the primary by default.
Starting with V7.2, the primary copy concept is overridden for read operations: reads run
locally, according to the site attributes that are assigned to each SVC node, controller, and
(since V7.5) host.
Write operations run on both mirrored copies. The storage controller with the lowest
performance determines the response time between the SVC and the storage controller
back-end. The SVC cache can hide high back-end response times from the host up to a
certain level.
If a back-end write fails or a copy goes offline, a bitmap file is used to track out-of-sync grains.
As soon as the missing copy is back online, Spectrum Virtualize software evaluates the
changed bitmap and automatically re-synchronizes both copies.
Volume access is not affected by the resynchronization process, which runs concurrently with
host I/O.
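The following minimal CLI sketch shows how a second, site-local copy is typically added to an
existing volume and how its synchronization is monitored. The pool, volume, and copy
identifiers are placeholders.

# Add a second copy of VOL01 in a storage pool at the other site
svctask addvdiskcopy -mdiskgrp POOL_SITE2 VOL01

# Monitor the synchronization progress of the new copy
svcinfo lsvdisksyncprogress VOL01

# Optionally, make a specific copy the primary (read) copy
svctask chvdisk -primary 1 VOL01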
Using ISLs for node-to-node communication requires configuring two separate SANs, each of
them with two separate redundant fabrics.
Each SAN consists of at least one fabric that spans both production sites. At least one fabric
of the public SAN includes also the quorum site. You can use different approaches to
configure private and public SANs:
Use dedicated Fibre Channel switches for each SAN.
Use separate virtual fabrics or virtual SANs for each SAN.
Note: ISLs must not be shared between private and public virtual fabrics.
Private SAN:
SAN fabric dedicated for SVC node-to-node communication
It must contain at least two ports per node (one per fabric)
Must meet the bandwidth requirements if using Ethernet layers (FCoE or FCIP)
Maximum latency between nodes of 80 ms (round trip)
Public SAN:
SAN fabric dedicated for host and storage controller attachment (including quorum).
Must have physical redundancy (dedicated switches) or logical redundancy (virtual fabrics
or virtual SANs).
Must meet controller zoning recommendations described in Chapter 3, “Planning and
configuration” on page 83 and also refer to best practices recommendations in IBM
System Storage SAN Volume Controller and Storwize V7000 Best Practices and
Performance Guidelines, SG24-7521.
Quorum disk storage controllers must be Fibre Channel or FCIP attached and must provide
response times of less than 80 ms, with a guaranteed bandwidth of at least 2 MBps.
Backend storage and quorum controllers:
Consider placing the back-end storage controllers in different sites (failure domains) so that
volume mirroring can provide contingency for the data.
Do not connect a storage system in one site directly to a switch fabric in the other site.
The storage system at the third site must support extended quorum disks. More
information is available in the interoperability matrixes.
Volume Mirroring:
Volume mirroring requires no additional licensing and must be used to increase data
availability. When creating mirrored volumes, consider the amount of space that is required
to keep both copies available and synchronized.
Internal storage for SVC nodes (expansion enclosures):
Be aware that use of SSDs inside SVC nodes or expansion enclosures attached to SVC
nodes of a stretched cluster deployment is not supported.
Hosts placement planning:
With the introduction of host site awareness in Spectrum Virtualize V7.5, plan where hosts
will be deployed so that they can access the local storage controller in the same site (failure
domain).
Infrastructure and application layer planning:
When planning the application layer and design, consider the requirements that are needed
to achieve business availability metrics, such as the Recovery Point Objective (RPO) and
Recovery Time Objective (RTO). Those parameters, combined with the application server
software, the infrastructure layers, and the SVC features, help you build a reliable high
availability solution.
You should also consider the functionalities and features shown in Table 11-2 when planning
your high availability solution, as they can be deployed together to build a more robust
solution.
Data copies rely on: one single SVC cluster, versus two SVC clusters (or Storwize systems)
Long-distance link utilization: mirrored data and write cache mirroring, versus mirrored data
only
Note: Extra configuration steps are required for an enhanced stretched system. If you
currently have a standard stretched system configuration, you can update this
configuration to an enhanced stretched system after you update the system software to the
minimum level of V7.2. These additional configuration steps can be done by using the
command-line interface (CLI) or the management GUI.
The following topics describe the main features and benefits of implementing ESC in your
environment:
Site awareness:
– Each node, controller, and host must have a site definition (site 1, 2, or 3), which allows
the SVC to identify where the components are physically located.
– Connections between controller MDisks to nodes are only allowed within the same site.
– Connections between controller MDisks to nodes in the other site are ignored (path set
offline).
Manual failover capability:
– Ability to override quorum and recover from a rolling disaster.
– Issue satask overridequorum only when the administrator has ensured the other site is
not running as a cluster.
– Access to all recovered site volume copies. This includes the mirror-half of stretched
volumes plus any single-copy volumes from the local site.
– Missing nodes must not be allowed to access shared storage and must be removed.
– When the last offline node of the failed site is removed, any non-local site volume copies
come online.
– Administrator can now begin the process of reconstructing the system objects,
including:
• Defining quorum disks in the correct sites.
• Recreating volumes which were not automatically recovered.
• Re-creating copy services relationships.
I/O traffic between the nodes and controllers is routed to optimize traffic across the inter-site
link, reducing traffic between local nodes and remote controllers.
Volume Mirroring:
– Each volume mirror copy is configured from storage at site 1 or site 2 (site 3 is reserved
for quorum only).
– Mirror write priority can be set to latency for fast failover operation (fast failover within
40s).
– Read I/Os are sent to the local site copy (volume mirroring always reads from the local
site copy if it is in sync).
Automatic SVC quorum based failover:
– Automatic quorum disk selection chooses one MDisk in each of sites 1, 2, and 3. The
quorum MDisk at site 3 is automatically set as the active quorum disk.
Restrict I/O traffic between sites in failure conditions to ensure data consistency.
I/O traffic is routed to minimize I/O data flows:
– Data payloads are only transferred the minimum number of times.
– There are I/O protocol control messages that flow across the link but these are very
small in comparison to the data payload.
For write I/Os:
– Write I/O data is typically copied across the link once.
– Upper cache creates write data mirror on I/O group partner node.
– Lower cache makes use of same write data mirror, during a destage.
– For each volume copy the local site node sends the write to the local site backend.
– I/O protocol control message sent but no data sent over the site link.
For read I/Os:
– Volume mirroring will issue reads to the local site copy if in sync.
Note: For best results, configure an enhanced stretched system to include at least two I/O
groups (four nodes). A system with just one I/O group cannot guarantee to maintain
mirroring of data or uninterrupted host access in the presence of node failures or system
updates.
In the unlikely event of a failure of two failure domains, you can enable a manual override for
this situation if the enhanced stretched system function is configured. You can also use
Metro Mirror or Global Mirror with either an enhanced stretched system or a standard
stretched system on a second SVC cluster for extended disaster recovery.
You configure and manage Metro Mirror or Global Mirror partnerships that include a stretched
system in the same way as other remote copy relationships. SVC supports SAN routing
technology (including FCIP links) for intersystem connections that use Metro Mirror or Global
Mirror.
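As a minimal sketch, a remote copy partnership from a stretched system to a second cluster is
created in the usual way. The bandwidth value, system name, and volume names are
placeholders; verify the command set (mkfcpartnership and mkrcrelationship) and its
parameters for your code level.

# On both clusters: define the Fibre Channel partnership
svctask mkfcpartnership -linkbandwidthmbits 2048 -backgroundcopyrate 50 ITSO_SVC_DR

# On the stretched system: create a Metro Mirror relationship for a volume
svctask mkrcrelationship -master VOL01 -aux VOL01_DR -cluster ITSO_SVC_DR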
SVC stretched cluster hardware installation guidelines and infrastructure requirements are
the same for standard and enhanced stretched clusters.
Before V7.2, SVC stretched clusters were configured without the topology parameter because
it was not available. In addition, all prerequisites and requirements for a stretched solution still
had to be met to provide redundancy in the components of the solution.
Note: For a standard stretched cluster, there are advantages and disadvantages to the
latency and redundancy mirror write priority modes. If you want to be sure that the copy
at your second site is updated, the preferred setting is redundancy. If, however, you
cannot tolerate the longer I/O delays when there is a problem with one site or with the
inter-site link, choose latency, but be aware that if a disaster occurs you might be left
with an out-of-sync copy of the data.
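A minimal CLI sketch of changing this setting for a single mirrored volume follows; the volume
name is a placeholder, and the parameter should be confirmed in the chvdisk documentation
for your code level.

# Favor low host write latency (a copy may be taken out of sync sooner after an error)
svctask chvdisk -mirrorwritepriority latency VOL01

# Favor keeping both copies synchronized (host writes can wait longer on errors)
svctask chvdisk -mirrorwritepriority redundancy VOL01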
Many HyperSwap concepts, such as site awareness and the DR feature, are inherited from
the ESC function. Nevertheless, important differences between the two solutions exist, as
summarized in Table C-2.
Products that the function is available on:
– Enhanced Stretched Cluster: SVC only
– HyperSwap: SVC with two or more I/O groups; Storwize V7000, Storwize V5000,
FlashSystem V9000
Technology for hosts to access multiple copies and automatically fail over:
– Enhanced Stretched Cluster: standard host multipathing driver
– HyperSwap: standard host multipathing driver
Ability to use FlashCopy together with the high availability solution:
– Enhanced Stretched Cluster: yes (though no awareness of site locality of data)
– HyperSwap: limited; FlashCopy maps with a HyperSwap volume as source avoid sending
data across the link between sites
The Enhanced Stretched Cluster function uses a stretched system topology, and the
HyperSwap function uses a hyperswap topology. These both spread the nodes of the system
across two sites, with storage at a third site acting as a tiebreaking quorum device.
The topologies differ in how the nodes are distributed across the sites:
For each I/O group in the system, the stretched topology has one node at one site and
one node at the other site. The topology works with any number of I/O groups from 1 to 4,
but because each I/O group is split across two locations, this topology is available only with
SVC, whose nodes can be physically separated.
The HyperSwap topology locates both nodes of an I/O group at the same site, which makes
it possible to use with either Storwize or SVC products. Therefore, to get a volume stored at
both sites, at least two I/O groups (or control enclosures) are required.
The stretched topology uses fewer system resources, allowing a greater number of highly
available volumes to be configured. However, during a disaster that makes one site
unavailable, the system cache on the nodes of the surviving site will be disabled.
The HyperSwap topology uses additional system resources to support a fully independent
cache on each site, allowing full performance even if one site is lost. In some environments a
HyperSwap topology will provide better performance than a stretched topology.
Both topologies allow full configuration of the highly available volumes through a single point
of configuration. The Enhanced Stretched Cluster function may be fully configured through
either the GUI or the CLI.
Note: For more information about IBM Storwize HyperSwap implementation, refer to IBM
Storwize V7000, Spectrum Virtualize, HyperSwap, and VMware implementation,
SG24-8317.
Stretched Cluster:
Requirement for Metro or Global Mirror to an additional cluster (possibly also stretched), to
protect against software failures or to maintain a copy at a larger distance.
Concurrent I/O to the same volume from hosts in different sites is a significant portion of
the workload. Examples:
– Oracle RAC
– Large Windows Cluster-Shared Volumes (Hyper-V) that are shared cross-site
– Large VMware data stores that are shared cross-site
– OpenVMS clusters
Requirement for the maximum number of HA volumes, host objects and host ports, host
paths to preferred node.
Requirement for comprehensive FlashCopy support (with broad range of backup/restore
options, minimized RTO for FlashCopy restore, usage of FlashCopy Manager).
Requirement for iSCSI access to HA volumes.
Requirement for automatic provisioning via VSC.
HyperSwap:
Storwize Family: active/active solution for Storwize V7000 or V5000 without requiring
SVC.
SVC: requirement for additional protection against node failures or back-end storage
failures (four copies by combining Metro Mirror and Volume mirror).
I/O per volume is mostly from hosts in one site. Concurrent I/O to the same volume from
hosts in different sites is only a minor portion of the workload. Examples:
– Active/passive host clusters without concurrent volume access like traditional Windows
MSCS and traditional Unix clusters like AIX HACMP™,
– Small Windows Cluster-Shared Volumes (Hyper-V) that are shared single-site,
– Small VMware data stores that are shared single-site.
Requirement for cache-enhanced performance even during disasters.
Requirement for guaranteed application-wide failover characteristics (consistency groups).
Requirement for additional “golden copy” during re-synchronization.
Volume mirroring provides a consistent data copy at both sites. If one storage subsystem fails,
the remaining subsystem processes the I/O requests. The combination of SVC node
distribution in two independent data centers and a copy of the data in two independent storage
subsystems creates a new level of availability, the stretched cluster.
If all SVC nodes or the storage system at a single site fail, the SVC nodes at the other site take
over the server load by using the remaining storage system. The volume IDs and their
assignments to the servers remain the same: no server reboot, no failover scripts, and
therefore no script maintenance are required. This can be combined with clustered application
configurations, which provide failover at the server level.
However, you must consider that a stretched cluster typically requires a specific setup and
might exhibit substantially reduced performance.
Figure C-1 on page 956 shows an example of a non-ISL stretched cluster configuration, as it
has been supported since V5.1.
The stretched cluster uses the SVC volume mirroring function. Volume mirroring allows the
creation of one volume with two copies of MDisk extents; there are not two volumes with the
same data, but rather two copies of the same volume, each in a different storage pool.
Therefore, volume mirroring can minimize the effect on volume availability if one set of MDisks
goes offline. The resynchronization between both copies after recovery from a failure is
incremental; the SVC starts the resynchronization process automatically.
As with a standard volume, each mirrored volume is owned by one I/O Group with a preferred
node. Therefore, the mirrored volume goes offline if the whole I/O Group goes offline. The
preferred node performs all I/O operations, which means both reads and writes. The preferred
node can be set manually; set it to the node at the same site as the server that accesses the
volume.
The quorum disk keeps the status of the mirrored volume. The last status (in sync or not in
sync) and the definitions of the primary and secondary volume copy are saved there.
Therefore, an active quorum disk is required for volume mirroring. To ensure data
consistency, the SVC disables all mirrored volumes if no access exists to any quorum disk
candidate (mainly in disaster scenarios).
Consider the following preferred practices for stretched cluster in a non-ISL configuration:
Drive read I/O to the local storage system.
For distances less than 10 km (6.2 miles), drive the read I/O to the faster of the two disk
subsystems if they are not identical.
Consider long-distance links using Long Wave SFPs.
The preferred node must stay at the same site as the server that is accessing the volume.
The volume mirroring primary copy must stay at the same site as the server that is
accessing the volume to avoid any potential latency effect where the longer distance
solution is implemented.
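The following minimal CLI sketch shows one way to align the preferred node and the primary
mirror copy with the site of the accessing server. The node, copy, and volume identifiers are
placeholders, and the availability of the movevdisk command should be checked for your
code level.

# Make the node at the server's site the preferred node for the volume
svctask movevdisk -node node1 VOL01

# Make the copy at the server's site the primary (read) copy
svctask chvdisk -primary 0 VOL01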
In many cases, no independent third site is available. It is possible to use an existing building
or computer room from the two main sites to create a third, independent failure domain.
As shown in Figure C-1 on page 956, the setup is similar to a standard SVC environment, but
the nodes are distributed to two sites. The GUI representation of a stretched cluster is
illustrated in Figure C-2.
The SVC nodes and data are equally distributed across two separate sites with independent
power sources, which are named as separate failure domains (Failure Domain 1 and Failure
Domain 2). The quorum disk is in a third site with a separate power source (Failure Domain
3).
If the non-ISL configuration is implemented over a distance beyond 10 km (6.2 miles), passive
WDM devices (which require no power) can be used to combine multiple fiber-optic links with
different wavelengths onto one or two connections between both sites. Small form-factor
pluggables (SFPs) with different wavelengths, or “colored SFPs” (that is, SFPs of the type
used in Coarse Wavelength Division Multiplexing (CWDM) devices), are required in this
configuration.
The maximum distance between both major sites is limited to 40 km (24.8 miles).
To prevent the risk of burst traffic caused by a lack of buffer-to-buffer credits, the link speed
must be limited. The allowed link speed depends on the cable length between the nodes in the
same I/O Group, as shown in Table C-3.
SVC code level Minimum length Maximum length Maximum link speed
The same scenarios are valid for site 2 and similar scenarios apply in a mixed failure
environment, for example, the failure of switch 1, SVC node 2, and storage 2. No manual
failover or failback activities are required because the SVC performs the failover or failback
operation.
The use of AIX Live Partition Mobility or VMware vMotion can increase the number of use
cases significantly. Online system migrations, including running virtual machines and
applications, are possible, which is an appropriate way to handle maintenance operations.
Advantages
A non-ISL configuration includes the following advantages:
The business continuity solution is distributed across two independent data centers.
The configuration is similar to a standard SVC clustered system.
Limited hardware effort: Passive WDM devices can be used, but are not required.
Requirements
A non-ISL configuration includes the following requirements:
Four independent fiber-optic links for each I/O Group between both data centers.
Long-wave SFPs with support over long distance for direct connection to remote SAN.
Optional usage of passive WDM devices.
Passive WDM device: No power is required for operation.
“Colored SFPs” to make different wavelengths available.
“Colored SFPs” must be supported by the switch vendor.
Two independent fiber-optic links between site 1 and site 2 are recommended.
Third site for quorum disk placement.
Quorum disk storage system must use FC for attachment with similar requirements, such
as Metro Mirror storage (80 ms round-trip delay time, which is 40 ms in each direction).
When possible, use two independent fiber-optic links between site 1 and 2.
Note: For more details about non-ISL configurations, consult the IBM Redbook: IBM SAN
Volume Controller Stretched Cluster with PowerVM and PowerHA, SG24-8142.
The stretched cluster configuration that is shown in Figure C-3 on page 959 supports
distances of up to 300 km (186.4 miles), which is the same as the recommended distance for
Metro Mirror.
Data is written by the preferred node to the local storage, and the inter-node communication is
responsible for sending the data to the remote storage. The SCSI write protocol results in two
round trips. This latency is hidden from the application by the write cache that is held by the
different components of the solution.
The stretched cluster is often used to move workloads between servers at separate sites.
VMware vMotion or equivalent products can be used to move applications between servers;
therefore, applications no longer necessarily issue I/O requests to the local SVC nodes.
With the addition of host site awareness and I/O traffic control, SCSI write commands from
local hosts to remote SVC nodes are blocked to avoid the large latency of additional round
trips across the link. All remote writes (local to remote) are done through node-to-node traffic
on the private fabrics. For stretched cluster configurations in a long-distance environment, we
advise that you use the local site for host I/O as well as local storage subsystems.
Advantages
This configuration includes the following advantages:
ISLs enable longer distances greater than 40 km (24.85 miles) between failure domains.
Active and passive WDM devices can be used between failure domains.
The supported distance is up to 300 km (186.41 miles) with WDM.
Requirements
A stretched cluster with ISL configuration must meet the following requirements:
Four independent, extended SAN fabrics are shown in Figure C-3 on page 959. Those
fabrics are named Pub SAN 1, Pub SAN 2, Priv SAN 1, and Priv SAN 2. Each public or
private SAN can be created with a dedicated FC switch or director, or it can be a virtual
SAN in a Cisco or Brocade virtual FC switch or director.
Minimum of two ports per SVC node attached to the private SANs.
Minimum of two ports per SVC node attached to the public SANs.
SVC volume mirroring exists between site 1 and site 2.
Hosts and storage controllers attached to the public SANs.
The third site quorum disk attached to the public SANs.
Figure C-4 on page 961 shows the possible configurations with a virtual SAN.
Figure C-5 shows the possible configurations with physical SAN switches in the private and
public fabrics.
Figure C-5  ISL configuration with physical SAN switches as private and public fabrics
Use a third site to house a quorum disk. Connections to the third site can be through FCIP
because of the distance (no FCIP or FC switches were shown in the previous layouts for
simplicity). In many cases, no independent third site is available.
It is possible to use an existing building from the two main sites to create a third,
independent failure domain, but you have the following considerations:
– The third failure domain needs an independent power supply or uninterruptible power
supply. If the host site fails, the third failure domain needs to continue to operate.
– Each site (failure domain) must be placed in a separate fire compartment.
– FC cabling must not go through another site (failure domain). Otherwise, a fire in one
failure domain could destroy the links (and therefore the access) to the SVC quorum
disk.
Applying these considerations, the SVC clustered system can be protected, even though
two failure domains are in the same building. Consider an IBM Advanced Technical
Support (ATS) review or processing a request for price quotation (RRQ)/Solution for
Compliance in a Regulated Environment (SCORE) to review the proposed configuration.
Four active/passive WDMs, two for each site, are needed to extend the public and private
SAN over a distance.
Place independent storage systems at the primary and secondary sites. Use volume
mirroring to mirror the host data between storage systems at the two sites.
The SVC nodes that are in the same I/O Group must be in two remote sites.
Note: For more details about ISL configurations, consult the IBM Redbook: IBM SAN
Volume Controller Stretched Cluster with PowerVM and PowerHA, SG24-8142.
FCIP configuration
In this configuration, FCIP links are used between failure domains. SAN Volume Controller
support for FCIP was introduced in V6.4.
This configuration is a variation of the ISL configuration that was described previously in this
chapter, and therefore many of the same requirements apply.
Figure 11-385 shows the connection diagram using FCIP connections between failure
domains.
Over long distances, buffer-to-buffer credits are necessary to keep multiple FC frames in flight
in parallel. An appropriate number of buffer-to-buffer credits is required for optimal
performance. The number of buffer credits that is needed to achieve the maximum
performance over a specific distance depends on the speed of the link, as shown in the
following examples:
1 buffer credit = 2 km (1.2 miles) at 1 Gbps
1 buffer credit = 1 km (.62 miles) at 2 Gbps
1 buffer credit = 0.5 km (.3 miles) at 4 Gbps
1 buffer credit = 0.25 km (.15 miles) at 8 Gbps
1 buffer credit = 0.125 km (0.08 miles) at 16 Gbps
These guidelines give the minimum numbers. The performance drops if insufficient buffer
credits exist, according to the link distance and link speed, as shown in Table C-4.
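As a worked example based on these figures, consider a hypothetical 40 km inter-site link that
runs at 8 Gbps. Because one buffer credit covers about 0.25 km at 8 Gbps, keeping the link
fully used requires roughly 40 / 0.25 = 160 buffer credits per port; at 4 Gbps, the same distance
needs only about 40 / 0.5 = 80 credits. If a port provides fewer credits than this, the effective
throughput drops roughly in proportion to the shortfall.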
The number of buffer-to-buffer credits that is provided by an SVC FC host bus adapter (HBA)
is limited. These numbers are determined by the hardware of the HBA and cannot be
changed. We suggest that you use 2145-DH8 nodes for distances longer than 4 km (2.5 miles)
to provide enough buffer-to-buffer credits at a reasonable FC speed. Table 11-3 shows the
different types of HBA cards and the buffer credits for each one.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topics in
this document. Note that some publications that are referenced in this list might be available in
softcopy only.
Implementing the IBM System Storage SAN Volume Controller V7.2, SG24-7933
Implementing the IBM Storwize V7000 V7.2, SG24-7938
IBM b-type Gen 5 16 Gbps Switches and Network Advisor, SG24-8186
Introduction to Storage Area Networks and System Networking, SG24-5470
IBM SAN Volume Controller and IBM FlashSystem 820: Best Practices and Performance
Capabilities, REDP-5027
Implementing the IBM SAN Volume Controller and FlashSystem 820, SG24-8172
Implementing IBM FlashSystem 840, SG24-8189
IBM FlashSystem in IBM PureFlex System Environments, TIPS1042
IBM FlashSystem 840 Product Guide, TIPS1079
IBM FlashSystem 820 Running in an IBM Storwize V7000 Environment, TIPS1101
Implementing FlashSystem 840 with SAN Volume Controller, TIPS1137
IBM FlashSystem V840 Enterprise Performance Solution, TIPS1158
IBM Midrange System Storage Implementation and Best Practices Guide, SG24-6363
IBM System Storage b-type Multiprotocol Routing: An Introduction and Implementation,
SG24-7544
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
Tivoli Storage Productivity Center for Replication for Open Systems, SG24-8149
Tivoli Storage Productivity Center V5.2 Release Guide, SG24-8204
Implementing an IBM b-type SAN with 8 Gbps Directors and Switches, SG24-6116
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Other resources
These publications are also relevant as further information sources:
IBM System Storage Master Console: Installation and User’s Guide, GC30-4090
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent
Developers Reference, SC26-7545
IBM System Storage Open Software Family SAN Volume Controller: Command-Line
Interface User's Guide, SC26-7544
IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543
IBM System Storage Open Software Family SAN Volume Controller: Host Attachment
Guide, SC26-7563
IBM System Storage Open Software Family SAN Volume Controller: Installation Guide,
SC26-7541
IBM System Storage Open Software Family SAN Volume Controller: Planning Guide,
GA22-1052
IBM System Storage Open Software Family SAN Volume Controller: Service Guide,
SC26-7542
IBM System Storage SAN Volume Controller - Software Installation and Configuration
Guide, SC23-6628
IBM System Storage SAN Volume Controller V6.2.0 - Software Installation and
Configuration Guide, GC27-2286
IBM System Storage SAN Volume Controller 6.2.0 Configuration Limits and Restrictions,
S1003799
IBM TotalStorage Multipath Subsystem Device Driver User’s Guide, SC30-4096
IBM XIV and SVC Best Practices Implementation Guide
http://ibm.co/1bk64gW
Considerations and Comparisons between IBM SDD for Linux and DM-MPIO
http://ibm.co/1CD1gxG
Referenced websites
These websites are also relevant as further information sources:
IBM Storage home page
http://www.storage.ibm.com
SAN Volume Controller supported platform
http://ibm.co/1FNjddm
SAN Volume Controller IBM Knowledge Center
http://www-01.ibm.com/support/knowledgecenter/STPVGU/welcome
Cygwin Linux-like environment for Windows
http://www.cygwin.com
Back cover
SG24-7933-04
Printed in U.S.A.
ibm.com/redbooks