IBM XIV Storage System: Copy Services and Migration
ibm.com/redbooks
International Technical Support Organization
March 2011
SG24-7759-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page ix.
This edition applies to Version 10.2.2 of the IBM XIV Storage System Software and Version 2.5 of the IBM XIV
Storage System Hardware.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv
Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
March 2011, Second Edition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Chapter 1. Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Snapshots architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Snapshot handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.1 Creating a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2.2 Viewing snapshot details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.2.3 Deletion priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.2.4 Restore a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.2.5 Overwriting snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.2.6 Unlocking a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2.7 Locking a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.2.8 Deleting a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.2.9 Automatic deletion of a snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.3 Snapshots consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.3.1 Creating a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
1.3.2 Creating a snapshot using consistency groups. . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.3.3 Managing a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.3.4 Deleting a consistency group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.4 Snapshot with remote mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.5 MySQL database backup example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.6 Snapshot example for a DB2 database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
1.6.1 XIV Storage System and AIX OS environments . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.6.2 Preparing the database for recovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
1.6.3 Using XIV snapshots for database backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
1.6.4 Restoring the database from the XIV snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . 39
7.5.1 Solution benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
7.5.2 Planning the bandwidth for Remote Mirroring links. . . . . . . . . . . . . . . . . . . . . . . 202
7.5.3 Setting up synchronous Remote Mirroring for IBM i . . . . . . . . . . . . . . . . . . . . . . 202
7.5.4 Scenario for planned outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
7.5.5 Scenario for unplanned outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
7.6 Asynchronous Remote Mirroring with IBM i . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
7.6.1 Benefits of asynchronous Remote Mirroring . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
7.6.2 Setting up asynchronous Remote Mirroring for IBM i . . . . . . . . . . . . . . . . . . . . . 210
7.6.3 Scenario for planned outages and disasters. . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
How to get IBM Redbooks publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that does
not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX®, AS/400®, BladeCenter®, DB2®, DS4000®, DS6000™, DS8000®, FlashCopy®, i5/OS®, IBM®, iSeries®, PowerHA™, POWER®, Redbooks®, Redbooks (logo)®, Redpaper™, System i®, System p®, System Storage®, System x®, Tivoli®, TotalStorage®, XIV®
Snapshot, and the NetApp logo are trademarks or registered trademarks of NetApp, Inc. in the U.S. and other
countries.
Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or
its affiliates.
SAP, and SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other
countries.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States,
other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
This IBM® Redbooks® publication provides a practical understanding of the XIV® Storage
System copy and migration functions. The XIV Storage System has a rich set of copy
functions suited for various data protection scenarios, which enables clients to enhance their
business continuance, data migration, and online backup solutions. These functions allow
point-in-time copies, known as snapshots and full volume copies, and also include remote
copy capabilities in either synchronous or asynchronous mode. These functions are included
in the XIV software and all their features are available at no additional charge.
The various copy functions are reviewed in separate chapters, which include detailed
information about usage, as well as practical illustrations.
This book also explains the XIV built-in migration capability, and presents migration
alternatives based on the SAN Volume Controller (SVC).
Note: GUI and XCLI illustrations included in this book were created with an early version of
the 10.2.2 code, as available at the time of writing. There could be minor differences with
the XIV 10.2.2 code that is publicly released.
This book is intended for anyone who needs a detailed and practical understanding of the XIV
copy functions.
Bertrand Dufrasne is an IBM Certified Consulting I/T Specialist and Project Leader for
System Storage® disk products at the International Technical Support Organization, San
Jose Center. He has worked at IBM in various I/T areas. He has authored many IBM
Redbooks publications and has also developed and taught technical workshops. Before
joining the ITSO, he was an Application Architect in IBM Global Services. He holds a Masters
degree in Electrical Engineering from the Polytechnic Faculty of Mons (Belgium).
Roger Eriksson is an IBM STG Lab Services consultant, based in Stockholm, Sweden and
working for the European Storage Competence Center in Mainz, Germany. He is a Senior
Accredited IBM Product Service Professional. Roger has over 20 years of experience working
on IBM servers and storage, including Enterprise and Midrange disk, NAS, SAN, System x®,
System p®, and BladeCenter®. Since late 2008 he has worked primarily with the XIV product
line, performing proofs of concept, education, and consulting with both clients and IBM teams
worldwide. He holds a Technical College Degree in Mechanical Engineering.
Wilhelm Gardt holds a degree in Computer Sciences from the University of Kaiserslautern,
Germany. He worked as a software developer and subsequently as an IT specialist designing
and implementing heterogeneous IT environments (SAP, Oracle, AIX®, HP-UX, SAN, and so
forth). In 2001 he joined the IBM TotalStorage® Interoperability Centre (now Systems Lab
Europe) in Mainz where he performed customer briefings and Proof of Concepts on IBM
storage products. Since September 2004 he has been a member of the Technical Pre-Sales
Support team for IBM Storage (Advanced Technical Support).
Nils Nause is a Storage Support Specialist for IBM XIV Storage Systems in Mainz, Germany.
Nils joined IBM in 2005, responsible for Proof of Concepts and delivering briefings for several
IBM products. In July 2008 he started working for the XIV post sales support, with special
focus on Oracle Solaris attachment, as well as overall security aspects of the XIV Storage
System. He holds a degree in computer science from the University of Applied Science in
Wernigerode, Germany.
Markus Oscheka is an IT Specialist for Proof of Concepts and Benchmarks in the Disk
Solution Europe team in Mainz, Germany. His areas of expertise include setup and
demonstration of IBM System Storage and TotalStorage solutions in various environments,
including AIX, Linux®, Windows®, VMware ESX and Solaris. He has worked at IBM for nine
years, and has performed Proof of Concepts with Copy Services on DS6000/DS8000/XIV, as
well as performance benchmarks with DS4000/DS6000/DS8000/XIV. He is a co-author of
numerous redbooks publications about DS6000/DS8000® architecture, implementation, copy
services, and IBM XIV Storage System. He holds a degree in Electrical Engineering from the
Technical University in Darmstadt.
Carlo Saba is a Test Engineer for XIV in Tucson, Arizona. He has been working with the
product since shortly after its introduction and is a Certified XIV administrator. Carlo
graduated from the University of Arizona with a BS/BA in MIS and minor in Spanish.
Eugene Tsypin is an IT Specialist for IBM STG Storage Systems Sales in Russia. Eugene
has over 15 years of experience in the IT field, ranging from systems administration to
enterprise storage architecture. He currently does Field Technical Sales Support for storage
systems. His areas of expertise include performance analysis and disaster recovery solutions
using the unique capabilities and features of the IBM XIV Storage System and other IBM
storage, server, and software products.
Kip Wagner is an Advisory Product Engineer for XIV in Tucson, Arizona. He has more than
24 years of experience in field support and systems engineering and is a Certified XIV Engineer
and Administrator. Kip was a member of the initial IBM XIV product launch team that helped
design and implement a worldwide support structure specifically for XIV. He also helped
develop training material and service documentation used in the support organization. He is
currently the team leader for XIV product field engineering supporting customers in North and
South America. He also works with a team of engineers from around the world to provide field
experience feedback into the development process to help improve product quality, reliability
and serviceability.
Axel Westphal is an IT Specialist for Workshops and Proof of Concepts at the IBM European
Storage Competence Center (ESCC) in Mainz, Germany. He joined IBM in 1996, working for
Global Services as a System Engineer. His areas of expertise include setup and
Thanks to the authors of the previous edition: Aubrey Applewhaite, David Denny, Jawed
Iqbal, Christina Lara, Lisa Martinez, Rosemary McCutchen, Hank Sautter, Stephen Solewin,
Anthony Vandewerdt, Ron Verbeek, Pete Wendler, Roland Wolf.
Special thanks to Rami Elron for his help and advice on many of the topics covered in this
book.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an e-mail to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
This section describes the technical changes made in this edition of the book and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for SG24-7759-01
for IBM XIV Storage System: Copy Services and Migration
as created or updated on March 24, 2011.
New information
Open system considerations for Copy Services
IBM i considerations for Copy Services
Changed information
Various updates to reflect changes in XIV Storage System software v10.2.2
Chapter 1. Snapshots
The XIV Storage System has a rich set of copy functions suited for various data protection
scenarios, which enables clients to enhance their business continuance, data migration, and
online backup solutions. This chapter provides an overview of the snapshot function for the
XIV product.
A snapshot is a point-in-time copy of a volume’s data. The XIV snapshot is based on several
innovative technologies to ensure minimal degradation of or impact on system performance.
Snapshots make use of pointers and do not necessarily copy all the data to the second
instance of a volume. They efficiently share cache for common data, effectively working as a
larger cache than would be the case with full data copies.
A volume copy is an exact copy of a system volume; it differs from a snapshot in approach in
that a full data copy is performed in the background.
With these definitions in mind, we explore the architecture and functions of snapshots within
the XIV Storage System.
The XIV system consists of several server modules, each with 12 disk drives and memory that
acts as cache. All the modules are connected to each other, and certain modules act as
interface modules that connect to the SAN and the host servers (Figure 1-1).
Figure 1-1 XIV Storage System architecture: data and interface modules interconnected through Ethernet switches, with the interface modules attached to the FC/Ethernet network and the host servers

As the architecture overview shows, the XIV system splits volume data into 1 MB partitions, maintains a copy of each partition, stores the two copies in different modules, and spreads the data of a volume pseudo-randomly across all disk drives.
A logical volume is represented by pointers to partitions that make up the volume. If a
snapshot is taken of a volume, the pointers are just copied to form the snapshot volume, as
shown in Figure 1-3. No space is consumed for the snapshot volume up to now.
Figure 1-3 When a snapshot of a volume is taken, the snapshot pointers reference the same partitions as the original volume
When an update is performed on the original data, the update is stored in a new position and
the pointer of the original volume now points to the new partition, whereas the snapshot volume
still points to the old partition. The combined space consumed by the original volume and its
snapshot therefore grows by the size of one partition (1 MB). This method is called redirect-on-write.
The accompanying figures illustrate the pointer layout before and after a redirect-on-write: initially, the volume pointer and the snapshot pointer of Volume A reference the same partition; after an update, the volume pointer references the newly written partition, while the snapshot pointer (Snapshot of A) still references the original partition.
The actual metadata overhead for a snapshot is small. When the snapshot is created, the
system does not require new pointers because the volume and snapshot are exactly the
same, which means that the time to create the snapshot is independent of the size or number
of snapshots present in the system. As data is modified, new metadata is created to track the
changes to the data.
Note: The XIV system minimizes the impact to the host for write operations by performing
a redirect-on-write operation. As the host writes data to a volume with a snapshot
relationship, the incoming information is placed into a newly allocated partition. Then the
pointer to the data for the master volume is modified to point at the new partition. The
snapshot volume continues to point at the original data partition.
Because the XIV Storage System tracks the snapshot changes on a partition basis, data is
only copied when a transfer is less than the size of a partition. For example, a host writes
4 KB of data to a volume with a snapshot relationship. The 4 KB is written to a new partition,
but in order for the partition to be complete, the remaining data must be copied from the
original partition to the newly allocated partition.
The alternative to redirect-on-write is the copy on write function. Most other systems do not
move the location of the volume data. Instead, when the disk subsystem receives a change, it
copies the volume’s data to a new location for the point-in-time copy. When the copy is
complete, the disk system commits the newly modified data. Therefore, each individual
modification takes longer to complete because the entire block must be copied before the
change can be made.
Storage pools and consistency groups
A storage pool is a logical entity that represents storage capacity. Volumes are created in a
storage pool, and snapshots of a volume reside in the same storage pool. Because
snapshots consume capacity as the source volume and the snapshot diverge over time, space for
snapshots must be set aside when defining a storage pool (Figure 1-6). A minimum of 34 GB
of snapshot space should be allocated; a value of 80% of the volume space is recommended.
A storage pool can be resized as needed as long as there is enough free capacity in the XIV
Storage System.
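As an illustrative sketch, a pool with dedicated snapshot space can be created and later resized with XCLI commands similar to the following. The pool name and sizes are hypothetical, and the exact parameter names should be verified against your XCLI version; the system adjusts the requested sizes to its internal capacity granularity.
pool_create pool=itso size=1000 snapshot_size=800
pool_resize pool=itso size=2000 snapshot_size=1600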
The terminology used in Figure 1-6 is as follows:
- Storage pool: an administrative construct for controlling the usage of data capacity.
- Volume: data capacity spread across all disks in the IBM XIV system.
- Snapshot: a point-in-time image, kept in the same storage pool as its source volume.
- Consistency group: multiple volumes that require consistent snapshot creation, all in the same storage pool.
- Snapshot group: a group of consistent snapshots taken from a consistency group.
An application can utilize many volumes on the XIV Storage System. For example, a
database application can span several volumes for application data and transaction logs. In
this case, the snapshot for the volumes must occur at the same moment in time so that the
data and logs are consistent. The consistency group allows the user to perform the snapshot
on all the volumes assigned to the group at the same moment in time, thereby enforcing data
consistency.
The XIV Storage System creates a special snapshot related to the remote mirroring
functionality. During the recovery process of lost links, the system creates a snapshot of all
the volumes in the system. This snapshot is used if the synchronization process fails. The
data can be restored to a point of known consistency. A special value of the deletion priority is
used to prevent the snapshot from being automatically deleted. Refer to 1.4, “Snapshot with
remote mirror” on page 30, for an example of this snapshot.
If you know in advance that an automatic deletion is possible, a pool can be expanded to
accommodate additional snapshots. This function requires that there is available space on
the system for the storage pool (Figure 1-7).
Snapshot space on a single disk
Each snapshot has a deletion priority property that is set by the user. There are four priorities,
with 1 being the highest priority and 4 being the lowest priority. The system uses this priority
to determine which snapshot to delete first. The lowest priority becomes the first candidate for
deletion. If there are multiple snapshots with the same deletion priority, the XIV system
deletes the snapshot that was created first. Refer to 1.2.3, “Deletion priority” on page 12 for
an example of working with deletion priorities.
XIV Asynchronous Mirroring leverages snapshot technology. First a snapshot of the original
volume is created on the primary site (Master). Then the data is replicated to the volume on
the secondary site (Slave). After an initialization phase the differences between the Master
snapshot and a snapshot reflecting the initialization state are calculated. A synchronization
process is established that replicates the differences only from the Master to the Slave. Refer
to Chapter 5, “Asynchronous Remote Mirroring” on page 125 for details on XIV
Asynchronous Mirroring.
The snapshots that are created by the Asynchronous Mirroring process are protected from
manual deletion by setting the priority to 0. Nevertheless, the automatic deletion mechanism
that frees up space upon space depletion in a pool will proceed with these protected
snapshots if there is still insufficient space after the deletion of unprotected snapshots. In this
case the mirroring between the involved volumes is deactivated before the snapshot is
deleted.
Unlocking a snapshot
A snapshot also has a unique ability to be unlocked. By default, a snapshot is locked on
creation and is only readable. Unlocking a snapshot allows the user to modify the data in the
snapshot for post-processing.
When unlocked, the snapshot takes on the properties of a volume and can be resized or
modified. As soon as the snapshot has been unlocked, the modified property is set. The
modified property cannot be reset after a snapshot is unlocked, even if the snapshot is
relocked without modification.
If the first snapshot is unlocked and the duplicate snapshot already exists, the creation time
for the duplicate snapshot does not change. The duplicate snapshot points to the original
snapshot. If a duplicate snapshot is created from the unlocked snapshot, the creation date is
the time of duplication and the duplicate snapshot points to the original snapshot.
The new snapshot is displayed in Figure 1-9. The XIV Storage System uses a specific naming
convention. The first part is the name of the volume followed by the word snapshot and then a
number or count of snapshots for the volume. The snapshot is the same size as the master
volume. However, it does not display how much space has been used by the snapshot.
From the view shown in Figure 1-9, other details are evident:
First is the locked property of the snapshot. By default, a snapshot is locked, which means
that it is write inhibited at the time of creation.
Second, the modified property is displayed to the right of the locked property. In this
example, the snapshot has not been modified.
You might want to create a duplicate snapshot – for example, if you want to keep this
snapshot as is and still be able to modify a copy of it.
The duplicate has the same creation date as the first snapshot, and it also has a similar
creation process. From the Volumes and Snapshots view, right-click the snapshot to
duplicate. Select Duplicate from the menu to create a new duplicate snapshot. Figure 1-10
provides an example of duplicating the snapshot ITSO_Volume.snapshot_00001.
After you select Duplicate from the menu, the duplicate snapshot is displayed directly under
the original snapshot.
Note: The creation date of the duplicate snapshot in Figure 1-11 is the same creation date
as the original snapshot. The duplicate snapshot points to the master volume, not the
original snapshot.
Example 1-1 provides an example of creating a snapshot and a duplicate snapshot with the
Extended Command Line Interface (XCLI).
In the following examples we use the XIV Session XCLI. You could also use the XCLI
command. In this case, however, specify the configuration file or the IP address of the XIV
that you are working with as well as the user ID and password. Use the XCLI command to
automate tasks with batch jobs. For simplicity, we used the XIV Session XCLI in our
examples.
Example 1-1 Creating a snapshot and a duplicate with the XCLI Session
snapshot_create vol=ITSO_Volume
snapshot_duplicate snapshot=ITSO_Volume.snapshot_00001
After the snapshot is created, it must be mapped to a host in order to access the data. This
action is performed in the same way as mapping a normal volume.
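For reference, mapping the snapshot with the XCLI can look similar to the following sketch; the host name and LUN number are hypothetical and must be adapted to your configuration.
map_vol host=itso_host vol=ITSO_Volume.snapshot_00001 lun=2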
Creation of a snapshot is only done in the volume’s storage pool. A snapshot cannot be
created in a storage pool other than the one that owns the volume. If a volume is moved to
another storage pool, the snapshots are moved with the volume to the new storage pool
(provided that there is enough space).
Scroll down to the snapshot of interest and select the snapshot by clicking its name. Details of
the snapshot are displayed in the upper right panel. Looking at the volume ITSO_Volume, it
contains a snapshot 00001 and a duplicate snapshot 00002. The snapshot and the duplicate
snapshot have the same creation date of 2010-10-06 11:42:00, as shown in Figure 1-13. In
addition, the snapshot is locked, has not been modified, and has a deletion priority of 1 (which
is the highest priority, so it will be deleted last).
Along with these properties, the tree view shows a hierarchical structure of the snapshots. This
structure provides details about restoration and overwriting snapshots. Any snapshot can be
overwritten by any parent snapshot, and any child snapshot can restore a parent snapshot or
a volume in the tree structure.
In Figure 1-13, the duplicate snapshot is a child of the original snapshot, or in other words,
the original snapshot is the parent of the duplicate snapshot. This structure does not refer to
the way the XIV Storage System manages the pointers with the snapshots, but is intended to
provide an organizational flow for snapshots.
Example 1-2 shows the snapshot data output in the XCLI Session. Due to space limitations,
only a small portion of the data is displayed from the output.
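The listing in Example 1-2 can be produced with a command similar to the following sketch; the columns returned depend on the XCLI version.
snapshot_list vol=ITSO_Volume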
If the snapshot space is full, the duplicate snapshot is deleted first even though the original
snapshot is older.
After clicking Change Deletion Priority, select the desired deletion priority from the dialog
window and accept the change by clicking OK. Figure 1-15 shows the four options that are
available for setting the deletion priority. The lowest priority setting is 4, which causes the
snapshot to be deleted first. The highest priority setting is 1, and these snapshots are deleted
last. All snapshots have a default deletion priority of 1, if not specified on creation.
Figure 1-16 displays confirmation that the duplicate snapshot has had its deletion priority
lowered to 4. As shown in the upper right panel, the delete priority is reporting a 4 for
snapshot ITSO_Volume.snapshot_00002.
To change the deletion priority for the XCLI Session, specify the snapshot and new deletion
priority, as illustrated in Example 1-3.
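A sketch of such a command follows; verify the exact command and parameter names against your XCLI version.
snapshot_change_priority snapshot=ITSO_Volume.snapshot_00002 delete_priority=4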
The GUI also lets you specify the deletion priority when you create the snapshot. Instead of
selecting Create Snapshot, select Create Snapshot (Advanced), as shown in Figure 1-17.
A panel is presented that allows you to specify the deletion priority and it also allows you to
use your own volume name for the snapshot.
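With the XCLI, the same result can be achieved in a single invocation. The following sketch uses a hypothetical snapshot name and assumes the name and delete_priority parameters of snapshot_create:
snapshot_create vol=ITSO_Volume name=ITSO_Volume.before_upgrade delete_priority=4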
After you perform the restore action, you return to the Volumes and Snapshots panel. The
process is instantaneous, and none of the properties (creation date, deletion priority, modified
properties, or locked properties) of the snapshot or the volume have changed.
Specifically, the process modifies the pointers to the master volume so that they are
equivalent to the snapshot pointer. This change only occurs for partitions that have been
modified. On modification, the XIV Storage System stores the data in a new partition and
modifies the master volume’s pointer to the new partition. The snapshot pointer does not
change and remains pointing at the original data. The restoration process restores the pointer
back to the original data and frees the modified partition space.
If a snapshot is taken and the original volume later increases in size, you can still do a restore
operation. The snapshot still has the original volume size and will restore the original volume
accordingly.
The XCLI Session (or XCLI command) provides more options for restoration than the GUI.
With the XCLI, you can restore a snapshot to a parent snapshot (Example 1-4).
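A sketch of such a restore follows; the target_snapshot parameter name is an assumption to verify against your XCLI version. Here the duplicate snapshot is used to restore its parent snapshot:
snapshot_restore snapshot=ITSO_Volume.snapshot_00002 target_snapshot=ITSO_Volume.snapshot_00001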
From either the Volumes and Snapshots view or the Snapshots Tree view, right-click the
snapshot to overwrite. Select Overwrite from the menu and a dialog box opens. Click OK to
validate the overwriting of the snapshot. Figure 1-20 illustrates overwriting the snapshot
named ITSO_Volume.snapshot_00001.
Figure 1-20 Overwriting a snapshot
It is important to note that the overwrite process modifies the snapshot properties and
pointers when involving duplicates. Figure 1-21 shows two changes to the properties. The
snapshot named ITSO_Volume.snapshot_00001 has a new creation date. The duplicate
snapshot still has the original creation date. However, it no longer points to the original
snapshot. Instead, it points to the master volume according to the snapshot tree, which
prevents a restoration of the duplicate to the original snapshot. If the overwrite occurs on the
duplicate snapshot, the duplicate creation date is changed, and the duplicate is now pointing
to the master volume.
Figure 1-21 Snapshot tree after the overwrite process has occurred
The XCLI performs the overwrite operation through the snapshot_create command. There is
an optional parameter in the command to specify which snapshot to overwrite. If the optional
parameter is not used, a new snapshot volume is created.
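A sketch of an overwrite invocation follows, assuming the optional parameter is named overwrite:
snapshot_create vol=ITSO_Volume overwrite=ITSO_Volume.snapshot_00001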
There are two scenarios that you must investigate when unlocking snapshots. The first
scenario is to unlock a duplicate. By unlocking the duplicate, none of the snapshot properties
are modified, and the structure remains the same. This method is straightforward and
provides a backup of the master volume along with a working copy for modification. To unlock
the snapshot, simply right-click the snapshot and select Unlock, as shown in Figure 1-22.
The results in the Snapshots Tree window show that the locked property is off and the
modified property is on for ITSO_Volume.snapshot_00002. Even if the volume is relocked or
overwritten with the original master volume, the modified property remains on. Also note that
in Figure 1-23 the structure is unchanged. If an error occurs in the modified duplicate
snapshot, the duplicate snapshot can be deleted, and the original snapshot duplicated a
second time to restore the information.
For the second scenario, the original snapshot is unlocked and not the duplicate. Figure 1-24
shows the new property settings for ITSO_Volume.snapshot.00001. At this point, the duplicate
snapshot mirrors the unlocked snapshot, because both snapshots still point to the original
data. While the unlocked snapshot is modified, the duplicate snapshot references the original
data. If the unlocked snapshot is deleted, the duplicate snapshot remains, and its parent
becomes the master volume.
Because the hierarchical snapshot structure was unmodified, the duplicate snapshot can be
overwritten by the original snapshot. The duplicate snapshot can be restored to the master
volume. Based on the results, this process does not differ from the first scenario. There is still
a backup and a working copy of the data.
Unlocking a snapshot is the same as unlocking a volume (Example 1-6 on page 18).
Example 1-6 Unlocking a snapshot with the XCLI Session commands
vol_unlock vol=ITSO_Volume.snapshot_00001
The locking process completes immediately, preventing further modification to the snapshot.
In Figure 1-26, the ITSO_Volume.snapshot_00001 snapshot shows that both the locked property
and the modified property are on.
Even though there has not been a change to the snapshot, the system does not remove the
modified property.
The XCLI lock command (vol_lock), which is shown in Example 1-7, is almost a mirror
operation of the unlock command. Only the actual command changes, but the same
operating parameters are used when issuing the command.
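For reference, a sketch of the lock command, mirroring the unlock example shown earlier:
vol_lock vol=ITSO_Volume.snapshot_00001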
The panel in Figure 1-28 no longer displays the snapshot ITSO_Volume.snapshot.00001. Note
that the volume and the duplicate snapshot are unaffected by the removal of this snapshot. In
fact, the duplicate becomes the child of the master volume. The XIV Storage System provides
the ability to restore the duplicate snapshot to the master volume or to overwrite the duplicate
snapshot from the master volume even after deleting the original snapshot.
The delete snapshot command (snapshot_delete) operates the same as the creation
snapshot. Refer to Example 1-8.
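A sketch of the deletion command follows; the snapshot parameter name mirrors the other snapshot commands and should be verified for your code level.
snapshot_delete snapshot=ITSO_Volume.snapshot_00001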
Important: If you delete a volume, all snapshots associated with the volume are also
deleted.
With this scenario, a duplicate does not cause the automatic deletion to occur. Because a
duplicate is a mirror copy of the original snapshot, the duplicate does not create the additional
allocations in the storage pool.
To examine the details of the scenario at the point where the second snapshot is taken, a
partition is in the process of being modified. The first snapshot caused a redirect on write, and
a partition was allocated from the snapshot area in the storage pool. Because the second
snapshot occurs at a different time, this action generates a second partition allocation in the
storage pool space. There is not enough space available for this second allocation, so the oldest
snapshot is deleted. Figure 1-30 shows that the master volume XIV_ORIG_VOL and the newest
snapshot XIV_ORIG_VOL.snapshot.00007 are present. The oldest snapshot
XIV_ORIG_VOL.snapshot.00006 was removed.
To determine the cause of removal, you must go to the Events panel under the Monitor
menu. As shown on Figure 1-31, the event “SNAPSHOT_DELETED_DUE_TO_POOL_EXHAUSTION” is
logged. The snapshot name XIV_ORIG_VOL.snapshot.00006 and timestamp 2010-10-06
16:59:21 are also logged for future reference.
Starting at the Volumes and Snapshots view, select the volume that is to be added to the
consistency group. To select multiple volumes, hold down the Shift key or the Ctrl key to
select/deselect individual volumes. After the volumes are selected, right-click a selected
volume to bring up an operations menu. From there, click Create a Consistency Group
With Selected Volumes. Refer to Figure 1-32 for an example of this operation.
After selecting the Create option from the menu, a dialog window appears. Enter the name of
the consistency group. Because the volumes are added during creation, it is not possible to
change the pool name. Figure 1-33 shows the process of creating a consistency group. After
the name is entered, click Create.
The volume consistency group ownership can be seen under Volumes and Snapshots. As
in Figure 1-34, the three volumes contained in the itso pool are now owned by the ITSO_CG
consistency group. The volumes are displayed in alphabetical order and do not reflect a
preference or internal ordering.
Figure 1-34 Viewing the volumes after creating a consistency group
In order to obtain details about the consistency group, the GUI provides a panel to view the
information. Under the Volumes menu, select Consistency Groups. Figure 1-35 illustrates
how to access this panel.
This selection sorts the information by consistency group. The panel allows you to expand the
consistency group and see all the volumes owned by that consistency group. In Figure 1-36,
there are three volumes owned or contained by the ITSO_CG consistency group. In this
example, a snapshot of the volumes has not been created.
From the consistency group view, you can create a consistency group without adding
volumes. On the menu bar at the top of the window, there is an icon to add a new consistency
group. Clicking the Add consistency group icon shown in Figure 1-37 opens a creation dialog box.
When created, the consistency group appears in the Consistency Groups view of the GUI
(Figure 1-38). The new group does not have any volumes associated with it. A new
consistency group named ITSO_CG2 is created. The consistency group cannot be expanded
yet, because there are no volumes contained in the consistency group ITSO_CG2.
Using the Volumes view in the GUI, select the volumes to add to the consistency group. After
selecting the desired volumes, right-click the volumes and select Add To Consistency
Group. Figure 1-39 shows two volumes being added to a consistency group:
itso_volume_4
itso_volume_5
Figure 1-39 Adding volumes to a consistency group
After selecting the volumes to add, a dialog box opens asking for the consistency group to
which to add the volumes. Figure 1-40 adds the volumes to the ITSO_CG consistency group.
Clicking OK completes the operation.
Using the XCLI Session (or XCLI command), the process must be done in two steps. First,
create the consistency group, then add the volumes. Example 1-9 provides an example of
setting up a consistency group and adding volumes using the XCLI.
Example 1-9 Creating consistency groups and adding volumes with the XCLI
cg_create cg=ITSO_CG pool=itso
cg_add_vol cg=ITSO_CG vol=itso_volume_01
cg_add_vol cg=ITSO_CG vol=itso_volume_02
The new snapshots are created and displayed beneath the volumes in the Consistency
Groups view (Figure 1-42). These snapshots have the same creation date and time. Each
snapshot is locked on creation and has the same defaults as a regular snapshot. The
snapshots are contained in a group structure (called a snapshot group) that allows all the
snapshots to be managed by a single operation.
Adding volumes to a consistency group does not prevent you from creating a single volume
snapshot. If a single volume snapshot is created, it is not displayed in the consistency group
view. The single volume snapshot is also not consistent across multiple volumes. However,
the single volume snapshot does work according to all the rules defined previously in 1.2,
“Snapshot handling” on page 9.
With the XCLI, when the consistency group is set up, it is simple to create the snapshot. One
command creates all the snapshots within the group at the same moment in time.
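A sketch of that single command follows; the cg_snapshots_create command name should be verified against your XCLI version.
cg_snapshots_create cg=ITSO_CG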
In addition to the snapshot functions, you can remove a volume from the consistency group.
By right-clicking the volume, a menu opens. Click Remove From Consistency Group and
validate the removal on the dialog window that opens. Figure 1-43 provides an example of
removing the itso_volume_1 volume from the consistency group.
From the Snapshots Group Tree view, you can see many details. Select the group to view on
the left panel by clicking the group snapshot. The right panes provide more in-depth
information about the creation time, the associated pool, and the size of the snapshots. In
addition, the consistency group view points out the individual snapshots present in the group.
Refer to Figure 1-45 for an example of the data that is contained in a consistency group.
To display all the consistency groups in the system, issue the XCLI cg_list command.
More details are available by viewing all the consistency groups within the system that have
snapshots. The groups can be unlocked or locked, restored, or overwritten. All the operations
discussed in the snapshot section are available with the snap_group operations.
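As a sketch, typical snap_group operations look similar to the following; the command names, parameters, and the snapshot group name are assumptions to verify against your XCLI version.
snap_group_list cg=ITSO_CG
snap_group_unlock snap_group=ITSO_CG.snap_group_00001
snap_group_restore snap_group=ITSO_CG.snap_group_00001
snap_group_delete snap_group=ITSO_CG.snap_group_00001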
In order to delete a consistency group with the XCLI, you must first remove all the volumes
one at a time. As in Example 1-13, each volume in the consistency group is removed first.
Then the consistency group is available for deletion. Deletion of the consistency group does
not delete the individual snapshots. They are tied to the volumes and are removed from the
consistency group when you remove the volumes.
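A sketch of the sequence described above, using the volume and group names from Example 1-9 (exact parameter names to be verified against your XCLI version):
cg_remove_vol vol=itso_volume_01
cg_remove_vol vol=itso_volume_02
cg_delete cg=ITSO_CG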
1.4 Snapshot with remote mirror
XIV has a special snapshot (shown in Figure 1-47) that is automatically created by the
system. During the recovery phase of a remote mirror, the system creates a snapshot on the
target to ensure a consistent copy.
Important: This snapshot has a special deletion priority and is not deleted automatically if
the snapshot space becomes fully utilized.
When the synchronization is complete, the snapshot is removed by the system because it is
no longer needed. The following list describes the sequence of events to trigger the creation
of the special snapshot. Note that if a write does not occur while the links are broken, the
system does not create the special snapshot. The events are:
1. Remote mirror is synchronized.
2. Loss of connectivity to remote system occurs.
3. Writes continue to the primary XIV Storage System.
4. Mirror paths are reestablished (here the snapshot is created) and synchronization starts.
For more details about remote mirror refer to Chapter 4, “Synchronous Remote Mirroring” on
page 101.
Important: The special snapshot is created regardless of the amount of pool space on the
target pool. If the snapshot causes the pool to be overutilized, the mirror remains inactive.
The pool must be expanded to accommodate the snapshot, then the mirror can be
reestablished.
The MySQL database stores its data in a set directory and cannot be separated. The backup
data, when captured, can be moved to a separate system. The following scenario shows an
example of using XIV snapshots to back up and restore a MySQL database.
The first step is to back up the database. For simplicity, a script is created to perform the
backup and take the snapshot. Two volumes are assigned to a Linux host (Figure 1-48). The
first volume contains the database and the second volume holds the incremental backups in
case of a failure.
On the Linux host, the two volumes are mapped onto separate file systems. The first file
system xiv_pfe_1 maps to volume redbook_markus_09, and the second file system xiv_pfe_2
maps to volume redbook_markus_10. These volumes belong to the consistency group MySQL
Group so that when the snapshot is taken, snapshots of both volumes are taken at the same
moment.
The backup script is simple, and depending on the implementation of your database, the
following script might be too simple. However, the following script (Example 1-16) does force
an incremental backup and copies the data to the second XIV volume. Then the script locks
the tables so that no more data can be modified. When the tables are locked, the script
initiates a snapshot, which saves everything for later use. Finally, the tables are unlocked.
# First flush the tables; this can be done while the database is running and
# creates an incremental backup of the DB at a set point in time.
/usr/local/mysql/bin/mysql -h localhost -u root -p password < ~/SQL_BACKUP
# Since the mysql daemon was run specifying the binary log name
# of backup the files can be copied to the backup directory on another disk
cp /usr/local/mysql/data/backup* /xiv_pfe_2
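The remaining steps of the backup script (locking the tables, triggering the snapshot, and unlocking the tables) are not reproduced above. The following is a minimal sketch of how they might look; the xcli invocation, system alias, and environment variable are hypothetical and must be adapted to your environment.
# Lock the tables so that no updates occur while the snapshot is taken
/usr/local/mysql/bin/mysql -h localhost -u root -p$MYSQL_PWD < ~/SQL_LOCK
# Create a consistent snapshot of both volumes through the consistency group
# (hypothetical XCLI command-mode invocation)
xcli -y -c XIV_PFE cg_snapshots_create cg="MySQL Group"
# Unlock the tables again
/usr/local/mysql/bin/mysql -h localhost -u root -p$MYSQL_PWD < ~/SQL_UNLOCK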
When issuing commands to the MySQL database, the password for the root user is stored in
an environment variable (not in the script, as was done in Example 1-16 for simplicity).
Storing the password in an environment variable allows the script to perform the action
without requiring user intervention. For the script to invoke the MySQL database, the SQL
statements are stored in separate files and piped into the MySQL application. Example 1-17
provides the three SQL statements that are issued to perform the backup operation.
SQL_LOCK
FLUSH TABLES WITH READ LOCK
SQL_UNLOCK
UNLOCK TABLES
Now that the database is ready, the backup script is run. Example 1-18 is the output from the
script. Then the snapshots are displayed to show that the system now contains a backup of
the data.
To show that the restore operation is working, the database is dropped (Figure 1-50) and all
the data is lost. After the drop operation is complete, the database is permanently removed
from MySQL. It is possible to perform a restore action from the incremental backup. For this
example, the snapshot function is used to restore the entire database.
The restore script, shown in Example 1-19, stops the MySQL daemon and unmounts the
Linux file systems. Then the script restores the snapshot and finally remounts and starts
MySQL.
# Mount the FS
mount /dev/dm-2 /xiv_pfe_1
mount /dev/dm-3 /xiv_pfe_2
To help you a bit, I am now going to create the needed MySQL databases
and start the MySQL server for you. If you run into any trouble, please
consult the MySQL manual, that you can find in the Docs directory.
You can start the MySQL daemon with:
cd . ; ./bin/mysqld_safe &
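The stop, unmount, and snapshot restore steps of the restore script are not reproduced above. A minimal sketch follows; the xcli invocation, system alias, snapshot group name, and shutdown command are hypothetical and must be adapted to your environment.
# Stop the MySQL daemon and unmount the file systems
/usr/local/mysql/bin/mysqladmin -u root -p$MYSQL_PWD shutdown
umount /xiv_pfe_1
umount /xiv_pfe_2
# Restore the snapshot group for the MySQL consistency group
xcli -y -c XIV_PFE snap_group_restore snap_group="MySQL Group.snap_group_00001"
# Remount the file systems and restart the MySQL daemon
mount /dev/dm-2 /xiv_pfe_1
mount /dev/dm-3 /xiv_pfe_2
cd /usr/local/mysql ; ./bin/mysqld_safe &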
When complete, the data is restored and the redbook database is available, as shown in
Figure 1-51.
The following example scenario illustrates how to prepare a DB2® database on an AIX
platform for storage-based snapshot backup and then perform snapshot backup and restore.
Figure 1-52 XIV volume mapping for the DB2 database server
Example 1-21 AIX volume groups and file systems created for the DB2 database
$ lsvg
rootvg
db2datavg
db2logvg
$ df -g
Filesystem GB blocks Free %Used Iused %Iused Mounted on
/dev/hd4 2.31 0.58 75% 19508 12% /
/dev/hd2 1.75 0.14 92% 38377 46% /usr
/dev/hd9var 0.16 0.08 46% 4573 19% /var
/dev/hd3 5.06 2.04 60% 7418 2% /tmp
/dev/hd1 1.00 0.53 48% 26 1% /home
/dev/hd11admin 0.12 0.12 1% 5 1% /admin
/proc - - - - - /proc
/dev/hd10opt 1.69 1.52 10% 2712 1% /opt
/dev/livedump 0.25 0.25 1% 4 1% /var/adm/ras/livedump
/dev/db2loglv 47.50 47.49 1% 4 1% /db2/XIV/log_dir
/dev/db2datalv 47.50 47.31 1% 56 1% /db2/XIV/db2xiv
With the default circular logging, only full, offline backups of the database are allowed. To perform an online backup of
the database, the logging method must be changed to archive logging. See Example 1-22.
This DB2 configuration change enables consistent XIV snapshot creation of the XIV volumes
(that the database is stored on) while the database is online, restore of the database using
snapshots, and roll forward of the database changes to a desired point in time.
Connect to DB2 as a database administrator to change the database configuration.
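A sketch of this configuration change follows, assuming the database name XIV used throughout this example and a hypothetical archive log directory:
$ db2 connect to XIV
$ db2 update db cfg for XIV using LOGARCHMETH1 DISK:/db2/XIV/archive_log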
After the archive logging method has been enabled, DB2 requests a database backup.
Example 1-23
$ db2 connect reset
$ db2 backup db XIV to /tmp
$ db2 connect to XIV
Before the snapshot creation, ensure that the snapshot includes all file systems relevant for
the database backup. If in doubt, the dbpath view shows this information (Example 1-24). The
output only shows the relevant lines for better readability.
The AIX commands df and lsvg (with the -l and -p options) identify the related AIX file
systems and device files (hdisks). The XIV utility xiv_devlist shows the AIX hdisk names and
the names of the associated XIV volumes.
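For example, commands along the following lines can be used to trace the file systems back to the XIV volumes (output omitted; the volume group name follows Example 1-21):
$ df -g /db2/XIV/db2xiv /db2/XIV/log_dir
$ lsvg -l db2datavg
$ lsvg -p db2datavg
# xiv_devlist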
Figure 1-53 shows the newly created snapshot on the XIV graphical user interface.
4. On the AIX system activate the volume groups and mount the file systems the database
resides in.
# varyonvg db2datavg
# mount /db2/XIV/db2xiv
5. Start the database instance.
$ db2start
6. Initialize the database.
From the DB2 perspective, the XIV snapshot of the database volumes creates a split mirror
database environment. The database was in write suspend mode when the snapshot was
taken. Thus the restored database is still in this state, and the split mirror must be used as
a backup image to restore the primary database. The DB2 command db2inidb must be run
to initialize a mirrored database before the split mirror can be used.
$ db2inidb XIV as mirror
DBT1000I The tool completed successfully.
7. Roll forward the database to the end of the logs and check whether a database connect
works.
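A sketch of this step, using the database name XIV from the previous examples:
$ db2 "rollforward db XIV to end of logs and stop"
$ db2 connect to XIV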
After the XIV Storage System completes the setup of the pointers to the source data, a
background copy of the data is performed. The data is copied from the source volume to a
new area on the disk, and the pointers of the target volume are then updated to use this new
space. The copy operation is done in such a way as to minimize the impact to the system. If
the host performs an update before the background copy is complete, a redirect on write
occurs, which allows the volume to be readable and writable before the volume copy
completes.
If the sizes of the volumes differ, the size of the target volume is modified to match the source
volume when the copy is initiated. The resize operation does not require user intervention.
Figure 2-1 illustrates making a copy of volume xiv_vol_1. The target volume for this example
is xiv_vol_2. By right-clicking the source volume, a menu appears and you can then select
Copy this Volume. This action causes a dialog box to open.
The XIV Storage System instantly performs the update process and displays a completion
message. When the copy process is complete, the volume is available for use.
To create a volume copy with the XCLI, the source and target volumes must be specified in
the command. In addition, the -y parameter must be specified to provide an affirmative
response to the validation questions. See Example 2-1.
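A sketch of such an invocation in XCLI command mode follows; the system alias is hypothetical, and the vol_src and vol_trg parameter names should be verified against your XCLI version.
xcli -y -c XIV_PFE vol_copy vol_src=xiv_vol_1 vol_trg=xiv_vol_2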
VMware allows the resources of a server to be separated into logical virtual systems, each
containing its own OS and resources. When creating the configuration, it is extremely
important to have the hard disk assigned to the virtual machine to be a mapped raw LUN. If
the hard disk is a VMware File System (VMFS), the volume copy fails because there are
duplicate file systems in VMware. In Figure 2-3, the mapped raw LUN is the XIV volume that
was mapped to the VMware server.
A demonstration of the process is simple using VMware. Starting with the VMware resource
window, power off the virtual machines for both the source and the target. The summary
described in Figure 2-4 shows that both XIV Source VM (1), the source, and XIV Source VM
(2), the target, are powered off.
Looking at the XIV Storage System before the copy (Figure 2-5), xiv_vmware_1 is mapped to
the XIV Source VM (1) in VMware and has utilized 1 GB of space. This information shows that
the OS is installed and operational. The second volume, xiv_vmware_2, is the target volume
for the copy; it is mapped to XIV Source VM (2) and shows no used capacity. At this point, the OS has not
been installed on that virtual machine, and thus the OS is not usable.
Selecting xiv_vmware_1 as the source, copy the volume to the target xiv_vmware_2. The
copy completes immediately and is available for usage.
To verify that the copy is complete, the used area of the volumes must match, as shown in
Figure 2-6.
After the copy is complete, power up the new virtual machine to use the new operating
system. Both servers usually boot up normally with only minor modifications to the host. In
this example, we had to change the server name because there were two servers on the
network with the same name. Refer to Figure 2-7.
Remote Mirroring can be a synchronous copy solution where write operations are completed
on both copies (local and remote sites) before they are considered to be complete (see
Chapter 4, “Synchronous Remote Mirroring” on page 101). This type of remote mirroring is
normally used for short distances to minimize the effect of I/O delays inherent to the distance
to the remote site.
Remote Mirroring can also be an asynchronous solution where consistent sets of data are
copied to the remote location at specified intervals and host I/O operations are complete after
writing to the primary (see Chapter 5, “Asynchronous Remote Mirroring” on page 125). This
is typically used for long distances between sites.
Note: For asynchronous mirroring over iSCSI links, a reliable, dedicated network must be
available. It requires consistent network bandwidth and a non-shared link.
Unless otherwise noted, this chapter describes the basic concepts, functions, and terms that
are common to both XIV synchronous and asynchronous mirroring.
XIV Remote Mirroring is application and operating system independent, and does not require
server processor cycle usage.
Figure: synchronous mirroring write sequence (host server, local XIV as master, remote XIV as slave):
1. Host writes to the master XIV (data placed in the cache of two modules).
2. The master replicates to the slave XIV (data placed in the cache of two modules).
3. The slave acknowledges write complete to the master.
4. The master acknowledges write complete to the application.

Figure: asynchronous mirroring write sequence (application server, local XIV as master, remote XIV as slave):
1. Host writes to the master XIV (data placed in the cache of two modules).
2. The master acknowledges write complete to the application.
3. The master replicates to the slave XIV.
4. The slave acknowledges write complete.
Up to 16 targets can be referenced by a single system. A system can host replication sources
and separate replication targets simultaneously.
Figure 3-3 illustrates possible schemes for how mirroring can be configured.
Important: A single XIV can contain both master volumes and CGs (mirroring to another
XIV) and slave volumes and CGs (mirroring from another XIV). Peers in a master role and
peers in a slave role on the same XIV system must belong to different mirror couplings.
Consistency group
With mirroring (synchronous or asynchronous), the major reason for consistency groups is to
handle a large number of mirror pairs as a group, so that the mirrored volumes remain
mutually consistent. Instead of dealing with many volume remote mirror pairs individually,
consistency groups simplify the handling of many pairs considerably.
Important: If your mirrored volumes are in a mirrored consistency group, you cannot perform
mirroring operations such as deactivate or change_role on a single-volume basis. To do so,
you must first remove the volume from the consistency group (refer to “Removing a
volume from a mirrored consistency group” on page 108 or “Removing a volume from a
mirrored consistency group” on page 135).
Consistency groups also play an important role in the recovery process. If mirroring was
suspended (for example, due to complete link failure), data on different slave volumes at the
remote XIV are consistent. However, when the links are up again and resynchronization is
started, data spread across several slave volumes is not consistent until the master state is
synchronized. To preserve the consistent state of the slave volumes, the XIV system
automatically creates a snapshot of each slave volume and keeps it until the remote mirror
volume pair is synchronized (the snapshot is kept until all pairs are synchronized in order to
enable restoration to the same consistent point in time). If the remote mirror pairs are in a
consistency group, then the snapshot is taken for the whole group of slave volumes and the
snapshots are preserved until all pairs are synchronized. Then the snapshot is deleted
automatically.
Link status
The link status reflects the connection from the master to the slave volume or CG. A link has
a direction (from local site to remote or vice versa). A failed link or a failed secondary system
both result in a link error status. The link state is one of the factors determining the mirror
operational status. Link states are as follows:
OK: link is up and functioning
Error: link is down
Figure 3-5 and Figure 3-6 show how these link states (OK and Error, respectively) are reflected in the XIV GUI.
If there are several links (at least two) in one direction and one link fails, this usually does not
affect mirroring as long as the bandwidth of the remaining link is high enough to keep up with
the data traffic.
The synchronization status reflects the consistency of the data between the master and slave
volumes. Because the purpose of the remote mirroring feature is to ensure that the slave
volumes are an identical copy of the master volumes, this status indicates whether this
objective is currently being achieved.
The XIV keeps track of the partitions that have been modified on the master volumes. When
the link is operational again or remote mirroring is reactivated, these changed partitions are
sent to the remote XIV and applied to the slave volumes there.
Using snapshots
Snapshots can be used with Remote Mirroring to provide copies of production data for
business or IT purposes. Moreover, when used with Remote Mirroring, snapshots provide
protection against data corruption.
Figure 3-13 Combining snapshots with Remote Mirroring
Note that recovering from a snapshot requires deleting and re-creating the mirror.
XIV snapshot (within a single XIV system)
Protection for the event of software data corruption can be provided by a point-in-time
backup solution using the XIV snapshot function within the XIV system that contains the
production volumes. Figure 3-14 shows a single-system point-in-time online backup
configuration.
During normal remote mirroring operation, one XIV system (at the DR site) will be active
as a mirroring target. The other XIV system (at the local production site) will be active as a
mirroring target only when it becomes available again after an outage and switch of
production to the DR site. Changes made while production was running at the DR site are
copied back to the original production site, as shown in Figure 3-18.
Figure 3-20 Fan-out target configuration
In Figure 3-24, the solid lines represent mirroring connections used during normal
operation (the mirroring target system is on the right), and the dotted lines represent
mirroring connections used when production is running at the disaster recovery site and
changes are being copied back to the original production site (mirroring target is on the
left.)
XIV Fibre Channel ports may be easily and dynamically configured as initiator or target
ports.
iSCSI ports
For iSCSI ports, connections are bi-directional.
Use a minimum of two connections, with each port in a different module, for a total of four
ports (two on each system), to provide availability protection. In Figure 3-25 on page 68, the solid
lines represent data flow during normal operation and the dotted lines represent data flow
when production is running at the disaster recovery site and changes are being copied
back to the original production site.
Note: For asynchronous mirroring over iSCSI links, a reliable, dedicated network must
be available. It requires consistent network bandwidth and a non-shared link.
Before discussing actions involved in creating mirroring pairs, we must introduce the basic
XIV concepts used in the discussion.
An XIV volume is a logical volume that is presented to an external server as a logical unit
number (LUN). An XIV volume is allocated from logical and physical capacity within a single
XIV storage pool. The physical capacity on which data for an XIV volume is stored is always
spread across all available disk drives in the XIV system.
The XIV system is data aware. It monitors and reports the amount of physical data written to
a logical volume and does not copy any part of the volume that has not been used yet to store
any actual data.
With Remote Mirroring, the concept of consistency group represents a logical container for a
group of volumes, allowing them to be managed as a single unit. Instead of dealing with many
volume remote mirror pairs individually, consistency groups simplify the handling of many
pairs considerably.
An XIV consistency group exists within the boundary of an XIV storage pool in a single XIV
system (in other words, you can have different CGs in different storage pools within an XIV
storage system, but a CG cannot span multiple storage pools). All volumes in a particular
consistency group are in the same XIV storage pool.
In Figure 3-27, an XIV storage pool with 40 TB capacity contains seven logical volumes. One
consistency group has been defined for the XIV storage pool, but no volumes have been
added to or created in the consistency group.
Volumes may be easily and dynamically (that is, without stopping mirroring or application
I/Os) added to a consistency group.
Volumes may also be easily and dynamically removed from an XIV consistency group. In
Figure 3-29, one of the five volumes has been removed from the consistency group, leaving
four volumes remaining in the consistency group. It is also possible to remove all volumes
from a consistency group.
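As a hedged illustration, adding a volume to a consistency group and removing it again might look like the following XCLI sequence; the cg_add_vol and cg_remove_vol parameter names are assumptions, and the volume and CG names are hypothetical:
-- add a volume to an existing consistency group
cg_add_vol cg=itso_cg vol=itso_vol_5
-- remove the volume from the consistency group again
cg_remove_vol vol=itso_vol_5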
Figure: a typical dependent-write sequence. 1) The intent to update the database is written to the log. 2) The database record is updated. 3) The log records that the database update completed.
Just as the application or database manages dependent write consistency for the production
volumes, the XIV system must manage dependent write consistency for the mirror target
volumes.
XIV also supports creation of application-consistent data in the remote mirroring target
volumes, as discussed in 3.5.4, “Creating application-consistent data at both local and the
remote sites” on page 85.
The two peers in the mirror coupling may be either two volumes (volume peers) or two
consistency groups (CG peers), as shown in Figure 3-32.
Figure 3-32 Volume peers and consistency group peers with couplings in the Defined state: the peers at site 1 (production) carry the primary designation (P) and master role (M), and the peers at site 2 (DR test/recovery servers) carry the secondary designation (S) and slave role (S).
Each of the two peers in the mirroring relationship is given a designation and a role. The
designation indicates the original or normal function of each of the two peers—either primary
or secondary. The peer designation does not change with operational actions or commands.
(If necessary, the peer designation may be changed by explicit user command or action.)
The role of a peer indicates its current (perhaps temporary) operational function (either
master or slave). The operational role of a peer may change as the result of user commands
or actions. Peer roles typically change during DR testing or a true disaster recovery and
production site switch.
When a mirror coupling is created, the first peer specified (for example, the volumes or CG at
site 1, as shown in Figure 3-32) is the source for data to be replicated to the target system, so
it is given the primary designation and the master role.
The second peer specified (or automatically created by the XIV system) when the mirroring
coupling is created is the target of data replication, so it is given the secondary designation
and the slave role.
Initialization may take a significant amount of time if a large amount of data exists on the
master when a mirror coupling is activated. As discussed earlier, the rate for this initial copy
of data can be specified by the user. The speed of this initial copy of data will also be affected
by the connectivity and bandwidth (number of links and link speed) between the XIV primary
and secondary systems.
As an option to remove the impact of distance on initialization, XIV mirroring may be initialized
with the target system installed locally, and the target system may be disconnected after
initialization, shipped to the remote site and reconnected, and mirroring reactivated.
If a remote mirroring configuration is set up when a volume is first created (that is, before any
application data has been written to the volume), initialization will be very quick.
When an XIV consistency group mirror coupling is created, the CG must be empty so there is
no data movement and the initialization process is extremely fast.
The mirror coupling status at the end of initialization differs for XIV synchronous mirroring and
XIV asynchronous mirroring (see “Synchronous mirroring states” on page 56 and “Storage
pools, volumes, and consistency groups” on page 68), but in either case, when initialization is
complete, a consistent set of data exists at the remote site. See Figure 3-33.
Figure 3-33 Volume and consistency group mirror couplings in the Active state after initialization (primary designation and master role at site 1, secondary designation and slave role at site 2).
In Figure 3-34, three active volume couplings that have completed initialization have been
moved into the active mirrored consistency group.
Figure 3-34 Volume couplings moved into the active mirrored consistency group (primary designation and master role at site 1, secondary designation and slave role at site 2).
One or more additional mirrored volumes may be added to a mirrored consistency group at a
later time in the same way.
It is also important to realize that in a CG all volumes have the same role. Also, consistency
groups are handled as a single entity and, for example, in asynchronous mirroring, a delay in
replicating a single volume affects the status of the entire CG.
Normal operation, statuses, and reporting differ for XIV synchronous mirroring and XIV
asynchronous mirroring. Refer to Chapter 4, “Synchronous Remote Mirroring” on page 101,
and Chapter 5, “Asynchronous Remote Mirroring” on page 125, for details.
Figure 3-35 Normal operations: volume mirror coupling and CG mirror coupling
Figure: volume and CG mirror couplings in Standby mode between site 1 (production servers) and site 2 (DR test/recovery servers).
During standby mode, a consistent set of data is available at the remote site (site 2, in our
example). The currency of the consistent data ages in comparison to the master volumes,
and the gap increases while mirroring is in standby mode.
Note that in asynchronous mirroring, metadata is not used and the comparison between the
most_recent and last_replicated snapshots indicates the data that must be replicated.
Planned deactivation of XIV remote mirroring may be done to suspend remote mirroring
during a planned network outage or DR test, or to reduce bandwidth during a period of peak
load.
Figure: volume and CG mirror couplings in Standby mode with both peers holding the master role after a role change at the secondary site.
Changing the role of a volume from slave to master allows the volume to be accessed. In
synchronous mirroring, changing the role also starts metadata recording for any changes
made to the volume. This metadata may be used for resynchronization (if the new master
volume remains the master when remote mirroring is reactivated). In asynchronous mirroring,
changing a peer's role automatically reverts the peer to its last_replicated snapshot.
When mirroring is in standby mode, both volumes may have the master role, as shown in the
following section. When changing roles, both peer roles must be changed if possible.
In synchronous mirroring, changing a peer role from master to slave allows the slave to
accept mirrored data from the master and causes deletion of the metadata that was used to
record any changes made while the peer had the master role.
In asynchronous mirroring, changing a peer's role automatically reverts the peer to its
last_replicated snapshot. If the command is run on the slave (changing the slave to a master),
then upon recovery of the primary site the former master must first be changed to the slave
role before the secondary peer can later be changed back from master to slave.
Both peers may temporarily have the master role when a failure at site 1 has resulted in a true
disaster recovery production site switch from site 1 to site 2. When site 1 becomes available
again and there is a requirement to switch production back to site 1, the production changes
made to the volumes at site 2 must be resynchronized to the volumes at site 1. In order to do
this, the peers at site 1 must change their role from master to slave, as shown in Figure 3-38.
Figure 3-38 Mirror couplings in Standby mode with the peers at site 1 changed to the slave role and the peers at site 2 in the master role.
Figure: volume and CG mirror couplings in the Active state, with the master role at the site 1 peers and the slave role at the site 2 peers.
The rate for this resynchronization of changes can be specified by the user in MBps using the
XCLI target_config_sync_rates command.
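As an illustration only, such a command might take the following form; the target name is hypothetical and the rate parameter name is an assumption (only the command name itself is given above):
target_config_sync_rates target="XIV_2" max_resync_rate=100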
When XIV mirroring is reactivated in the normal direction, changes recorded at the primary
peers are copied to the secondary peers.
Figure: mirroring reactivated in the reverse direction (Active state), with the slave role at the site 1 peers and the master role at the site 2 peers.
A typical usage example of this scenario is when returning to the primary site after a true
disaster recovery with production switched to the secondary peers at the remote site.
To add a volume mirror to a mirrored consistency group (for instance, when an application
needs additional capacity), follow these steps (an illustrative XCLI sequence follows the list):
1. Define XIV volume mirror coupling from the additional master volume at XIV 1 to the slave
volume at XIV 2.
2. Activate XIV remote mirroring from the additional master volume at XIV 1 to the slave
volume at XIV 2.
3. Monitor initialization until it is complete. Volume coupling initialization must be complete
before the coupling can be moved to a mirrored CG.
4. Add the additional master volume at XIV 1 to the master consistency group at XIV 1. (The
additional slave volume at XIV 2 will be automatically added to the slave consistency
group at XIV 2.)
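The following XCLI sketch illustrates these steps. The volume, consistency group, and target names are hypothetical, and the mirror_create parameter names are assumptions based on the other mirror commands shown in this book:
-- 1. define the volume mirror coupling
mirror_create vol=itso_vol_5 slave_vol=itso_vol_5 target="XIV_2"
-- 2. activate the coupling
mirror_activate vol=itso_vol_5
-- 3. monitor initialization until the coupling reports a synchronized and consistent state
mirror_list
-- 4. add the master volume to the master CG (the slave volume is added automatically at XIV 2)
cg_add_vol cg=itso_cg vol=itso_vol_5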
In Figure 3-41, one volume has been added to the mirrored XIV consistency group. The
volumes must be in a volume peer relationship and must have completed initialization.
Figure 3-41 A mirrored volume added to the active mirrored consistency group (master/primary peers at site 1, slave/secondary peers at site 2).
Refer also to 3.4.4, “Defining the XIV mirror coupling and peers: volume” on page 68, and
3.4.6, “Adding volume mirror coupling to consistency group mirror coupling” on page 73, for
additional details.
In Figure 3-42, one volume has been removed from the example mirrored XIV consistency
group with three volumes. After being removed from the mirrored CG, a volume will continue
to be mirrored as part of a volume peer relationship.
Figure 3-42 A volume removed from the mirrored consistency group; the removed volume continues to be mirrored as an active volume peer coupling.
Typical usage of mirror deletion is a one-time data migration using remote mirroring. This
includes deleting the XIV mirror couplings after the migration is complete.
3.5.4 Creating application-consistent data at both local and the remote sites
This scenario begins with normal operation of XIV remote mirroring from XIV 1 to XIV 2. This
scenario may be used when the fastest possible application restart is required.
1. No actions are taken to change XIV remote mirroring.
2. Briefly quiesce the application at XIV 1 or place the database into hot backup mode.
3. Ensure that all data has been copied from the master peer at XIV 1 to the slave peer at
XIV 2.
4. Issue Create Mirrored Snapshot at the master peer. This creates an additional snapshot
at the master and slave.
5. Resume normal operation of the application or database at XIV 1.
6. Unlock the snapshot or volume copy.
7. Map the snapshot/volume copy to DR servers at XIV 2.
8. Bring the snapshot or volume copy at XIV 2 online to XIV 2 servers to begin disaster
recovery testing or other functions at XIV 2.
9. When DR testing or other use is complete, unmap the snapshot/volume copy from XIV 2
DR servers.
10.Delete the snapshot/volume copy if desired.
3.5.5 Migration
A migration scenario involves a one-time movement of data from one XIV system to another
(for example, migration to new XIV hardware.) This scenario begins with existing connectivity
between XIV 1 and XIV 2.
1. Define XIV remote mirroring from the master volume at XIV 1 to the slave volume at XIV 2.
2. Activate XIV remote mirroring from the master volume at XIV 1 to the slave volume at XIV
2.
3. Monitor initialization until it is complete.
3.6 Planning
The most important planning considerations for XIV Remote Mirroring are those related to
ensuring availability and performance of the mirroring connections between XIV systems, as
well as the performance of the XIV systems. Planning for snapshot capacity usage is also
extremely important.
To optimize availability, XIV remote mirroring connections must be spread across multiple
ports on different adapter cards in different modules, and must be connected to different
networks.
To optimize capacity usage, the number and frequency of snapshots (both those required for
asynchronous replication and any additional user-initiated snapshots) and the workload
change rates must be carefully reviewed. If not enough information is available, a snapshot
area that is 30% of the pool size may be used as a starting point. Storage pool snapshot
usage thresholds must be set to trigger notification (for example, SNMP, e-mail, SMS) when
the snapshot area capacity reaches 50%, and snapshot usage must be monitored continually
to understand long-term snapshot capacity requirements.
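For example, under this guideline a 40 TB pool would initially be given a snapshot area of roughly 12 TB (30%), and a notification would be triggered when about 6 TB (50% of that area) is in use.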
Thresholds for RPO and for link disruption may be specified by the user and trigger an event
when the threshold is reached.
Performance statistics from the FC or IP network components are also extremely useful.
3.10 Boundaries
With Version 10.2, the XIV Storage System has the following boundaries or limits:
Maximum remote systems: The maximum number of remote systems that can be
attached to a single primary is 16.
Number of remote mirrors: The combined number of master and slave volumes (including
those in mirrored CGs) cannot exceed 512.
Distance: Distance is only limited by the response time of the medium used. Use
asynchronous mirroring when the distance causes unacceptable delays to the host I/O in
synchronous mode.
Consistency groups are supported within Remote Mirroring. The maximum number of
consistency groups is 256.
Remote Mirroring can be set up on paths that are either direct or SAN attached via FC or
iSCSI protocols. For most disaster recovery solutions, the secondary system will be located
at a geographically remote site. The sites will be connected using either SAN connectivity
with Fibre Channel Protocol (FCP) or Ethernet with iSCSI. In certain cases, a direct
connection might be the option of choice if the machines are located near each other; it can
also be used for initialization before the target XIV Storage System is moved to the remote site.
Bandwidth considerations must be taken into account when planning the infrastructure to
support the Remote Mirroring implementation. Knowing when the peak write rate occurs for
systems attached to the storage will help with the planning for the number of paths needed to
support the Remote Mirroring function and any future growth plans.
When the protocol has been selected, it is time to determine which ports on the XIV Storage
System will be used. The port settings are easily displayed using the XCLI Session
environment and the command fc_port_list for Fibre Channel or ipinterface_list for
iSCSI.
There must always be a minimum of two paths configured within Remote Mirroring for FCP
connections, and these paths must be dedicated to Remote Mirroring. These two paths must
be considered a set. Use port 4 and port 2 in the selected interface module for this purpose.
For redundancy, additional sets of paths must be configured in different interface modules.
Fibre Channel paths for Remote Mirroring have slightly more requirements for setup, and we
look at this interface first.
The iSCSI connections are shown in Example 3-2 using the command ipinterface_list.
The output has been truncated to show just the iSCSI connections in which we are interested
here. The command also displays all Ethernet connections and settings. In this example we
have two connections displayed for iSCSI—one connection in module 7 and one connection
in module 8.
>> ipinterface_list
Name Type IP Address Network Mask Default Gateway MTU Module Ports
itso_m8_p1 iSCSI 9.11.237.156 255.255.254.0 9.11.236.1 4500 1:Module:8 1
itso_m7_p1 iSCSI 9.11.237.155 255.255.254.0 9.11.236.1 4500 1:Module:7 1
Click the connecting links between the systems of interest to view the ports.
Right-click a specific port and select Properties, the output of which is shown in Figure 3-46.
This particular port is configured as a target.
Similar information can be displayed for the iSCSI connections using the GUI, as shown in
Figure 3-48. This view can be seen either by right-clicking the Ethernet port (similar to the
Fibre Channel port shown in Figure 3-48) or by selecting the system and then selecting
Hosts and LUNs → iSCSI Connectivity. This sequence displays the same two iSCSI
definitions that are shown with the XCLI command.
By default, Fibre Channel ports 2 and 4 (target and initiator, respectively) in every module
are intended to be used for Remote Mirroring. For example, port 4 in module 8 (initiator) on
the local machine is connected to port 2 in module 8 (target) on the remote machine. When
setting up a new system, it is best to plan for any Remote Mirroring and reserve these ports
for that purpose. However, different ports can be used as needed.
In the event that a port role does need to be changed, you can change the port role with both
the XCLI and the GUI. Use the XCLI fc_port_config command to change a port, as shown in
Example 3-3. Using the output from fc_port_list, we can get the fc_port name to be used in
the command, changing the port role to be either initiator or target, as needed.
fc_port_list
Component ID Status Currently Functioning WWPN Port ID Role
1:FC_Port:4:3 OK yes 5001738000130142 00750029 Initiator
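As a hedged illustration (not a reproduction of Example 3-3), changing this port to a target might take the following form; the fc_port_config parameter names are assumptions:
fc_port_config fc_port=1:FC_Port:4:3 role=target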
To perform the same function with the GUI, select the primary system, open the patch panel
view, and right-click the port, as shown in Figure 3-49.
Selecting Configure opens a configuration window, as shown in Figure 3-50, which allows
the port to be enabled (or disabled), its role defined as target or initiator, and, finally, the
speed for the port configured (Auto, 1 Gbps, 2 Gbps, or 10 Gbps).
Then define the type of mirroring to be used (mirroring or migration) and the type of
connection (iSCSI or FC), as shown in Figure 3-52.
Next, as shown in Figure 3-53 on page 95, connections are defined by clicking the line
between the two XIV systems to display the link status detail screen.
Connections can also be defined by clicking a port on the primary system and dragging it to
the corresponding port on the target system. This is shown as a blue line in Figure 3-55 on
page 97.
Releasing the mouse button initiates the connection and then the status can be displayed, as
shown in Figure 3-56.
Right-click a path and you have options to Activate, Deactivate, and Delete the selected path,
as shown in Figure 3-57 on page 98.
To delete the connections between two XIV systems, you must delete all paths between the
two systems and then, in the Mirroring Connectivity display, delete the target system, as
shown in Figure 3-58.
XCLI commands can also be used to delete the connectivity between the primary XIV System
and the secondary XIV system (Figure 3-60).
target_connectivity_delete local_port="1:FC_Port:8:4" fcaddress=50017380014B0181 target="WSC_1300331"
target_port_delete fcaddress=50017380014B0181 target="WSC_1300331"
target_connectivity_delete local_port="1:FC_Port:8:4" fcaddress=50017380027F0180 target="WSC_6000639"
target_port_delete fcaddress=50017380027F0180 target="WSC_6000639"
target_connectivity_delete local_port="1:FC_Port:9:4" fcaddress=50017380014B0191 target="WSC_1300331"
target_port_delete fcaddress=50017380014B0191 target="WSC_1300331"
target_connectivity_delete local_port="1:FC_Port:9:4" fcaddress=50017380027F0190 target="WSC_6000639"
We assume that the links between the local and remote XIV storage systems have already
been established, as discussed in 3.11.2, “Remote mirror target configuration” on page 94.
When initially configured, one volume is considered the source (master role and resides at
the primary system) and the other is the target (slave role and resides at the secondary
system). This designation is associated with the volume and its XIV system and does not
change. During various operations the role may change (master or slave), but one system is
always the primary and the other is always the secondary.
To create a mirror you can use the XIV GUI or the XCLI.
To create a mirror:
1. Select Create Mirror, as shown in Figure 4-2, and specify the source volume or master for
the mirror pair (Figure 4-3 on page 103).
Tip: When working with the XCLI session or the XCLI from a command line, the windows
look similar and you could inadvertently address the wrong XIV system with your
command. Therefore, it is a good idea to issue a config_get command to verify that you
are addressing the intended XIV system.
3. To list the couplings on the secondary XIV, run the mirror_list command, as shown in
Example 4-3. Note that the status of Initializing is used when the coupling is in standby
(inactive) or initializing.
3. Repeat steps 1–2 until all required couplings are activated and are synchronized and
consistent.
2. On the primary XIV, run the mirror_list command to see the status of the couplings
(Example 4-5).
3. On the secondary XIV, run the mirror_list command to see the status of the couplings
(Example 4-6).
Setting a consistency group to be mirrored is done by first creating a consistency group, then
setting it to be mirrored, and only then populating it with volumes. A consistency group must
be created at the primary XIV and a corresponding consistency group at the secondary XIV.
The names of the consistency groups can be different. When creating a consistency group,
you also must specify the storage pool.
The Create Mirror dialog shown in Figure 4-10 is displayed. Be sure to specify the mirroring
parameters that match the volumes that will be part of that CG.
All volumes that you are going to add to the consistency group must be in that pool on the
primary XIV and in one pool on the secondary XIV. Adding a new volume pair to a mirrored
consistency group requires the volumes to be mirrored exactly as the other volumes within
this consistency group.
Also, mirrors for volumes must be activated before volumes can be added to a mirrored
consistency group.
It is possible to add a mirrored volume to a non-mirrored consistency group and have this
volume retain its mirroring settings.
Deletion
When a mirror pair (volume/CG) is inactive, the mirror relationship can be deleted. When a
mirror relationship has been deleted, the XIV forgets everything about the relationship. If you
want to set up the mirror again, the XIV must copy the entire volume from the source to the
target.
Note that when the mirror is deleted, the slave volume becomes a normal volume again, but
the volume is locked, which means that it is write protected. To enable writing to the volume
go to the Volumes list panel. Right-click the volume and select Unlock.
The slave volume must also be formatted before it can be part of a new mirror. Formatting
also requires that all snapshots of that volume be deleted.
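A hedged XCLI sketch of this cleanup follows. The volume name is taken from the examples later in this chapter, but the mirror_deactivate, mirror_delete, vol_unlock, and vol_format command names and parameters are assumptions; they are not shown as XCLI examples elsewhere in this chapter:
-- deactivate and then delete the mirror (the pair must be inactive before deletion)
mirror_deactivate vol=itso_win2008_vol2
mirror_delete vol=itso_win2008_vol2
-- the former slave volume is left locked; unlock it to allow writes
vol_unlock vol=itso_win2008_vol2
-- to reuse the volume in a new mirror, delete its snapshots and format it (formatting erases its data)
vol_format vol=itso_win2008_vol2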
Switching roles must be initiated on the master volume/CG when remote mirroring is
operational. As the task name implies, it switches the master role to the slave role and at the
same time the slave role to the master role.
Changing roles can be performed at any time (when a pair is active or inactive) for the slave,
and for the master when the coupling is inactive. A change role reverts only the role of that
peer.
Normally, switching the roles requires shutting down the servers at the primary site first,
changing SAN zoning and XIV LUN masking to allow access to the secondary site volumes,
and then restarting the servers with access to the secondary XIV. However, in certain
clustered environments, this takeover could be automated.
Assuming that the primary site is down and the secondary site will become the main
production site, changing roles is performed at the secondary (now production) site first.
Later, when the primary site is up again and communication is re-established, you also
change the role at the primary site to a slave to be able to establish remote mirroring from the
secondary site back to the normal production primary site. Once data has been synchronized
from the secondary site to the primary site, you can perform a switch role to once again make
the primary site the master.
The new master volume/CG (at the secondary site) starts to accept write commands from
local hosts. Because the coupling is not active, metadata maintains a record (in the same way
as for any master volume) of which write operations must be sent to the slave volume when
communication resumes.
After changing the slave to the master, an administrator must change the original master to
the slave role before communication resumes. If both peers are left with the same role
(master), mirroring cannot be restarted.
Upon re-establishing the connection, the primary volume/CG (current slave volume/CG)
updates the secondary volume/CG (new master volume/CG) with this uncommitted data, and
it is the responsibility of the secondary peer to synchronize these updates to the primary peer.
If the link is resumed and both sides have the same role, the coupling will not become
operational. To solve this problem, the user must use the change role function on one of the
volumes and then activate the coupling.
Resynchronization can be performed in any direction given that one peer has the master role
and the other the slave role. When there is a temporary failure of all links from the primary
XIV to the secondary XIV, you re-establish the mirroring in the original direction after the links
are up again.
Also, if you suspended mirroring for a disaster recovery test at the secondary site, you might
want to reset the changes made to the secondary site during the tests and re-establish
mirroring from the primary to the secondary site.
If there was a disaster and production is now running on the secondary XIV, re-establish
mirroring first from the secondary XIV to the primary XIV and later on switch mirroring to the
original direction from the primary XIV to the secondary XIV.
In any case, the slave peers usually are in a consistent state up to the moment when
resynchronization starts. During the resynchronization process, the peers (volume/CG) are
inconsistent. To preserve consistency, the XIV at the slave side automatically creates a
snapshot of the involved volumes or, in case of a consistency group, a snapshot of the entire
consistency group before transmitting any data to the slave volumes.
Important: You must delete the mirror relation at the secondary site before you can restore
the last consistent snapshot to the target volumes.
Note: When using XCLI commands, quotation marks (“ ”) must be used to enclose
names that include spaces. If they are used for names without spaces, the command still
works. The examples in this scenario contain a mixture of commands with and without
quotation marks.
After the couplings have been created and activated, as explained under 4.1, “Synchronous
mirroring configuration” on page 102, the environment will be as illustrated in Figure 4-13.
Figure 4-13 Mirror couplings created and activated: data flows from the production Windows 2008 server to the primary XIV and is mirrored over the FC links to the secondary XIV at the standby secondary site.
We now change roles for the slave volumes at the secondary site and make them master
volumes so that the standby server can write to them.
1. On the secondary XIV open an XCLI session and run the mirror_change_role command
(Example 4-7).
2. To view the status of the coupling run the mirror_list command, as shown in
Example 4-8.
Example 4-8 shows that the synchronization status is still Consistent for one of the
couplings that has not yet been changed; the status reflects the last known state.
When the role is changed, the coupling is automatically deactivated.
3. Repeat steps 1–2 to change roles on other volumes.
Figure 4-19 Change master volumes to slave volumes on the primary XIV
2. You will be prompted to confirm the role change. Select OK to confirm (Figure 4-20).
Once you have confirmed, the role is changed to slave, as shown in Figure 4-21.
3. Repeat steps 1–2 for all the volumes that must be changed.
Example 4-9 Change master volumes to slave volumes on the primary XIV
>> mirror_change_role vol=itso_win2008_vol2
Warning: ARE_YOU_SURE_YOU_WANT_TO_CHANGE_THE_PEER_ROLE_TO_SLAVE Y/N: Y
Command executed successfully.
2. To view the status of the coupling run the mirror_list command, as shown in
Example 4-10.
2. On the primary XIV go to the Remote Mirroring menu to check the statuses of the
couplings (Figure 4-24). Note that due to the time lapse between Figure 4-23 and
Figure 4-24 being taken they do show different statuses.
3. Repeat steps 1–2 until all required couplings are reactivated and synchronized.
2. On the secondary XIV run the mirror_list command to see the status of the couplings,
as illustrated in Example 4-12.
3. On the primary XIV run the mirror_list command to see the status of the couplings, as
shown in Example 4-13.
3. You are prompted for confirmation. Select OK. Refer to Figure 4-27 and Figure 4-28.
4. Go to the Remote Mirroring menu on the primary XIV and check the status of the coupling.
It must show the peer volume as a master volume (Figure 4-29).
5. Reassign volumes back to the production server at the primary site and power it on again.
Continue to work as normal. Figure 4-30 on page 124 shows that all the new data is now
back at the primary site.
Example 4-14 Switch from master volume to slave volume on secondary XIV
>> mirror_switch_roles vol=itso_win2008_vol2
Command executed successfully.
3. On the secondary XIV, to list the mirror coupling run the mirror_list command
(Example 4-15).
4. On the primary XIV run the mirror_list command to list the mirror couplings, as shown
in Example 4-16.
5. Reassign volumes back to the production server at the primary site and power it on again.
Continue to work as normal. Figure 4-30 shows that all the new data is now back at the
primary site.
Asynchronous mirroring enables replication between two XIV volumes or consistency groups
(CG) that does not suffer from the latency inherent to synchronous mirroring, thereby yielding
better system responsiveness and offering greater flexibility for implementing disaster
recovery solutions.
We assume that the links between the local and remote XIV storage systems have already
been established, as discussed in 3.11.2, “Remote mirror target configuration” on page 94.
When initially configured, one volume is considered the source (resides at the primary
system) and the other is the target (resides at the secondary system). This designation is
associated with the volume and its XIV system and does not change. During various
operations the role may change (master or slave) but one system is always the primary and
the other is always the secondary.
Asynchronous mirroring is initiated at defined intervals. This is the sync job schedule. A sync
job entails synchronization of data updates recorded on the master since the last successful
synchronization. The sync job schedule will be defined for both the primary and secondary
system peers in the mirror. This provides a schedule for each peer and will be used when the
peer takes on the role of master. The purpose of the schedule specification on the slave is to
set a default schedule for an automated failover scenario.
The system supports the following schedule intervals: 20s (min_interval), 30s, 1m, 2m, 5m,
10m, 15m, 30m, 1h, 2h, 3h, 6h, 8h, 12h, 24h. Consult your IBM representative to set the
optimum schedule interval based on your RPO requirements.
A schedule set as NEVER means that no sync jobs will be automatically scheduled. See 5.6,
“Detailed asynchronous mirroring process” on page 153. In addition to schedule-based
snapshots, a dedicated command to run a mirror snapshot can be issued manually. These
ad-hoc snapshots are issued from the master and initiate a sync job that is queued behind
outstanding sync jobs. See 5.5.4, “Ad hoc snapshots” on page 150.
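As an illustration, issuing such an ad hoc mirrored snapshot from the XCLI might look like the following; the command name and parameter are assumptions (the book shows only the GUI action for this task), and the CG name is taken from the examples later in this chapter:
mirror_create_snapshot cg=itso_volume_cg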
The XIV GUI automatically creates schedules based on the RPO selected for the mirror being
created. The interval can be set in the mirror properties panel or must be explicitly specified
through the XCLI.
Tip: XIV allows you to set a specific RPO and schedule interval for each mirror coupling.
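As a hedged illustration of creating an asynchronous mirror through the XCLI, a command might take the following form. The type, rpo, remote_rpo, schedule, and remote_schedule parameter names are assumptions inferred from the mirror_change_rpo and mirror_change_schedule commands shown later in this chapter; the volume and target names are hypothetical, and the schedule name matches the one created in Example 5-3:
mirror_create vol=itso_volume_1 slave_vol=itso_volume_1 target="XIV_2" type=async_interval rpo=60 remote_rpo=60 schedule=fifty_sec remote_schedule=fifty_sec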
Slave volumes must be formatted before they are configured as part of a mirror. This means
that the volume must not have any snapshots and must be unlocked.
You can use either the XIV GUI or the XCLI to create a mirror; both methods are illustrated in
the following sections.
Then specify Sync Type as Async, select the slave peer (volume or CG), and specify an
RPO value. Set the Schedule Management field to XIV Internal to create automatic
synchronization using scheduled sync jobs, as shown in Figure 5-2.
When creating a mirror, the slave peer (volume or CG) can also be created automatically on
the target XIV System. To do this, select Create Slave and specify the slave pool name and
the slave volume or CG name, as shown in Figure 5-3.
If schedule type External is selected when creating a mirror, no sync jobs will run for this
mirror and the interval will be set to Never, as illustrated in Figure 5-4.
The Mirroring panel shows the current status of the mirrors. The synchronization of the mirror
must be initiated manually using the Activate action, as seen in Figure 5-7 on page 131.
In Figure 5-6, notice that the selected RPO is displayed for the mirror created.
Tip: When working with the XCLI session or the XCLI command, the windows look similar
and you could address the wrong XIV system with your command. Therefore, it might be
helpful to always first issue a config_get command to verify that you are working with the
right XIV system.
Example 5-1 illustrates the use of XCLI commands to set up a mirror volume.
As seen in Figure 5-8, the Mirroring panel now shows the status of the active mirrors as RPO
OK. All the async mirrors have the same mirroring status. Note that Sync Mirrors show the
status as synchronized.
IBM XIV Storage System leverages its consistency group capability to allow for mirroring
numerous volumes at once. The system creates snapshots of the master consistency groups
at user-configured intervals and synchronizes these point-in-time snapshots with the slave.
Setting the consistency group to be mirrored is done by first creating a consistency group,
then setting it to be mirrored, and only then populating it with volumes. A consistency group
must be created at the primary XIV and a corresponding consistency group at the secondary
XIV. The names of the consistency groups can be different. When creating a consistency
group, you also must specify the storage pool.
All volumes that you are going to add to the consistency group must be in that pool on the
primary XIV and in one pool at the secondary XIV. Adding a new volume pair to a mirrored
consistency group requires the volumes to be mirrored exactly like the other volumes within
this consistency group. Volume pairs with different mirroring parameters will be modified to
match those of the CG when attempting to add them to the CG with the GUI.
It is possible to add a mirrored volume to a non-mirrored consistency group and have this
volume retain its mirroring settings.
To create a mirrored consistency group first create a CG on the primary and secondary XIV
Storage System. Then select the primary CG and specify Create Mirror (Figure 5-9).
The consistency group must not contain any volume when you create the mirror, and be sure
to specify mirroring parameters that match the volumes that will be part of this CG, as shown
in Figure 5-10. The status of the new mirrored CG is now displayed in the Mirroring panel.
Also, mirrors for volumes must be activated before volumes can be added to a mirrored
consistency group. This activation results in the initial copy being completed and sync jobs
being run to create the special last-replicated snapshots (refer to Figure 5-7 on page 131).
As seen in Figure 5-11, the Mirror panel now shows the status of the active mirrors as RPO OK.
All the async mirrors and the mirrored CG have the same mirroring status. Note that sync
mirrors shows the status as synchronized.
To add volumes to the mirrored CG, the mirroring parameters must be identical, including the
last-replicated timestamps. The RPO and schedule will be changed to match the values set
for the mirrored consistency group. The volumes must have the same status (RPO OK). It is
possible that during the process the status might change or the last-replicated timestamp
might not yet be updated. If an error occurs, verify the status and repeat the operation.
Go to the Mirroring panel and verify the RPO and status for the volumes to be added to the
CG. Select each volume and specify to Add To Consistency Group (Figure 5-12).
The Mirroring panel now shows the consistency group as a group of volumes, all with the
same status for both the master CG (Figure 5-14) and the slave CG (Figure 5-15 on
page 134).
-- Activate mirror CG
mirror_activate cg=itso_volume_cg
>> sync_job_list
Job Object Local Peer Source Target
State Part of CG Job Type
Volume itso_volume_1 last-replicated-itso_volume_1 most-recent-itso_volume_1
active yes scheduled
Volume itso_volume_2 last-replicated-itso_volume_2 most-recent-itso_volume_2
active yes scheduled
Volume itso_volume_3 last-replicated-itso_volume_3 most-recent-itso_volume_3
active yes scheduled
CG itso_volume_cg last-replicated-itso_volume_cg most-recent-itso_volume_cg
active no scheduled
Mirroring is terminated by deactivating the mirror. Deactivation is required for the following actions:
Terminating or deleting the mirroring
Stopping the mirroring process
– For a planned network outage
– To reduce network bandwidth
– For a planned recovery test
The deactivation pauses a running sync job and no new sync jobs will be created as long as
the active state of the mirroring is not restored. However, the deactivation does not cancel the
status check by the master and the slave. The synchronization status of the deactivated
mirror is calculated as though the mirror was active.
Deactivating a mirror results in the synchronization status becoming RPO_Lagging via XCLI
when the specified RPO time is exceeded. This means that the last-replicated snapshot is
older than the specified RPO. The GUI will show the mirror as Inactive.
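A minimal XCLI sketch of pausing and resuming the mirrored CG follows; the mirror_deactivate command name and parameter are assumptions (mirror_activate appears in the examples in this chapter):
-- pause replication for the mirrored CG (planned outage, DR test, or to reduce bandwidth)
mirror_deactivate cg=itso_volume_cg
-- resume replication; a paused sync job continues where it stopped
mirror_activate cg=itso_volume_cg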
Example 5-3 XCLI commands for changing RPO and schedule interval
-- change RPO to 2 min and adjust schedule time interval
mirror_change_rpo cg=itso_volume_cg rpo=120 remote_rpo=120
schedule_change schedule=forty_sec interval=00:00:50 -y
schedule_rename schedule=forty_sec new_name=fifty_sec
---- on secondary
schedule_create schedule=fifty_sec interval=00:00:50
mirror_change_schedule cg=itso_volume_cg schedule=fifty_sec
schedule_delete schedule=forty_sec
The activation state changes to inactive, as shown in Figure 5-21. Subsequently, the
replication pauses (and records where it paused). Upon activation, the replication resumes.
Note that an ongoing sync job resumes upon activation. No new sync job will be created until
the next interval.
Note that for consistency group mirroring, deactivation pauses all running sync jobs
pertaining to the consistency group.
-- Activate mirrored CG
mirror_activate cg=itso_volume_cg
Deletion
When a mirror pair (volume pairs or a consistency group) is inactive, the mirror relationship
can be deleted. When the mirror is deleted, the XIV forgets everything about the mirror. If you
want to set up the mirror again, the XIV must do an initial copy again from the source to the
target volume.
When the mirror is part of a consistency group, the mirror must first be removed from the
mirrored CG. For a CG, the last-replicated snapgroup for the master and the slave CG must
be deleted or disbanded (making all snapshots directly accessible) after deactivation and
mirror deletion. This CG snapgroup is recreated with only the current volumes after the next
interval completes. The last-replicated snapshots for the mirror can now be deleted, allowing
a new mirror to be created. All existing volumes in the CG need to be removed before a new
mirrored CG can be created.
Note that when the mirror is deleted, the slave volume becomes a normal volume again, but
the volume is locked, which means that it is write protected. To enable writing to the volume
go to the Volumes list panel, select the volume, right-click it, and select Unlock.
The slave volume must also be formatted before it can be part of a new mirror. Formatting
also requires all snapshots of that volume to be deleted.
Change role
In a disaster at the primary site, a role change at the secondary site is the normal recovery
action.
Assuming that the primary site is down and that the secondary site will become the main
production site, changing roles is performed at the secondary (now production) site first.
Later, when the primary site is up again and communication is re-established, you also
change the role at the primary site to slave to be able to establish mirroring from the
secondary site back to the primary site. This completes a switch role operation.
As shown in Figure 5-23 on page 143, you are then prompted to confirm the role change (role
reverse).
The coupling remains in inactive mode (Figure 5-25). This means that remote mirroring is
deactivated. This ensures an orderly activation when the role of the peer on the other site
is changed.
The new master volume or consistency group starts to accept write commands from local
hosts. Because the coupling is not active, a log is maintained (in the same way as for any
master volume) of which write operations must be sent to the slave volume when
communication resumes.
After changing the slave to the master, an administrator must also change the original master
to the slave role before communication resumes (Figure 5-26). If both peers are left in the
same role (master), mirroring cannot be restarted.
If the link is resumed and both sides have the same role, the coupling does not become
operational. The user must use the change role function on one of the volumes and then
activate the mirroring.
The peer reverts to the last-replicated snapshot. See 5.5.5, “Mirroring special snapshots” on
page 152.
Switch roles
Switch roles is a useful command when performing a planned site switch by reversing
replication direction. It is only available when both the master and slave XIV systems are
accessible. Mirroring must be active and synchronized (RPO OK) in order to issue the
command via the GUI.
The command to switch roles can only be issued for a master volume or CG as shown in
Figure 5-28.
As shown in Figure 5-29 on page 146 you are then prompted to confirm the switch roles. In
our example, the async mirrored itso_volume_cg has now returned to its original state and
remains Active and RPO OK (Figure 5-30 on page 146).
When recovering from a link failure, the following steps are taken to synchronize the data:
Asynchronous mirroring sync jobs proceed as scheduled. Sync jobs are restarted and a
new most-recent snapshot is taken. See 5.5.5, “Mirroring special snapshots” on page 152.
The primary system copies the changed data to the secondary volume. Depending on
how much data must be copied, this operation could take a long time, and the status
remains RPO_Lagging.
However, within these broad categories there are a number of situations that might exist.
Some of these and the recovery procedures are considered here:
A disaster that makes the XIV at the primary site unavailable, but leaves the site itself and
the servers there still available
In this scenario the volumes/CG on the XIV at the secondary site can be switched to
master volumes/CG, servers at the primary site can be redirected to the XIV at the
secondary site, and normal operations can start again. When the XIV at the primary site is
recovered the data can be mirrored from the secondary site back to the primary site. A full
initialization of the data is usually not needed.
Only changes that take place at the secondary site are transferred to the primary site. If
desired, a planned site switch can then take place to resume production activities at the
primary site. See 5.2, “Role reversal” on page 142, for details related to this process.
A disaster that makes both the primary site and data unavailable.
In this scenario, the standby (inactive) servers at the secondary site are activated and
attached to the secondary XIV to continue normal operations. This requires changing the
role of the slave peers to become master peers.
After the primary site is recovered, the data at the secondary site can be mirrored back to
the primary site. This most likely requires a full initialization of the primary site because the
local volumes may not contain any data. See 5.1, “Asynchronous mirroring configuration”
on page 126, for details related to this process.
When initialization completes the peer roles can be switched back to master at the primary
site and the slave at secondary site. The servers are then redirected back to the primary
site. See 5.2, “Role reversal” on page 142, for details related to this process.
A disaster that breaks all links between the two sites but both sites remain running
In this scenario the primary site continues to operate as normal. When the links are
reestablished the data at the primary site can be resynchronized with the secondary site.
Only the changes since the previous last-replicated snapshot are sent to the secondary
site.
5. Sync jobs can now be run to create periodic consistent copies of the master volumes or
consistency groups on the slave system. See 5.6, “Detailed asynchronous mirroring
process” on page 153.
Details of this process can be found in 5.6, “Detailed asynchronous mirroring process” on
page 153.
The following characteristics apply to the manual initiation of the asynchronous mirroring
process:
Multiple mirror snapshot commands can be issued – there is no maximum limit, aside from
space limitations, on the number of mirror snapshots that can be issued manually.
An ad-hoc mirror snapshot that is still running when a new interval arrives delays the start of
the next interval-based mirror scheduled to run, but does not cancel the creation of that sync
job. The interval-based mirror snapshot is cancelled only if the running ad-hoc snapshot
mirror still has not finished.
Other than these differences, the manually initiated sync job is identical to a regular
interval-based sync job.
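To put this in context, an ad-hoc sync job is triggered with an XCLI mirror snapshot command. The following one-line sketch runs against the itso_volume_cg consistency group used earlier in this chapter; the command name mirror_create_snapshot and the cg= parameter are assumptions that should be verified against the XCLI reference for your code level:
mirror_create_snapshot cg=itso_volume_cg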
For this example, you can now verify the ad-hoc snapshot group has been created on the
master and slave system by looking under the Consistency Groups window of the GUI as
shown in Figure 5-35 and Figure 5-36 respectively.
Table 5-1 indicates which snapshot is created for a given sync job phase.
The next sync job can now be run at the next defined interval.
If the RPO value is equal to or higher than the difference between the current time (when the
check is run) and the timestamp of the last_replicated snapshot, the status is set to RPO_OK.
If the RPO value is lower than that difference, the status is set to RPO_LAGGING.
A new snapshot is created with the same timestamp as the last-replicated snapshot
(Figure 5-44).
Any changes made during the testing are removed by restoring the last-replicated snapshot,
and new updates from the primary site will be transferred to the secondary site when the
mirror is activated again (Figure 5-51 through Figure 5-53).
Tip: Set appropriate pool alert thresholds to be warned ahead of time and be able to take
proactive measures to avoid any serious pool space depletion situations.
If the pool’s snapshot reserve space has been consumed, replication snapshots will gradually
use the remaining available space in the pool. Note that once a single replication snapshot
has been written in the regular pool’s space, any new snapshot (replication snapshot or
regular snapshot) will start consuming space outside the snapshot reserve. The XIV system
has a sophisticated built-in multi-step process to cope with pool space depletion on the slave
or on the master before it eventually deactivates the mirror. If a pool does not have enough
free space to accommodate the storage requirements warranted by a new host write, the
system progressively deletes snapshots within that pool until enough space is made available
for successful completion of the write request.
The multi-step process is outlined in this section; the system will proceed to the next step only
if there is still insufficient space to support the write request after execution of the current
step. Upon depletion of space in a pool with mirroring, the following takes place:
STEP 2: Deletion of the snapshot of any outstanding (pending) scheduled sync job.
STEP 5: Deletion of the most_recent snapshot created when activating the mirroring in
Change Tracking state.
(*) The XIV system introduces the concept of protected snapshots. With the command
pool_config_snapshots a special parameter is introduced that sets a protected priority value
for snapshots in a specified pool. Pool snapshots with a delete priority value smaller than this
parameter value are treated as protected snapshots and will generally be deleted only after
unprotected snapshots are (with the only exception being a snapshot mirror (ad-hoc)
snapshot when its corresponding job is in progress). Notably, two mirroring-related snapshots
will never be deleted: the last-consistent snapshot (synchronous mirroring) and the
last-replicated snapshot on the Slave (asynchronous mirroring).
Note: The deletion priority of mirroring-related snapshots is set implicitly by the system
and cannot be customized by the user.
The deletion priority of the asynchronous mirroring last-replicated and most-recent
snapshots on the master is set to 1.
The deletion priority of the asynchronous mirroring last-replicated snapshot on the
slave and the synchronous mirroring last-consistent snapshot is set to 0.
By default the parameter protected_snapshot_priority in pool_config_snapshots is 0.
Non-mirrored snapshots are created by default with a deletion priority of 1.
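As an illustration, raising the protected priority for a pool could look like the following XCLI call. The pool name is a placeholder, and the exact parameter spelling (in particular pool=) is an assumption to be checked against the XCLI reference:
pool_config_snapshots pool=ITSO_Pool protected_snapshot_priority=2
With this setting, snapshots in ITSO_Pool that have a deletion priority of 1 would be treated as protected and deleted only after unprotected snapshots.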
We explain how to bring snapshot target volumes online to the same host as well as to a
second host. This chapter covers various UNIX® platforms and VMware.
For AIX LVM, it is currently not possible to activate a Volume Group with a physical volume
that contains a VGID and a PVID that is already used in a Volume Group existing on the
same server. The restriction still applies even if the hdisk PVID is cleared and reassigned with
the two commands listed in Example 6-1.
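Example 6-1 itself is not reproduced here, but the two commands in question are typically the standard AIX chdev calls that clear and then reassign the PVID on an hdisk (the hdisk number is a placeholder):
#chdev -l <hdisk#> -a pv=clear
#chdev -l <hdisk#> -a pv=yes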
Therefore, it is necessary to redefine the Volume Group information about the snapshot
volumes using special procedures or the recreatevg command. This will alter the PVIDs and
VGIDs in all the VGDAs of the snapshot volumes so that there are no conflicts with existing
PVIDs and VGIDs on existing Volume Groups that reside on the source volumes. If you do
not redefine the Volume Group information prior to importing the Volume Group, then the
importvg command will fail.
If the host is using LVM or MPIO definitions that work with hdisks only, follow these steps:
1. The snapshot volume (hdisk) is new to AIX, and therefore the Configuration Manager
should be run on the specific Fibre Channel adapter:
#cfgmgr -l <fcs#>
2. Determine which of the physical volumes is your snapshot volume:
#lsdev -C | grep 2810
3. Verify that PVIDs have been set on all hdisks that will belong to the new Volume Group.
Check this information using the lspv command. If they were not set, run the following
command for each one to avoid failure of the importvg command:
#chdev -l <hdisk#> -a pv=yes
4. Import the snapshot Volume Group:
#importvg -y <volume_group_name> <hdisk#>
The data is now available and you can, for example, back up the data residing on the
snapshot volume to a tape device.
The disks containing the snapshot volumes might have been previously defined to an AIX
system, for example, if you periodically create backups using the same set of volumes. In this
case, there are two possible scenarios:
If no Volume Group, file system, or logical volume structure changes were made, use
procedure 1 to access the snapshot volumes from the target system.
If some modifications to the structure of the Volume Group were made, such as changing
the file system size or modifying logical volumes (LV), then it is recommended to use
procedure 2.
Procedure 1
To access the snapshot volumes from the target system if no Volume Group, file system, or
logical volume structure changes were made follow these steps:
1. Unmount all the source file systems:
#umount <source_filesystem>
2. Unmount all the snapshot file systems:
#umount <snapshot_filesystem>
3. Deactivate the snapshot Volume Group:
#varyoffvg <snapshot_volume_group_name>
4. Create the snapshots on the XIV.
5. Mount all the source file systems:
#mount <source_filesystem>
6. Activate the snapshot Volume Group:
#varyonvg <snapshot_volume_group_name>
7. Perform a file system consistency check on the file systems:
#fsck -y <snapshot_file_system_name>
8. Mount all the file systems:
#mount <snapshot_filesystem>
Procedure 2
If some modifications have been made to the structure of the Volume Group, use the
following steps to access the snapshot volumes:
1. Unmount all the snapshot file systems:
#umount <snapshot_filesystem>
If you are using the same host to work with source and target volumes, you have to use the
recreatevg command.
The recreatevg command overcomes the problem of duplicated LVM data structures and
identifiers caused by a disk duplication process such as snapshot. It is used to recreate an
AIX Volume Group (VG) on a set of target volumes that are copied from a set of source
volumes belonging to a specific VG. The command will allocate new physical volume
identifiers (PVIDs) for the member disks and a new Volume Group identifier (VGID) to the
Volume Group. The command also provides options to rename the logical volumes with a
prefix you specify, and options to rename labels to specify different mount points for file
systems.
The target volume group will be tgt_snap_vg; it will contain the snapshots of hdisk2 and
hdisk3.
6. Create the target volume group, prefix all file system path names with /backup, and prefix
all AIX logical volumes with bkup:
recreatevg -y tgt_snap_vg -L /backup -Y bkup <snapshot_hdisk#> <snapshot_hdisk#>
You must specify the hdisk names of all disk volumes participating in the volume group.
The output from lspv, shown in Example 6-3, illustrates the new volume group definition.
7. An extract from /etc/filesystems in Example 6-4 shows how recreatevg generates a new
file system stanza. The file system named /prodfs in the source Volume Group is
renamed to /backup/prodfs in the target volume group. Also, the directory /backup/prodfs
is created. Notice also that the logical volume and JFS log logical volume have been
renamed. The remainder of the stanza is the same as the stanza for /prodfs.
8. Perform a file system consistency check for all target file systems:
#fsck -y <target_file_system_name>
9. Mount the new file systems belonging to the target volume group to make them
accessible.
When the volumes are in a consistent state, the secondary volumes can be configured
(cfgmgr) into the target system’s customized device class (CuDv) of the ODM. This brings
them online to the target host as new hdisk devices.
If the secondary volumes were previously defined on the target AIX system, but the original
Volume Group was removed from the primary volumes, the old volume group and disk
definitions must be removed (exportvg and rmdev) from the target volumes and redefined
(cfgmgr) before running importvg again to get the new volume group definitions. If this is not
done first, importvg will import the volume group improperly. The volume group data
structures (PVIDs and VGID) in ODM will differ from the data structures in the VGDAs and
disk volume super blocks. The file systems will not be accessible.
When the updates have been imported into the secondary AIX host’s ODM, you can establish
the Remote Mirror and Copy pair again. As soon as the Remote Mirroring pair has been
established, immediately suspend the Remote Mirroring. Because there was no write I/O to
the primary volumes, both the primary and secondary are consistent.
The following example shows two systems, host1 and host2, where host1 has the primary
volume hdisk5 and host2 has the secondary volume hdisk16. Both systems have had their
ODMs populated with the volume group itsovg from their respective Remote Mirror and Copy
volumes and, prior to any modifications, both systems’ ODM have the same time stamp, as
shown in Example 6-5.
Volumes hdisk5 and hdisk16 are in the synchronized state, and the volume group itsovg on
host1 is updated with a new logical volume. The time stamp on the VGDA of the volumes gets
updated and so does the ODM on host1, but not on host2.
To update the ODM on the secondary server, it is advisable to suspend the Remote Mirror
and Copy pair prior to performing the importvg -L command to avoid any conflicts from LVM
actions occurring on the primary server. When the importvg -L command has completed,
you can reestablish the Remote Mirror.
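Using the names from this example, the ODM update on host2 might look like the following sketch (run after the mirror pair has been suspended):
#importvg -L itsovg hdisk16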
If you want to make both the source and target available to the machine at the same time, it is
necessary to change the private region of the disk, so that VERITAS Volume Manager allows
the target to be accessed as a different disk. Here we explain how to simultaneously mount
snapshot source and target volumes to the same host without exporting the source volumes
It is assumed that the sources are constantly mounted to the SUN host, the snapshot is
performed, and the goal is to mount the copy without unmounting the source or rebooting.
Use the following procedure to mount the targets to the same host. The process, shown in
Example 6-7, refers to these names:
– vgsnap2: The name of the disk group that is being created.
– vgsnap: The name of original disk group.
1. To discover the newly available disks issue the command:
# vxdctl enable
2. Check that the new disks are available. The new disks should be presented in output as
online disks with the udid_mismatch status.
# vxdisk list
3. Import an available disk onto the host in a new disk group using the vxdg command.
# vxdg -n <name of the new disk group> -o useclonedev=on,updateid -C import <name of
the original disk group>
4. Apply the journal log to the volume located in the disk group.
#vxrecover -g <name of new disk group> -s <name of the volume>
5. Mount the file system located in disk groups.
# mount /dev/vx/dsk/<name of new disk group>/<name of the volume> /<mount point>
Example 6-7 Importing the snapshot on the same host while the original disk group remains in use
# vxdctl enable
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
xiv0_0 auto:cdsdisk vgxiv02 vgxiv online
xiv0_4 auto:cdsdisk vgsnap01 vgsnap online
xiv0_5 auto:cdsdisk vgsnap02 vgsnap online
xiv0_8 auto:cdsdisk - - online udid_mismatch
xiv0_9 auto:cdsdisk - - online udid_mismatch
xiv1_0 auto:cdsdisk vgxiv01 vgxiv online
# vxdg -n vgsnap2 -o useclonedev=on,updateid -C import vgsnap
VxVM vxdg WARNING V-5-1-1328 Volume lvol: Temporarily renumbered due to conflict
# vxrecover -g vgsnap2 -s lvol
# mount /dev/vx/dsk/vgsnap2/lvol /test
# ls /test
VRTS_SF_HA_Solutions_5.1_Solaris_SPARC.tar
VRTSaslapm_Solaris_5.1.001.200.tar
VRTSibmxiv-5.0-SunOS-SPARC-v1_307934.tar.Z
lost+found
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
xiv0_0 auto:cdsdisk vgxiv02 vgxiv online
xiv0_4 auto:cdsdisk vgsnap01 vgsnap online
xiv0_5 auto:cdsdisk vgsnap02 vgsnap online
xiv0_8 auto:cdsdisk vgsnap01 vgsnap2 online clone_disk
xiv0_9 auto:cdsdisk vgsnap02 vgsnap2 online clone_disk
xiv1_0 auto:cdsdisk vgxiv01 vgxiv online
After the secondary volumes have been assigned, reboot the Solaris server using reboot --
-r or, if a reboot is not immediately possible, then issue devfsadm. However, a reboot is
recommended for reliable results.
Use the following procedure to mount the secondary volumes to another host:
1. Scan devices in the operating system device tree:
#vxdctl enable
2. List all known disk groups on the system:
#vxdisk -o alldgs list
3. Import the Remote Mirror disk group information:
#vxdg -C import <disk_group_name>
4. Check the status of volumes in all disk groups:
#vxprint -Ath
5. Bring the disk group online:
#vxvol -g <disk_group_name> startall
or
#vxrecover -g <disk_group_name> -sb
6. Perform a consistency check on the file systems in the disk group:
#fsck -F vxfs /dev/vx/dsk/<disk_group_name>/<volume_name>
7. Mount the file system for use:
#mount -F vxfs /dev/vx/dsk/<disk_group_name>/<volume_name> /<mount_point>
When you have finished with the mirrored volume, we recommend that you perform the
following tasks:
1. Unmount the file systems in the disk group:
#umount /<mount_point>
2. Take the volumes in the disk group offline:
#vxvol -g <disk_group_name> stopall
3. Export disk group information from the system:
#vxdg deport <disk_group_name>
This procedure must be repeated each time you perform a snapshot and want to use the
target physical volume on the same host where the snapshot source volumes are present in
the Logical Volume Manager configuration. The procedure can also be used to access the
target volumes on another HP-UX host.
Target preparation
Follow these steps to prepare the target system:
1. If you did not use the default Logical Volume Names (lvolnn) when they were created,
create a map file of your source volume group using the vgexport command with the
preview (-p) option:
#vgexport -p -m <map file name> /dev/<source_vg_name>
Tip: If the target volumes are accessed by a secondary (or target) host, this map file
must be copied to the target host.
2. If the target volume group exists, remove it using the vgexport command. The target
volumes cannot be members of a Volume Group when the vgimport command is run.
#vgexport /dev/<target_vg_name>
3. Shut down or quiesce any applications that are accessing the snapshot source.
Snapshot execution
Follow these steps to execute and access the snapshot:
1. Quiesce or shut down the source HP-UX applications to stop any updates to the primary
volumes.
2. Perform the XIV snapshot.
3. When the snapshot is finished, change the Volume Group ID on each XIV volume in the
snapshot target. The volume ID for each volume in the snapshot target volume group
must be modified on the same command line. Failure to do this will result in a mismatch of
Volume Group IDs within the Volume Group. The only way to resolve this issue is to
perform the snapshot again and reassign the Volume Group IDs using the same
command line:
vgchgid -f </dev/rdsk/c#t#d#_1>...</dev/rdsk/c#t#d#_n>
When access to the snapshot target volumes is no longer required, unmount the file systems
and deactivate (vary off) the volume group:
#vgchange -a n /dev/<target_vg_name>
If no changes are made to the source volume group prior to the subsequent snapshot, then
all that is needed is to activate (vary on) the volume group and perform a full file system
consistency check, as shown in steps 7 and 8.
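If those steps are not at hand, they amount to the following sketch (the file system type, shown here as vxfs, and the logical volume name are placeholders; adjust them to what is actually in use):
#vgchange -a y /dev/<target_vg_name>
#fsck -F vxfs -y /dev/<target_vg_name>/r<lvol_name>
#mount /dev/<target_vg_name>/<lvol_name> /<mount_point>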
Follow these steps to bring Remote Mirror target volumes online to secondary HP-UX hosts:
1. Quiesce the source HP-UX application to cease any updates to the primary volumes.
2. Change the role of the secondary volumes to master to enable host access.
3. Rescan for hardware configuration changes using the ioscan -fnC disk command. Check
that the disks are CLAIMED using ioscan -funC disk. The reason for doing this is that the
volume group might have been extended to include more physical volumes.
4. Create the Volume Group for the Remote Mirror secondary. Use the lsdev -C lvm
command to determine what the major device number should be for Logical Volume
Manager objects. To determine the next available minor number, examine the minor
number of the group file in each volume group directory using the ls -l command.
If changes are made to the source volume group, they should be reflected in the /etc/lvmtab
of the target server. Therefore, it is recommended that periodic updates be made to make the
lvmtab on both source and target machines consistent.
Use the procedure just described, but include the following steps after step 5 before
activating the volume group:
a. On the source HP-UX host export the source volume group information into a map file
using the preview option:
#vgexport -p -m <map file name>
b. Copy the map file to the target HP-UX host.
c. On the target HP-UX host export the volume group.
d. Re-create the volume group using the HP-UX mkdir and mknod commands (see the
sketch after this list).
e. Import the Remote Mirror target volumes into the newly created volume group using
the vgimport command.
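A minimal sketch of steps d and e follows. The LVM major number (usually 64, as reported by lsdev -C lvm) and the minor number 0x<NN>0000 are placeholders that must be chosen so they do not conflict with existing volume groups:
#mkdir /dev/<target_vg_name>
#mknod /dev/<target_vg_name>/group c 64 0x<NN>0000
#vgimport -m <map file name> /dev/<target_vg_name> /dev/dsk/c#t#d# ...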
When access to the Remote Mirror target volumes is no longer required, unmount the file
systems and deactivate (vary off) the volume group:
#vgchange -a n /dev/<target_vg_name>
Where appropriate, reactivate the XIV Remote Mirror in normal or reverse direction. If copy
direction is reversed, the master and slave roles and thus the source and target volumes are
also reversed.
When using Copy Services with the guest operating systems, the restrictions of the guest
operating system still apply. In some cases, using Copy Services in a VMware environment
might impose additional restrictions.
For the target machine, typically the target volumes must be unmounted. This prevents the
operating system from accidentally corrupting the target volumes with buffered writes, as well
as preventing users from accessing the target LUNs until the snapshot is logically complete.
With VMware, there is an additional restriction that the target virtual machine must be shut
down before issuing the snapshot. VMware also performs caching, in addition to any caching
the guest operating system might do. To be able to use the FlashCopy target volumes with
ESX Server, you need to ensure that the ESX Server can see the target volumes. In addition
to checking the SAN zoning and the host attachment within the XIV, you might need a SAN
rescan issued by the Virtual Center.
If the snapshotted LUNs contain a VMFS file system, the ESX host will detect this on the
target LUNs and add them as a new datastore to its inventory. The VMs stored on this
datastore can then be opened on the ESX host. To assign the existing virtual disks to new
VMs, in the Add Hardware Wizard panel, select Use an existing virtual disk and choose the
.vmdk file you want to use. See Figure 6-1 on page 178.
If the snapshotted LUNs were assigned as RDMs, the target LUNs can be assigned to a VM
by creating a new RDM for this VM. In the Add Hardware Wizard panel, select Raw Device
Mapping and use the same parameters as on the source VM.
Note: If you do not shut down the source VM, reservations might prevent you from using
the target LUNs.
Because a VMFS datastore can contain more than one LUN, the user has to make sure all
participating LUNs are mirrored using snapshot to get a complete copy of the datastore.
For details and restrictions, check the SAN Configuration Guide, at:
http://www.vmware.com/support/pubs/vi_pubs.html
The following paragraphs are valid for both compatibility modes. However, keep in mind that
extra work on the ESX host or VMs might be required for the virtual compatibility mode.
For virtual disks, this can be achieved simply by copying the .vmdk files on the VMFS
datastore. However, the copy is not available instantly as with snapshot; instead, you will
have to wait until the copy job has finished duplicating the whole .vmdk file.
Figure 6-3 Using snapshot within a VM - HDD1 is the source for target HDD2
After issuing the snapshot job, LUN 1’ can be assigned to a second VM, which then can work
with a copy of VM1's HDD1. (See Figure 6-4).
Figure 6-4 Using snapshot between two different VMs - VM1's HDD1 is the source for HDD2 in VM2
As illustrated in Figure 6-5, we are using snapshot on a consistency group that includes 2
volumes. LUN 1 is used for a VMFS datastore whereas LUN 2 is assigned to VM2 as an
RDM. These two LUNs are then copied with snapshot and attached to another ESX Server
host. In ESX host 2 we now assign the vdisk that is stored on the VMFS partition on LUN 1' to
VM3 and attach LUN 2' via RDM to VM4. By doing this we create a copy of ESX host 1’s
virtual environment and use it on ESX host 2.
Note: If you use snapshot on VMFS volumes and assign them to the same ESX Server
host, the server does not allow the target to be used because the VMFS volume identifiers
have been duplicated. To circumvent this, VMware ESX server provides the possibility of
VMFS Volume Resignaturing. For details about resignaturing, check page 112 and onward
in the SAN Configuration Guide, available at:
http://www.vmware.com/support/pubs/vi_pubs.html
As with snapshots, using VMware with Remote Mirror contains all the advantages and
limitations of the guest operating system. See the individual guest operating system sections
for relevant information.
However, it might be possible to use raw System LUNs in physical compatibility mode. Check
with IBM on the supportability of this procedure.
In Figure 6-6 we have a scenario similar to one shown in Figure 6-5, but now the source and
target volumes are located on two different XIVs. This setup can be used for disaster recovery
solutions where ESX host 2 would be located in the backup data center.
In addition, we support integration of VMware Site Recovery Manager with IBM XIV Storage
System over IBM XIV Site Replication Adapter for VMware SRM.
A partition of a POWER server can host one of the following operating systems: IBM i, Linux,
or AIX. The partition is configured and managed through a Hardware Management Console
(HMC) that is connected to the POWER server via an Ethernet connection.
In the remainder of this chapter, we refer to an IBM i partition in a POWER server or Blade
server simply as a partition.
Main Memory
When the application performs an input/output (I/O) operation, the portion of the program that
contains read or write instructions is first brought into main memory, where the instructions
are then executed.
Similarly, writing a new record or updating an existing record is done in main memory, and
the affected pages are marked as changed. A changed page remains in main memory until it
is swapped to disk as a result of a page fault. Pages are also written to disk when a file is
closed or when write to disk is forced by a user through commands and parameters. Also,
database journals are written to the disk.
An object in IBM i is anything that exists and occupies space in storage and on which
operations can be performed. For example, a library, a database file, a user profile, and a
program are all objects in IBM i.
However, for many years, customers have found the need for additional storage granularity,
including the need to sometimes isolate data into a separate disk pool. This is possible with
User ASPs. User ASPs provide the same automation and ease-of-use benefits as the System
ASP, but provide additional storage isolation when needed. With software level Version 5,
IBM i takes this storage granularity option a huge step forward with the availability of
Independent Auxiliary Storage Pools (IASPs).
For requirements for IBM i Boot from SAN with XIV refer to the paper IBM XIV Storage
System with the Virtual I/O Server and IBM i, REDP-4598.
Boot from SAN support enables IBM i customers to take advantage of Copy Services
functions in XIV. These functions allow users to perform an instantaneous copy of the data
held on XIV logical volumes. Therefore, when they have a system that only has external
LUNs with no internal drives, they are able to create a clone of their IBM i system.
Important: When we refer to a clone, we are referring to a copy of an IBM i system that
only uses external LUNs. Boot (or IPL) from SAN is therefore a prerequisite for this
function.
Note: Besides cloning, IBM i provides another way of using Copy Services on external
storage: copying of an Independent Auxiliary Storage Pool (IASP) in a cluster.
Implementations with IASP are not supported on XIV.
Figure 7-2 shows the Display Disk Configuration Status screen in IBM i System Service Tools
(SST).
As was noted in “Single-level storage” on page 186, IBM i data is kept in the main memory
until it is swapped to disk as a result of a page fault. Before cloning the system with snapshots
it is necessary to make sure that the data was flushed from memory to disk. Otherwise the
backup system that is then IPLed from the snapshots would not be up-to-date with the
production system; even more important, the backup system might use inconsistent data,
which can cause the IPL to fail.
Some IBM i customers prefer to power-down their systems before creating or overwriting the
snapshots, to make sure that the data is flushed to disk. Or, they force the IBM i system to a
restricted state before creating snapshots.
When cloning the IBM i, you should use this function to quiesce the SYSBAS, which means
quiescing all ASPs except independent ASPs. If there are independent ASPs in your system,
they should be varied-off before cloning. When using this function, it is recommended to set
up the XIV volumes in a consistency group.
The details concerning both methods (powering down the IBM i and quiescing the ASP) are
provided later in this section.
Snapshots must have at least 34 GB allocated. Since the space needed depends on the size
of the LUNs and the location of write operations, we recommend initially allocating a
conservative estimate of about 80% of the source capacity to the snapshots. Then monitor
how the snapshot space grows during the backup. If the snapshots do not use all of the
allocated capacity, you can adjust the snapshot capacity to a lower value.
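As a purely illustrative example: if the IBM i consistency group holds volumes totaling 400 GB, the 80% rule of thumb gives an initial snapshot reserve of about 0.8 x 400 GB = 320 GB, which can then be trimmed once the real change rate during a backup window is known.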
For an explanation of how to monitor the snapshot capacity, refer to the IBM Redbooks
publication, IBM XIV Storage System: Architecture, Implementation, and Usage, SG24-7659.
The backup LPAR now hosts the clone of the production IBM i. Before using it for backups
make sure that it is not connected to the same IP addresses and network attributes as the
production system. For more information, refer to IBM i and IBM System Storage: A Guide to
Implementing External Disks on IBM i, SG24-7120.
Look for the IBM i message Access to ASP *SYSBAS successfully resumed, to be sure
that the command was successfully performed.
5. Unlock the snapshots in the consistency group.
This action is needed only after you create snapshots. The created snapshots are locked,
which means that a host server can only read data from them, but their data cannot be
modified. Before IPLing IBM i from the snapshots you have to unlock them to make them
accessible for writes as well. For this, use the Consistency Groups screen in the XIV GUI,
right-click the snapshot group and select Unlock from the pop-up menu.
Note that after overwriting the snapshots you do not need to unlock them again.
For details on overwriting snapshots, refer 1.2.5, “Overwriting snapshots” on page 15.
6. Connect the snapshots to the backup LPAR.
Map the snapshot volumes to VIO Systems and map the corresponding virtual disks to
IBM i adapters only the first time you use this solution. For subsequent operations the
existing mappings are used; you just have to rediscover the devices in each VIOS using
the cfgdev command.
Use the following steps to connect the snapshots in the snapshot group to a backup
partition:
a. In the Consistency Groups panel select the snapshots in the snapshot group,
right-click any of them and select Map selected volumes, as is shown in Figure 7-11.
b. In the next panel select the host or cluster of hosts to map the snapshots to. In our
example we mapped them to the two VIO Systems that connect to IBM i LPAR.
Note: Here the term cluster refers just to the host names and their WWPNs in XIV; it
does not mean that VIO Systems would be in an AIX cluster.
In each VIOS, rediscover the mapped volumes and map the corresponding devices to
the VSCSI adapters in IBM i.
7. IPL the IBM i backup system from snapshots.
IPL the backup LPAR as described in step 5 on page 193.
Note that when you power down the production system before taking snapshots, the IPL
of the backup system shows the previous system end as normal, while with quiescing
data to disk before taking snapshots, the IPL of the backup LPAR shows the previous
system end as abnormal, as can be seen in Figure 7-12 on page 199.
Automation for such a scenario can be provided in an AIX or Linux system, using XCLI scripts
to manage snapshots, and Secure Shell (SSH) commands to IBM i LPAR and the HMC.
Note: IBM i must be set up for receiving SSH commands. For instructions on how to set it
up refer to the paper Securing Communications with OpenSSH on IBM i5/OS, available at:
http://www.redbooks.ibm.com/redpapers/pdfs/redp4163.pdf
XCLI=/usr/local/XIVGUI/xcli
XCLIUSER=itso
XCLIPASS=password
XIVIP=1.2.3.4
CG_NAME=ITSO_i_CG
SNAP_NAME=ITSO_jj_snap
[email protected]
hmc_ibmi_name=IBMI_BACKUP
hmc_ibmi_prof=default_profile
hmc_ibmi_hw=power570
[email protected]
[email protected]
# Suspend IO activity
ssh ${ssh_ibmi} 'system "CHGASPACT ASPDEV(*SYSBAS) OPTION(*SUSPEND) SSPTIMO(30)"'
# overwrite snapshot
${XCLI} -u ${XCLIUSER} -p ${XCLIPASS} -m ${XIVIP} cg_snapshots_create cg=${CG_NAME} overwrite=${SNAP_NAME}
# resume IO activity
ssh ${ssh_ibmi} 'system "CHGASPACT ASPDEV(*SYSBAS) OPTION(*RESUME)"'
# rediscover devices
ssh ${ssh_vios1} 'ioscli cfgdev'
ssh ${ssh_vios2} 'ioscli cfgdev'
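The hmc_* variables defined at the top of the script are not used in the lines shown above; one possible continuation, sketched here under the assumption that the standard HMC chsysstate command is used to activate the backup LPAR from its profile, would be:
# activate the backup LPAR on the HMC (assumed continuation, not part of the original example)
ssh ${hmc_user} "chsysstate -m ${hmc_ibmi_hw} -r lpar -o on -n ${hmc_ibmi_name} -f ${hmc_ibmi_prof}"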
In the backup IBM i LPAR you must change the IP addresses and network attributes so that
they do not collide with the ones in the production LPAR. For this you can use the startup CL
program in the backup IBM i; this is explained in detail in IBM i and IBM System Storage: A
Guide to Implementing External Disks on IBM i, SG24-7120.
You might also want to automate saving to tape in BRMS, by scheduling the save in BRMS.
After the save the library QUSRBRM must be transferred to the production system.
A stand-by IBM i LPAR is needed at the DR site. After the switchover of mirrored volumes
during planned or unplanned outages, perform an IPL of the stand-by partition from the
mirrored volumes at the DR site. This ensures continuation of the production workload in the
clone.
Typically, synchronous mirroring is used for DR sites located at shorter distances, and for
IBM i centers that require a near zero Recovery Point Objective (RPO). On the other hand,
clients that use DR centers located at long distance and who can cope with a little longer
RPO might rather implement Asynchronous Remote Mirroring.
It is recommended to use consistency groups with synchronous mirroring for IBM i to simplify
management of the solution and to provide consistent data at the DR site after
re-synchronization following a link failure.
Note: When adding the mirrored volumes to the consistency group, all volumes and the
CG must have the same status. Therefore the mirrored volumes should be
synchronized before you add them to the consistency group, and the CG should be
activated, so that all of them have status Synchronized.
In the primary XIV system select the IBM i mirrored volumes, right-click and select Add to
Consistency Group.
Figure 7-14 shows the consistency group in synchronized status for our scenario.
Note: Switching mirroring roles is suitable for planned outages during which the IBM i
system is powered-down. For planned outages with IBM i running, changing the mirroring
roles is more appropriate.
Confirm switching the roles in your consistency group by clicking OK in the Switch Roles
pop-up dialog. Once the switch is performed, the roles of mirrored volumes are reversed:
the IBM i mirroring consistency group on the primary XIV is now the Slave and the
consistency group on the secondary XIV is now the Master. This is shown in Figure 7-16,
where you can also observe that the status of CG at the primary site is now Consistent,
and at the secondary site, it is Synchronized.
After the production site is available again, you can switch back to the regular production site,
by executing the following steps:
1. Power-down the DR IBM i system as is described previously (step 1, “Power-down the IBM
i production system.” on page 191).
2. Switch the mirroring roles of XIV volumes as is described previously (step 2, “Switch the
XIV volumes mirroring roles.” on page 204).
Note: When switching back to the production site you must initiate the role switching on
the secondary (DR) XIV, since role switching must be done on the master peer.
3. In each VIOS at the primary site, rediscover the mirrored primary volumes by issuing the
cfgdev command.
4. Perform an IPL of the production IBM i LPAR as is described previously (step 5, “IPL the
IBM i backup system from snapshots.” on page 193). Since the DR IBM i was
powered-down, the IPL of its clone in the production site is now normal (previous system
shutdown was normal).
If both production and DR IBM i are in the same IP network it is necessary to change the
IP addresses and network attributes of the clone at the DR site. For more information
about this refer to IBM i and IBM System Storage: A Guide to Implementing External Disks
on IBM i, SG24-7120.
When the production site is back, fail back to the normal production system as follows:
1. Change the role of primary peer to slave.
On the primary XIV, select Remote Mirroring in the GUI, right-click the consistency
group of IBM i volumes, and select Deactivate from the pop-up menu, then right-click
again and select Change Role. Confirm the change of the peer role from master to slave.
Now the mirroring is still inactive, and the primary peer became the slave, so the scenario
is prepared for mirroring from the DR site to the production site. The primary peer status is
shown in Figure 7-22.
In this solution, the entire disk space of the production IBM i LPAR resides on the XIV to allow
boot from SAN. Asynchronous Remote Mirroring for all XIV volumes belonging to the
production partition is established with another XIV located at the remote site.
In case of an outage at the production site a remote stand-by IBM i LPAR takes over the
production workload with the capability to IPL from Asynchronous Remote Mirroring
secondary volumes.
Because of the XIV Asynchronous Remote Mirroring design, the impact on production
performance is minimal; however, the recovered data at the remote site is typically lagging
production data due to the asynchronous nature, although usually just slightly behind. For
more information about the XIV Asynchronous Remote Mirroring design and implementation,
refer to “Remote Mirroring” on page 47.
3. Create a consistency group for mirroring on both the primary and secondary XIV systems,
and activate mirroring on the CG, as described previously (step 4 on page 203).
Note that when activating the asynchronous mirroring for the CG, you must select the
same options selected when activating the mirroring for the volumes.
4. Add the mirrored IBM i volumes to the consistency group as is described previously (step
5, “Add the mirrored volumes to the consistency group.” on page 203).
When you need to switch to the DR site for planned outages or as a result of a disaster,
perform the following steps:
1. Change the role of the secondary peer from slave to master.
Select Remote Mirroring in the GUI for the secondary XIV. Right-click the mirrored
consistency group and select Change Role. Confirm changing the role of the slave peer
to master.
2. Make the mirrored secondary volumes available to the stand-by IBM i.
We assume that the physical connections from XIV to the POWER server on the DR site
are already established at this point. Re-discover the XIV volumes in each VIOS with the
cfgdev command, then map them to the virtual adapter of IBM i, as described previously
(step 3, “Make the mirrored secondary volumes available to the stand-by IBM i.” on
page 205).
3. IPL the IBM i and continue the production workload at the DR site, as described previously
(step 3, “IPL the IBM i at the DR site.” on page 207).
At a high level, the steps to migrate to XIV using the XIV Data Migration function are:
1. Establish connectivity between the source device and XIV. The source storage device
must have Fibre Channel or iSCSI connectivity with the XIV.
2. Identify in detail the configuration of the LUNs to be migrated.
3. Perform the data migration:
– Stop all host I/O to the original source LUNs and unconfigure them from the host.
– Start data migration in XIV.
– Map new LUNs to the host and discover new LUNs through XIV.
– Start all I/O on the new XIV LUNs.
The IBM XIV Data Migration solution offers a smooth data transfer because it:
Requires only a single short outage to switch LUN ownership. This enables the immediate
connection of a host server to the XIV Storage System, providing the user with direct
access to all the data before it has been copied to the XIV Storage System.
Synchronizes data between the two storage systems using transparent copying to the XIV
Storage System as a background process with minimal performance impact.
Supports data migration from practically all storage vendors.
Can use either Fibre Channel or iSCSI connectivity.
Can be used to migrate SAN boot volumes.
The XIV Storage System manages the data migration by simulating host behavior. When
connected to the storage device containing the source data, XIV looks and behaves like a
SCSI initiator, which in common terms means that it acts like a host server. After the
connection is established, the storage device containing the source data believes that it is
receiving read or write requests from a host, when in fact it is the XIV Storage System doing a
block-by-block copy of the data, which the XIV is then writing onto an XIV volume.
It is important that the connections between the two storage systems remain intact during the
entire migration process. If at any time during the migration process the communication
between the storage systems fails, the process also fails. In addition, if communication fails
after the migration reaches synchronized status, writes from the host will fail if the source
updating option was chosen. The situation is further explained in 8.2, “Handling I/O
requests” on page 215. The process of migrating data is performed at a volume level, as a
background process.
The data migration facility in XIV firmware revisions 10.1 and later supports the following:
Up to four migration targets can be configured on an XIV (where a target is either one
controller in an active/passive storage device or one active/active storage device). XIV
firmware revision 10.2.2 increased the number of targets to 8. The target definitions are
used for both Remote Mirroring and data migration. Both Remote Mirroring and data
migration functions can be active at the same time. An active/passive storage device with
two controllers can use two target definitions unless only one of the controllers is used for
the migration.
The XIV can communicate with host LUN IDs ranging from 0 to 512 (in decimal). This
does not necessarily mean that the non-XIV disk system can provide LUN IDs in that
range. You might be restricted by the ability of the non-XIV storage controller to use only
16 or 256 LUN IDs depending on hardware vendor and device.
Up to 4000 LUNs can be concurrently migrated.
Important: During the discussion in this chapter, the source system in a data migration
scenario is referred to as a target when setting up paths between the XIV Storage System
and the donor storage (the non-XIV storage). This terminology is also used in Remote
Mirroring, and both functions share the same terminology for setting up paths for
transferring data.
The XIV Storage System handles all host server write requests and the non-XIV disk system
is now transparent to the host. All write requests are handled using one of two user-selectable
methods, chosen when defining the data migration. The two methods are known as source
updating and no source updating.
Source updating
This method for handling write requests ensures that both storage systems (XIV and non-XIV
storage) are updated when a write I/O is issued to the LUN being migrated. By doing this the
source system remains updated during the migration process, and the two storage systems
remain in sync after the background copy process completes. Similar to synchronous Remote
Mirroring, the write commands are only acknowledged by the XIV Storage System to the host
after writing the new data to the local XIV volume, then writing to the source storage device,
and then receiving an acknowledgement from the non-XIV storage device.
An important aspect of selecting this option is that if there is a communication failure between
the target and the source storage systems or any other error that causes a write to fail to the
source system, the XIV Storage System also fails the write operation to the host. By failing
the update, the systems are guaranteed to remain consistent. Change management
requirements determine whether you choose to use this option.
No source updating
This method for handling write requests ensures that only the XIV volume is updated when a
write I/O is issued to the LUN being migrated. This method for handling write requests
decreases the latency of write I/O operations because write requests are only written to the
XIV volume and are not written to the non-XIV storage system. It must be clearly understood
that this limits your ability to back out a migration, unless you have another way of recovering
updates that were written to the volume being migrated after migration began. If the host is
being shut down for the duration of the migration then this risk is mitigated.
Note: It is not recommended to “Keep source updated” if migrating a boot LUN. This is so
you can quickly back out of a migration of the boot device if a failure occurs.
Important: If multiple paths are created between an XIV and an active/active storage
device, the same SCSI LUN IDs must be used for each LUN on each path, or data
corruption might occur. It is also recommended that a maximum of two paths per target is
configured. Defining more paths will not increase throughput. With some storage arrays,
defining more paths adds complexity and increases the likelihood of configuration issues
and corruption.
Note: If your controller has two target ports (DS4700, for example), both can be defined as
links for that controller target. Make sure that the two target links are connected to
separate XIV modules; this keeps one link available if a module fails.
Note: Certain examples shown in this chapter are from a DS4000® active/passive
migration with each DS4000 controller defined independently as a target to the XIV
Storage System. If you define a DS4000 controller as a target, do not define the alternate
controller as a second port on the first target. Doing so causes unexpected issues such as
migration failure, preferred path errors on the DS4000, or slow migration progress.
It is also possible that the host might be attached via one medium (such as iSCSI), whereas
the migration occurs via the other (such as Fibre Channel). The host-to-XIV connection
method and the data migration connection method are independent of each other.
Depending on the non-XIV storage device vendor and device, it might be easier to zone the
XIV to the ports where the volumes being migrated are already present. In this manner no
reconfiguration of the non-XIV storage device might be required. For example, in EMC
Symmetrix/DMX environments, it is easier to zone the fiber adapters (FAs) to the XIV where
the volumes are already mapped.
Figure 8-4 depicts a fabric-attached configuration. It shows that module 4 port 4 is zoned to a
port on the non-XIV storage via fabric A. Module 7 port 4 is zoned to a port on the non-XIV
storage via fabric B.
If you have already zoned the XIV to the non-XIV storage device, then the WWPNs of the XIV
initiator ports (which end in the number 3) will appear in the WWPN drop-down list. This is
dependent on the non-XIV storage device and storage management software. If they are not
there you must manually add them (this might imply that you need to map a LUN0, or that the
SAN zoning has not been done correctly).
The XIV must be defined as a Linux or Windows host to the non-XIV storage device. If the
non-XIV device offers several variants of Linux, you can choose SuSE Linux or RedHat Linux
or Linux x86. This defines the correct SCSI protocol flags for communication between the XIV
and non-XIV storage device. The principal criterion is that the host type must start LUN
numbering with LUN ID 0. If the non-XIV storage device is active/passive, check to see
which controller currently owns each LUN that you intend to migrate.
Note: If Create Target is disabled and cannot be clicked you have reached the
maximum number of targets; targets are both migration targets and mirror targets.
3. Click the gray line to access the migration connectivity from DS4700-ctrl-B view
(Figure 8-7).
4. Right-click the dark box that is part of the defined target and select Add Port (Figure 8-8).
a. Enter the WWPN of the first (fabric A) port on the non-XIV storage device zoned to the
XIV. There is no drop-down menu of WWPNs, so you must manually type or paste in
the correct WWPN. Be careful not to make a mistake. It is not necessary to use colons
between the bytes; it makes no difference whether you enter a WWPN as
10:00:00:c9:12:34:56:78 or 100000c912345678.
b. Click Add.
Tip: Depending on the storage controller, ensuring that LUN0 is visible on the non-XIV
storage device down the controller path that you are defining helps ensure proper
connectivity between the non-XIV storage device and the XIV. Connections from XIV to
DS4000 or EMC DMX or Hitachi HDS devices require a real disk device to be mapped as
LUN0. However, the IBM ESS 800, for instance, does not need a LUN to be allocated to
the XIV for the connection to become active (turn green in the GUI). The same is true for
EMC CLARiiON.
Note: In clustered environments you could choose to work with only one node until the
migration is complete; if so, consider shutting down all other nodes in the cluster.
3. Perform a point-in-time copy of the volume on the non-XIV storage device (if that function
is available on the non-XIV storage). This point-in-time copy is a gold copy of the data that
is quiesced prior to starting the data migration process. Do this before changing any host
drivers or installing new host software, particularly if you are going to migrate boot from
SAN volumes.
4. Unzone the host from non-XIV storage. The host must no longer access the non-XIV
storage system after the data migration is activated. The host must perform all I/O through
the XIV.
When mapping volumes to the XIV it is important to note the LUN IDs allocated by the
non-XIV storage. The methodology to do this varies by vendor and device and is
documented in greater detail in 8.12, “Device-specific considerations” on page 252.
Important: You cannot use the XIV data migration function to migrate data to a source
volume in an XIV remote mirror pair. If you need to do this, migrate the data first and
then create the remote mirror after the migration is completed.
If you want to manually create the volumes on the XIV, go to 8.5, “Manually creating the
migration volume” on page 236. However, the preferred approach is to continue with the
next step.
Important: If the non-XIV device is active/passive, then the source target system
must represent the controller (or service processor) on the non-XIV device that
currently owns the source LUN being migrated. This means that you must check,
from the non-XIV storage, which controller is presenting the LUN to the XIV.
– Source LUN: Enter the decimal value of the host LUN ID as presented to the XIV from
the non-XIV storage system. Certain storage devices present the LUN ID as hex. The
number in this field must be the decimal equivalent. Ensure that you do not
accidentally use internal identifiers that you might also see on the source storage
systems management panels. In Figure 8-11 on page 225, the correct values to use
are in the LUN column (numbered 0 to 3).
– Keep Source Updated: Check this if the non-XIV storage system source volume is to be
updated with writes from the host. In this manner all writes from the host will be written
to the XIV volume, as well as the non-XIV source volume, until the data migration
object is deleted.
Note: It is recommended that you not select Keep Source Updated if migrating the
boot LUN. This is so you can quickly back out of a migration of the boot device if a
failure occurs.
Note: Define Data Migration will query the configuration of the non-XIV storage system
and create an equal sized volume on XIV. To check if you can read from the non-XIV
source volume you need to Test Data Migration. On some active/passive non-XIV
storage systems the configuration can be read over the passive controller, but Test
Data Migration will fail.
4. Test the data migration object. Right-click to select the created data migration object and
choose Test Data Migration. If there are any issues with the data migration object the test
fails and the issues encountered are reported (Figure 8-14 on page 228).
Tip: If you are migrating volumes from a Microsoft® Cluster Server (MSCS) that is still
active, testing the migration might fail due to the reservations placed on the source LUN by
MSCS. You must bring the cluster down properly to get the test to succeed. If the cluster is
not brought down properly, errors will occur either during the test or when activated. The
SCSI reservation must then be cleared for the migration to succeed.
Note: After it is activated, the data migration can be deactivated, but after deactivating the
data migration the host is no longer able to read or write to the migration volume and all
host I/O stops. Do not deactivate the migration with host I/O running. If you want to
abandon the data migration prior to completion consult the back-out process described in
section 8.10, “Backing out of a data migration” on page 248.
Figure 8-15 shows the menu choices when right-clicking the data migration. Note the Test
Data Migration, Delete Data Migration, and Activate menu items, as these are the most-used
commands.
Important: The host cannot read the data on the non-XIV volume until the data migration
has been activated. The XIV does not pass through (proxy) I/O for a migration that is
inactive. If you use the XCLI dm_list command to display the migrations, ensure that the
word Yes appears in the Active column for every migration.
Perform host administrative procedures. The host must be configured using the XIV host
attachment procedures. These include removing any existing/non-XIV multi-pathing software
and installing the native multi-pathing drivers, recommended patches, and the XIV Host
attachment kit, as identified in the XIV Host Attachment Guides. Install the most current HBA
driver and firmware at this time. One or more reboots might be required. Documentation and
other software can be found at:
http://www.ibm.com/support/search.wss?q=ssg1*&tc=STJTAG+HW3E0&rs=1319&dc=D400&dtm
When volume visibility has been verified, the application can be brought up and operations
verified.
Note: In clustered environments, it is usually recommended that only one node of the
cluster be initially brought online after the migration is started, and that all other nodes be
offline until the migration is complete. After complete, update all other nodes (driver, host
attachment package, and so on), in the same way the primary node was during the initial
outage (see step 5 in “Perform pre-migration tasks for the host being migrated” on
page 224).
To complete the data migration, perform the steps described in this section.
After all of a volume’s data has been copied, the data migration achieves synchronization
status. After synchronization is achieved, all read requests are served by the XIV Storage
System. If source updating was selected the XIV will continue to write data to both itself and
the outgoing storage system until the data migration is deleted. Figure 8-17 shows a
completed migration.
Important: If this is an online migration, do not deactivate the data migration prior to
deletion, as this causes host I/O to stop and possibly causes data corruption.
Right-click to select the data migration volume and choose Delete Data Migration, as shown
in Figure 8-18. This can be done without host/server interruption.
Note: For safety purposes, you cannot delete an inactive or unsynchronized data
migration from the Data Migration panel. An unfinished data migration can only be deleted
by deleting the relevant volume from the Volumes → Volumes & Snapshots section in the
XIV GUI.
Every command issued in the XIV GUI is logged in a text file with the correct syntax. This is
helpful for creating scripts. If you are running the XIV GUI under Microsoft Windows, look for
a file titled guicommands_<today's date>.txt, which will be found in the following folder:
C:\Documents and Settings\<Windows user ID>\Application Data\XIV\GUI10\logs
All of the commands given on the next few pages are effectively in the order in which you
must execute them, starting with the commands to list all current definitions (which will also
be needed when you start to delete migrations).
List targets.
Syntax target_list
List target ports.
Syntax target_port_list
List target connectivity.
Syntax target_connectivity_list
List clusters.
Syntax cluster_list
List hosts.
Syntax host_list
List volumes.
Syntax vol_list
List data migrations.
Syntax dm_list
Define target (Fibre Channel only).
Syntax target_define target=<Name> protocol=FC xiv_features=no
Example target_define target=DMX605 protocol=FC xiv_features=no
Define target port (Fibre Channel only).
Syntax target_port_add fcaddress=<non-XIV storage WWPN>
target=<Name>
Example target_port_add fcaddress=0123456789012345 target=DMX605
Define target connectivity (Fibre Channel only).
Syntax target_connectivity_define
local_port=1:FC_Port:<Module:Port> fcaddress=<non-XIV
storage WWPN> target=<Name>
To make these changes permanent, update the relevant profile and make sure that you
export the variables so that they become environment variables.
Note: It is also possible to run XCLI commands without setting environment variables by
supplying the -u and -p switches on each command.
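A minimal sketch of such a profile entry follows. The variable names XIV_XCLIUSER and XIV_XCLIPASSWORD are the ones the XCLI is documented to read, but treat them (and the values) as an assumption to verify against your XCLI version:
XIV_XCLIUSER=admin
XIV_XCLIPASSWORD=<password>
export XIV_XCLIUSER XIV_XCLIPASSWORD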
After the data migrations have been defined via a script or batch job, an equivalent script or
batch job must then be run to activate them, as shown in Example 8-4.
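Example 8-4 is not reproduced here, but a hedged sketch of such an activation batch follows. It assumes that dm_activate accepts the XIV volume name in a vol= parameter and that the XCLI is pointed at the system with the -m (management IP), -u, and -p switches; adjust names and addresses for your environment:
xcli -m <XIV management IP> -u admin -p <password> dm_activate vol=Migration_vol1
xcli -m <XIV management IP> -u admin -p <password> dm_activate vol=Migration_vol2
xcli -m <XIV management IP> -u admin -p <password> dm_activate vol=Migration_vol3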
For an example of the process to determine exact volume size, consider ESS 800 volume
00F-FCA33 depicted in Figure 8-26 on page 247. The size reported by the ESS 800 web GUI
is 10 GB, which suggests that the volume is 10,000,000,000 bytes in size (because the ESS
800 displays volume sizes using decimal counting). The AIX bootinfo -s hdisk2 command
reports the volume as 9,536 MB, which is 9,999,220,736 bytes (because there are
1,048,576 bytes in a binary MB).
If you are confident that you have determined the exact size, then when creating the XIV
volume, choose the Blocks option from the Volume Size drop-down menu and enter the size
of the XIV volume in blocks. If your sizing calculation was correct, this creates an XIV volume
that is the same size as the source (non-XIV storage device) volume. Then you can define a
migration:
1. In the XIV GUI go to the floating menu Remote Migration.
2. Right-click and choose Define Data Migration (Figure 8-12 on page 227). Make the
appropriate entries and selections, then click Define.
– Destination Pool: Choose the pool from the drop-down menu where the volume was
created.
– Destination Name: Chose the pre-created volume from the drop-down menu.
– Source Target System: Choose the already defined non-XIV storage device from the
drop-down menu.
Important: If the non-XIV device is active/passive, the source target system must
represent the controller (or service processor) on the non-XIV device that currently
owns the source LUN being migrated. This means that you must check from the
non-XIV storage, which controller is presenting the LUN to the XIV.
– Source LUN: Enter the decimal value of the LUN as presented to the XIV from the
non-XIV storage system. Certain storage devices present the LUN ID as hex. The
number in this field must be the decimal equivalent.
– Keep Source Updated: Check this if the non-XIV storage system source volume is to be
updated with writes from the host. In this manner all writes from the host will be written
to the XIV volume, as well as the non-XIV source volume until the data migration object
is deleted.
3. Test the data migration object. Right-click to select the created data migration volume and
choose Test Data Migration. If there are any issues with the data migration object the test
fails reporting the issue that was found. See Figure 8-14 on page 228 for an example of
the panel.
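A hedged XCLI sketch of the same flow (create the exactly sized volume, then define, test, and activate the migration) follows. The volume and pool names are hypothetical, the target name reuses an example from earlier in this chapter, and the parameter names for vol_create (size_blocks) and dm_define (vol, target, lun, source_updating) are assumptions mirroring the GUI fields above, so confirm them against the XCLI reference:
>> vol_create vol=Migration_vol1 size_blocks=<exact size in 512-byte blocks> pool=Migration_Pool
>> dm_define vol=Migration_vol1 target=DMX605 lun=0 source_updating=yes
>> dm_test vol=Migration_vol1
>> dm_activate vol=Migration_vol1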
If the volume that you created is too small or too large you will receive an error message
when you do a test data migration, as shown in Figure 8-19. If you try to activate the
migration you will get the same error message. You must delete the volume that you manually
created on the XIV and create a new correctly sized one. This is because you cannot resize a
volume that is in a data migration pair, and you cannot delete a data migration pair unless it
has completed the background copy. Delete the volume and then investigate why your size
calculation was wrong. Then create a new volume and a new migration and test it again.
Increasing the max_initialization_rate parameter might decrease the time required to migrate
the data. However, doing so might impact existing production servers on the non-XIV storage
device. By increasing the rate parameters, more outgoing disk resources will be used to
serve migrations and less for existing production I/O. Be aware of how these parameters
affect migrations as well as production. You could always choose to only set this to a higher
value during off-peak production periods.
The rate parameters can only be set using XCLI, not via the XIV GUI. The current rate
settings are displayed by using the -x parameter, so run the target_list -x command. If the
setting is changed, the change takes effect immediately, so there is no need to deactivate
and reactivate the migrations (doing so blocks host I/O). In Example 8-5 we first
display the target list and then confirm the current rates using the -x parameter. The example
shows that the initialization rate is still set to the default value (100 MBps). We then increase
the initialization rate to 200 MBps. We could then observe the completion rate, as shown in
Figure 8-16 on page 230, to see whether it has improved.
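A hedged sketch of the commands involved follows. The target name reuses an example from earlier in this chapter, and the command name target_config_sync_rates and its max_initialization_rate parameter are assumptions to verify against the XCLI reference:
>> target_list -x
>> target_config_sync_rates target=DMX605 max_initialization_rate=200
>> target_list -x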
While the migration background copy is being processed, the value displayed in the Used
column of the Volumes and Snapshots panel drops every time that empty blocks are
detected. When the migration is completed, you can check this column to determine how
much real data was actually written into the XIV volume. In Figure 8-22 the used space on the
Windows2003_D volume is 4 GB. However, the Windows file system using this disk shown in
Figure 8-24 on page 243 shows only 1.4 GB of data. This could lead you to conclude wrongly
that the thick-to-thin capabilities of the XIV do not work.
This occurs because when files are deleted at the file-system level, the data itself is not
removed. The file system reuses this effectively free space but does not write
zeros over the old data (as doing so generates a large amount of unnecessary I/O). The end
result is that the XIV effectively copies old and deleted data during the migration. It must be
clearly understood that this makes no difference to the speed of the migration, as these
blocks have to be read into the XIV cache regardless of what they contain.
In a Windows environment you can use a Microsoft tool known as sdelete to write zeros
across deleted files. You can find this tool in the sysinternals section of Microsoft Technet.
Here is the current URL:
http://technet.microsoft.com/en-us/sysinternals/bb897443.aspx
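As an illustration only, a typical invocation zeroes the free space of the migrated drive; the exact switch depends on the sdelete version (recent releases use -z to zero free space, older releases used -c), so check the tool's own help output before running it:
C:\> sdelete -z d: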
If you instead choose to write zeros to recover space after the migration, you must first
generate large numbers of files full of zeros, which might appear to be counter-productive. It
then takes several days for the used space value to decrease after the script or application is
run, because recovery of empty space runs as a background task.
What this means is that we can resize that volume to 68 GB (as shown in the XIV GUI) and
make the volume 15 GB larger without effectively consuming any more space on the XIV. In
Figure 8-24 we can see that the migrated Windows2003_D drive is 53 GB in size
(53,678,141,440 bytes).
Because this example is for a Microsoft Windows 2003 basic NTFS disk, we can use the
diskpart utility to extend the volume, as shown in Example 8-9.
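Example 8-9 itself is not reproduced, but a minimal diskpart sequence of this kind is sketched below; the volume number is hypothetical and must be confirmed from the list volume output:
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend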
We can now confirm that the volume has indeed grown by displaying the volume properties.
In Figure 8-25 we can see that the disk is now 68 GB (68,713,955,328 bytes).
In terms of when to do the re-size, a volume cannot be resized while it is part of a data
migration. This means that the migration process must have completed and the data
migration for that volume must have been deleted before the volume can be resized.
8.9 Troubleshooting
This section lists common errors that are encountered during data migrations using the XIV
data migration facility.
The volume on the source non-XIV storage device might not have been initialized or
low-level formatted. If the volume has data on it then this is not the case. However, if you
are assigning new volumes from the non-XIV storage device then perhaps these new
volumes have not completed the initialization process. On ESS 800 storage the
initialization process can be displayed from the Modify Volume Assignments panel. In
Figure 8-26 the volumes are still 0% background formatted, so they will not be accessible
by the XIV. So for ESS 800, keep clicking Refresh Status on the ESS 800 web GUI until
the formatting message disappears.
Note: This might also happen in a cluster environment where the XIV is holding a SCSI
reservation. Make sure that all nodes of a cluster are shut down prior to starting a migration.
The XCLI command reservation_list lists all SCSI reservations held by the XIV. Should
a volume be found with reservations while all nodes are offline, the reservations can be
removed using the XCLI command reservation_clear. See the XCLI documentation for
further details.
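A hedged sketch of checking and then clearing a stale reservation follows; the vol= parameter name for reservation_clear is an assumption and the volume name is hypothetical, so verify the exact syntax in the XCLI documentation:
>> reservation_list
>> reservation_clear vol=Windows2003_D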
Refer to “Multi-pathing with data migrations” on page 217 and 8.12, “Device-specific
considerations” on page 252, for additional information.
If migrating from an EMC Symmetrix or DMX there are special considerations. Refer to
8.12.2, “EMC Symmetrix and DMX” on page 254.
8.10.3 Back-out after a data migration has been activated but is not complete
If the data migration shows in the GUI with a status of initialization or the XCLI shows it as
active=yes, then the background copy process has been started. If you deactivate the
migration in this state you will block any I/O passing through the XIV from the host server to
the migration LUN on the XIV and to the LUN on the non-XIV disk system. You must shut
down the host server or its applications first. After doing this you can deactivate the data
migration and then if desired you can delete the XIV data migration volume. Then restore the
original LUN masking and SAN fabric zoning and bring your host back up.
Important: If you chose to not allow source updating and write I/O has occurred after the
migration started, then the contents of the LUN on the non-XIV storage device will not
contain the changes from those writes. Understanding the implications of this is important
in a back-out plan.
8.10.4 Back-out after a data migration has reached the synchronised state
If the data migration shows in the GUI as having a status of synchronised, then the
background copy has completed. In this case back-out can still occur because the data
migration is not destructive to the source LUN on the non-XIV storage device. Simply reverse
the process by shutting down the host server or applications and restore the original LUN
masking and switch zoning settings. You might need to also reinstall the relevant host server
multi-path software for access to the non-XIV storage device.
Important: If you chose to not allow source updating and write I/O has occurred during the
migration or after it has completed, then the contents of the LUN on the non-XIV storage
device do not contain the changes from those writes. Understanding the implications of
this is important in a back-out plan.
When all data on the non-XIV disk system has been migrated, perform site clean up:
1. Delete all SAN zones related to the non-XIV disk.
2. Delete all LUNs on non-XIV disk and remove it from the site.
2 (Site): Run fiber cables from SAN switches to XIV for host connections and migration connections.
3 (Non-XIV storage): Select host ports on the non-XIV storage to be used for migration traffic. These ports do not have to be dedicated ports. Run new cables if necessary.
4 (Fabric switches): Create switch aliases for each XIV Fibre Channel port and any new non-XIV ports added to the fabric.
5 (Fabric switches): Define SAN zones to connect hosts to XIV (but do not activate the zones). You can do this by cloning the existing zones from host to non-XIV disk and swapping non-XIV aliases for new XIV aliases.
6 (Fabric switches): Define and activate SAN zones to connect non-XIV storage to XIV initiator ports (unless direct connected).
8 (Non-XIV storage): Define the XIV on the non-XIV storage device, mapping LUN0 to test the link.
9 (XIV): Define non-XIV storage to the XIV as a migration target and add ports. Confirm that links are green and working.
11 (XIV): Define all the host servers to the XIV (cluster first if using clustered hosts). Use a host listing from the non-XIV disk to get the WWPNs for each host.
After the site setup is complete, the host migrations can begin. Table 8-2 shows the host
migration checklist. Repeat this checklist for every host. Task numbers identified with an
asterisk and colored red must be performed with the host application offline.
1 (Host): From the host, determine the volumes to be migrated and their relevant LUN IDs and hardware serial numbers or identifiers.
2 (Host): If the host is remote from your location, confirm that you can power the host back on after shutting it down (using tools such as an RSA card or BladeCenter® manager).
3 (Non-XIV Storage): Get the LUN IDs of the LUNs to be migrated from the non-XIV storage device. Convert from hex to decimal if necessary.
5* (Host): Set the application to not start automatically at reboot. This helps when performing administrative functions on the server (upgrades of drivers, patches, and so on).
6* (Host): UNIX servers: Comment out disk mount points on affected disks in the mount configuration file. This helps with system reboots while configuring for XIV.
8* (Fabric): Change the active zoneset to exclude the SAN zone that connects the host server to non-XIV storage and include the SAN zone for the host server to XIV storage. The new zone should have been created during site setup.
10* (Non-XIV storage): Map source volumes to the XIV host definition (created during site setup).
11* (XIV): Create data migration pairing (XIV volumes created on the fly).
13* (XIV): Start XIV migration and verify it. If you want, wait for migration to finish.
14* (Host): Boot the server. (Be sure that the server is not attached to any storage.)
15* (Host): Co-existence of non-XIV and XIV multi-pathing software is supported with an approved SCORE (RPQ) only. Remove any unapproved multi-pathing software.
16* (Host): Install patches, update drivers, and HBA firmware as necessary.
17* (Host): Install the XIV Host Attachment Kit. (Be sure to note prerequisites.)
18* (Host): At this point you might need to reboot (depending on operating system).
19* (XIV): Map XIV volumes to the host server. (Use original LUN IDs.)
20* (Host)
21* (Host): Verify that the LUNs are available and that pathing is correct.
22* (Host): UNIX Servers: Update mount points for new disks in the mount configuration file if they have changed. Mount the file systems.
24* (Host): Set the application to start automatically if this was previously changed.
26 (XIV): When the volume is synchronized, delete the data migration (do not deactivate the migration).
27 (Non-XIV Storage): Un-map migration volumes away from XIV if you must free up LUN IDs.
28 (XIV): Consider re-sizing the migrated volumes to the next 17 GB boundary if the host operating system is able to use new space on a re-sized volume.
29 (Host): If the XIV volume was re-sized, use host procedures to utilize the extra space.
30 (Host): If non-XIV storage device drivers and other supporting software were not removed earlier, remove them when convenient.
When all the hosts and volumes have been migrated there are several site cleanup tasks left,
as shown in Table 8-3.
Given that the XIV supports migration from almost any storage device, it is impossible to list
the methodology to get LUN IDs from each one.
Note: Some of the newer CLARiiONs (CX3, CX4) use ALUA when presenting LUNS to
the host and therefore appear to be an active/active storage device. ALUA is effectively
masking which SP owns a LUN on the back end of the CLARiiON. Though this appears
as an active/active storage device, ALUA could cause performance issues with XIV
migrations if configured using active/active storage device best practices (that is, two
paths for each target). This is because LUN ownership could be switching from one SP
to another in succession during the migration, with each switch taking processor and
I/O cycles.
Note: You can configure two paths to the SAME SP to two different XIV interface
modules for some redundancy. This will not protect against a trespass, but might
protect from an XIV hardware or SAN path failure.
Failover mode 0
LUN0
There is a requirement for the EMC Symmetrix or DMX to present a LUN ID 0 to the XIV in
order for the XIV Storage System to communicate with the EMC Symmetrix or DMX. In many
installations, the VCM device is allocated to LUN-0 on all FAs and is automatically presented
to all hosts. In these cases, the XIV connects to the DMX with no issues. However, in newer
installations, the VCM device is no longer presented to all hosts and therefore a real LUN-0 is
required to be presented to the XIV in order for the XIV to connect to the DMX. This LUN-0
can be a dummy device of any size that will not be migrated or an actual device that will be
migrated.
LUN numbering
The EMC Symmetrix and DMX, by default, do not necessarily present volumes with LUN IDs
in the range of 0 to 512 decimal. The Symmetrix/DMX presents volumes based on the LUN ID
that was given to the volume when the volume was placed on the FA port. If a volume was
placed on the FA with a LUN ID of 90, this is how it is presented to the host by default. The
Symmetrix/DMX also presents the LUN IDs in hex. Thus, LUN ID 201 equates to decimal 513,
which is greater than 512 and is outside of the XIV's range. There are two disciplines for
migrating data from a Symmetrix/DMX where the LUN ID is greater than 512 (decimal).
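As a quick worked conversion: a hex LUN ID of 201 has the decimal value 2 x 256 + 0 x 16 + 1 = 513, and a hex LUN ID of 90 converts to 9 x 16 + 0 = 144. It is always the decimal value that must be entered in the Source LUN field of the XIV data migration definition.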
Multipathing
The EMC Symmetrix and DMX are active/active storage devices.
LUN0
There is a requirement for the HDS TagmaStore Universal Storage Platform (USP) to present
a LUN ID 0 to the XIV in order for the XIV Storage System to communicate with the HDS
device.
LUN numbering
The HDS USP uses hexadecimal LUN numbers.
Multipathing
The HDS USP is an active/active storage device.
8.12.4 HP EVA
The following requirements were determined after migration from a HP EVA 4400 and 8400.
LUN0
There is no requirement to map a LUN to LUN ID 0 for the HP EVA to communicate with the
XIV. This is because by default the HP EVA presents a special LUN known as the Console
LUN as LUN ID 0.
Multipathing
The HP EVA 4000/6000/8000 are active/active storage devices. For HP EVA 3000/5000, the
initial firmware release was active/passive, but a firmware upgrade to VCS Version 4.004
made it active/active capable. For more details see the following website:
http://h21007.www2.hp.com/portal/site/dspp/menuitem.863c3e4cbcdc3f3515b49c108973a801?ciid=aa08d8a0b5f02110d8a0b5f02110275d6e10RCRD
LUN0
There is a requirement for the DS4000 to present a LUN on LUN ID 0 to the XIV to allow the
XIV to communicate with the DS4000. It might be easier to create a new 1 GB LUN on the
DS4000 just to satisfy this requirement. This LUN does not need to have any data on it.
LUN numbering
For all DS4000 models, the LUN ID used in mapping is a decimal value between 0 and 15 or 0
and 255 (depending on the model). This means that no hex-to-decimal conversion is necessary.
Figure 8-11 on page 225 shows an example of how to display the LUN IDs.
In Example 8-10 the XCLI commands show that the target called ITSO_DS4700 has two ports,
one from controller A (201800A0B82647EA) and one from controller B (201900A0B82647EA).
This is not the correct configuration and should not be used.
Instead, two targets should have been defined, as shown in Example 8-11 on page 258. In
this example, two separate targets have been defined, each target having only one port for
the relevant controller.
Note: Although some of the DS4000 storage devices (for example, DS4700) have multiple
target ports on each controller, it will not help you to attach more target ports from the
same controller because XIV does not have multipathing capabilities. Only one path per
controller should be attached.
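A hedged sketch of the corrected definition (one target per controller, using the controller port WWPNs listed above; the target names are hypothetical) looks as follows:
>> target_define target=ITSO_DS4700_CTRL_A protocol=FC xiv_features=no
>> target_port_add fcaddress=201800A0B82647EA target=ITSO_DS4700_CTRL_A
>> target_define target=ITSO_DS4700_CTRL_B protocol=FC xiv_features=no
>> target_port_add fcaddress=201900A0B82647EA target=ITSO_DS4700_CTRL_B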
In Example 8-13 choose the host type that specifies RDAC (Windows 2000).
LUN0
There is no requirement to map a LUN to LUN ID 0 for the ESS to communicate with the XIV.
LUN numbering
The LUN IDs used by the ESS are in hexadecimal, so they must be converted to decimal
when entered as XIV data migrations. It is not possible to specifically request certain LUN
IDs. In Example 8-14 there are 18 LUNs allocated by an ESS 800 to an XIV host called
NextraZap_ITSO_M5P4. You can clearly see that the LUN IDs are hex. The LUN IDs given in
the right column were added to the output to show the hex-to-decimal conversion needed for
use with XIV. An example of how to view LUN IDs using the ESS 800 web GUI is shown in
Figure 8-26 on page 247.
Restriction: The ESS can only allocate LUN IDs in the range 0 to 255 (hex 00 to FF). This
means that only 256 LUNs can be migrated at one time on a per-target basis. In other
words, more than 256 LUNs can be migrated if more than one target is used.
Multipathing
The ESS 800 is an active/active storage device. You can define multiple paths from the XIV
to the ESS 800 for migration. Ideally, connect to more than one host bay in the ESS 800.
Because each XIV host port is defined as a separate host system, ensure that the LUN ID
used for each volume is the same. There is a check box on the Modify Volume Assignments
panel titled “Use same ID/LUN in source and target” that will assist you. Figure 8-31 on
page 266 shows a good example of two XIV host ports with the same LUN IDs.
LUN0
There is no requirement to map a LUN to LUN ID 0 for a DS6000 or DS8000 to communicate
with the XIV.
LUN numbering
The DS6000 and DS8000 use hexadecimal LUN IDs. These can be displayed using DSCLI
with the showvolgrp -lunmap xxx command, where xxx is the volume group created to assign
volumes to the XIV for data migration. Do not use the web GUI to display LUN IDs.
Using XIV DM to migrate an AIX file system from ESS 800 to XIV
In this example we migrate a file system on an AIX host using ESS 800 disks to XIV. First we
select a volume group to migrate. In Example 8-16 we select a volume group called
ESS_VG1. The lsvg command shows that this volume group has one file system mounted on
/mnt/redbk. The df -k command shows that the file system is 20 GiB in size and is 46%
used.
We now determine which physical disks must be migrated. In Example 8-17 we use the lspv
commands to determine that hdisk3, hdisk4, and hdisk5 are the relevant disks for this VG.
The lsdev -Cc disk command confirms that they are located on an IBM ESS 2105. We then
use the lscfg command to determine the hardware serial numbers of the disks involved.
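Examples 8-16 and 8-17 are not reproduced here; a sketch of the standard AIX commands behind this kind of discovery follows (the volume group and hdisk names are the ones used in this scenario):
lsvg -l ESS_VG1      # list the logical volumes and file systems in the volume group
lspv                 # show which hdisks belong to ESS_VG1
lsdev -Cc disk       # confirm the disk device types (IBM 2105 = ESS 800)
lscfg -vl hdisk3     # display the serial number of an individual disk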
Because we now know the source hardware we can create connections between the ESS
800 and the XIV and the XIV and Dolly (our host server). First, in Example 8-18 we identify
the existing zones that connect Dolly to the ESS 800. We have two zones, one for each AIX
HBA. Each zone contains the same two ESS 800 HBA ports.
We now create three new zones. The first zone connects the initiator ports on the XIV to the
ESS 800. The second and third zones connect the target ports on the XIV to Dolly (for use
after the migration). These are shown in Example 8-19. Clearly, all six ports on the XIV must
have been cabled into the SAN fabric.
We then create the migration connections between the XIV and the ESS 800. An example of
using the XIV GUI to do this was shown in “Define target connectivity (Fibre Channel only).”
on page 232. In Example 8-20 we use the XCLI to define a target, then the ports on that
target, then the connections between XIV and the target (ESS 800). Finally, we check that the
links are active=yes and up=yes. We can use two ports on the ESS 800 because it is an
active/active storage device.
Example 8-20 Connecting ESS 800 to XIV for migration using XCLI
>> target_define protocol=FC target=ESS800 xiv_features=no
Command executed successfully.
>> target_port_add fcaddress=50:05:07:63:00:c9:0c:21 target=ESS800
Command executed successfully.
>> target_port_add fcaddress=50:05:07:63:00:cd:0c:21 target=ESS800
Command executed successfully.
>> target_connectivity_define local_port=1:FC_Port:5:4
fcaddress=50:05:07:63:00:c9:0c:21 target=ESS800
Command executed successfully.
>> target_connectivity_define local_port=1:FC_Port:7:4
fcaddress=50:05:07:63:00:cd:0c:21 target=ESS800
Command executed successfully.
>> target_connectivity_list
Target Name Remote Port FC Port IP Interface Active Up
ESS800 5005076300C90C21 1:FC_Port:5:4 yes yes
ESS800 5005076300CD0C21 1:FC_Port:7:4 yes yes
We now define the XIV as a host to the ESS 800. In Figure 8-30 we have defined the two
initiator ports on the XIV (with WWPNs that end in 53 and 73) as Linux (x86) hosts called
Nextra_Zap_5_4 and NextraZap_7_4.
After the zoning changes have been done and connectivity and correct definitions confirmed
between XIV to ESS and XIV to AIX host, we take an outage on the volume group and related
file systems that are going to be migrated. In Example 8-22 we unmount the file system, vary
off the volume group, and then export the volume group. Finally, we rmdev the hdisk devices.
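A sketch of the standard AIX commands for this outage step follows (this is not a reproduction of Example 8-22; the hdisk numbers are the ones used in this scenario):
umount /mnt/redbk      # unmount the file system
varyoffvg ESS_VG1      # vary off the volume group
exportvg ESS_VG1       # export the volume group definition
rmdev -dl hdisk3       # remove the ESS hdisk definitions
rmdev -dl hdisk4
rmdev -dl hdisk5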
If the Dolly host no longer needs access to any LUNs on the ESS 800 we remove the SAN
zoning that connects Dolly to the ESS 800. In Example 8-18 on page 263 this was the zone
called ESS800_dolly_fcs0.
We now allocate the ESS 800 LUNs to the XIV, as shown in Figure 8-31 on page 266, where
volume serials 00FFCA33, 010FCA33, and 011FCA33 have been unmapped from the host
called Dolly and remapped to the XIV definitions called NextraZap_5_4 and NextraZap_7_4.
We do not allow the volumes to be presented to both the host and the XIV. Note that the LUN
IDs in the Host Port column are correct for use with XIV because they start with zero and are
the same for both NextraZap Initiator ports.
We now create the DMs and run a test on each LUN. The XIV GUI or XCLI could be used. In
Example 8-23 the commands to create, test, and activate one of the three migrations are
shown. We must run the same commands for hdisk3 and hdisk4 also.
After we create and activate all three migrations, the Migration panel in the XIV GUI looks as
shown in Figure 8-32. Note that the remote LUN IDs are 0, 1, and 2, which must match the
LUN numbers seen in Figure 8-31.
Now that the migration has been started we can map the volumes to the AIX host definition
on the XIV, as shown in Figure 8-33, where the AIX host is called Dolly.
A final check before bringing the volume group back ensures that the Fibre Channel pathing
from the host to the XIV is set up correctly. We can use the AIX lspath command against
each hdisk, as shown in Example 8-25. Note that in this example the host can connect to
port 2 on each of the XIV modules 4, 5, 6, and 7 (which is confirmed by checking the last two
digits of the WWPN).
We can also use a script provided by the XIV Host Attachment Kit for AIX, called
xiv_devlist. An example of the output is shown in Example 8-26.
Non-XIV devices
===============
Device Size Paths
-----------------------------------
hdisk1 N/A 1/1
hdisk2 N/A 1/1
Having confirmed that the disks have been detected and that the paths are good, we can now
bring the volume group back online. In Example 8-27 we import the VG, confirm that the
PVIDs match those seen in Example 8-17 on page 262, and then mount the file system.
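A sketch of the corresponding AIX commands follows; the hdisk name used for the import is hypothetical because the XIV volumes are discovered with new device numbers, so confirm the correct disk from the lspv output first:
cfgmgr                         # discover the newly mapped XIV disks
lspv                           # confirm that the PVIDs match the original ESS disks
importvg -y ESS_VG1 hdisk6     # import the volume group using one of the new XIV hdisks
mount /mnt/redbk               # mount the file system again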
After the sync is complete it is time to delete the migrations. Do not leave the migrations in
place any longer than they need to be. We can use multiple selection to perform the deletion,
as shown in Figure 8-35, taking care to delete and not deactivate the migration.
Now at the ESS 800 web GUI we can un-map the three ESS 800 LUNs from the Nextra_Zap
host definitions. This frees up the LUN IDs to be reused for the next volume group migration.
We must unmount any file systems and vary off the volume group before we start. Then we
go to the volumes section of the XIV GUI, right-click to select the 10 GB volume, and select
the Resize option. The current size appears. In Figure 8-36 the size is shown in 512 byte
blocks because the volume was automatically created by the XIV based on the size of the
source LUN on the ESS 800. If we multiply 19531264 by 512 bytes we get 10,000,007,168
bytes, which is 10 GB.
We change the sizing methodology to GB and the size immediately changes to 17 GB, as
shown in Figure 8-37. If the volume was already larger than 17 GB, then it will change to the
next interval of 17 GB. For example, a 20 GB volume shows as 34 GB.
We then get a warning message that the volume is increasing in size. Click OK to continue.
Now the volume is really 17 GB and no space is being wasted on the XIV. The new size is
shown in Figure 8-38.
We can now resize the file system to take advantage of the extra space. In Example 8-29 the
original size of the file system in 512 byte blocks is shown.
[Entry Fields]
File system name /mnt/redbk
NEW mount point [/mnt/redbk]
SIZE of file system
Unit Size 512bytes
Number of units [41943040]
We change the number of 512 byte units to 83886080 because this is 40 GB in size, as
shown in Example 8-30.
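The same growth can be performed from the command line with chfs instead of SMIT; a minimal sketch, assuming the size is given in 512-byte units (the chfs default):
chfs -a size=83886080 /mnt/redbk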
The file system has now grown. In Example 8-31 we can see the file system has grown from
20 GB to 40 GB.
The combination of SVC and XIV allows a client to benefit from the high-performance grid
architecture of the XIV while retaining the business benefits delivered by the SVC (such as
higher performance via disk aggregation, multivendor and multi-device copy services, and
data migration functions).
The order of the sections in this chapter address each of the requirements of an
implementation plan in the order in which they arise. This chapter does not, however, discuss
physical implementation requirements (such as power requirements), as they are already
addressed in the book IBM XIV Storage System: Architecture, Implementation, and Usage,
SG24-7659, found here:
http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg247659.html?Open
SVC firmware
The first SVC firmware version that supported XIV was 4.3.0.1. However, the SVC cluster
should be on at least SVC firmware Version 4.3.1.4 or, preferably, the most recent level
available from IBM. You can display the SVC firmware version by viewing the cluster
properties in the SVC GUI or by using the svcinfo lscluster command specifying the name
of the cluster. The SVC in Example 9-1 is on SVC code level 4.3.1.5.
Example 9-1 Displaying the SVC cluster code level using SVC CLI
IBM_2145:SVCSTGDEMO:admin> svcinfo lscluster SVCSTGDEMO
code_level 4.3.1.5 (build 9.16.0903130000)
XIV firmware
The XIV should be on at least XIV firmware Version 10.0.0.a. The XIV firmware version is
shown on the All Systems front page of the XIV GUI. The XIV in Figure 9-1 is on version
10.0.1.b (circled on the upper right in red).
Note that an upgrade from XIV 10.0.x.x code levels to 10.1.x.x code levels is not concurrent
(meaning that the XIV is unavailable for I/O during the upgrade).
Modules  Usable capacity (TB)  FC ports  Host ports  Interface modules with active host ports  Module with inactive host ports
6        27.26                 8         4           4:5                                       6
11       54.65                 20        10          4:5:7:8:9                                 6
12       61.74                 20        10          4:5:7:8:9                                 6
13       66.16                 24        12          4:5:6:7:8:9
14       73.24                 24        12          4:5:6:7:8:9
15       79.11                 24        12          4:5:6:7:8:9
Another way to view the activation state of the XIV interface modules is shown in Table 9-2.
As additional capacity is added to an XIV, additional XIV host ports become available. Where
a module is shown as inactive, this refers only to the host ports, not the data disks.
Module 9 host ports: Not present (6 modules), Inactive (9 and 10 modules), Active (11 to 15 modules)
Module 8 host ports: Not present (6 modules), Active (9 to 15 modules)
Module 7 host ports: Not present (6 modules), Active (9 to 15 modules)
[Figure 9-2: the XIV patch panel port map. Each interface module (4 through 9) has four Fibre Channel ports; the boxes show the MP value of each port, for example 40, 41, 42, and 43 for ports 1 through 4 of module 4, 50 through 53 for module 5, and so on up to 90 through 93 for module 9. Ports 1 and 3 are in the lower row and ports 2 and 4 in the upper row.]
In Figure 9-2, the MP value (module/port, which make up the last two digits of the WWPN) is
shown in each small box. The diagram represents the patch panel found at the rear of the XIV
rack.
To display the XIV WWPNs use the back view on the XIV GUI or the XCLI fc_port_list
command.
In the output example shown in Example 9-3 the four ports in module 4 are listed.
For availability and performance use ports 1 and 3 for SVC and general host traffic. If you
have two fabrics, place port 1 in the first fabric and port 3 in the second fabric.
You can instead choose to use ports 2 and 4, although in principle these are reserved for data
migration and remote mirroring. For that reason port 4 on each module is by default in initiator
mode. If you want to change the mode of port 4 to target mode, you can do so easily from the
XIV GUI or XCLI. However, you might also need an RPQ from IBM. Contact your IBM XIV
representative to discuss this.
Ideally, the number of MDisks presented by the XIV to the SVC should be a multiple (from one
to four times) of the number of XIV host ports. There is good math to support this.
The XIV can handle a queue depth of 1400 per Fibre Channel host port and a queue depth of
256 per mapped volume per host port:target port:volume tuple. However, the SVC sets the
following internal limits:
The maximum queue depth per MDisk is 60.
The maximum queue depth per target host port on an XIV is 1000.
Based on this knowledge, we can determine an ideal number of XIV volumes to map to the
SVC for use as MDisks by using this algorithm:
Q = ((P x C) / N) / M
Here P is the number of XIV host ports zoned to the SVC, C is the SVC limit of 1000 per XIV
target port, N is the number of SVC nodes in the cluster, M is the number of MDisks, and Q is
the resulting queue depth per MDisk.
If a 2-node SVC cluster is being used with 12 ports on IBM XIV System and 48 MDisks, this
yields a queue depth as follows:
Q = ((12 ports*1000)/2 nodes)/48 MDisks = 125
Because 125 is greater than 60, the SVC uses a queue depth of 60 per MDisk. If a 4-node
SVC cluster is being used with 12 host ports on the IBM XIV System and 48 MDisks, this
yields a queue depth as follows:
Q = ((12 ports*1000)/4 nodes)/48 MDisks = 62
Because 62 is greater than 60, the SVC uses a queue depth of 60 per MDisk.
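Applying the same formula to an 8-node cluster with the same 12 XIV host ports and 48 MDisks gives the following:
Q = ((12 ports*1000)/8 nodes)/48 MDisks = 31.25
Because 31.25 is well below the per-MDisk limit of 60, an 8-node cluster would need roughly 25 or fewer MDisks (and therefore larger volumes) before each MDisk could be driven at a queue depth of 60.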
If you have a 6-node or 8-node cluster, the formula suggests that you must use much larger
XIV volumes. However, currently available SVC firmware does not support an MDisk larger
than 2 TB, so it is simpler to continue to use the 1632 GB volume size. When using 1632 GB
volumes, there is leftover space. That space could be used for testing or for non-SVC
direct-attach hosts. If you map the remaining space to the SVC as an odd sized volume then
VDisk striping is not balanced, meaning that I/O will not be evenly striped across all XIV host
ports.
Tip: If you only provision part of the usable space of the XIV to be allocated to the SVC,
then the calculations no longer work. You should instead size your MDisks to ensure that
at least two (and up to four) MDisks are created for each host port on the XIV.
Thus, XIV is using binary sizing when creating volumes, but displaying it in decimal and then
rounding it down.
The recommended volume size for XIV volumes presented to the SVC is 1632 GB (as viewed
on the XIV GUI). There is nothing special about this volume size, it simply divides nicely to
create on average four XIV volumes per XIV host port (for queue depth purposes).
The size of a 1632 GB volume (as viewed on the XIV GUI) can be stated in four ways:
GB 1632 GB (decimal), as shown in the XIV GUI, but rounded down to the
nearest GB (see the number of bytes).
GiB 1520 GiB (binary counting, where 1 GiB = 2^30 bytes). This is exactly
1520 GiB.
Bytes 1,632,087,572,480 bytes.
Blocks 3,187,671,040 blocks (each block being 512 bytes).
Note that the SVC reports each MDisk presented by XIV as 1520 GiB. Figure 9-3 shows what
the XIV reports.
If you right-click the volume in the XIV GUI and display properties, you will be able to see that
this volume is 3,187,671,040 blocks. If you multiply 3,187,671,040 by 512 (because there are
512 bytes in a SCSI block) you will get 1,632,087,572,480 bytes. If you divide that by
1,073,741,824 (the number of bytes in a binary GiB), then you will get 1520 GiB, which is
exactly what the SVC reports for the same volume (MDisk), as shown in Example 9-4.
IBM_2145:SVCSTGDEMO:admin>svcinfo lsmdisk
id name status mode capacity ctrl_LUN_# controller_name
9 mdisk9 online unmanaged 1520.0GB 0000000000000007 XIV
In Figure 9-4 there are three volumes that will be mapped to the SVC. The first volume is
2199 GB (2 TiB), but the other two are larger than that.
When presented to the SVC, the SVC reports all three as being 2 TiB (2048 GiB), as shown
in Example 9-5.
Because there was no benefit in using larger volume sizes, do not follow this example. Always
ensure that volumes presented by the XIV to the SVC are 2199 GB or smaller (when viewed
on the XIV GUI or XCLI).
In terms of the available SVC extent sizes and the effect on maximum SVC cluster size, see
Table 9-4.
Extent size    Maximum SVC cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1024 TB
512 MB         2048 TB
1024 MB        4096 TB
2048 MB        8192 TB
To determine whether removing a managed disk controller requires quorum disk relocation,
run a script to find the MDisks that are being used as quorum disks, as shown in
Example 9-6. This script can be run safely without modification. Example 9-6 shows two
MDisks on the DS6800 and one MDisk on the DS4700.
Example 9-7 Using the svcinfo lsquorum command on SVC code level 5.1 and later
IBM_2145:mycluster:admin>svcinfo lsquorum
quorum_index status id name controller_id controller_name active
0 online 0 mdisk0 0 DS6800_1 yes
1 online 1 mdisk1 1 DS6800_1 no
2 online 2 mdisk2 2 DS4700 no
To move the quorum disk function, we specify three MDisks that will become quorum disks.
Depending on your MDisk group extent size, each selected MDisk must have between 272
MB and 1024 MB of free space. Execute the svctask setquorum commands before you start
migration. If all available MDisk space has been allocated to VDisks then you will not be able
to use that MDisk as a quorum disk. Table 9-5 shows the amount of space needed on each
MDisk.
Table 9-5 Quorum disk space requirements for each of the three quorum MDisks
Extent size (in MB)    Number of extents needed by quorum    Amount of space per MDisk needed by quorum
16 17 272 MB
32 9 288 MB
64 5 320 MB
128 3 384 MB
256 2 512 MB
1024 1 1024 MB
2048 1 2048 MB
In Example 9-8 there are three free MDisks. They are 1520 GiB in size (1632 GB).
In Example 9-9 the MDisk group is created using an extent size of 1024 MB.
All three MDisks are set to be quorum disks, as shown in Example 9-11.
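A hedged sketch of how the three quorum disks might be set follows, assuming the setquorum syntax of a -quorum index followed by the MDisk name (the MDisk names are hypothetical, and later SVC code levels replace setquorum with chquorum, so verify against your code level):
IBM_2145:SVCSTGDEMO:admin>svctask setquorum -quorum 0 mdisk9
IBM_2145:SVCSTGDEMO:admin>svctask setquorum -quorum 1 mdisk10
IBM_2145:SVCSTGDEMO:admin>svctask setquorum -quorum 2 mdisk11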
The MDisk group has now lost free space, as shown in Example 9-12.
This means that free capacity fell by 3,221,225,472 bytes, which is 3 GiB or 1 GiB per quorum
MDisk.
Note: In this example all three quorum disks were placed on a single XIV. This might not
be an ideal configuration. The web tip referred to at the start of this section has more
details about best practice, but in short you should try to use more than one managed
disk controller if possible.
3. Add the SVC host ports to the host definition of the first SVC node, as shown in
Example 9-15.
4. Add the SVC host ports to the host definition of the second SVC node, as shown in
Example 9-16.
5. Repeat steps 3 and 4 for each SVC I/O group. If you only have two nodes then you only
have one I/O group.
6. Create a storage pool. In Example 9-17 the command shown creates a pool with 8160 GB
of space and no snapshot space. The total size of the pool is determined by the volume
size that you choose to use. We do not need snapshot space because we cannot use XIV
snapshots with SVC MDisks.
Important: You must not use XIV thin provisioning pools with SVC. You must only use
regular pools. The command shown in Example 9-17 creates a regular pool (where the
soft size is the same as the hard size). This does not stop you from using thin
provisioned VDisks on the SVC.
Important: Only map volumes to the SVC cluster (not to individual nodes in the
cluster). This ensures that each SVC node sees the same LUNs with the same LUN
IDs. You must not allow a situation where two nodes in the same SVC cluster have
different LUN mappings.
Tip: The XIV GUI normally reserves LUN ID 0 for in-band management. The SVC
cannot take advantage of this, but is not affected either way. In Example 9-19 we
started the mapping with LUN ID 0, but if you used the GUI you will find that by default
you start with LUN ID 1.
9. If necessary, change the system name for XIV so that it matches the controller name used
on the SVC. In Example 9-20 we use the config_get command to determine the machine
type and serial number. Then we use the config_set command to set the system_name.
Whereas the XIV allows a long name with spaces, SVC can only use 15 characters with
no spaces.
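A hedged XCLI sketch of the host port, pool, and system name steps follows. The host, pool, and system names are hypothetical, and the parameter names for pool_create and config_set are assumptions to verify in the XCLI reference; remember that the volume mappings themselves must be made against the SVC cluster object, as noted in the Important box above:
>> host_add_port host=SVC_Node1 fcaddress=<WWPN of an SVC node port>
>> pool_create pool=SVC_Pool size=8160 snapshot_size=0
>> config_set name=system_name value=XIV_SVC_01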
3. Create an MDisk group, as shown in Example 9-22, where an MDisk group is created
using an extent size of 1024 MB.
Important: Adding a new managed disk group to the SVC might result in the SVC
reporting that you have exceeded the virtualization license limit. Although this does not
affect operation of the SVC, you will continue to receive this error message until the
situation is corrected (by either removing the MDisk group or increasing the
virtualization license). If the non-XIV disk is not being replaced by the XIV then ensure
that an additional license has been purchased. Then increase the virtualization limit
using the svctask chlicense -virtualization xx command (where xx specifies the
new limit in TB).
4. Relocate quorum disks if required as documented in “Using an XIV for SVC quorum disks”
on page 281.
5. Rename the controller from its default name. A managed disk controller is given a name
by the SVC such as controller0 or controller1 (depending on how many controllers have
already been detected). Because the XIV can have a system name defined for it, aim to
closely match the two names. Note, however, that the controller name used by SVC
cannot have spaces and cannot be more than 15 characters long. In Example 9-23
controller number 2 is renamed to match the system name used by the XIV itself (which
was set in Example 9-20 on page 285).
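A sketch of the rename follows, using the standard svctask chcontroller command; the controller ID of 2 is from this scenario and the name is hypothetical (it should match your XIV system name, shortened to 15 characters with no spaces):
IBM_2145:SVCSTGDEMO:admin>svctask chcontroller -name XIV_SVC_01 2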
Now we must follow one of the migration strategies, as described in 9.7, “Data movement
strategy overview” on page 287.
We discuss this method in greater depth in 9.8, “Using SVC migration to move data to XIV” on
page 289.
We discuss this method in greater depth in 9.9, “Using VDisk mirroring to move the data” on
page 291.
In these cases we can migrate the VDisks to image mode and take an outage to do the
relocation and extent re-size. There will be a host outage, although it can be kept short
(potentially in the order of seconds or minutes).
This method is detailed in greater depth in 9.10, “Using SVC migration with image mode” on
page 295.
We then must identify the VDisks that we are migrating. We can filter by MDisk Group ID, as
shown in Example 9-26, where there is only one VDisk that must be migrated.
We then create an MDisk group called XIV_Target using the new XIV MDisks, with the same
extent size as the source group. In Example 9-28 it is 256.
We confirm the new MDisk group is present. In Example 9-29 we are filtering by using the
new ID of 2.
9.8.3 Migration
Now we are ready to migrate the VDisks. In Example 9-30 we migrate VDisk 5 into MDisk
group 2 and then confirm that the migration is running.
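A sketch of the commands follows, using the VDisk and MDisk group IDs from this scenario; svctask migratevdisk and svcinfo lsmigrate are standard SVC CLI commands, but verify the exact options for your code level:
IBM_2145:SVCSTGDEMO:admin>svctask migratevdisk -vdisk 5 -mdiskgrp 2
IBM_2145:SVCSTGDEMO:admin>svcinfo lsmigrate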
Important: Scripts that use VDisk names or IDs will not be affected by the use of VDisk
migration, as the VDisk names and IDs do not change.
We then must identify the VDisks that we are migrating. In Example 9-33 we filter by ID.
We then create an MDisk group called XIV_Target using the new XIV MDisks (with the same
extent size as the source group, in this example 256), as shown in Example 9-35.
We confirm that the new MDisk group is present. In Example 9-36 we are filtering by using
the new ID of 2.
In Example 9-40 we can see the two copies (and also that they are not yet in sync).
If copying is going too slowly, you could choose to set a higher syncrate when you create the
copy.
You can also increase the syncrate from the default value of 50 (which equals 2 MBps) to 100
(which equals 64 MBps). This change affects the VDisk itself and is valid for any future copies.
Example 9-43 shows the syntax.
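A hedged sketch of adding the mirrored copy and raising the synchronization rate follows, using VDisk 5 and MDisk group 2 from this scenario; verify the option names for your SVC code level:
IBM_2145:SVCSTGDEMO:admin>svctask addvdiskcopy -mdiskgrp 2 5
IBM_2145:SVCSTGDEMO:admin>svctask chvdisk -syncrate 100 5
IBM_2145:SVCSTGDEMO:admin>svcinfo lsvdiskcopy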
After the estimated completion time passes, we can confirm that the copy process is
complete for VDisk 5. In Example 9-44 the sync is complete.
Important: Scripts that use VDisk names or IDs should not be affected by the use of VDisk
mirroring, as the VDisk names and IDs do not change. However, if you choose to split the
VDisk copies and continue to use copy 0, it will be a totally new VDisk with a new name
and a new ID.
Now to make a matching XIV volume we can either make an XIV volume that is larger than
the source VDisk or one that is exactly the same size. The easy solution is to create a larger
volume. Because the XIV creates volumes in 16 GiB portions (that display in the GUI as
rounded decimal 17 GB chunks), we could create a 17 GB LUN using the XIV and then map
it to the SVC (in this example the SVC host is defined by the XIV as svcstgdemo) and use the
next free LUN ID, which in Example 9-49 is LUN ID 12 (it is different every time).
The drawback of using a larger volume size is that we eventually end up using extra space.
So it is better to create a volume that is exactly the same size. To do this we must know the
size of the VDisk in bytes (by default the SVC shows the VDisk size in GiB, even though it
says GB). In Example 9-50 we first choose to display the size of the VDisk in GB.
Now that we know the size of the source VDisk in bytes, we can divide this by 512 to get the
size in blocks (there are always 512 bytes in a standard SCSI block). 10,737,418,240 bytes
divided by 512 bytes per block is 20,971,520 blocks. This is the size that we use on the XIV to
create our image mode transitional volume.
Example 9-52 shows an XCLI command run on an XIV to create a volume using blocks.
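Example 9-52 itself is not reproduced; a hedged equivalent follows, assuming vol_create accepts a size_blocks parameter (the volume and pool names are hypothetical, and the block count is the one calculated above):
>> vol_create vol=svc_image_vol size_blocks=20971520 pool=SVC_Pool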
Having created the volume, on the XIV we now map it to the SVC (using the XIV GUI or
XCLI).
Then, on the SVC, we can detect it as an unmanaged MDisk using the svctask detectmdisk
command.
In Example 9-53 we first identify the source VDisk number (by listing VDisks per MDisk
group) and then identify the candidate MDisk (by looking for unmanaged MDisks).
In Example 9-53 we identified a source VDisk(5) sized 10 GiB and a target MDisk(9) sized
16 GiB.
Now we migrate the VDisk into image mode without changing MDisk groups (we stay in
group 1, which is where the source VDisk is currently located). The target MDisk must be
unmanaged to be able to do this. If we migrate to a different MDisk group, the extent size of
the target group must be the same as the source group. The advantage of using the same
group is simplicity, but it does mean that the MDisk group contains MDisks from two different
controllers (which is not the best option for normal operations). Example 9-54 shows the
command to start the migration.
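A sketch of the command follows, using the VDisk, MDisk, and MDisk group IDs from this scenario; svctask migratetoimage is the standard SVC command for this operation, but confirm the parameter names for your code level:
IBM_2145:SVCSTGDEMO:admin>svctask migratetoimage -vdisk 5 -mdisk mdisk9 -mdiskgrp 1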
We must confirm that the VDisk is in image mode or data loss will occur in the next step. At
this point we must take an outage.
The commands shown in Example 9-56 apply to a host whose Host ID is 2 and the VDisk ID
is 5.
The MDisk is now unmanaged (even though it contains customer data) and could be mapped
to a different SVC cluster or simply mapped directly to a non-SVC host.
We now use the unmanaged MDisk to create an image mode VDisk in the new MDisk group
and map it to the relevant host. Notice in Example 9-58 that the host ID is 2 and the VDisk
number changed to 10.
We can now reboot the host (or scan for new disks) and the LUN will return with data intact.
Important: The VDisk ID and VDisk names were both changed in this example. Scripts
that use the VDisk name or ID (such as those used to automatically create flashcopies)
must be changed to reflect the new name and ID.
First we create a new managed disk group using volumes on the XIV intended to be used as
the final destination. In Example 9-59, five volumes, each 1632 GB, were created on the XIV
and mapped to the SVC. These are detected as 1520 GiB (because 1632 GB on the XIV GUI
equals 1520 GiB on the SVC GUI). At a certain point the MDisks must also be renamed from
the default names given by the SVC using the svctask chmdisk -name command.
We create an MDisk group using an extent size of 1024 MB with the five free MDisks. In
Example 9-60 MDisk group 3 is created.
We then migrate the image mode VDisk (in our case VDisk 5) into the new MDisk group (in
our case group 3), as shown in Example 9-61.
In Example 9-63, we monitor the migration and wait for it to complete (no response means
that it is complete).
We can clean up the transitional MDisk (which should now be unmanaged), as shown in
Example 9-64.
Note: If the XIV has the Capacity on Demand (CoD) feature, then no hardware change
or license key is necessary to use available capacity that has not yet been purchased.
The customer simply starts using additional capacity as required until all available
usable space is allocated. The billing process to purchase this capacity occurs
afterwards.
2. From the Pools section of the XIV GUI, right-click the relevant pool and resize it depending
on how the new capacity will be split between any pools. If all the space on the XIV is
dedicated to a single SVC then there must be only one pool.
3. From the Volumes by Pools section of the XIV GUI, add new volumes of 1632 GB until no
more volumes can be created. (There will be space left over, which can be used as
scratch space for testing and for non-SVC hosts.)
4. From the Host section of the XIV GUI, map these new volumes to the relevant SVC
cluster. This completes the XIV portion of the upgrade.
5. From the SVC, detect and then add the new MDisks to the existing managed disk group.
Alternatively, a new managed disk group could be created. Remember that every MDisk
uses a different XIV host port, so a new MDisk group ideally contains several MDisks to
spread the Fibre Channel traffic.
6. If new volumes are added to an existing managed disk group, it might be desirable to
rebalance the existing extents across the new space.
To understand why an extent rebalance might be desirable, consider that the SVC uses one
XIV host port as a preferred port for each MDisk. If a VDisk is striped across eight MDisks, then I/O from that
VDisk will be potentially striped across eight separate I/O ports on the XIV. If the space on
these eight MDisks is fully allocated, then when new capacity is added to the MDisk group,
new VDisks will only be striped across the new MDisks. If additional capacity supplying only
two new MDisks is added, then I/O for VDisks striped across just those two MDisks is only
directed to two host ports on the XIV. This means that the performance characteristics of
these VDisks might be slightly different, despite the fact that all XIV volumes effectively have
the same back end disk performance. The extent rebalance script is located here:
http://www.alphaworks.ibm.com/tech/svctools
The suggested host port to capacity ratios are shown in Table 9-6.
Modules    Usable capacity (TB)    Suggested XIV host ports
6          27                      4
9          43                      8
10         50                      8
11         54                      10
12         61                      10
13         66                      12
14         73                      12
15         79                      12
To use additional XIV host ports, run a cable from the SAN switch to the XIV and attach to the
relevant port on the XIV patch panel. Then zone the new XIV host port to the SVC cluster via
the SAN switch. No commands need to be run on the XIV.
We can confirm that the SVC is utilizing all six XIV interface modules. In Example 9-66 XIV
interface modules 4 through 9 are all clearly zoned to the SVC (because the WWPN ending in
71 is from XIV module 7, the WWPN ending in 61 is from XIV module 6, and so on). To
decode the WWPNs use the process described in 9.3.2, “Determining XIV WWPNs”
on page 275.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
Other publications
These publications are also relevant as further information sources:
IBM XIV Storage System Application Programming Interface, GA32-0788
IBM XIV Storage System User Manual, GC27-2213
IBM XIV Storage System: Product Overview, GA32-0791
IBM XIV Storage System Planning Guide, GA32-0770
IBM XIV Storage System Pre-Installation Network Planning Guide for Customer
Configuration, GC52-1328-01
Online resources
These Web sites are also relevant as further information sources:
IBM XIV Storage System Information Center:
http://publib.boulder.ibm.com/infocenter/ibmxiv/r2/index.jsp
IBM XIV Storage Web site:
http://www.ibm.com/systems/storage/disk/xiv/index.html
System Storage Interoperability Center (SSIC):
http://www.ibm.com/systems/support/storage/config/ssic/index.jsp
G M
GiB 278–280 managed mode 295, 299–300
map_vol host 296
H master 48, 53
Host Attachment Procedure 230 master peer 54, 83–85, 109, 127, 144
host server 214–215, 230 actual data 84
I/O requests 215 Deactivate XIV remote mirroring 84
host_add_port host 284 remote mirroring 84
HP-UX 174–175 XIV remote mirroring 84
master role 50, 52–53, 72, 102, 109–110, 112, 137, 142
master volume 5, 9–10, 14–16, 49, 54, 57, 77, 108,
I 110–112, 143–144
IBM XIV actual data 74
data migration solution 214 additional Snapshot 86
storage 215 Changing role 77
image mode 273, 280, 288, 295 Deactivate XIV remote mirroring 86–87
Migration 299 duplicate snapshot 10, 19
SVC migration 295 identical copy 56
importvg 170 periodic consistent copies 149
initialization 54, 73 Reactivate XIV remote mirroring 87
initialization rate 66, 238–239 XIV remote mirroring 80
initializing state 105 max_initialization_rate 238–240
initiator 89–90, 92, 214, 218–220 max_resync_rate 238–239
interval 49 max_syncjob_rate 238–239
IO group 292 MDisk 277, 279
ipinterface_list 89–90 MDisk group 280–282
iSCSI 89 free capacity 283
iSCSI ports 67 image mode VDisk 298
MDisks 273, 277–278
metadata 5, 11, 76
K Microsoft Excel 253
KB 5
migration
Keep Source Updated 216, 227, 237
speed 217, 238
mirror
L activation 73
last consistent snapshot 112 delete 82
timestamp 112 initialization 54
last_consistent 111 ongoing operation 54
last_replicated 76 reactivation 78
last_replicated snapshot 76–77, 133, 143–144, 152 resynchronization 78
license 286, 301, 303 mirror coupling 68, 72–73
link deactivate 75
mirror_activate 120
N Q
naming convention 9 queue depth 277, 279
no source updating 215–216
normal operation 63, 67, 109, 147
data flow 67 R
RDAC 257–258
reactivation 87
O Recovery Point Objective (RPO) 49, 125
ongoing operation 54 recreatevg 168
Open Office 253 Redbooks Web site
open systems Contact us xiv
AIX and FlashCopy 166 RedHat 220, 261
AIX and Remote Mirror and Copy 169 redirect on write 5, 20, 42
Copy Services using VERITAS Volume Manager, remote mirror 30, 47–48, 89–90
SUN Solaris 171 activate 120
HP-UX and Copy Services 174 Remote Mirror and Copy 169, 175
HP-UX and FlashCopy 174 HP-UX 175
HP-UX with Remote Mirror and Copy 175 remote mirror pair 226
SUN Solaris and Copy Services 171 remote mirroring 7, 21, 47–49, 56, 92, 101–102, 105,
Windows and Remote Mirror and Copy 171 108, 135
original snapshot 9–10, 12 actions 62
duplicate snapshot points 9 activate 131
mirror copy 20 consistency groups 48, 88
outage delete 141
unplanned 205 Fibre channel paths 89
first step 57
function 89
P implementation 89
page fault 187 planning 87
peer 48, 52 single unit 72
designations 53 synchronous 101
role 53 usage 58
point-in-time (PIT) 50 XIV systems 62
point-in-time (PiT) 50 remote site 47–48, 54, 103, 109, 113, 142, 147, 157, 160
point-in-time copy 225 secondary peers 79
port configuration 92 Slave volumes 114
port role 92 standby server 113
portperfshow 240 resize 89, 237, 243–244
ports 92, 156 resize operation 42
power loss consistency 70 resynchronization 78, 83, 112, 147
Index 309
role 48, 52–53, 76
    change 54
    changing 203
    switching 54
role reversal 142
RPO 49, 129, 138
RPO_Lagging 57, 137
RPO_OK 57

S
SAN boot 43, 214
SAN connectivity 89
SAN LUNs 187
schedule 126, 138
schedule interval 49
Schedule Management 127
schedule_create schedule 130, 139
SCSI initiator 219
SDDPCM 267
Secondary 53
secondary 48, 53
secondary site 108–111, 132, 142, 147
    mirror relation 113
    remote mirroring 110
    Role changeover 115–116
secondary XIV 48–49, 57, 104, 112, 131, 137
    corresponding consistency group 131
    Mirror statuses 106, 121
    Slave volume 122
single XIV
    footprint 281
    rack 281
    Storage Pool 68
    system 58, 61, 65
single-level storage 186
slave 48, 53
slave peer 50, 76, 108–109, 112, 127–128, 137
    consistent data 50
slave pool 103
slave role 49, 72, 102, 110–112, 143, 159–160
slave volume 52–55, 57, 103, 105, 108–109, 126, 128
    Changing role 76
    consistent state 53
    whole group 53
snap_group 28, 34
snap_group_duplicate 158
Snapshot
    automatic deletion 7, 9, 20
    deletion priority 8, 12–13
    last_replicated 152
    most_recent 152
snapshot 1, 7
    creation 9, 28
    delete 8, 18
    details 28
    duplicate 9–10
    last consistent 112
    last_consistent 111
    lock 32
    locked 8, 10
    naming convention 9
    restore 34–35
    size 28
snapshot group 25, 27
snapshot volume 4–6, 16
snapshot_delete 19
snapshot/volume copy 85
SNMP traps 88
source LUN 227, 237, 246
source MDisk group
    extent size 287
    Image Mode MDisks 289
Source Target System 226, 237
source updating 215–216, 231, 249
source updating option 215
SRC codes 192
standby 104
state 53
    consistent 106
    initializing 105
storage pool 7, 11, 19, 69, 103, 106, 108, 131, 133, 303
    additional allocations 20
    consistency group 70
    different CG 69
    existing volumes 70
storage system 109, 147, 213–214, 216
SUN Solaris
    Copy Services 171
    Remote Mirror and Copy with VERITAS Volume Manager 173
SuSE 220
SVC 271–273
    firmware version 272, 282, 288
    MDisk group 280, 287, 289
    mirror 273, 293
    quorum disk 281
    zone 273
    zoning 273
SVC cluster 272–274
svcinfo lsmdisk 279–281
svctask 282–283, 285
svctask chlicense 286
switch role 79, 110
switch_role command 53
sync job 48–50, 126–127, 135, 137
    most_recent snapshot 152
    schedule 126
sync type 103
synchronization 231, 238
    rate 66
    status 56
synchronized 106, 215, 234, 238
synchronous 47
synchronous mirroring 49
syncrate 294
System i
    Disk Pool 187
System i5
    external storage 186
    Hardware Management Console (HMC) 186
Back cover

Learn details of the copy services and migration functions

Explore practical scenarios for snapshot and mirroring

Review host platform specific considerations

This IBM Redbooks publication provides a practical understanding of the XIV Storage System copy and migration functions. The XIV Storage System has a rich set of copy functions suited for various data protection scenarios, which enables clients to enhance their business continuance, data migration, and online backup solutions. These functions allow point-in-time copies, known as snapshots and full volume copies, and also include remote copy capabilities in either synchronous or asynchronous mode. These functions are included in the XIV software and all their features are available at no additional charge.

The various copy functions are reviewed under separate chapters that include detailed information about usage, as well as practical illustrations.

This book also explains the XIV built-in migration capability, and presents migration alternatives based on the SAN Volume Controller (SVC).

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.